UK GP surgery praised for using AI to boost efficiency and patient care

UK Health Minister Karin Smyth praised St George’s Surgery in Weston-super-Mare for utilising AI to enhance efficiency. Serving nearly 14,000 patients, the surgery uses AI to automate note-taking and letter drafting, reducing administrative burdens on staff.

In June 2025 alone, St George’s Surgery reportedly handled over 9,000 appointments, with more than half booked and held on the same day. As part of the UK’s 10-Year Health Plan, the government aims to expand AI adoption in healthcare, potentially freeing up capacity equivalent to more than 2,000 full-time GPs.

Andy Carpenter, Digital Director at Mendip Vale Medical Group, highlighted that AI is helping to manage growing patient demand, increase face-to-face time with GPs, and maintain strong data protection standards. Health Minister Karin Smyth also stressed the need for safe, well-regulated AI in healthcare, noting its practical uses, such as remote monitoring of vaccine fridge temperatures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Android spyware posing as antivirus

LunaSpy is new Android spyware disguised as an antivirus or banking-protection app. It spreads via messenger links and fake channels, tricking users into installing what appears to be a helpful security tool.

Once installed, the app mimics a real scanner, shows fake threat detections and operates unnoticed. In reality, it monitors everything on the device and sends sensitive data to attackers.

Active since at least February 2025, LunaSpy spreads through hijacked contacts’ accounts and dedicated Telegram channels. It poses as legitimate software to build trust before beginning surveillance.

Android users should avoid apps shared via unofficial links, scrutinise messenger invites, and install apps only from trusted stores. Reliable antivirus software and cautious permission granting provide essential defences.

Malaysia tackles online scams with AI and new cyber guidelines

Cybercrime involving financial scams continues to rise in Malaysia, with 35,368 cases reported in 2024, a 2.53 per cent increase from the previous year, resulting in losses of RM1.58 billion.

The situation remains severe in 2025, with over 12,000 online scam cases recorded in the first quarter alone, involving fake e-commerce offers, bogus loans, and non-existent investment platforms. Losses during this period reached RM573.7 million.

Instead of waiting for the situation to worsen, the Digital Ministry is rolling out proactive safeguards. These include new AI-related guidelines under development by the Department of Personal Data Protection, scheduled for release by March 2026.

The documents will cover data protection impact assessments, automated decision-making, and privacy-by-design principles.

The ministry has also introduced an official framework for responsible AI use in the public sector, called GPAISA, to ensure ethical compliance and support across government agencies.

Additionally, training initiatives such as AI Untuk Rakyat (AI for the People) and MD Workforce aim to equip civil servants and enforcement teams with the skills to handle AI and cyber threats.

In partnership with CyberSecurity Malaysia and Universiti Kebangsaan Malaysia, the ministry is also creating an AI-powered application to verify digital images and videos.

Instead of relying solely on manual analysis, the tool will help investigators detect online fraud, identity forgery, and synthetic media more effectively.

Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

In one infamous case, an AI-powered chatbot suggested drug use to a fictional recovering addict, which experts cite as a warning of what can go wrong without strict safeguards. The law is formally titled the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.

Microsoft offers $5 million for cloud and AI vulnerabilities

Microsoft is offering security researchers up to $5 million for uncovering critical vulnerabilities in its products, with a focus on cloud and AI systems. The Zero Day Quest contest will return in spring 2026, following a $1.6 million payout in its previous edition.

Researchers are invited to submit discoveries between 4 August and 4 October 2025, targeting Azure, Copilot, M365, and other significant services. High-severity flaws are eligible for a 50% bonus payout, increasing the incentive for impactful findings.

Top participants will receive exclusive invitations to a live hacking event at Microsoft’s Redmond campus. The event promises collaboration with product teams and the Microsoft Security Response Center.

Training from Microsoft’s AI Red Team and other internal experts will also be available. The company encourages public disclosure of patched findings to support the broader cybersecurity community.

The competition aligns with Microsoft’s Secure Future Initiative, which aims to make cloud and AI systems secure by design, by default, and in operation. Vulnerabilities will be disclosed transparently, even if no customer action is needed.

Full details and submission rules are available through the MSRC Researcher Portal. All reports will be subject to Microsoft’s bug bounty terms.

New malware steals 200,000 passwords and credit card details through fake software

Hackers are now using fake versions of familiar software and documents to spread a new info-stealing malware known as PXA Stealer.

First discovered by Cisco Talos, the malware campaign is believed to be operated by Vietnamese-speaking cybercriminals and has already compromised more than 4,000 unique IP addresses across 62 countries.

Instead of targeting businesses alone, the attackers are now focusing on ordinary users in countries including the US, South Korea, and the Netherlands.

PXA Stealer is written in Python and designed to collect passwords, credit card data, cookies, autofill information, and even crypto wallet details from infected systems.

It spreads through ZIP archives and files disguised as Microsoft Word documents, which also bundle legitimate-looking programs, such as Haihaisoft PDF Reader, that are abused to sideload the malware.

The malware uses malicious DLL files to gain persistence through the Windows Registry and downloads additional harmful files via Dropbox. After infection, it uses Telegram to exfiltrate stolen data, which is then sold on the dark web.

Once activated, the malware even attempts to open a decoy PDF in Microsoft Edge. The file fails to launch and shows an error message, but by that point the damage is already done.

To avoid infection, users should avoid clicking unknown email links and should not open attachments from unfamiliar senders. Instead of saving passwords and card details in browsers, a trusted password manager is a safer choice.

Although antivirus software remains helpful, hackers in the campaign have used sophisticated methods to bypass detection, making careful online behaviour more important than ever.

Late-stage GenAI deals triple, Ireland sees growing interest

According to EY Ireland, global investment in generative AI surged to $49.2bn in the first half of 2025, eclipsing the full-year total for 2024. Despite a fall in the number of deals, total value doubled year-on-year, reflecting a pivot towards more mature, revenue-focused ventures.

Average late-stage deal size has more than tripled to $1.55bn, while early and seed-stage activity has stagnated or declined. Landmark rounds from OpenAI, xAI, Anthropic, and Databricks drove much of the volume, alongside a notable $3.3bn agentic AI acquisition by Capgemini.

Ireland remains a strong adopter of AI, with 63% of startups using the technology. Yet funding gaps persist, particularly between €1m and €10m, posing challenges for growth-stage firms despite a strong local talent base.

Sprout Social’s acquisition of Irish analytics firm NewsWhip, though not part of the H1 figures, points to growing international interest in Irish AI capabilities. Meanwhile, US firms still dominate global deal value, capturing 97%, with the Middle East rising fast and Europe trailing at just 2%.

EY forecasts that sector-specific GenAI platforms, especially in cybersecurity and compliance, will become the next magnet for venture capital through late 2025 and beyond.

The risky rise of all-in-one AI companions

A concerning new trend is emerging: AI companions are merging with mental health tools, blurring ethical lines. Human therapists are required to maintain a professional distance. Yet AI doesn’t follow such rules; it can be both confidant and counsellor.

AI chatbots are increasingly marketed as friendly companions. At the same time, they can offer mental health advice. Combined, you get an AI friend who also becomes your emotional guide. The mix might feel comforting, but it’s not without risks.

Unlike a human therapist, AI has no ethical compass. It mimics caring responses based on patterns, not understanding. One prompt might elicit therapist-style advice, the next best-friend banter, with no safeguards separating the two roles.

The deeper issue? There’s little incentive for AI makers to stop this. Blending companionship and therapy boosts user engagement and profits. Unless laws intervene, these all-in-one bots will keep evolving.

There’s also a massive privacy cost. People confide personal feelings to these bots, often daily, for months. The data may be reviewed, stored, and reused to train future models. Your digital friend and therapist might also be your data collector.

Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce electricity usage during peak demand. The arrangements allow power to be redirected to the wider grid when it is needed most.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.

US launches over $100 million in cybersecurity grants for states

The US government has unveiled more than $100 million in funding to help local and tribal communities strengthen their cybersecurity defences.

The announcement came jointly from the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA), both part of the Department of Homeland Security.

Instead of a single pool, the funding is split into two distinct grants. The State and Local Cybersecurity Grant Program (SLCGP) will provide $91.7 million to 56 states and territories, while the Tribal Cybersecurity Grant Program (TCGP) allocates $12.1 million specifically for tribal governments.

These funds aim to support cybersecurity planning, exercises and service improvements.

CISA’s acting director, Madhu Gottumukkala, said the grants ensure communities have the tools needed to defend digital infrastructure and reduce cyber risks. The effort follows a significant cyberattack on St. Paul, Minnesota, which prompted a state of emergency and deployment of the National Guard.

Officials say the funding reflects a national commitment to proactive digital resilience instead of reactive crisis management. Homeland Security leaders describe the grant as both a strategic investment in critical infrastructure and a responsible use of taxpayer funds.
