AI-powered Copilot Health platform introduced by Microsoft

Microsoft has introduced Copilot Health, a new feature that uses AI to help users interpret personal health data and prepare for medical consultations.

The tool will operate as a separate and secure environment within Microsoft’s Copilot ecosystem, allowing users to combine health records, wearable data, and medical history into a single profile. The system then uses AI to analyse patterns and generate personalised insights intended to support conversations with healthcare professionals.

Microsoft said the feature aims to help people better understand existing medical information rather than replace clinical care. Users can review trends such as sleep patterns, activity levels, and vital signs gathered from wearable devices, alongside test results and visit summaries from healthcare providers.
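
For illustration, the following Python sketch shows the kind of trend analysis described above, computed over merged wearable readings. The data, column names, and threshold are hypothetical and do not reflect Microsoft’s actual implementation.

```python
# Illustrative only: a toy trend summary over daily wearable readings,
# not Microsoft's implementation. Data and column names are hypothetical.
import pandas as pd

# Hypothetical daily readings merged from several wearable sources
readings = pd.DataFrame({
    "date": pd.date_range("2025-01-01", periods=14, freq="D"),
    "sleep_hours": [7.2, 6.8, 5.9, 7.5, 6.1, 7.0, 6.4,
                    5.8, 6.0, 5.5, 6.2, 5.9, 5.7, 6.0],
    "resting_hr":  [62, 63, 66, 61, 65, 62, 64,
                    67, 66, 68, 66, 67, 69, 68],
})

# Seven-day rolling averages smooth day-to-day noise into a trend
trend = readings.set_index("date").rolling("7D").mean()

# Flag a sustained change worth raising with a clinician
recent, baseline = trend.iloc[-1], trend.iloc[6]
if recent["sleep_hours"] < baseline["sleep_hours"] - 0.5:
    print("Average sleep has dropped by over half an hour in the past week.")
```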

Copilot Health can integrate data from more than 50 wearable devices, including systems connected through platforms such as Apple Health, Fitbit, and Oura. The platform can also access health records from over 50,000 US hospitals and provider organisations through HealthEx, as well as laboratory test results from Function.

According to Microsoft, the system builds on ongoing research into medical AI systems, including work on the Microsoft AI Diagnostic Orchestrator (MAI-DxO). The company said future publications will explore how such systems could assist in analysing complex medical cases.

Privacy and security are central elements of the design. Microsoft stated that Copilot Health data and conversations are stored separately from standard Copilot interactions and protected through encryption and access controls. The company also noted that health information used in the service will not be used to train AI models.

Development of the system involves Microsoft’s internal clinical team and an external advisory group of more than 230 physicians from 24 countries. The company said Copilot Health has also achieved ISO/IEC 42001 certification, an international standard for AI management systems.

The feature is being introduced through a phased rollout, beginning with a waitlist for early users who will help shape the service as it develops.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, EU competition chief Teresa Ribera explained that regulators are analysing the full ‘AI stack’ rather than focusing solely on consumer applications.

According to Ribera, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing the underlying models, the data used to train them, and the cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation in 2025 involving Meta after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU privacy watchdogs warn over US plans to expand traveller data collection

European privacy authorities have raised concerns about proposed changes to the Electronic System for Travel Authorization (ESTA) that could require travellers to the US to disclose extensive personal information, including social media activity.

The European Data Protection Board, which coordinates national data protection authorities across the EU, sent a letter to the European Commission asking whether the institution plans to intervene or respond to the updated requirements.

The proposal would apply to visitors entering the US through the visa-waiver programme for short stays of up to 90 days.

Under the proposed changes, travellers may be required to provide details about their social media accounts covering the previous five years.

Authorities could also request personal data about family members, including addresses, phone numbers and dates of birth. Privacy regulators argue that such information is unrelated to travel authorisation.

Watchdogs also questioned how EU citizens could exercise their data protection rights once such information is transferred to US authorities, particularly regarding storage periods and potential misuse.

Parallel negotiations between the EU and the US have also attracted attention.

Discussions around a potential Enhanced Border Security Partnership framework could allow US authorities to seek access to biometric databases held by European countries, including facial scans and fingerprint records.

European privacy regulators warned that such measures could raise significant concerns regarding fundamental rights and personal data protection for travellers from the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BeatBanker malware targets Android users in Brazil

A new Android malware called BeatBanker is targeting users in Brazil through fake Starlink and government apps. The malware hijacks devices, steals banking credentials, tampers with cryptocurrency transactions, and secretly mines Monero.

Infection begins on phishing websites mimicking the Google Play Store or the ‘INSS Reembolso’ app. Users are tricked into installing trojanised APKs, which evade detection through memory-based decryption and by blocking analysis environments.

Fake update screens maintain persistence while silently downloading additional malicious payloads.

BeatBanker initially combined a banking trojan with a cryptocurrency miner. It uses accessibility permissions to monitor browsers and crypto apps, overlaying fake screens to redirect Tether and other crypto transfers.

A foreground service plays silent audio loops to prevent the device from shutting down, while Firebase Cloud Messaging enables remote control of infected devices.

The latest variant replaces the banking module with the BTMOB RAT, giving attackers full remote control over infected devices. Capabilities include automatic permission granting, background persistence, keylogging, GPS tracking, camera access, and screen-lock credential capture.
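
The combination of capabilities described above leaves traces in an app’s manifest. As a rough illustration, the following Python sketch flags manifests that request several of the permissions such campaigns rely on; it assumes a manifest already decoded to plain XML (for example with apktool), and the heuristic is invented for this example rather than being Kaspersky’s detection logic.

```python
# Illustrative triage heuristic, not Kaspersky's detection logic.
# Assumes an AndroidManifest.xml already decoded to plain XML (e.g. via apktool).
import xml.etree.ElementTree as ET

ANDROID = "{http://schemas.android.com/apk/res/android}"

# Markers matching the techniques described: overlays, accessibility
# abuse, long-lived foreground services, and side-loading further APKs.
RISKY = {
    "android.permission.SYSTEM_ALERT_WINDOW",        # overlay fake screens
    "android.permission.FOREGROUND_SERVICE",         # silent-audio persistence
    "android.permission.REQUEST_INSTALL_PACKAGES",   # drop extra payloads
    "android.permission.BIND_ACCESSIBILITY_SERVICE", # monitor other apps
}

def risky_markers(manifest_path: str) -> set[str]:
    root = ET.parse(manifest_path).getroot()
    # Ordinary permissions appear as <uses-permission android:name="...">
    markers = {e.get(f"{ANDROID}name", "") for e in root.iter("uses-permission")}
    # Accessibility services are declared on <service> elements instead
    markers |= {e.get(f"{ANDROID}permission", "") for e in root.iter("service")}
    return markers & RISKY

# Example: send an APK for manual review if it hits several markers
# if len(risky_markers("decoded/AndroidManifest.xml")) >= 3: ...
```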

Kaspersky warns that BeatBanker demonstrates the growing sophistication of mobile threats and multi-layered malware campaigns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that agentic browsers, powered by AI, may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively refined using adversarial machine learning methods, such as generative adversarial networks (GANs).

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.
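
The process the researchers describe amounts to black-box optimisation against the agent’s own feedback. The Python sketch below illustrates the idea with a simple hill-climbing loop rather than a full GAN; the suspicion score and page features are invented stand-ins, and no real attack content is involved.

```python
# Black-box iterative refinement, sketched with hill climbing. The
# suspicion score and page features are hypothetical stand-ins.
import random

def agent_suspicion(page: dict) -> float:
    """Stand-in for observing the AI browser's stated reasoning.
    Here, a toy score that dislikes urgency cues and raw IP links."""
    score = 0.0
    if page["urgent_banner"]:
        score += 0.5
    if page["link_is_raw_ip"]:
        score += 0.4
    return score

def mutate(page: dict) -> dict:
    """Flip one randomly chosen page feature."""
    key = random.choice(list(page))
    return {**page, key: not page[key]}

page = {"urgent_banner": True, "link_is_raw_ip": True}
best = agent_suspicion(page)
for _ in range(50):                  # iterate against the detector
    candidate = mutate(page)
    score = agent_suspicion(candidate)
    if score < best:                 # keep changes that lower suspicion
        page, best = candidate, score

print(page, best)  # converges on features the stand-in scores as benign
```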

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU platform law expands data access rights

European regulators are examining how the Digital Markets Act (DMA) interacts with the General Data Protection Regulation (GDPR) across major digital platforms. The DMA applies to designated gatekeepers that operate core platform services used by millions of users.

Policy specialists in the EU say the Digital Markets Act complements GDPR protections by strengthening user control over personal data. The framework also supports rights related to data access, portability and transparency for both consumers and business users.

The regulatory overlap affects areas including consent requirements, third-party software installation and interoperability between services. Authorities are also coordinating enforcement between competition and data protection regulators.

Analysts say the combined application of both laws could reshape the responsibilities of major technology platforms. Policymakers aim to increase user choice while reinforcing the GDPR’s safeguards for the integrity and confidentiality of personal data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents face growing prompt injection risks

AI developers are working on new defences against prompt-injection attacks that aim to manipulate AI agents. Security specialists warn that attackers are increasingly using social engineering techniques to influence AI systems that interact with online content.

Researchers say AI agents that browse the web or handle user tasks face growing risks from hidden instructions embedded in emails or websites. Experts in the US note that attackers often attempt to trick AI into revealing sensitive information.

Engineers are responding by designing systems that limit the impact of manipulation attempts. Developers in the US say AI tools must include safeguards preventing sensitive data from being transmitted without user approval.

Security teams are also introducing technologies that detect risky actions and prompt users for confirmation. Specialists argue that strong system design and user oversight will remain essential as AI agents gain more autonomy.
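
The confirm-before-acting safeguard can be sketched in a few lines of Python. The action names and the sensitive-action list below are hypothetical placeholders rather than any vendor’s actual policy.

```python
# Minimal sketch of gating sensitive agent actions behind user approval.
# The action taxonomy here is hypothetical.

SENSITIVE = {"send_email", "transfer_funds", "share_file"}

def execute(action: str, args: dict, confirm) -> str:
    """Run an agent action, pausing for user approval on sensitive ones."""
    if action in SENSITIVE:
        if not confirm(f"Agent wants to {action} with {args}. Allow?"):
            return "blocked: user declined"
    return f"executed {action}"

def console_confirm(prompt: str) -> bool:
    # A product would surface this in the UI rather than the console
    return input(prompt + " [y/N] ").strip().lower() == "y"

print(execute("summarise_page", {}, console_confirm))                   # runs freely
print(execute("send_email", {"to": "x@example.com"}, console_confirm))  # asks first
```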

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defence’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defence, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.
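
That layered design can be pictured as classifiers wrapped around the model on both sides. The Python sketch below illustrates the pattern with a trivial keyword check standing in for the specialised classifiers, whose details are not public.

```python
# Layered safety filtering: classify the query before the model runs and
# the draft answer before it is shown. The keyword check is a placeholder
# for specialised classifiers, whose details are not public.

BLOCKED_TOPICS = ("self-harm", "extremist recruiting")  # illustrative list

def unsafe(text: str) -> bool:
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def answer(query: str, model) -> str:
    if unsafe(query):        # input-side classifier
        return "Sorry, I can't help with that."
    draft = model(query)
    if unsafe(draft):        # output-side classifier
        return "Sorry, I can't share that response."
    return draft

# Usage with a stand-in model:
print(answer("Explain photosynthesis", lambda q: "Plants convert light..."))
```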

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven adaptive malware highlights new cyber threat landscape

Google’s cybersecurity division, Mandiant, has warned about the growing threat of AI-driven adaptive malware, highlighting how AI is reshaping the cyber threat landscape.

According to a recent report, adaptive malware can modify its behaviour and code in response to the environment it encounters, thereby evading traditional security tools. By analysing the security systems protecting a target, the malware can rewrite parts of its code to bypass detection.

Unlike traditional malware, which typically follows fixed instructions, adaptive malware can adjust its behaviour during an attack. This capability makes it more difficult for conventional cybersecurity tools to detect and block malicious activity.

Mandiant noted that such malware is increasingly associated with advanced persistent threat (APT) groups that conduct long-term, targeted cyber operations. These groups often pursue espionage objectives or financial gain while maintaining prolonged access to compromised systems.

AI is also being used to automate elements of cyberattacks. Machine learning algorithms allow malicious software to anticipate defensive measures and adjust its behaviour in real time. In some cases, attackers are integrating AI into broader automated attack chains. AI-driven malware can gather information, adapt its strategy, and continue operating with minimal human intervention.

Security researchers say autonomous AI agents may be capable of managing multiple stages of an attack, including reconnaissance, exploitation, and persistence, while remaining undetected.

To address these evolving threats, Mandiant recommends that organisations strengthen their cybersecurity strategies by deploying advanced detection and response tools, including AI-based systems that can identify anomalous behaviour. As AI capabilities continue to develop, cybersecurity experts say understanding adaptive malware and automated attack techniques will be essential for organisations seeking to protect their systems and data.
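
The behaviour-based detection Mandiant recommends can be illustrated with a standard anomaly detector. The Python sketch below trains scikit-learn’s IsolationForest on synthetic baseline telemetry and flags an outlying observation; the features and numbers are invented for illustration.

```python
# Behaviour-based anomaly detection sketch; features and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline telemetry: [processes spawned/min, outbound connections/min]
baseline = rng.normal(loc=[4.0, 10.0], scale=[1.0, 2.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one typical host, one resembling adaptive-malware
# behaviour (a burst of process spawning and network beaconing)
new = np.array([[4.2, 9.5],
                [25.0, 80.0]])
print(detector.predict(new))  # 1 = normal, -1 = anomalous
```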

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!