AI-powered Copilot Health platform introduced by Microsoft

Microsoft has introduced Copilot Health, a new feature that uses AI to help users interpret personal health data and prepare for medical consultations.

The tool will operate as a separate and secure environment within Microsoft’s Copilot ecosystem, allowing users to combine health records, wearable data, and medical history into a single profile. The system then uses AI to analyse patterns and generate personalised insights intended to support conversations with healthcare professionals.

Microsoft said the feature aims to help people better understand existing medical information rather than replace clinical care. Users can review trends such as sleep patterns, activity levels, and vital signs gathered from wearable devices, alongside test results and visit summaries from healthcare providers.

Copilot Health can integrate data from more than 50 wearable devices, including systems connected through platforms such as Apple Health, Fitbit, and Oura. The platform can also access health records from over 50,000 US hospitals and provider organisations through HealthEx, as well as laboratory test results from Function.

According to Microsoft, the system builds on ongoing research into medical AI systems, including work on the Microsoft AI Diagnostic Orchestrator (MAI-DxO). The company said future publications will explore how such systems could assist in analysing complex medical cases.

Privacy and security are central elements of the design. Microsoft stated that Copilot Health data and conversations are stored separately from standard Copilot interactions and protected through encryption and access controls. The company also noted that health information used in the service will not be used to train AI models.

Development of the system involves Microsoft’s internal clinical team and an external advisory group of more than 230 physicians from 24 countries. The company said Copilot Health has also achieved ISO/IEC 42001 certification, the international standard for AI management systems.

The feature is being introduced through a phased rollout, beginning with a waitlist for early users who will help shape the service as it develops.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, EU competition chief Teresa Ribera explained that regulators are analysing the full ‘AI stack’ instead of focusing solely on consumer applications.

According to Ribera, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing underlying models, the data used to train them, and the cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation into Meta in 2025 after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.


Cambridge researchers warn AI toys misread children’s emotions

AI toys for young children may misread emotions and respond inappropriately, according to a study by researchers at the University of Cambridge. Developmental psychologists observed interactions between children aged three to five and conversational AI-powered toys.

Findings showed the toys often struggled with pretend play and emotional cues. In several cases, children attempted to express sadness or initiate imaginative scenarios, while the AI responded with unrelated or overly scripted replies, leaving emotional signals unrecognised.

Researchers warned that such limitations could affect children’s emotional development and imaginative play. Early years practitioners also raised concerns about how conversation data collected by the toys may be used and whether children could come to treat the devices as trusted companions.

The study calls for stronger regulation and the introduction of safety certification for AI toys aimed at young children. Toy developer Curio stated that improving AI interactions and maintaining parental controls remain priorities as the technology continues to develop.


Deepfakes in campaign ads expose limits of Texas election law

AI-generated political advertisements are becoming increasingly visible in Texas election campaigns, highlighting gaps in existing laws designed to regulate deepfakes in political messaging.

Texas was the first state in the United States to adopt legislation restricting the use of deepfakes in campaign advertisements. However, the law applies only to state-level races. It does not cover federal contests, including the US Senate race that has dominated advertising spending in Texas and featured several AI-generated campaign ads.

Some lawmakers and experts warn that the growing use of AI-generated political content could complicate election campaigns. During recent primary contests, campaign advertisements featuring manipulated or synthetic images of political figures circulated widely across media platforms.

State Senator Nathan Johnson, who has proposed legislation to strengthen the state’s rules regarding deepfakes, said the rapid evolution of AI technology makes the issue increasingly urgent. Johnson argues that voters should be able to make decisions based on accurate information rather than manipulated media.

The current Texas law, adopted in 2019, contains several limitations. It only applies to video content, requires proof of intent to deceive or harm a candidate, and covers material distributed within 30 days of an election. Critics say these restrictions make the law difficult to enforce and limit its practical impact.

Lawmakers from both parties attempted to address some of these issues during the most recent legislative session. Proposed reforms included removing the 30-day restriction, requiring clear disclosure when AI is used in political advertising, and allowing candidates to pursue legal action to block misleading ads. Although both chambers of the Texas legislature passed versions of the legislation, the proposals ultimately failed to become law.

Supporters of stricter regulation argue that the rapid advancement of generative AI tools is making it harder to distinguish synthetic media from authentic content. Some political leaders warn that increasingly realistic deepfakes could eventually influence election outcomes.

Others, however, caution that regulating political content raises constitutional concerns. Some lawmakers argue that many AI-generated political ads resemble satire or parody, forms of political speech protected by the First Amendment.

At the federal level, regulation of congressional campaign advertising falls under the Federal Election Commission’s authority. In 2024, the agency declined to begin a formal rulemaking process on AI-generated political ads, leaving states and policymakers to continue debating how to address the emerging issue.

Experts warn that as AI tools continue to improve, distinguishing authentic political messaging from deepfakes and other forms of synthetic content will likely become more complex.


AI is helping close the heart health gap in remote Australian communities

Google has launched a new AI-powered initiative aimed at reducing heart disease risk in rural Australia, where people living in remote communities are 60% more likely to die from heart disease than those in metropolitan areas.

The programme, a first for the Asia-Pacific region, is backed by an A$1 million investment from Google Australia’s Digital Future Initiative and brings together Wesfarmers Health, SISU Health, the Victor Chang Cardiac Research Institute, and Latrobe Health Services.

At the centre of the initiative is Google for Health’s Population Health AI (PHAI), an advanced analytics tool that analyses aggregated and de-identified datasets, including clinical records, air quality, pollen levels, and geographic data, to identify hidden health risks at a community level.

The aim is to help health organisations move away from reactive treatment towards proactively managing chronic condition risks tailored to specific towns or postcodes.

SISU Health will use PHAI insights to guide the delivery of over 50,000 new health screenings across remote areas, combining geographic AI analysis with on-the-ground community care. Google described the goal as ensuring every Australian has access to personalised care regardless of where they live.


Generative AI in precision oncology faces a trust and safety challenge

A narrative review published in the Journal of Hematology & Oncology examined how generative AI tools could support oncologists in precision cancer care.

In this increasingly data-intensive field, clinicians must cross-reference genomic sequencing results, patient records, imaging findings, and a rapidly expanding body of biomedical literature to inform their decisions.

Researchers found promising results for AI-assisted clinical trial matching and diagnostic report drafting, but also highlighted significant risks that make unsupervised deployment dangerous.

On the positive side, the AI tool TrialGPT demonstrated 87.3% agreement with expert assessments when matching patients to clinical trials, while reducing processing time by an average of 42.6%.

Meanwhile, the vision-language model Flamingo-CXR matched or exceeded the performance of board-certified radiologists in 94% of chest X-ray cases with no clinically relevant findings.

Researchers cautioned, however, that clinically significant errors appeared in 24.8% of evaluated imaging reports, whether AI- or human-generated, underscoring the need for combined oversight.

The review’s authors advocate for ‘Human-in-the-Loop’ workflows, in which human experts review all AI outputs before clinical use, and for Retrieval-Augmented Generation (RAG) techniques that require AI systems to draw on current medical guidelines rather than relying solely on their base training data.
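The review describes this workflow at a high level rather than as code. As a rough illustration, the retrieval step of Retrieval-Augmented Generation can be sketched as ranking guideline snippets against a query and grounding the prompt in the best matches; the snippets, scoring method, and function names below are invented toy examples, not real clinical guidance or any system from the review:

```python
# Toy sketch of a Retrieval-Augmented Generation retrieval step:
# rank guideline snippets by keyword overlap with the query, then
# build a prompt that instructs the model to answer only from them.
# The guideline texts below are invented placeholders.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k snippets sharing the most words with the query."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Compose an LLM prompt grounded in the retrieved snippets."""
    context = "\n".join(f"- {s}" for s in retrieve(query, corpus))
    return (
        "Answer using ONLY the guideline excerpts below.\n"
        f"Guidelines:\n{context}\nQuestion: {query}"
    )

guidelines = [
    "Guideline A: trial X enrols patients with mutation M.",
    "Guideline B: imaging follow-up intervals depend on risk category.",
    "Guideline C: report drafts require physician sign-off.",
]
prompt = build_prompt("which trial enrols patients with mutation M", guidelines)
```

A production system would replace the keyword overlap with embedding search over a maintained guideline corpus, and, as the review recommends, a clinician would still review the model’s output before it reaches a patient.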

The key conclusion is that AI should function as an assistant to oncologists, not as an autonomous decision maker.


5G expansion strengthens digital connectivity across Algeria

Ooredoo Algeria has expanded its 5G network to all provinces in Algeria. The operator announced that 5G services are now present in every wilaya, although coverage within each province remains limited as the network continues to expand.

The deployment is progressing ahead of the national rollout schedule. The expansion goes beyond the initial timeline set by the Postal and Electronic Communications Regulatory Authority (ARPCE), which required operators to begin with pilot deployments in a limited number of provinces.

Algeria’s national plan foresees a gradual expansion of 5G coverage. Under the regulatory roadmap, telecom operators must extend services to additional provinces each year until 2031, when all wilayas are expected to reach at least 50% 5G coverage.

Telecom operators are accelerating 5G deployment nationwide. Alongside Ooredoo Algeria, operators such as Mobilis and Djezzy launched 5G services in December 2025, with Djezzy also reporting progress ahead of schedule.

The expansion of 5G infrastructure is expected to support the digital economy. Operators say improved connectivity could help accelerate business digitalisation, strengthen digital services, and support broader economic development.


MIT researchers outline future of AI and physical sciences

AI and the mathematical and physical sciences are entering a new phase of collaboration that could accelerate technological progress and scientific discovery. Researchers increasingly see the relationship as a two-way exchange rather than a one-sided use of AI tools.

A 2025 MIT workshop brought together experts from astronomy, chemistry, materials science, mathematics and physics to examine the future of this collaboration.

Discussions resulted in a white paper published in Machine Learning: Science and Technology, outlining strategies for research institutions and funding bodies.

Participants agreed that stronger computing infrastructure, shared data resources and cross-disciplinary research methods are essential for progress. The exchange runs both ways: scientists also improve AI by analysing neural networks, identifying the principles behind their behaviour and developing new algorithms.

Researchers highlighted the growing importance of so-called ‘centaur scientists’: specialists trained in both AI and traditional scientific disciplines. Universities, including MIT, are expanding interdisciplinary programmes and research initiatives to train experts who can work across AI and scientific fields.


Leading tech companies deepen AI competition with new capabilities

Competition among leading AI developers intensified in early 2026 as major companies expanded their models, platforms, and partnerships. Companies including Google, OpenAI, Anthropic, and xAI are introducing new capabilities and integrating AI systems into broader ecosystems.

Google has continued to expand its Gemini model family with updates to Gemini 3.1 Pro and 3.1 Flash, designed to support complex tasks across applications. The company is also integrating Gemini into services such as Docs, Sheets, Slides, and Drive, allowing users to generate documents and analyse data across multiple Google services.

Gemini has also been embedded into the Chrome browser and integrated with Samsung’s Galaxy devices, expanding its distribution across consumer platforms as AI competition among major developers accelerates.

Anthropic has focused on advancing the Claude model family while positioning the system for enterprise and professional use. Recent updates include Claude Sonnet 4.6, which introduces improvements in reasoning and coding capabilities alongside an expanded context window currently in beta. The company has also launched a limited preview of the Claude Marketplace, allowing organisations to use third-party tools built on Claude through partnerships with several software companies.

OpenAI has continued to update ChatGPT with the release of the GPT-5 series, including GPT-5.2 and GPT-5.4. The newer models combine reasoning, coding, and agent-based workflows, while also introducing computer-use capabilities that allow the system to interact with applications directly.

OpenAI has also introduced additional services, including ChatGPT Health and integrations designed to assist with spreadsheet modelling and data analysis, further intensifying competition across enterprise and consumer tools.

Meanwhile, xAI has expanded development of its Grok models while increasing computing infrastructure. The company has reported growth in Grok usage through integration with the X platform and other applications. Recent announcements include upgrades to Grok’s voice and multimodal capabilities, as well as continued training of future models.

Across the industry, developers are increasingly positioning their systems not only as conversational assistants but also as tools integrated into enterprise workflows, creative production, and software development. New releases in 2026 reflect a broader shift toward multimodal systems, agent-based capabilities, and deeper integration with existing digital platforms, highlighting how competition is shaping the next phase of AI development.


ChatGPT dynamic visual explanations introduce interactive learning tools

OpenAI has introduced a new ChatGPT feature called dynamic visual explanations, allowing users to interact with mathematical and scientific concepts through real-time visuals.

Instead of relying solely on text explanations or static diagrams, the feature enables users to manipulate formulas and variables and immediately see how those changes affect results. For example, when exploring the Pythagorean theorem, users can adjust the triangle’s sides and see the hypotenuse update instantly.
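OpenAI has not published the module’s internals, but the relationship the Pythagorean visual tracks is straightforward to state in code. The snippet below is a plain illustrative calculation, not OpenAI’s implementation:

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Pythagorean theorem: c = sqrt(a^2 + b^2)."""
    return math.sqrt(a * a + b * b)

# Adjusting a side length changes the hypotenuse accordingly,
# which is the update the interactive visual shows in real time.
print(hypotenuse(3, 4))   # 5.0
print(hypotenuse(5, 12))  # 13.0
```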

To use the tool, users can ask ChatGPT questions such as ‘What is a lens equation?’ or ‘How can I find the area of a circle?’ The chatbot responds with both a written explanation and an interactive visual module that users can manipulate directly.

The feature currently supports more than 70 topics in mathematics and science. The topics include binomial squares, Charles’ law, compound interest, Coulomb’s law, exponential decay, Hooke’s law, kinetic energy, linear equations, and Ohm’s law.
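Two of the listed topics, compound interest and Ohm’s law, illustrate the kind of formula-and-variable relationships the modules expose. The functions below are generic textbook formulas, not OpenAI’s code:

```python
def compound_interest(principal: float, rate: float, periods: int) -> float:
    """Amount after n compounding periods: A = P * (1 + r)**n."""
    return principal * (1 + rate) ** periods

def ohms_law_current(voltage: float, resistance: float) -> float:
    """Ohm's law: I = V / R."""
    return voltage / resistance

# Changing a variable and recomputing mirrors what the
# interactive modules do visually.
print(round(compound_interest(1000, 0.05, 2), 2))  # 1102.5
print(ohms_law_current(12, 4))                     # 3.0
```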

OpenAI says it plans to expand the range of topics over time. The feature is already available to all logged-in ChatGPT users. The launch marks a shift in how ChatGPT supports learning. Instead of simply providing answers, the tool now encourages users to explore underlying concepts by experimenting with interactive models.

AI tools have become increasingly common in education, although their role remains widely debated. Some educators worry that students may become overly dependent on AI tools, while others see them as valuable learning aids.

According to OpenAI, more than 140 million people use ChatGPT every week to help with subjects such as mathematics and science, which many learners find challenging. Other technology companies are also experimenting with similar tools. Google’s Gemini introduced interactive diagrams and visual explanations last year.

The new feature joins several other ChatGPT learning tools, including study mode, which guides users through problems step by step, and QuizGPT, which allows users to create flashcards and test themselves before exams.
