Europe urged to seize AI opportunity through action

Europe faces a pivotal moment to lead in AI, potentially boosting GDP by over €1.2 trillion, according to Google’s Kent Walker. Urgent action is needed to close the gap between ambition and implementation.

Complex EU regulations, with over 100 new digital rules since 2019, hinder businesses, costing an estimated €124 billion annually. Simplifying these, as suggested by Mario Draghi’s report, could unlock €450 billion in AI-driven growth.

Focused, balanced policies must prioritise real-world AI impacts without stifling progress.

Skilling Europe’s workforce is crucial for AI adoption, with only 14% of EU firms using generative AI compared to 83% in China. Google’s initiatives, like its €15 million AI Opportunity Fund, support digital training. Public-private partnerships can scale these efforts, creating new job categories.

Scaling AI demands secure, dependable tools and ongoing momentum. Google’s AlphaFold and GNoME fuel advances in biology and materials science, while partnerships with European companies safeguard data sovereignty. Joint efforts will help Europe lead globally in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Portugal to bring AI into bureaucracy to save time

The Portuguese government is preparing to bring AI into public administration to accelerate licensing procedures and cut delays, according to State Reform Minister Gonçalo Matias.

Speaking at a World Tourism Day conference in Tróia, he said AI can play a key role in streamlining decision-making while maintaining human oversight at the final stage.

Matias explained that the reform will reallocate staff from routine tasks to work of higher value, while introducing a system of prior notifications.

Under the plan, citizens and businesses in Portugal will be allowed to begin most activities without a licence, with tacit approval granted if the administration fails to respond within set deadlines.
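
The tacit-approval mechanism can be sketched in a few lines of code. The procedure names and deadline lengths below are hypothetical illustrations, not figures from the Portuguese reform:

```python
from datetime import date, timedelta

# Hypothetical deadlines (in days) per procedure type; illustrative only,
# not taken from the reform text.
DEADLINES = {"restaurant_licence": 30, "construction_permit": 90}

def tacit_approval(procedure: str, filed_on: date, today: date,
                   responded: bool) -> bool:
    """True if administrative silence past the deadline counts as approval."""
    deadline = filed_on + timedelta(days=DEADLINES[procedure])
    return (not responded) and today > deadline

# A filing from 40 days ago with no response exceeds a 30-day deadline.
print(tacit_approval("restaurant_licence",
                     date(2025, 1, 1), date(2025, 2, 10), responded=False))
```

The key design point is that the burden shifts to the administration: inaction, rather than action, produces the legal effect.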

The minister said the reforms will be tied to strict accountability measures, emphasising a ‘trust contract’ between citizens, businesses and the public administration. He argued the initiative will not only speed up processes but also foster greater efficiency and responsibility across government services.

OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, some praising its technical achievement while others worry it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Samsung joins OpenAI for AI data centre push

Samsung Electronics, alongside OpenAI, has signed a letter of intent to collaborate on AI data centre infrastructure. The partnership leverages Samsung’s expertise in semiconductors, cloud services, and shipbuilding. Combining these strengths aims to accelerate advancements in global AI technology.

Samsung Electronics will provide energy-efficient DRAM for OpenAI’s Stargate, meeting a projected demand of 900,000 wafers monthly. Advanced chip packaging and heterogeneous integration further enhance Samsung’s ability to deliver tailored semiconductor solutions for AI workflows.

Samsung SDS will design and operate Stargate AI data centres while offering enterprise AI services, including ChatGPT integration for Korean businesses. Meanwhile, Samsung C&T and Samsung Heavy Industries will explore floating data centres to address land scarcity and reduce emissions.

Signed in Seoul, the agreement positions Samsung to support Korea’s ambition to rank among the top three AI nations globally. Broader adoption of ChatGPT within Samsung’s operations will also drive workplace AI transformation.

Instagram head explains why ads feel like eavesdropping

Instagram head Adam Mosseri has denied long-standing rumours that the platform secretly listens to private conversations to deliver targeted ads. In a video he described as ‘myth busting’, Mosseri said Instagram does not use the phone’s microphone to eavesdrop on users.

He argued that such surveillance would not only be a severe breach of privacy but would also quickly drain phone batteries and trigger visible microphone indicators.

Instead, Mosseri outlined four reasons why adverts may appear suspiciously relevant: online searches and browsing history, the influence of friends’ online behaviour, rapid scrolling that leaves subconscious impressions, and plain coincidence.

According to Mosseri, Instagram users may mistake targeted advertising for surveillance because algorithms incorporate browsing data from advertisers, friends’ interests, and shared patterns across users.

He stressed that the perception of being overheard is often the result of ad targeting mechanics rather than eavesdropping.
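
As a rough illustration of the mechanics Mosseri describes, a toy scoring function might blend those signals with fixed weights. The feature names and weights here are invented for illustration, not Instagram’s actual system:

```python
# Illustrative only: a toy relevance score combining the signal types Mosseri
# describes. The weights and feature names are invented, not Instagram's.
def ad_relevance(browsing_match: float, friend_interest: float,
                 impression_trace: float,
                 w=(0.5, 0.3, 0.2)) -> float:
    """Weighted blend of targeting signals, each assumed to lie in [0, 1]."""
    signals = (browsing_match, friend_interest, impression_trace)
    return sum(wi * si for wi, si in zip(w, signals))

# An ad the user never searched for can still score highly if friends
# engaged with it, which is why targeting can feel like eavesdropping.
print(round(ad_relevance(0.1, 0.9, 0.6), 2))
```

The point of the sketch is that no audio signal is needed: correlated behaviour across a social graph is enough to produce uncannily relevant ads.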

Despite his explanation, Mosseri admitted the rumour is unlikely to disappear. Many viewers of his video remained sceptical, with some comments suggesting his denial only reinforced their suspicions about how social media platforms operate.

Microsoft boosts productivity with AI-powered subscriptions

Microsoft has enhanced its Microsoft 365 subscriptions by deeply integrating Copilot, its AI assistant, into apps like Word, Excel, and Outlook. A new Microsoft 365 Premium plan, priced at £19.99 monthly, combines advanced AI features with productivity tools.

The plan targets professionals, entrepreneurs, and families seeking to streamline tasks efficiently.

Microsoft 365 Personal and Family subscribers gain higher usage limits for Copilot features like image generation and deep research at no extra cost. Copilot Chat, now available across these apps, assists with drafting, analysis, and automation.

These updates aim to embed AI seamlessly into daily workflows.

Meanwhile, Microsoft’s Frontier programme offers subscribers access to experimental AI tools, such as Office Agent, enhancing productivity. A global student offer provides free Microsoft 365 Personal for a year.

Fresh icons for Word, Excel, and other apps highlight Microsoft’s AI-driven evolution. Secure workplace AI use, backed by enterprise data protection, ensures compliance and safety. These innovations establish Microsoft 365 as a leader in AI-powered productivity.

AI tool detects tiny brain lesions, offering hope of epilepsy cure

Australian researchers have developed an AI tool that can identify tiny brain lesions in children with epilepsy, a breakthrough they say could enable faster diagnoses and pave the way for potential cures.

Scientists from the Murdoch Children’s Research Institute and The Royal Children’s Hospital designed the ‘AI epilepsy detective’ to detect lesions as small as a blueberry in up to 94 percent of cases. These cortical dysplasias are often invisible to doctors reviewing MRI scans, with around 80 percent of cases previously missed during human examination.

In a study published in Epilepsia, the team tested the tool on 71 children and 23 adults with focal epilepsy. Seventeen children were part of the test group, and 12 underwent surgery after the lesions were identified using the AI. Eleven are now seizure-free.

Lead researcher Dr Emma Macdonald-Laurs said earlier lesion identification can speed surgery referrals and improve outcomes. ‘Identifying the cause early lets us tailor treatment options and helps neurosurgeons plan and navigate surgery,’ she explained. ‘More accurate imaging allows neurosurgeons to develop a safer surgical roadmap and avoid removing healthy brain tissue.’

Brain lesions are one of the most common causes of drug-resistant seizures, yet they can be challenging to detect using conventional imaging techniques. The researchers now hope to expand the use of their AI tool across paediatric hospitals in Australia with additional funding.

One child, five-year-old Royal, experienced frequent seizures before doctors using the tool identified and removed the lesion responsible. His mother said he is seizure-free and has returned to his ‘calm, friendly, and patient’ self.

How OpenAI designs Sora’s recommendation feed for creativity and safety

OpenAI outlines the core principles behind Sora’s content feed in its Sora Feed Philosophy document. The company states that the feed is designed to spark creativity, foster connections, and maintain a safe user environment.

To achieve these goals, OpenAI says it prioritises creativity over passive consumption. The ranking is steered not simply for engagement, but to encourage active participation. Users can also influence what they see via steerable ranking controls.

Another guiding principle is putting users in control. For instance, parental settings let caretakers turn off feed personalisation or continuous scroll for teen accounts.

OpenAI also emphasises connection. The feed is biased toward content from people you know or connect with, rather than purely global content, so the experience feels more communal.

In terms of safety and expression, OpenAI embeds guardrails at the content creation level. Because every post is generated within Sora, the system can block disallowed content before it appears.

The feed layers additional filtering, removing or deprioritising harmful or unsafe material (e.g. violent, sexual, hate, self-harm content). At the same time, the design aims not to over-censor, allowing space for genuine expression and experimentation.

On how the feed works, OpenAI says it considers signals like user activity (likes, comments, remixes), location data, ChatGPT history (unless turned off), engagement metrics, and author-level data (e.g. follower counts). Safety signals also weigh in to suppress or filter content flagged as inappropriate.
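
The signal-weighted ranking OpenAI describes can be sketched as a toy scoring function. The fields, weights, and suppression rule below are assumptions for illustration, not Sora’s implementation:

```python
from dataclasses import dataclass

# A minimal sketch in the spirit of the feed principles OpenAI describes;
# the fields and weights are invented, not Sora's code.
@dataclass
class Post:
    likes: int
    remixes: int
    from_contact: bool    # connection is prioritised over global content
    safety_flagged: bool  # safety signals suppress flagged content

def rank_score(p: Post) -> float:
    if p.safety_flagged:
        return float("-inf")               # filtered out of the feed entirely
    score = 1.0 * p.likes + 3.0 * p.remixes  # remixing = active participation
    if p.from_contact:
        score *= 2.0                       # bias toward people you know
    return score

posts = [Post(10, 0, False, False), Post(2, 3, True, False),
         Post(50, 5, False, True)]
feed = sorted(posts, key=rank_score, reverse=True)
```

Note how weighting remixes above likes, as in this sketch, operationalises the stated preference for active participation over passive consumption.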

OpenAI describes the feed as a ‘living, breathing’ system. It expects to update and refine algorithms based on user behaviour and feedback while staying aligned with its founding principles.

Microsoft evolves Sentinel into agentic defence platform

Microsoft is transforming Sentinel from a traditional SIEM into a unified defence platform for the agentic AI era. It now incorporates features such as a data lake, semantic graphs and a Model Context Protocol (MCP) server to enable intelligent agents to reason over security data.

Sentinel’s enhancements allow defenders to combine structured, semi-structured data into vectorised, graph-based relationships. With that, AI agents grounded in Security Copilot and custom tools can automate triage, correlate alerts, reason about attack paths, and initiate response actions, while keeping human oversight.

The platform supports extensibility through open agent APIs, enabling partners and organisations to deploy custom agents through the MCP server.

Microsoft also adds protections for AI agents, such as prompt-injection resilience, task adherence controls, PII guardrails, and identity controls for agent estates. The evolution aims to shift cybersecurity from reactive to predictive operations.

Grok 4 launches on Azure with advanced reasoning features

Microsoft has announced that Grok 4, the latest large language model from Elon Musk’s xAI, is now available in Azure AI Foundry. The collaboration aims to deliver frontier-level reasoning capabilities with enterprise-grade safety and control.

Grok 4 features a 128,000-token context window, integrated web search, and native tool use. According to Microsoft, it excels at first-principles reasoning, handling complex tasks in science, maths, and logic. The model was trained on xAI’s Colossus supercomputer.

Azure says the model can analyse long documents, code repositories, and academic texts simultaneously, reducing the need to split inputs. It also incorporates external data for real-time responses, though Microsoft cautions that outputs should be verified against reliable sources.

The platform includes Azure AI Content Safety by default, and Microsoft stresses responsible use with ongoing monitoring. Pricing starts at $5.5 per million input tokens and $27.5 per million output tokens.
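
At those rates, per-request cost is simple arithmetic. The token counts in the example are illustrative, not Azure figures:

```python
# Published Azure rates for Grok 4:
# $5.5 per million input tokens, $27.5 per million output tokens.
INPUT_RATE = 5.5 / 1_000_000
OUTPUT_RATE = 27.5 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the published per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. summarising a 100k-token document into a 2k-token answer
# (example token counts chosen for illustration):
print(round(request_cost(100_000, 2_000), 3))
```

Because output tokens cost five times as much as input tokens, long-document analysis, which the large context window enables, is relatively cheap compared with generating long responses.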
