In 2025 alone, €1.6 billion is being committed to AI in Germany as part of its AI action plan.
The budget, managed by the Federal Ministry of Research, Technology and Space, has grown more than twentyfold since 2017, underlining Berlin’s ambition to position the country as a European hub for AI.
However, experts warn that the financial returns remain uncertain. Rainer Rehak of the Weizenbaum Institute argues that AI lacks a clear business model, calling the current trend an ‘investment game’ fuelled by speculation.
He cautioned that if real profits do not materialise, the sector could face a bubble similar to past technology hype cycles. Even OpenAI chief Sam Altman has warned of unsustainable levels of investment in AI.
Germany faces significant challenges in computing capacity. A study by the eco Internet Industry Association found that the country’s infrastructure may only expand to 3.7 gigawatts by 2030, while demand from industry could exceed 12 gigawatts.
Deloitte forecasts a capacity gap of around 50% within five years, with the US already maintaining more than twenty times Germany’s capacity. Without massive new investments in data centres, Germany risks lagging further behind.
Some analysts believe the country needs a different approach. Professor Oliver Thomas of Osnabrück University argues that while large-scale AI models are struggling to find profitability, small and medium-sized enterprises could unlock practical applications.
He advocates for speeding up the cycle from research to commercialisation, ensuring that AI is integrated into industry more quickly.
Germany has a history of pioneering research in areas such as computer technology, the MP3 audio format, and virtual and augmented reality, but much of that innovation was commercialised abroad.
Thomas suggests focusing less on ‘made in Germany’ AI models and more on leveraging existing technologies from global providers, while maintaining digital sovereignty through strong policy frameworks.
Looking ahead, experts see AI becoming deeply integrated into the workplace. AI assistants may soon handle administrative workflows, organise communications, and support knowledge-intensive professions.
Small teams equipped with these tools could generate millions in revenue, reshaping the country’s economic landscape.
Germany’s heavy spending signals a long-term bet on AI. But with questions about profitability, computing capacity, and competition from the US, the path forward will depend on whether investments can translate into sustainable business models and practical use cases across the economy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and claim to have deleted it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, but Kido did not pay.
Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.
The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.
Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC its infrastructure was not compromised.
Kido confirmed the incident and said it is working with external specialists and the authorities. With no ransom paid and Radiant abandoning its attempt, the hackers appear to have lost money on the operation.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.
The launch has stirred debate among current and former researchers, with some praising its technical achievement and others worrying that it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.
Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.
The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.
The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.
Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.
Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Instagram head Adam Mosseri has denied long-standing rumours that the platform secretly listens to private conversations to deliver targeted ads. In a video he described as ‘myth busting’, Mosseri said Instagram does not use the phone’s microphone to eavesdrop on users.
He argued that such surveillance would not only be a severe breach of privacy but would also quickly drain phone batteries and trigger visible microphone indicators.
Instead, Mosseri outlined four reasons why adverts may appear suspiciously relevant: online searches and browsing history, the influence of friends’ online behaviour, rapid scrolling that leaves subconscious impressions, and plain coincidence.
According to Mosseri, Instagram users may mistake targeted advertising for surveillance because algorithms incorporate browsing data from advertisers, friends’ interests, and shared patterns across users.
He stressed that the perception of being overheard is often the result of ad targeting mechanics rather than eavesdropping.
Despite his explanation, Mosseri admitted the rumour is unlikely to disappear. Many viewers of his video remained sceptical, with some comments suggesting his denial only reinforced their suspicions about how social media platforms operate.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft is transforming Sentinel from a traditional SIEM into a unified defence platform for the agentic AI era. It now incorporates features such as a data lake, semantic graphs and a Model Context Protocol (MCP) server to enable intelligent agents to reason over security data.
Sentinel’s enhancements allow defenders to combine structured and semi-structured data into vectorised, graph-based relationships. On that foundation, AI agents grounded in Security Copilot and custom tools can automate triage, correlate alerts, reason about attack paths, and initiate response actions, while keeping human oversight.
The platform supports extensibility through open agent APIs, enabling partners and organisations to deploy custom agents through the MCP server.
Microsoft also adds protections for AI agents, such as prompt-injection resilience, task adherence controls, PII guardrails, and identity controls for agent estates. The evolution aims to shift cybersecurity from reactive to predictive operations.
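The pattern described above, agents that correlate alerts from a shared data store and surface candidates for action while humans retain oversight, can be illustrated with a minimal sketch. The Python example below is purely hypothetical: the Alert class and the correlate and triage functions are invented for illustration and are not Microsoft Sentinel, Security Copilot, or MCP APIs.

```python
# Hypothetical sketch only: illustrates the general pattern of an AI triage agent
# that groups related security alerts and flags entities for human review.
# All names here (Alert, correlate, triage) are invented; they are NOT
# Microsoft Sentinel or MCP interfaces.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Alert:
    id: str
    entity: str        # e.g. the user account or host involved
    severity: int      # 1 (low) .. 5 (critical)
    description: str

def correlate(alerts: List[Alert]) -> Dict[str, List[Alert]]:
    """Group alerts by entity so related signals can be reasoned about together."""
    grouped: Dict[str, List[Alert]] = {}
    for alert in alerts:
        grouped.setdefault(alert.entity, []).append(alert)
    return grouped

def triage(grouped: Dict[str, List[Alert]], threshold: int = 4) -> List[str]:
    """Flag entities whose combined alerts suggest a possible attack path;
    the response decision is left to a human analyst."""
    flagged = []
    for entity, entity_alerts in grouped.items():
        if max(a.severity for a in entity_alerts) >= threshold or len(entity_alerts) >= 3:
            flagged.append(entity)
    return flagged

if __name__ == "__main__":
    sample = [
        Alert("a1", "user-17", 3, "Impossible travel sign-in"),
        Alert("a2", "user-17", 5, "Privilege escalation attempt"),
        Alert("a3", "host-42", 2, "Unusual outbound traffic"),
    ]
    print(triage(correlate(sample)))   # -> ['user-17']
```

In a real deployment, the correlation and triage logic would be carried out by agents reasoning over the data lake and semantic graph, with response actions subject to the human oversight Microsoft describes.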
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.
The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.
Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.
The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.
Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI is moving from theory to practice in healthcare. Hospitals and clinics are adopting AI to improve diagnostics, automate routine tasks, support overworked staff, and cut costs. A recent GoodFirms survey shows strong confidence that AI will become essential to patient care and health management.
Survey findings reveal that nearly all respondents believe AI will transform healthcare. Robotic surgery, predictive analytics, and diagnostic imaging are gaining momentum, while digital consultations and wearable monitors are expanding patient access.
AI-driven tools are also helping reduce human errors, improve decision-making, and support clinicians with real-time insights.
Overall, the direction is clear: AI is set to be a defining force in healthcare’s future, enabling more efficient, accurate, and equitable systems worldwide.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Caltech physicists have developed a groundbreaking neutral-atom quantum computer, trapping 6,100 caesium atoms as qubits in a single array. Published in Nature, the achievement marks the largest such system to date, surpassing previous arrays limited to hundreds of qubits.
The system maintains exceptional stability, with qubits coherent for 13 seconds and single-qubit operations achieving 99.98% accuracy. Using optical tweezers, researchers move atoms with precision while maintaining their superposition state, essential for reliable quantum computing.
The milestone highlights neutral-atom systems as strong contenders in quantum computing, offering dynamic reconfigurability compared to rigid hardware. The ability to rearrange qubits during computations paves the way for advanced error correction in future systems.
As global efforts intensify to scale quantum machines, Caltech’s work sets a new benchmark. The team aims to advance entanglement for full-scale computations, bringing practical quantum solutions closer for fields like chemistry and materials science.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The University of Pennsylvania’s engineering team has made a breakthrough that could bring the quantum internet much closer to practical use. Researchers have demonstrated that quantum and classical networks can share the same backbone by transmitting quantum signals over standard fibre optic infrastructure using the same Internet Protocol (IP) that powers today’s web.
Their silicon photonics ‘Q-Chip’ achieved over 97% fidelity in real-world field tests, showing that the quantum internet does not necessarily require building entirely new networks from scratch.
That result, while highly technical, has far-reaching implications. Beyond physics and computer science, it raises urgent questions for governance, national infrastructures, and the future of digital societies.
Quantum signals were transmitted as packets with classical headers readable by conventional routers, while the quantum information itself remained intact.
Noise management
The chip compensated for disturbances by analysing the classical header, leaving the quantum payload untouched. Notably, the test ran on a Verizon fibre link between two buildings rather than in a controlled lab.
That distinguishes the experiment from earlier advances, which focused mainly on quantum key distribution (QKD) or specialised lab setups. It points toward a future in which quantum networking and the classical internet coexist and are managed through similar protocols.
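To make the packet idea concrete, the sketch below models a ‘quantum packet’ whose classical header an ordinary router can read while the quantum payload stays opaque. It is an illustrative assumption rather than the Q-Chip design: ClassicalHeader, QuantumPacket, route, and the pilot_tone field are all invented names.

```python
# Illustrative sketch only: models the reported idea of a packet whose classical
# header is readable by conventional IP routers while the quantum payload is
# never measured. Field and function names are invented for illustration and do
# not reproduce the Q-Chip design or any real protocol.

from dataclasses import dataclass

@dataclass(frozen=True)
class ClassicalHeader:
    src: str            # classical address, readable by conventional routers
    dst: str
    pilot_tone: float   # metadata a node can analyse to estimate channel noise

@dataclass(frozen=True)
class QuantumPacket:
    header: ClassicalHeader
    payload_ref: str    # opaque handle to the quantum state; measuring it would destroy superposition

def route(packet: QuantumPacket, routing_table: dict) -> str:
    """A classical node forwards the packet using only the header, never touching the payload."""
    next_hop = routing_table[packet.header.dst]
    # In the reported experiment, noise compensation is inferred from the classical
    # header, so the fragile quantum payload stays untouched in transit.
    return next_hop

if __name__ == "__main__":
    table = {"lab-B": "router-7"}
    pkt = QuantumPacket(ClassicalHeader("lab-A", "lab-B", pilot_tone=0.93), payload_ref="qstate-001")
    print(route(pkt, table))   # -> router-7
```

The point of the sketch is the separation of concerns: classical nodes handle addressing and noise metadata, and only the endpoints ever touch the quantum state.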
Implications for governance and society
Government administration
Governments increasingly rely on digital infrastructure to deliver services, store sensitive records, and conduct diplomacy. The quantum internet could provide secure e-government services resistant to espionage or tampering, protected digital IDs and voting systems that reinforce democratic integrity, and classified communication channels that even future quantum computers cannot decrypt.
That positions quantum networking as a sovereignty tool, not just a scientific advance.
Healthcare
Health systems are frequent targets of cyberattacks. Quantum-secured communication could protect patient records and telemedicine platforms, enable safe data sharing between hospitals and research centres, and support quantum-assisted drug discovery and personalised medicine via distributed quantum computing.
Here, the technology directly impacts citizens’ trust in digital health.
Critical infrastructure and IT systems
National infrastructures, such as energy grids, financial networks, and transport systems, could gain resilience from quantum-secured communication layers.
In addition, quantum-enhanced sensing could provide more reliable navigation independent of GPS, enable early-warning systems for earthquakes or natural disasters, and strengthen resilience against cyber-sabotage of strategic assets.
Citizens and everyday services
For ordinary users, the quantum internet will first be invisible. Their emails, bank transactions, and medical consultations will simply become harder to hack.
Over time, however, quantum-secured platforms may become a market differentiator for banks, telecoms, and healthcare providers.
Citizens and universities may gain remote access to quantum computing resources, democratising advanced research and innovation.
Building a quantum-ready society
The Penn experiment matters because it shows that quantum internet infrastructure can evolve on top of existing systems. For policymakers, this raises several urgent points.
Standardisation
International bodies (IETF, ITU-T, ETSI) will need to define packet structures, error correction, and interoperability rules for quantum-classical networks.
Strategic investment
Countries must decide whether to invest early in pilot testbeds (urban campuses, healthcare systems, or government services).
Cybersecurity planning
Quantum internet deployment should be aligned with the post-quantum cryptography transition, ensuring coherence between classical and quantum security measures.
Public trust
As with any critical infrastructure, clear communication will be needed to explain how quantum-secured systems benefit citizens and why governments are investing in them.
Key takeaways for policymakers
Quantum internet is governance, not just science. The Penn breakthrough shows that quantum signals can run on today’s networks, shifting the conversation from pure research to infrastructure and policy planning.
Governments should treat the quantum internet as a strategic asset, protecting national administrations, elections, and critical services from future cyber threats.
Early adoption in health systems could secure patient data, telemedicine, and medical research, strengthening public trust in digital services.
International cooperation (IETF, ITU-T, ETSI) will be needed to define protocols, interoperability, and security frameworks before large-scale rollouts.
Policymakers should align quantum network deployment with the global transition to post-quantum encryption, ensuring coherence across digital security strategies.
Governments could start with small-scale testbeds (smart cities, e-government nodes, or healthcare networks) to build expertise and shape standards from within.
Why does it matter?
The University of Pennsylvania’s ‘Q-Chip’ is a proof-of-concept that quantum and classical networks can speak the same language. While technical challenges remain, especially around scaling and quantum repeaters, the political and societal questions can no longer be postponed.
The quantum internet is not just a scientific project. It is emerging as a strategic infrastructure for the digital state of the future. Governments, regulators, and international organisations must begin preparing today so that tomorrow’s networks deliver not only speed and efficiency but also trust, sovereignty, and resilience.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Imgur has cut off access for UK users after regulators warned its parent company, MediaLab AI, of a potential fine over child data protection.
Visitors to the platform since 30 September have been met with a notice saying that content is unavailable in their region, with embedded Imgur images on other sites also no longer visible.
The UK’s Information Commissioner’s Office (ICO) began investigating the platform in March, questioning whether it complied with data laws and the Children’s Code.
The regulator said it had issued MediaLab with a notice of intent to fine the company following provisional findings. Officials also emphasised that leaving the UK would not shield Imgur from responsibility for any past breaches.
Some users speculated that the withdrawal was tied to new duties under the Online Safety Act, which requires platforms to check whether visitors are over 18 before allowing access to harmful content.
However, both the ICO and Ofcom said the withdrawal was a commercial decision by Imgur. Other MediaLab services, such as Kik Messenger, continue to operate in the UK with age verification measures in place.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!