A new Japan Economic Blueprint released by OpenAI sets out how AI can power innovation, competitiveness, and long-term prosperity across the country. The plan estimates that AI could add more than ¥100 trillion to Japan’s economy and raise GDP by up to 16%.
Centred on inclusive access, infrastructure, and education, the Blueprint calls for equal AI opportunities for citizens and small businesses, national investment in semiconductors and renewable energy, and expanded lifelong learning to build an adaptive workforce.
AI is already reshaping Japanese industries from manufacturing and healthcare to education and public administration. Factories use AI to cut inspection costs, schools use ChatGPT Edu for personalised teaching, and cities from Saitama to Fukuoka employ AI to enhance local services.
OpenAI suggests that Japan’s focus on ethical, human-centred innovation could make it a model for responsible AI governance. By aligning digital and green priorities, the report envisions technology driving creativity, equality, and shared prosperity across generations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nearly half of EU adults lack basic digital skills, yet most jobs demand them. Eurostat reports only 56% have at least basic proficiency. EU Code Week spotlights the urgency for digital literacy and inclusion.
The Digital Education Action Plan aims to modernise curricula, improve infrastructure, and train teachers. EU policymakers target 80% of adults with basic skills by 2030. Midway progress suggests stronger national action is still required.
Progress remains uneven across regions, with rural connectivity still lagging in places. Belgium introduced a school smartphone ban across Flanders on 1 September to curb distractions. Educators now balance classroom technology with attention and safety.
Brussels proposed a Union of Skills strategy to align education and competitiveness. The EU also earmarked fresh funding for AI, cybersecurity, and digital skills. Families and schools are urged to develop unplugged problem-solving alongside classroom learning.
The case arose from a German labour dispute where an employer accessed a former worker’s eBay account to prove alleged misconduct. The national court asked the CJEU whether evidence gathered unlawfully could still be lawfully processed in judicial proceedings.
The Advocate General stated that GDPR principles, including storage limitation and lawfulness, apply equally to courts. Yet no absolute ban prevents judges from handling unlawfully obtained data if national law provides safeguards consistent with EU fundamental rights.
EU law leaves rules on evidence admissibility to member states, provided fairness, proportionality, and necessity are respected. The opinion emphasises that courts must balance privacy rights with their duty to determine the truth.
Irish presidential candidate Catherine Connolly condemned a deepfake AI video that falsely announced her withdrawal from the race. The clip, designed to resemble an RTÉ News broadcast, spread online before being reported and removed from major social media platforms.
Connolly said the video was a disgraceful effort to mislead voters and damage democracy. Her campaign team filed a complaint with the Irish Electoral Commission and requested that all copies be clearly labelled as fake.
Experts at Dublin City University identified slight distortions in speech and lighting as signs of AI manipulation. They warned that the rapid spread of synthetic videos underscores weak content moderation by online platforms.
Connolly urged the public not to share the clip and to respond through civic participation. Authorities are monitoring digital interference as Ireland prepares for its presidential vote on Friday.
More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio, have signed a joint statement urging a global slowdown in the development of artificial superintelligence.
The open letter warns that unchecked progress could lead to human economic displacement, loss of freedom, and even extinction.
The appeal follows growing anxiety that the rush toward machines surpassing human cognition could spiral beyond human control. Alan Turing predicted as early as the 1950s that machines might eventually dominate by default, a view that continues to resonate among AI researchers today.
Despite such fears, global powers still view the AI race as essential for national security and technological advancement.
Tech firms like Meta are also exploiting the superintelligence label to promote their most ambitious models, while leaders such as OpenAI’s Sam Altman and Microsoft’s Mustafa Suleyman have previously acknowledged the existential risks of developing systems beyond human understanding.
The statement calls for an international prohibition on superintelligence research until there is a broad scientific consensus on safety and public approval.
Its signatories include technologists, academics, religious figures, and cultural personalities, reflecting a rare cross-sector demand for restraint in an era defined by rapid automation.
OpenAI’s ChatGPT could soon face the EU’s strictest platform regulations under the Digital Services Act (DSA), after surpassing 120 million monthly users in Europe.
The milestone places OpenAI’s chatbot above the 45 million-user threshold that triggers heightened oversight.
The DSA imposes stricter obligations on major platforms such as Meta, TikTok, and Amazon, requiring greater transparency, risk assessments, and annual fees to fund EU supervision.
The European Commission confirmed it has begun assessing ChatGPT’s eligibility for ‘very large online platform’ status, which would bring the total number of regulated platforms to 26.
OpenAI reported that its ChatGPT search function alone had 120.4 million monthly active users across the EU in the six months ending 30 September 2025. Globally, the chatbot now counts around 700 million weekly users.
If designated under the DSA, ChatGPT would be required to curb illegal and harmful content more rigorously and demonstrate how its algorithms handle information, marking the EU’s most direct regulatory test yet for generative AI.
The European Commission has launched its Cloud Sovereignty Framework to assess the independence of cloud services. The initiative defines clear criteria and scoring methods for evaluating how providers meet EU sovereignty standards.
Under the framework, the Sovereign European Assurance Level, or SEAL, will rank services by compliance. Assessments cover strategic, legal, operational, and technological aspects, aiming to strengthen data security and reduce reliance on foreign systems.
Officials say the framework will guide both public authorities and private companies in choosing secure cloud options. It also supports the EU’s broader goal of achieving technological autonomy and protecting sensitive information.
The Commission’s move follows growing concern over extra-EU data transfers and third-country surveillance. Industry observers view it as a significant step toward Europe’s ambition for trusted, sovereign digital infrastructure.
Cloudflare’s chief executive Matthew Prince has urged the UK regulator to curb Google’s AI practices. He met with the Competition and Markets Authority (CMA) in London to argue that Google’s bundled crawlers give it excessive power.
Prince said Google uses the same web crawler to gather data for both search and AI products. Blocking the crawler, he added, can also disrupt advertising systems, leaving websites financially exposed.
Cloudflare, which supplies network services to most major AI companies, has proposed separating Google’s AI and search crawlers. Prince believes the change would create fairer access to online content for smaller AI developers.
He also provided data to the UK CMA showing why rivals cannot easily replicate Google’s infrastructure. Media groups have echoed his concerns, warning that Google’s dominance risks deepening inequalities across the AI ecosystem.
The cost of JLR’s cyberattack is pegged at £1.9bn, making it the UK’s costliest on record. Production paused for five weeks from 1 September across Solihull, Halewood, and Wolverhampton. The Cyber Monitoring Centre (CMC) says 5,000 firms were hit, with full recovery expected by January 2026.
JLR is restoring manufacturing in phases and declined to comment on the estimate. UK dealer systems were intermittently down, orders were cancelled or delayed, and suppliers faced uncertainty. More than half of the losses fall on JLR; the remainder hits its supply chain and local economies.
The CMC classed the incident as Category 3 on its five-level scale. Chair Ciaran Martin warned organisations to harden critical networks and plan for disruption. The CMC’s assessment draws on public data, surveys, and interviews rather than on disclosed forensic evidence.
Researchers say costs hinge on the attack type, which JLR has not confirmed. Data theft is faster to recover from than ransomware; wiper malware would be worse. A claim of responsibility by a hacker group linked to earlier high-profile breaches remains unverified.
The CMC’s estimate excludes any ransom, which could add tens of millions of dollars. Earlier this year, retail hacks at M&S, the Co-op, and Harrods were tagged Category 2. Those were pegged at £270m–£440m, below the £506m cited by some victims.
Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.
France and Denmark back full bans for children below 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.
Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.