A new Japan Economic Blueprint released by OpenAI sets out how AI can power innovation, competitiveness, and long-term prosperity across the country. The plan estimates that AI could add more than ¥100 trillion to Japan’s economy and raise GDP by up to 16%.
Centred on inclusive access, infrastructure, and education, the Blueprint calls for equal AI opportunities for citizens and small businesses, national investment in semiconductors and renewable energy, and expanded lifelong learning to build an adaptive workforce.
AI is already reshaping Japanese industries from manufacturing and healthcare to education and public administration. Factories use AI to cut inspection costs, schools use ChatGPT Edu for personalised teaching, and cities from Saitama to Fukuoka employ AI to enhance local services.
OpenAI suggests that Japan’s focus on ethical and human-centred innovation could make it a model for responsible AI governance. By aligning digital and green priorities, the report envisions technology driving creativity, equality, and shared prosperity across generations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The plaintiffs argue that platforms including Facebook, Instagram, TikTok and Snapchat failed to protect young users, particularly through weak parental controls and design choices that encourage harmful usage patterns. They contend that the executives and companies neglected risks in favour of growth and profits.
Meta had argued that such platforms are shielded from liability under US federal law (Section 230) and that high-level executives should not be dragged into testimony. But the judge rejected those defences, saying that hearing directly from executives is integral to assessing accountability and proving claims of negligence.
Legal experts say the decision marks an inflection point: social media’s architecture and leadership may now be put under the microscope in ways previously reserved for sectors like tobacco and pharmaceuticals. The trial could set a precedent for how tech chief executives are held personally responsible for harms tied to platform design.
Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.
France and Denmark back full bans for children below 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.
Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.
YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.
Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.
YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.
The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.
YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.
AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.
Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.
Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.
Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.
Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.
The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.
Dutch officials will study how the gaming platform Roblox affects young users, focusing on safety, mental health, and privacy. The assessment aims to identify both the platform’s benefits and its risks. Authorities say the findings will help guide new policies and support parents in protecting their children online.
Roblox has faced mounting criticism over unsafe content and the presence of online predators. Reports of games containing violent or sexual material have raised alarms among parents and child protection groups.
The US state of Louisiana recently sued Roblox, alleging that it enabled systemic child exploitation through negligence. Dutch experts argue that similar concerns justify a thorough review in the Netherlands.
Previous Dutch investigations have examined platforms such as Instagram, TikTok, and Snapchat under similar children’s rights frameworks. Policymakers hope the Roblox review will set clearer standards for digital child safety across Europe.
Under a deal with the American Federation of Teachers (AFT) in the United States, Microsoft will contribute $12.5 million over five years, OpenAI will provide $8 million plus $2 million in technical resources, and Anthropic has pledged $500,000. The AFT plans to build AI training hubs, including one in New York, and aims to train around 400,000 teachers over five years.
At a workshop in San Antonio, dozens of teachers used AI tools such as ChatGPT, Google’s Gemini and Microsoft Copilot to generate lesson plans, podcasts and bilingual flashcards. One teacher noted how quickly AI could generate materials: ‘It can save you so much time.’
However, the initiative raises critical questions. Educators expressed concerns about being replaced by AI, while unions emphasise that teachers must lead training content and maintain control over learning. Technology companies see this as a way to expand into education, but also face scrutiny over influence and the implications for teaching practice.
As schools increasingly adopt AI tools, experts say training must go beyond technical skills to cover ethical use, student data protection and critical thinking. The reforms reflect a broader push to prepare both teachers and students for a future defined by AI.
Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.
The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.
Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.
Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.
Researchers at Penn State have developed an AI model that measures children’s bite rate during meals, aiming to address a key risk factor for obesity. Eating quickly hinders fullness signals and, combined with larger bites, increases the risk of obesity.
The AI system, named ByteTrack, was trained using over 1,400 minutes of video from a study of 94 children aged seven to nine. It recognises children’s faces with 97% accuracy and detects bites at about 70% of the rate achieved by human observers.
Although the system requires further refinement, the pilot study shows promise for large-scale research and potential real-world applications. With further training, ByteTrack could become a smartphone app alerting children when they eat too quickly to encourage healthier habits.
The research was funded by the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of General Medical Sciences, and Penn State’s computational and clinical research institutes.
Renew Europe is urging the European Commission to deploy its legal tools, including the Digital Services Act (DSA), GDPR and the AI Act, to curb ‘addictive design’ and protect young people’s mental health, as evidence from the Commission’s Joint Research Centre shows intensive social media use among adolescents.
Momentum is building across Brussels and the Member States. The EU digital ministers endorsed the ‘Jutland Declaration’ on child safety online. The push comes after von der Leyen’s call for tougher limits on children’s social media use in her State of the Union address and the Commission’s publication of DSA guidelines for platforms on minor protection.
Renew wants clearer rules against dark patterns and mandatory child-safe defaults such as limiting night-time notifications, switching off autoplay, banning screenshots of minors’ content, and removing filters linked to body-image risks.
The group also calls for robust, privacy-preserving age checks and regular updates to DSA guidance, alongside stronger enforcement powers for the national Digital Services Coordinators. Further action may come via the Digital Fairness Act, which targets addictive design and misleading influencer practices and is out for consultation until 24 October 2025.