Japan pushes domestic AI to boost national security

Japan will prioritise home-grown AI technology in its new national strategy, aiming to strengthen national security and reduce dependence on foreign systems. The government says developing domestic expertise is essential to prevent overreliance on US and Chinese AI models.

Officials revealed that the plan will include better pay and conditions to attract AI professionals and foster collaboration among universities, research institutes and businesses. Japan will also accelerate work on a next-generation supercomputer to succeed the current Fugaku model.

Prime Minister Shigeru Ishiba has said Japan must catch up with global leaders such as the US and reverse its slow progress in AI development. Only around a quarter of people in Japan reported using generative AI last year, compared with nearly 70 percent in the United States and over 80 percent in China.

The government’s strategy will also address the risks linked to AI, including misinformation, disinformation and cyberattacks. Officials say the goal is to make Japan the world’s most supportive environment for AI innovation while safeguarding security and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots linked to US teen suicides spark legal action

Families in the US are suing AI developers after tragic cases in which teenagers allegedly took their own lives following exchanges with chatbots. The lawsuits accuse platforms such as Character.AI and OpenAI’s ChatGPT of fostering dangerous emotional dependencies with young users.

One case involves 14-year-old Sewell Setzer, whose mother says he fell in love with a chatbot modelled on a Game of Thrones character. Their conversations reportedly turned manipulative before his death, prompting legal action against Character.AI.

Another family claims ChatGPT gave their son advice on suicide methods, leading to a similar tragedy. The companies have expressed sympathy and strengthened safety measures, introducing age-based restrictions, parental controls, and clearer disclaimers stating that chatbots are not real people.

Experts warn that chatbots are repeating social media’s early mistakes, exploiting emotional vulnerability to maximise engagement. Lawmakers in California are preparing new rules to restrict AI tools that simulate human relationships with minors, aiming to prevent manipulation and psychological harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon expands Project Kuiper with new satellite launches

Amazon’s Project Kuiper is moving ahead with its global satellite internet network, adding another 24 satellites to orbit as part of its ongoing deployment plan.

The latest mission, known as KF-03, is scheduled for today, launching on a SpaceX Falcon 9 rocket from Cape Canaveral Space Force Station in Florida.

The KF-03 launch will bring the total number of Kuiper satellites to 153, furthering Amazon's plan to build a low Earth orbit constellation of more than 3,200 spacecraft.

Once deployed at an altitude of 289 miles, the satellites will undergo health checks before being raised to their operational orbit of 392 miles. The mission marks Amazon’s third collaboration with SpaceX as part of over 80 launches planned for the project.
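
For readers who want to check the orbital figures, here is a minimal back-of-the-envelope sketch in Python. It converts the reported altitudes to kilometres and estimates the orbital period at the operational altitude using Kepler's third law for a circular orbit. The 3,236-satellite total used for the deployment fraction is an assumption based on the commonly cited FCC authorisation; the article itself says only 'more than 3,200'.

```python
import math

MI_TO_KM = 1.609344      # statute miles to kilometres
R_EARTH_KM = 6371.0      # mean Earth radius (assumed spherical)
MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2

deploy_alt_km = 289 * MI_TO_KM  # deployment altitude, ~465 km
op_alt_km = 392 * MI_TO_KM      # operational altitude, ~631 km

# Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
a = R_EARTH_KM + op_alt_km
period_min = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# 3,236 is the commonly cited FCC-authorised constellation size
# (an assumption here; the article says only "more than 3,200").
deployed_fraction = 153 / 3236

print(f"Deployment altitude: {deploy_alt_km:.0f} km")                   # ~465 km
print(f"Operational altitude: {op_alt_km:.0f} km")                      # ~631 km
print(f"Orbital period at operational altitude: {period_min:.0f} min")  # ~97 min
print(f"Constellation deployed after KF-03: {deployed_fraction:.1%}")   # ~4.7%
```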

Earlier missions in 2025 included deployments using both SpaceX Falcon 9 and ULA Atlas V rockets. The first launch in April carried 27 satellites, followed by additional missions in June, July, August and September.

Each operation has strengthened the foundation of Kuiper’s network, which aims to provide reliable internet connectivity to customers and communities worldwide.

Amazon’s Project Kuiper represents a major investment in global connectivity infrastructure, with its Kennedy Space Center facility in Florida supporting multiple launch campaigns simultaneously.

Once complete, the system is expected to compete with other satellite internet networks by expanding digital access across underserved regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy bans deepfake app that undresses people

Italy’s data protection authority has ordered an immediate suspension of the app Clothoff, which uses AI to generate fake nude images of real people. The company behind it, based in the British Virgin Islands, is now barred from processing personal data of Italian users.

The watchdog found that Clothoff enables anyone, including minors, to upload photos and create sexually explicit or pornographic deepfakes. The app fails to verify consent from those depicted and offers no warning that the images are artificially generated.

The regulator described the measure as urgent, citing serious risks to human dignity, privacy, and data protection, particularly for children and teenagers. It has also launched a wider investigation into similar so-called ‘nudifying’ apps that exploit AI technology.

Italian media have reported a surge in cases where manipulated images are used for harassment and online abuse, prompting growing social alarm. Authorities say they intend to take further steps to protect individuals from deepfake exploitation and strengthen safeguards around AI image tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Unapproved AI tools boom in UK workplaces

Microsoft research reveals that 71% of UK employees use unapproved AI tools at work, with 51% doing so weekly. Organisations face heightened risks to data privacy and cybersecurity as sensitive information enters unregulated platforms.

Despite these dangers, awareness remains low, as only 32% express concern over data privacy and 29% over IT system vulnerabilities.

Workers favour these unapproved 'shadow AI' tools for their simplicity, with 41% citing familiarity from personal use and 28% noting the absence of approved alternatives at their firms. Common applications include drafting communications (49%), creating reports or presentations (40%), and handling finance tasks (22%).

Generative AI assistants now permeate the workforce, saving an average of 7.75 hours weekly per user, equivalent to 12.1 billion hours annually across the economy, valued at £208 billion.
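
Those headline figures can be sanity-checked with simple arithmetic. The sketch below back-calculates the number of users and the value per saved hour implied by the reported numbers, assuming 52 working weeks per year (an assumption, not stated in the research).

```python
# Back-of-the-envelope check on the reported Microsoft figures.
hours_saved_per_week = 7.75    # reported average saving per user
total_hours_per_year = 12.1e9  # reported: 12.1 billion hours annually
total_value_gbp = 208e9        # reported: £208 billion

WEEKS_PER_YEAR = 52  # assumption: savings accrue every week of the year

implied_users = total_hours_per_year / (hours_saved_per_week * WEEKS_PER_YEAR)
implied_value_per_hour = total_value_gbp / total_hours_per_year

print(f"Implied users: {implied_users / 1e6:.1f} million")             # ~30.0 million
print(f"Implied value per saved hour: £{implied_value_per_hour:.2f}")  # ~£17.19
```

The implied figure of roughly 30 million users is broadly in line with the size of the UK workforce, suggesting the savings were scaled economy-wide.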

Sector leaders in IT, telecoms, sales, media, marketing, architecture, engineering, and finance report the highest adoption rates. Employees plan to redirect saved time towards better work-life balance (37%), skill development (31%), and more fulfilling tasks (28%).

Darren Hardman, CEO of Microsoft UK and Ireland, urges businesses to prioritise enterprise-grade tools that blend productivity with robust safeguards.

Optimism about AI has climbed, with 57% of staff feeling excited or confident, up from 34% in January 2025. Familiarity grows too, as confusion over starting points drops from 44% to 36%, and clarity on organisational AI strategies rises from 24% to 43%.

Firms at the frontier of AI adoption report roughly double the share of employees who say they are thriving, in line with global trends in which 82% of leaders regard 2025 as a pivotal year for AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google cautions Australia on youth social media ban proposal

US tech giant Google, which also owns YouTube, has reiterated its commitment to children's online safety while cautioning against Australia's proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google's 23-year presence in Australia, noting that the company contributed over $53 billion to the economy in 2024, while YouTube's creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Purple Fest highlights AI for disabilities

Entrepreneurs at the International Purple Fest in Goa, India, held from 9 to 12 October 2025, showcased how AI is transforming assistive technologies. Innovations like conversational screen readers, adaptive dashboards, and real-time captioning are empowering millions of people with disabilities worldwide.

Designed with input from those with lived experience, these tools turn barriers into opportunities for learning, working, and leading independently.

Surashree Rahane, born with clubfoot and polymelia, founded Yearbook Canvas and champions inclusive AI. Collaborating with Newton School of Technology near New Delhi, she develops adaptive learning platforms tailored to diverse learners.

‘AI can democratise education,’ she stated, ‘but only if trained to avoid perpetuating biases.’ Her work addresses structural barriers like inaccessible systems and biased funding networks.

Prateek Madhav, CEO of AssisTech Foundation, described AI as ‘the great equaliser,’ creating jobs through innovations like voice-to-speech tools and gesture-controlled wheelchairs.

Ketan Kothari, a consultant at Xavier’s Resource Centre in Mumbai, relies on AI for independent work, using live captions and visual description apps. Such advancements highlight AI’s role in fostering agency and inclusion across diverse needs.

Hosted by Goa’s Department for Empowerment of Persons with Disabilities, UN India, and the Ministry of Social Justice, Purple Fest promotes universal design.

Tshering Dema from the UN Development Coordination Office noted that inclusion requires a global mindset shift. ‘The future of work must be co-designed with people,’ she said, reflecting a worldwide transition towards accessibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands safeguards economic security through Nexperia intervention

The Dutch Minister of Economic Affairs has invoked the Goods Availability Act in response to serious governance issues at semiconductor manufacturer Nexperia.

The measure, announced on 30 September 2025, seeks to ensure the continued availability of the company’s products in the event of an emergency. Nexperia, headquartered in Nijmegen, will be allowed to maintain its normal production activities.

The decision follows recent indications of significant management deficiencies and of actions within Nexperia that could affect the safeguarding of vital technological knowledge and capacity in the Netherlands and across Europe.

Authorities view these capabilities as essential for economic security, as Nexperia supplies chips to the automotive and consumer electronics industries.

Under the order, the Minister of Economic Affairs may block or reverse company decisions considered harmful to Nexperia’s long-term stability or to the preservation of Europe’s semiconductor value chain.

The Dutch government described the use of the Goods Availability Act as exceptional, citing the urgency and scale of the governance concerns.

Officials emphasised that the action applies only to Nexperia and does not target other companies, sectors, or countries. The decision may be contested through the courts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICE-tracking apps pulled from the App Store

Apple has taken down several mobile apps used to track US Immigration and Customs Enforcement (ICE) activity, sparking backlash from developers and digital rights advocates. The removals follow reported pressure from the US Department of Justice, which has cited safety and legal concerns.

One affected app, Eyes Up, was designed to alert users to ICE raids and detention locations. Its developer, identified only as Mark for safety reasons, said he believes the decision was politically motivated and vowed to contest it.

The takedown reflects a wider debate over whether app stores should host software linked to law enforcement monitoring or protest activity. Developers argue their tools support community safety and transparency, while regulators say such apps could risk interference with federal operations.

Apple has not provided detailed reasoning for its decision beyond referencing its developer guidelines. Google has also reportedly removed similar apps from its Play Store, citing policy compliance. Both companies face scrutiny over how content moderation intersects with political and civil rights issues.

Civil liberties groups warn that the decision could set a precedent limiting speech and digital activism in the US. The affected developers have said they will continue to distribute their apps through alternative channels while challenging the removals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!