Japan aims to boost public AI use

Japan has drafted a new basic programme aimed at dramatically increasing public use of AI, with a target of raising utilisation from 50% to 80%. The government hopes the policy will strengthen domestic AI capabilities and reduce reliance on foreign technologies.

To support innovation, authorities plan to attract roughly ¥1 trillion in private investment, funding research, talent development and the expansion of AI businesses into emerging markets. Officials see AI as a core social infrastructure that supports both intellectual and practical functions.

The draft proposes a unified AI ecosystem where developers, chip makers and cloud providers collaborate to strengthen competitiveness and reduce Japan’s digital trade deficit. AI adoption is also expected to extend across all ministries and government agencies.

Prime Minister Sanae Takaichi has pledged to make Japan the easiest country in the world for AI development and use. The Cabinet is expected to approve the programme before the end of the year, paving the way for accelerated research and public-private investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK lawmakers push for binding rules on advanced AI

Political pressure is building in Westminster as more than 100 parliamentarians call for binding regulation of the most advanced AI systems, arguing that current safeguards lag far behind industry progress.

A cross-party group, supported by former defence and AI ministers, warns that unregulated superintelligent models could threaten national and global security.

The campaign, coordinated by Control AI and backed by tech figures including Skype co-founder Jaan Tallinn, urges Prime Minister Keir Starmer to distance the UK from the US stance against strict federal AI rules.

Experts such as Yoshua Bengio and senior peers argue that governments remain far behind AI developers, leaving companies to set the pace with minimal oversight.

Calls for action come after warnings from frontier AI scientists that the world must decide by 2030 whether to allow highly advanced systems to self-train.

Campaigners want the UK to champion global agreements limiting superintelligence development, establish mandatory testing standards and introduce an independent watchdog to scrutinise AI use in the public sector.

Government officials maintain that AI is already regulated through existing frameworks, though critics say the approach lacks urgency.

Pressure is growing for new, binding rules on the most powerful models, with advocates arguing that rapid advances mean strong safeguards may be needed within the next two years.


Starlink gains ground in South Korea’s telecom market

South Korea has gained nationwide satellite coverage as Starlink enters the market and expands the country’s already advanced connectivity landscape.

The service offers high-speed access through a dense low-Earth orbit (LEO) satellite network and arrives with subscription options for households, mobile users and businesses.

Analysts see meaningful benefits for regions that are difficult to serve through fixed networks, particularly in mountainous areas and offshore locations.

Enterprise interest has grown quickly. Maritime operators moved first, with SK Telink and KT SAT securing contracts as Starlink went live. Large fleets will now adopt satellite links for navigation support, remote management and stronger emergency communication.

The technology has also reached the aviation sector as carriers under Hanjin Group plan to install Starlink across all aircraft, aiming to introduce stable in-flight Wi-Fi from 2026.

Although South Korea’s fibre and 5G networks offer far higher peak speeds, Starlink provides reliability where terrestrial networks cannot operate. Industry observers expect limited uptake from mainstream households but anticipate significant momentum in maritime transport, aviation, construction and energy.

The expansion marks one of Starlink’s most strategic Asia-Pacific moves, driven by industrial demand and early partnerships.


Ireland and Australia deepen cooperation on online safety

Ireland’s online safety regulator has agreed a new partnership with Australia’s eSafety Commissioner to strengthen global approaches to digital harm. The Memorandum of Understanding (MoU) reinforces shared ambitions to improve online protection for children and adults.

The Irish and Australian regulators plan to exchange data, expertise and methodological insights to advance safer digital platforms. Officials describe the arrangement as a way to enhance oversight of systems used to minimise harmful content and promote responsible design.

Leaders from both organisations emphasised the need for accountability across the tech sector. Their comments highlighted efforts to ensure that platforms embed user protection into their product architecture, rather than relying solely on reactive enforcement.

The MoU also opens avenues for collaborative policy development and joint work on education programmes. Officials expect deeper alignment around age assurance technologies and emerging regulatory challenges as online risks continue to evolve.


NSA warns AI poses new risks for operational technology

The US National Security Agency (NSA), together with international partners including Australia’s ACSC, has issued guidance on the secure integration of AI into operational technology (OT).

The Principles for the Secure Integration of AI in OT warn that while AI can optimise critical infrastructure, it also introduces new risks for safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.

AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.

The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.

Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.

Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.

Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.

The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.


OpenAI launches nationwide AI initiative in Australia

OpenAI has launched OpenAI for Australia, a nationwide initiative to unlock the economic and societal benefits of AI. The program aims to support sovereign AI infrastructure, upskill Australians, and accelerate the country’s local AI ecosystem.

CEO Sam Altman highlighted Australia’s deep technical talent and strong institutions as key factors in becoming a global leader in AI.

A significant partnership with NEXTDC will see the development of a next-generation hyperscale AI campus and large GPU supercluster at Sydney’s Eastern Creek S7 site.

The project is expected to create thousands of jobs, boost local supplier opportunities, strengthen STEM and AI skills, and provide sovereign compute capacity for critical workloads.

OpenAI will also upskill more than 1.2 million Australians in collaboration with CommBank, Coles and Wesfarmers. OpenAI Academy will provide tailored modules to give workers and small business owners practical AI skills for confident daily use.

The nationwide rollout of courses is scheduled to begin in 2026.

OpenAI is launching its first Australian start-up program with local venture capital firms Blackbird, Square Peg, and AirTree to support home-grown innovation. Start-ups will receive API credits, mentorship, workshops, and access to Founder Day to accelerate product development and scale AI solutions locally.


SAP elevates customer support with proactive AI systems

AI has pushed customer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that predicts issues, prevents failures and keeps critical systems running smoothly instead of relying on queues and manual intervention.

Major sales events, such as Cyber Week and Singles Day, demonstrated the impact of this shift, with uninterrupted service and significant growth in transaction volumes and order numbers.

Self-service now resolves most issues before they reach an engineer, as structured knowledge supports AI agents that respond instantly with a confidence level that matches human performance.

Tools such as the Auto Response Agent and Incident Solution Matching enable customers to retrieve solutions without having to search through lengthy documentation.

SAP has also prepared support offerings for organisations scaling AI, with systems tailored for early deployment.

Engineers have benefited from AI as much as customers. Routine tasks are handled automatically, allowing experts to focus on problems that demand insight instead of administration.

Language optimisation, routing suggestions, and automatic error categorisation support faster and more accurate resolutions. SAP validates every AI tool internally before release, which it views as a safeguard for responsible adoption.

The company maintains that AI will augment staff rather than replace them. Creative and analytical work becomes increasingly important as automation handles repetitive tasks, and new roles emerge in areas such as AI training and data stewardship.

SAP argues that progress relies on a balanced relationship between human judgement and machine intelligence, strengthened by partnerships that turn enterprise data into measurable outcomes.


Meta begins removing underage users in Australia

Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.

Meta says it is deleting accounts it reasonably believes belong to users under 16 while allowing them to download their data. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram’s large cohort of 13 to 15 year olds.

Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.

Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.


Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework ensures AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, emphasising Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and provides a practical tool for organisations to implement inclusive AI systems.


CJEU tightens duties for online marketplaces

EU judges have ruled that online marketplaces must verify advertisers’ identities before publishing personal data. The judgment arose from a Romanian case involving an abusive anonymous advertisement containing sensitive information.

The Court found that marketplace operators influence the purposes and means of processing and therefore act as joint controllers. They must identify sensitive data before publication and ensure consent or another lawful basis exists.

Judges also held that anonymous users cannot lawfully publish sensitive personal data without proving the data subject’s explicit agreement. Platforms must refuse publication when identity checks fail or when no valid GDPR ground applies.

Operators must introduce safeguards to prevent unlawful copying of sensitive content across other websites. The Court confirmed that exemptions under the EU’s e-commerce rules cannot override GDPR accountability duties.
