Japan and ASEAN agree to boost AI collaboration

Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to collaborate on developing new AI models and preparing related legislation. The cooperation was formalised in a joint statement at a digital ministers’ meeting in Hanoi on Thursday.

Proposed by Minister Hayashi, the initiative aims to boost regional AI capabilities amid intensifying competition between the US and China. Japan emphasised its ongoing commitment to supporting ASEAN’s technological development.

The partnership follows last October’s Japan-ASEAN summit, where Prime Minister Takaichi called for joint research in semiconductors and AI. The agreement aims to foster closer innovation ties and regional collaboration in strategic technology sectors.

The collaboration will engage public and private stakeholders to promote research, knowledge exchange, and capacity-building across ASEAN. Officials expect the partnership to speed AI adoption while maintaining regional regulations and ethical standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.

Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by Brazil’s competition authority, which ordered Meta to suspend elements of the policy while it assesses whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, during which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on a system designed for business messaging, not for serving as an open distribution platform for AI services.

SRB GDPR case withdrawn from EU court

A high-profile EU court case on pseudonymised data has ended without a final ruling. The dispute involved the Single Resolution Board and the European Data Protection Supervisor.

The case focused on whether pseudonymised opinions qualify as personal data under the GDPR. Judges were also asked to assess reidentification risks and notification duties.

After intervention by the Court of Justice of the European Union, the matter returned to the General Court. Both parties later withdrew the case, leaving no binding judgement.

Legal experts say the CJEU’s guidance continues to shape enforcement practice. Regulators are expected to reflect those principles in updated EU pseudonymisation guidelines.

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification applications are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material instead of legitimate creative uses.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted towards monitoring and enforcement instead of preparation, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.

Grok faces investigation over deepfake abuse claims

California Attorney General Rob Bonta has launched an investigation into xAI, the company behind the Grok chatbot, over the creation and spread of nonconsensual sexually explicit images.

Bonta’s office said Grok has been used to generate deepfake intimate images of women and children, which have then been shared on social media platforms, including X.

Officials said users have taken ordinary photos and manipulated them into sexually explicit scenarios without consent, with xAI’s ‘spicy mode’ contributing to the problem.

‘We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or child sexual abuse material,’ Bonta said in a statement.

The investigation will examine whether xAI has violated the law and follows earlier calls for stronger safeguards to protect children from harmful AI content.

Wikipedia marks 25 years with new global tech partnerships

Wikipedia marked its 25th anniversary by showcasing the rapid expansion of Wikimedia Enterprise and its growing tech partnerships. The milestone reflects Wikipedia’s evolution into one of the most trusted and widely used knowledge sources in the digital economy.

Amazon, Meta, Microsoft, Mistral AI, and Perplexity have joined the partner roster for the first time, alongside Google, Ecosia, and several other companies already working with Wikimedia Enterprise.

These organisations integrate human-curated Wikipedia content into search engines, AI models, voice assistants, and data platforms, helping deliver verified knowledge to billions of users worldwide.

Wikipedia remains one of the top ten most visited websites globally and the only one in that group operated by a non-profit organisation. With over 65 million articles in more than 300 languages, the platform is a key dataset for training large language models.

Wikimedia Enterprise provides structured, high-speed access to this content through on-demand, snapshot, and real-time APIs, allowing companies to use Wikipedia data at scale while supporting its long-term sustainability.

As Wikipedia continues to expand into new languages and subject areas, its value for AI development, search, and specialised knowledge applications is expected to grow further.

Cerebras to supply large-scale AI compute for OpenAI

OpenAI has agreed to purchase up to 750 megawatts of computing power from AI chipmaker Cerebras over the next three years. The deal, announced on 14 January, is expected to be worth more than US$10 billion and will support ChatGPT and other AI services.

Cerebras will provide cloud services powered by its wafer-scale chips, which are designed to run large AI models more efficiently than traditional GPUs. OpenAI plans to use the capacity primarily for inference and reasoning models that require high compute.

Cerebras will build or lease data centres filled with its custom hardware, with computing capacity coming online in stages through 2028. OpenAI said the partnership would help improve the speed and responsiveness of its AI systems as user demand continues to grow.

The deal is also essential for Cerebras as it prepares for a second attempt at a public listing, following a 2025 IPO that was postponed. Diversifying its customer base beyond major backers such as UAE-based G42 could strengthen its financial position ahead of a potential 2026 flotation.

The agreement highlights the wider race among AI firms to secure vast computing resources, as investment in AI infrastructure accelerates. However, some analysts have warned that soaring valuations and heavy spending could resemble past technology bubbles.

Gemini gains new features through Personal Intelligence

A new beta feature has been launched in the United States that lets users personalise the Gemini assistant by connecting Google apps such as Gmail, Photos, YouTube and Search. The tool, called Personal Intelligence, is designed to make the service more proactive and context-aware.

When enabled, Personal Intelligence allows Gemini to reason across a user’s emails, photos, and search history to answer questions or retrieve specific details. Google says users remain in control of which apps are connected and can turn the feature off at any time.

The company showed how Gemini can use connected data to offer tailored suggestions, such as identifying vehicle details from Photos or recommending trips based on past travel.

Google said the system includes privacy safeguards. Personal Intelligence is turned off by default, and Gemini does not train on users’ Gmail inboxes or photo libraries.

The beta is rolling out to Google AI Pro and AI Ultra subscribers in the US and will work across web, Android, and iOS. Google plans to expand access over time and bring the feature to more countries and users.
