Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by Brazil's competition authority, which ordered Meta to suspend elements of the policy while it assesses whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, requiring chatbot developers to halt responses and notify users that services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on infrastructure designed for business messaging rather than for serving as an open distribution platform for AI services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification applications are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving any legitimate creative purpose.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on very large online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.


Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation towards monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.


Grok faces investigation over deepfake abuse claims

California Attorney General Rob Bonta has launched an investigation into xAI, the company behind the Grok chatbot, over the creation and spread of nonconsensual sexually explicit images.

Bonta’s office said Grok has been used to generate deepfake intimate images of women and children, which have then been shared on social media platforms, including X.

Officials said users have taken ordinary photos and manipulated them into sexually explicit scenarios without consent, with xAI’s ‘spicy mode’ contributing to the problem.

‘We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or child sexual abuse material,’ Bonta said in a statement.

The investigation will examine whether xAI has violated the law and follows earlier calls for stronger safeguards to protect children from harmful AI content.


EMA and FDA set AI principles for medicine

The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring.

The principles are relevant to pharmaceutical developers as well as marketing authorisation applicants and holders, and will form the basis for future AI guidance in different jurisdictions. EU guideline development is already underway, building on EMA's 2024 AI reflection paper.

European Commissioner Olivér Várhelyi said the initiative demonstrates renewed EU-US cooperation and commitment to global innovation while maintaining patient safety.

AI adoption in medicine has grown rapidly in recent years. New pharmaceutical legislation and proposals, such as the European Commission’s Biotech Act, highlight AI’s potential to accelerate the development of safe and effective medicine.

A principles-based approach is seen as essential to manage risks while promoting innovation.

The EMA-FDA collaboration builds on prior bilateral work and aligns with EMA’s strategy to leverage data, digitalisation, and AI. Ethics and safety remain central, with a focus on international cooperation to enable responsible innovation in healthcare globally.


X restricts Grok image editing after global backlash

Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.

The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.

UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.

Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.

International pressure has continued to build, with some countries banning Grok outright rather than waiting for platform-led restrictions.

Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.


SEC chair looking ahead to the next phase of crypto regulation

SEC Chair Paul Atkins says US crypto market structure legislation is close to becoming law, with President Donald Trump expected to sign it soon. The move aims to end regulatory uncertainty and provide clear legal foundations for digital asset markets.

Atkins has openly backed Congress in defining the jurisdictional split between the Securities and Exchange Commission and the Commodity Futures Trading Commission, arguing that statutory clarity is essential for protecting investors and supporting institutional growth.

Supporters believe clear rules will replace enforcement-led interpretation and allow the sector to mature within established financial frameworks.

Progress is moving through Congress, with the Senate Banking Committee advancing the CLARITY Act while the Agriculture Committee continues negotiations. Despite disagreements and amendments, bipartisan support suggests the bill could reach the White House by the end of the first quarter.

Looking ahead, Atkins has linked the bill to long-term US competitiveness, stating that clear and principled regulation will encourage innovation and attract capital. Coordination between the SEC, CFTC and the White House is expected to be central to implementation.


EU reaffirms commitment to Digital Markets Act enforcement

European Commission Executive Vice President Teresa Ribera has stated that the EU has a constitutional obligation under its treaties to uphold its digital rulebook, including the Digital Markets Act (DMA).

Speaking at a competition law conference, Ribera framed enforcement as a duty to protect fair competition and market balance across the bloc.

Her comments arrive amid growing criticism from US technology companies and political pressure from Washington, where enforcement of EU digital rules has been portrayed as discriminatory towards American firms.

Several designated gatekeepers have argued that the DMA restricts innovation and challenges existing business models.

Ribera acknowledged the right of companies to challenge enforcement through the courts, while emphasising that designation decisions are based on lengthy and open consultation processes. The Commission, she said, remains committed to applying the law effectively rather than retreating under external pressure.

Apple and Meta have already announced plans to appeal fines imposed in 2025 for alleged breaches of DMA obligations, reinforcing expectations that legal disputes around EU digital regulation will continue in parallel with enforcement efforts.


Grok to be integrated into Pentagon networks as the US expands military AI strategy

The US Department of Defence plans to integrate Elon Musk’s AI tool Grok into Pentagon networks later in January, according to Defence Secretary Pete Hegseth.

The system is expected to operate across both classified and unclassified military environments as part of a broader push to expand AI capabilities.

Hegseth also outlined an AI acceleration strategy designed to increase experimentation, reduce administrative barriers and prioritise investment across defence technology.

The approach aims to enhance access to data across federated IT systems, aligning with official views that military AI performance relies on data availability and interoperability.

The move follows earlier decisions by the Pentagon to adopt Google’s Gemini for an internal AI platform and to award large contracts to Anthropic, OpenAI, Google and xAI for agentic AI development.

Officials describe these efforts as part of a long-term strategy to strengthen US military competitiveness in AI.

Grok’s integration comes amid ongoing controversy, including criticism over generated imagery and previous incidents involving extremist and offensive content. Several governments and regulators have already taken action against the tool, adding scrutiny to its expanded role within defence systems.


UK considers social media limits for youth

Keir Starmer has told Labour MPs that he is open to an Australian-style ban on social media for young people, following concerns about the amount of time children spend on screens.

The prime minister said reports of very young children using phones for hours each day have increased anxiety about the effects of digital platforms on under-16s.

Starmer previously opposed such a ban, arguing that enforcement would prove difficult and might instead push teenagers towards unregulated online spaces rather than safer platforms. Growing political momentum across Westminster, combined with Australia’s decision to act, has led to a reassessment of that position.

Speaking to MPs, Starmer said different enforcement approaches were being examined and added that phone use during school hours should be restricted.

UK ministers have also revisited earlier proposals aimed at reducing the addictive design of social media and strengthening safeguards on devices sold to teenagers.

Support for stricter measures has emerged across party lines, with senior figures from Labour, the Conservatives, the Liberal Democrats and Reform UK signalling openness to a ban.

A final decision is expected within months as ministers weigh child safety, regulation and practical implementation.
