EU considers stronger child protection in Digital Fairness Act

Capitals across the EU are being asked to discuss how stronger child protection measures should be incorporated into the upcoming Digital Fairness Act (DFA).

The initiative comes as policymakers attempt to address growing concerns about how online platforms expose minors to harmful content, manipulative design practices, and unsafe digital environments.

According to a document circulated under Cyprus’s presidency of the Council of the European Union, member states are expected to debate which concrete safeguards should be introduced as part of the broader consumer protection framework.

Officials are exploring whether new rules should require platforms to adopt stricter safeguards when designing digital services used by children.

The discussions are part of the European Union’s broader effort to strengthen digital governance and consumer protection across online platforms. Policymakers are increasingly focusing on how platform design, recommendation algorithms, and monetisation models may affect younger users.

The proposals could complement existing EU regulations targeting large digital platforms, while expanding protections specifically focused on minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia introduces strict online child safety rules covering AI chatbots

New Age-Restricted Material Codes have begun to be enforced in Australia, requiring online platforms to introduce stronger protections to prevent children from accessing harmful digital content.

The rules apply across a wide range of services, including social media, app stores, gaming platforms, search engines, pornography websites, and AI chatbots.

Under the framework, companies must implement age-assurance systems before allowing access to content involving pornography, high-impact violence, self-harm material, or other age-restricted topics.

These measures also extend to AI companions and chatbots, which must prevent sexually explicit or self-harm-related conversations with minors.

The rules form part of Australia’s broader online safety framework overseen by the eSafety Commissioner, which will monitor compliance and enforce the codes.

Companies that fail to comply may face penalties of up to A$49.5 million per breach.

The policy aims to shift responsibility toward technology companies by requiring them to build protections directly into their platforms.

Officials in Australia argue the measures mirror long-standing offline safeguards designed to prevent children from accessing adult environments or harmful material.

AI legal advice case asks whether ChatGPT crosses legal boundaries

A newly filed lawsuit against OpenAI raises a key issue: Does allowing generative AI systems like ChatGPT to provide legal advice violate laws that bar the unauthorised practice of law (UPL)? UPL means providing legal services, such as drafting filings or giving advice, without the required legal qualifications or a state licence.

The case claims an individual used ChatGPT to prepare legal filings in a dispute with Nippon Life Insurance, prompting the company to argue OpenAI should be held responsible for the outcome.

The lawsuit claims ChatGPT helped the user challenge a settled legal dispute, forcing Nippon Life to spend additional time and resources responding to filings produced with ChatGPT. The claim alleges tortious interference with a contract, meaning the unlawful disruption of an existing agreement between two parties by causing one of them to breach or alter it.

The suit also claims unauthorised practice of law and abuse of the judicial process, which means using the legal system improperly to gain an advantage. It argues OpenAI should be liable because ChatGPT operates under its control. The dispute centres on whether AI systems should analyse disputes and offer legal advice like a lawyer.

Advocates argue the tools could widen access to legal advice. They could make legal support more accessible and affordable for those who cannot easily hire a lawyer. However, US legal frameworks restrict the provision of legal advice to licensed lawyers. The rules are designed to protect consumers and ensure professional accountability.

Critics argue that limiting legal advice to licensed lawyers preserves an expensive monopoly and hinders access to justice. AI-driven legal tools highlight this tension over the future of legal services.

The outcome of this lawsuit will likely hinge on whether AI-generated responses constitute legal advice and whether OpenAI can be held liable for such outputs. Even if the suit fails, it foregrounds the broader debate about granting generative AI a legitimate role in legal guidance.

ChatGPT ‘adult mode’ launch delayed as OpenAI focuses on core improvements

OpenAI has postponed the launch of ChatGPT’s ‘adult mode’, a feature designed to let verified adult users access erotica and other mature content.

Teams are focusing on improving intelligence, personality and proactive behaviour instead of releasing the feature immediately.

The feature was first announced by Sam Altman in October, with an initial rollout planned for December, and aims to allow adults more freedom while maintaining safety for younger users.

The project faced an earlier delay as internal teams prioritised the core ChatGPT experience.

OpenAI stated it still supports the principle of treating adults like adults but warned that achieving the right experience will require more time. No new release date has been provided.

The EU faces growing AI copyright disputes

Courts across Europe are examining how copyright law applies to AI systems trained on large datasets, and whether existing rules allow AI developers to use copyrighted books, music and journalism without permission.

One closely watched dispute in Luxembourg involves a publisher challenging Google over summaries produced by its Gemini chatbot. The case before the EU court in Luxembourg could test how press publishers’ rights apply to AI-generated outputs.

Legal experts warn the ruling in Luxembourg may not resolve wider questions about AI training data. Many disputes in Europe focus on the EU copyright directive and its text and data mining exception.

Additional lawsuits across Europe involving music rights group GEMA and OpenAI are expected to continue for years. Policymakers in Europe are also considering updates to copyright rules as AI technology expands.

EU and Canada begin negotiations on a digital trade agreement

The European Commission and Canada have launched negotiations on a new Digital Trade Agreement to strengthen the rules governing cross-border digital commerce.

The initiative was announced in Toronto by the EU Trade Commissioner Maroš Šefčovič and Canadian International Trade Minister Maninder Sidhu.

The agreement will expand the digital dimension of the existing Comprehensive Economic and Trade Agreement, which has already increased trade in goods and services between the two partners.

Officials say the new negotiations aim to create clearer rules for businesses and consumers engaging in cross-border digital transactions.

Proposals under discussion include promoting paperless trade systems, recognising electronic signatures and digital contracts, and prohibiting customs duties on electronic transmissions.

The agreement between the EU and Canada will also seek to prevent protectionist practices such as unjustified data localisation requirements or forced transfers of software source code.

European officials argue that the negotiations reflect a broader effort to develop international standards for digital trade governance while preserving governments’ ability to regulate emerging challenges in the digital economy.

Data breach hits fintech lender Figure exposing nearly 1 million accounts

Fintech lender Figure Technology Solutions has disclosed a data breach after hackers exposed personal information from nearly one million accounts. Details from 967,200 accounts, including names, email addresses, phone numbers, home addresses, and dates of birth, were compromised.

Figure Technology Solutions, founded in 2018, operates a blockchain-based lending platform built on the Provenance blockchain. The company says it has facilitated more than $22 billion in home equity transactions through partnerships with banks, credit unions, and fintech firms. Despite blockchain security claims, attackers reportedly gained access by manipulating a staff member rather than breaking the underlying technology.

‘We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,’ a company spokesperson said. ‘We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate.’

Security researchers say the data breach follows a pattern used by groups such as ShinyHunters, who impersonate IT support staff and pressure employees into revealing login credentials through convincing phishing portals.

Once attackers obtain access to corporate single sign-on systems, which allow users to log in to multiple internal applications with a single set of credentials, they can move across multiple internal platforms, often including services linked to major providers such as Microsoft and Google.

Experts warn that the data breach highlights a wider cybersecurity problem: even advanced technologies such as blockchain cannot prevent attacks that target human behaviour. Criminals can use exposed personal information to launch convincing phishing campaigns or financial scams, reinforcing the need for stronger employee training and security awareness.

New AI feature keeps Roblox chat respectful and flowing

Roblox Corporation has unveiled an AI-powered real-time chat rephrasing feature designed to maintain civility while keeping in-game conversations fluid. Previously, messages containing profanity were replaced with hash marks, disrupting gameplay.

The new system automatically rephrases inappropriate language into more respectful alternatives while preserving the original meaning. Users in the chat are notified when their messages are rephrased, ensuring transparency.

The feature supports in-game chat between age-verified users and all languages via Roblox’s automatic translation. The company consulted its Teen Council to design the system, ensuring it reflects how teens naturally communicate.

Earlier experiments with real-time warnings and notifications reduced filtered messages and abuse reports by 5–6%, indicating the approach’s effectiveness.

Roblox is also enhancing its text filters to detect complex attempts to bypass Community Standards, such as leet-speak or symbols. Testing shows a 20-fold reduction in missed cases involving the sharing of personal information, such as social handles or phone numbers.

These upgrades represent a significant step toward safer, more natural in-game chat.

The company plans to continue refining these tools, aiming to minimise disruptions further while promoting civil communication. Users can expect iterative improvements and additional controls in the future to enhance chat safety and overall user experience.

Privacy lawsuit targets Meta AI glasses after reports of footage review

Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.

The legal complaint follows investigative reporting indicating that contractors working for a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.

The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.

According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.

Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.

The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.

The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.

Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.

Gemini leads latest ORCA benchmark on AI maths accuracy

A new round of the ORCA (Omni Research on Calculation in AI) benchmark reveals significant progress in how leading AI chatbots handle real-world mathematical problems, while also highlighting persistent limitations in reliability and consistency.

The latest results show Google’s Gemini 3 Flash moving clearly ahead of competing systems, correctly answering nearly three-quarters of the 500 practical questions used in the benchmark.

Our readers may recall that the platform previously analysed the first edition of the ORCA benchmark, examining how AI chatbots performed on everyday quantitative tasks rather than purely academic problems. The earlier analysis already showed notable gaps between systems and raised questions about the reliability of AI models for calculations people might encounter in daily life.

The second benchmark compares four widely accessible models: ChatGPT-5.2, Gemini 3 Flash, Grok-4.1 and DeepSeek V3.2. Gemini recorded the largest improvement, decisively outpacing the others. ChatGPT and DeepSeek posted smaller but steady gains, while Grok’s results declined slightly in several subject areas.

Performance improvements were uneven across domains, with Gemini showing particularly strong gains in fields such as biology, chemistry, physics and health-related calculations.

Closer examination of the errors reveals why AI still struggles with mathematical accuracy. Calculation mistakes have increased as a share of total errors, while rounding and formatting problems have decreased.

Researchers explain that large language models do not actually compute numbers in the same way that calculators do. Instead, they predict likely sequences of words and numbers, which can lead to small shortcuts during multi-step reasoning that eventually produce incorrect results.

The benchmark also highlights another challenge: instability. The same question can produce different answers when asked multiple times, even when the model initially responded correctly. Such variation reflects the probabilistic nature of AI systems.
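
The variation described above stems from sampling. As a rough illustration, consider the following toy sketch (not a real language model; `calculator_add` and `toy_llm_add` are hypothetical names invented here) contrasting a deterministic calculator with a "predictor" that samples its answer:

```python
import random

def calculator_add(a, b):
    # A deterministic calculator: same input, same output, every time.
    return a + b

def toy_llm_add(a, b, temperature=1.0):
    # Toy stand-in for a language model: it "predicts" the answer by
    # sampling around the true value, so repeated calls can disagree.
    noise = random.choice([-1, 0, 0, 0, 1]) if temperature > 0 else 0
    return a + b + noise

# The calculator is stable across repeated runs...
assert all(calculator_add(17, 25) == 42 for _ in range(10))

# ...while the sampled "model" may answer 41, 42 or 43 on different calls.
answers = {toy_llm_add(17, 25) for _ in range(50)}
print(answers)
```

Real chatbots are vastly more sophisticated, but the underlying point is the same: a probabilistic generator can give different answers to an identical question, which a calculator never does.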

As a result, the benchmark concludes that AI chatbots can assist with calculations but cannot yet match the consistency of traditional calculators, which always return the same answer for the same input.
