Australia's ASIC grants stablecoin relief to boost innovation

The Australian Securities and Investments Commission (ASIC) has introduced class relief for intermediaries involved in the secondary distribution of stablecoins issued by licensed providers. The measure aims to encourage responsible growth in the digital assets and payments sectors.

Under the relief, intermediaries do not need to hold separate Australian financial services (AFS), market, or clearing licences when offering services linked to stablecoins issued by an AFS licensee. However, they must provide clients with the product disclosure statement, if one is available.

ASIC has indicated that as more issuers of eligible stablecoins obtain an AFS licence, the exemption could extend further. The regulator stressed its support for innovation while safeguarding consumers by requiring stablecoins to remain under licensing rules.

The move follows ASIC’s consultation on updates to its guidance for digital assets, which included stablecoins and other token types. ASIC will soon release revised guidance and is working with Treasury on digital asset reforms, including a payment stablecoin framework.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.

Researchers at OpenAI and Apollo find scheming behaviour in frontier AI models

OpenAI and Apollo Research developed evaluations to reveal hidden misalignment in advanced AI systems, often labelled scheming, and observed scheming-like behaviour in several frontier models, including OpenAI o3, o4-mini, Gemini 2.5 Pro, and Claude Opus 4.

Researchers treated covert actions, such as deliberately withholding or distorting task-relevant information, as a measurable proxy and stress-tested models across dozens of scenarios.

Scheming occurs when a model outwardly appears aligned while pursuing an alternate goal. The teams used a trader analogy to explain the risk: a rule-breaking broker who covers tracks can seem more compliant than an honest operator who accepts lower gains.

They trained models with deliberative alignment to counter the behaviour, asking them to read and reason about an anti-scheming specification before acting, so that models follow safety principles instead of merely avoiding detection.

That training produced about a thirtyfold reduction in covert actions on out-of-distribution tests, with o3 falling from 13 percent to 0.4 percent and o4-mini dropping from 8.7 percent to 0.3 percent.

Nevertheless, rare but serious failures remained, and measurement proved complex. Models often became more aware of being evaluated, and situational awareness can suppress observable scheming without altering underlying aims.

The study relied heavily on access to the models’ chain of thought to probe motivations, so preserving reasoning transparency is crucial for reliable oversight. The teams warned that training practices eroding transparency could make monitoring far harder and let misalignment hide rather than vanish.

OpenAI and Apollo called for broader cross-lab safety evaluations, stronger monitoring tools and continued research into anti-scheming techniques. They renewed their partnership, launched a $500,000 red-teaming challenge focused on scheming and proposed shared testing protocols.

The researchers emphasised there is no evidence that today’s deployed AI models would abruptly begin harmful scheming. Still, the risk will grow as systems take on more ambiguous, long-term, real-world responsibilities instead of short, narrow tasks.

Google adds AI features to Chrome browser on Android and desktop

Alphabet’s Google has announced new AI-powered features for its Chrome browser that aim to make web browsing more proactive instead of reactive. The update centres on integrating Gemini, Google’s AI assistant, into Chrome to provide contextual support across tabs and tasks.

The AI assistant will help students and professionals manage large numbers of open tabs by summarising articles, answering questions, and recalling previously visited pages. It will also connect with Google services such as Docs and Calendar, offering smoother workflows on desktop and mobile devices.

Chrome’s address bar, the omnibox, is being upgraded with AI Mode. Users can ask multi-part questions and receive context-aware suggestions relevant to the page they are viewing. Initially available in the US, the feature will roll out to other regions and languages soon.

Beyond productivity, Google is also applying AI to security and convenience. Chrome now blocks billions of spam notifications daily, fills in login details, and warns users about malicious apps.

Future updates are expected to bring agentic capabilities, enabling Chrome to carry out complex tasks such as ordering groceries with minimal user input.

Meta and Google to block political ads in EU under new regulations

Broadcasters and advertisers seek clarity before the EU’s political advertising rules become fully applicable on 10 October. The European Commission has promised further guidance, but details on what qualifies as political advertising remain vague.

Meta and Google will block political, election, and social issue ads in the EU when the rules take effect, citing operational challenges and legal uncertainty. The regulation, aimed at curbing disinformation and foreign interference, requires ads to carry labels disclosing sponsors, payments, and targeting.

Publishers fear they lack the technical means to comply or block non-compliant programmatic ads, risking legal exposure. They call for clear sponsor identification procedures, standardised declaration formats, and robust verification processes to ensure authenticity.

Advertisers warn that the rules’ broad definition of political actors may be hard to implement. At the same time, broadcasters fear issue-based campaigns – such as environmental awareness drives – could unintentionally fall under the scope of political advertising.

The Dutch parliamentary election on 29 October will be the first to take place under the fully applicable rules, making clarity from Brussels urgent for media and advertisers across the bloc.

US judge rejects Meta’s bid to overturn verdict in reproductive data case

Meta has failed to overturn a jury verdict that found it illegally collected sensitive reproductive health data from users of the Flo period tracking app. US District Judge James Donato rejected Meta’s claim that the data was ‘secondhand’ and not protected under California’s wiretapping law.

The court found that Meta directly intercepted real-time communications between users and the app, such as when users indicated they wanted to track their menstrual cycle or pregnancy.

Judge Donato also dismissed Meta’s argument that Flo users had consented to the data sharing, calling the claim ‘rank speculation’ unsupported by evidence.

The jury’s August verdict marked one of the first major legal decisions involving big tech’s handling of sensitive health information. Legal experts say it could open the door to more lawsuits and greater scrutiny of tech companies’ data practices. Meta has not responded to requests for comment.

Amazon and Mercado Libre criticised for limiting seller mobility in Mexico

Mexico’s competition watchdog has accused Amazon and Mercado Libre of erecting barriers that limit the mobility of sellers in the country’s e-commerce market. The two platforms reportedly account for 85% of the seller market.

The Federal Economic Competition Commission (COFECE) stated that the companies provide preferential treatment to sellers who utilise their logistics services and fail to disclose how featured offers are selected, thereby restricting fair competition.

Despite finding evidence of these practices, COFECE stopped short of imposing corrective measures, citing a lack of consensus among stakeholders. Amazon welcomed the decision, saying it demonstrates the competitiveness of the retail market in Mexico.

The watchdog aims to promote a more dynamic e-commerce sector, benefiting buyers and sellers. Its February report had recommended measures to improve transparency, separate loyalty programme services, and allow fairer access to third-party delivery options.

Trade associations praised COFECE for avoiding sanctions, warning that penalties could harm consumers and shield traditional retailers. Mercado Libre has not yet commented on the findings.

AI tool predicts risk of over 1,000 diseases years ahead

Scientists have unveiled an AI tool capable of predicting the risk of developing over 1,000 medical conditions. Described in a study published in Nature, the model can forecast certain cancers, heart attacks, and other diseases more than a decade in advance.

Developed by the German Cancer Research Centre (DKFZ), the European Molecular Biology Laboratory (EMBL), and the University of Copenhagen, the model utilises anonymised health data from the UK and Denmark. It tracks the order and timing of medical events to spot patterns that lead to serious illness.

Researchers said the tool is exceptionally accurate for diseases with consistent progression, including some cancers, diabetes, heart attacks, and septicaemia. Its predictions work like a weather forecast, indicating higher risk rather than certainty.

The model is less reliable for unpredictable conditions such as mental health disorders, infectious diseases, or pregnancy complications. It is more accurate for near-term forecasts than for those decades ahead.

Though not yet ready for clinical use, the system could help doctors identify high-risk patients earlier and enable more personalised, preventive healthcare strategies. Researchers say more work is needed to ensure the tool works for diverse populations.

EU AI Act enforcement gears up with 15 authorities named in Ireland

Ireland has designated 15 authorities to monitor compliance with the EU’s AI Act, making it one of the first EU countries fully ready to enforce the new rules. The AI Act regulates AI systems according to their risk to society and began phasing in last year.

Governments had until 2 August to notify the European Commission of their appointed market surveillance authorities. In Ireland, these include the Central Bank, Coimisiún na Meán, the Data Protection Commission, the Competition and Consumer Protection Commission, and the Health and Safety Authority.

The country will also establish a National AI Office to serve as the central coordinator for AI Act enforcement and to liaise with EU institutions. Where multiple authorities are involved, a single point of contact must be designated to ensure clear communication.

Ireland joins Cyprus, Latvia, Lithuania, Luxembourg, Slovenia, and Spain as countries that have appointed their contact points. The Commission has not yet published the complete list of authorities notified by member states.

Former Italian Prime Minister Mario Draghi has called for a pause in the rollout of the AI Act, citing risks and a lack of technical standards. The Commission has launched a consultation as part of its digital simplification package, due in December.

Meta launches AI smart glasses with Ray-Ban and Oakley

Meta has unveiled a new generation of AI-powered smart glasses at its annual Meta Connect conference in California. Working with Ray-Ban and Oakley, the company introduced devices including the Meta Ray-Ban Display and the Oakley Meta Vanguard.

These glasses are designed to bring the Meta AI assistant into daily use instead of being confined to phones or computers.

The Ray-Ban Display features a colour screen built into the lens for video calls and messaging, along with a 12-megapixel camera, and will sell for $799. It can be paired with a neural wristband that enables tasks through hand gestures.

Meta also presented the $499 Oakley Meta Vanguard glasses aimed at sports fans and launched a second generation of its Ray-Ban Meta glasses at $379. Around two million smart glasses have been sold since Meta entered the market in 2023.

Analysts see the glasses as a more practical way of introducing AI to everyday life than the firm’s costly Metaverse project. Yet many caution that Meta must prove the benefits outweigh the price.

Chief executive Mark Zuckerberg described the technology as a scientific breakthrough. He said it forms part of Meta’s vast AI investment programme, which includes massive data centres and research into artificial superintelligence.

The launch came as activists protested outside Meta’s New York headquarters, accusing the company of neglecting children’s safety. Former safety researchers also told the US Senate that Meta ignored evidence of harm caused by its VR products, claims the company has strongly denied.
