Vietnam has emerged as Southeast Asia’s leader in AI readiness, with daily usage, upskilling rates and data-sharing willingness topping regional rankings. Survey data show 81 percent of users engage with AI tools each day, supported by widespread training and high trust levels.
Commercial activity reflects the shift, with AI-enhanced apps recording a 78 percent rise in revenue over the past year. Investors contributed 123 million dollars to local AI ventures, and most expect funding to grow further across software, services and deep-tech fields.
Vietnam’s digital economy is forecast to reach 39 billion dollars in 2025, fuelled by rapid expansion across e-commerce, online media, travel and digital finance. E-commerce continues to dominate, while gaming and online payments show notable acceleration.
Vietnamese government support for cashless payments and favourable travel measures further strengthens digital adoption. Analysts say that Vietnam’s combination of strong user trust, fast-growing platforms and rising investment positions the country as a regional technological powerhouse.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
EU member states reached a common position on a regulation intended to reduce online child sexual abuse.
The proposal introduces obligations for digital service providers to prevent the spread of harmful content and to respond when national authorities require the removal, blocking or delisting of material.
The framework requires providers to assess how their services could be misused and to adopt measures that lower the risk.
Authorities will classify services into three categories based on objective criteria, allowing targeted obligations for higher-risk environments. Victims will be able to request assistance when seeking the removal or disabling of material that concerns them.
The regulation establishes an EU Centre on Child Sexual Abuse, which will support national authorities, process reports from companies and maintain a database of indicators. The Centre will also work with Europol to ensure that relevant information reaches law enforcement bodies in member states.
The Council position makes permanent the voluntary activities already carried out by companies, including scanning and reporting, which were previously supported by a temporary exemption.
Formal negotiations with the European Parliament can now begin with the aim of adopting the final regulation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
EU policy debates intensified after Denmark abandoned plans for mandatory mass scanning in the draft Child Sexual Abuse Regulation. Advocates welcomed the shift yet warned that new age checks and potential app bans still threaten privacy.
France and the UK advanced consultations on good practice guidelines for cyber intrusion firms, seeking more explicit rules for industry responsibility. Civil society groups also marked two years of the Digital Services Act by reflecting on enforcement experience and future challenges.
Campaigners highlighted rising concerns about tech-facilitated gender violence during the 16 Days initiative. The Centre for Democracy and Technology launched fresh resources stressing encryption protection, effective remedies and more decisive action against gendered misinformation.
CDT Europe also criticised the Commission’s digital omnibus package for weakening safeguards under existing laws, including the AI Act. The group urged firm enforcement of current frameworks while exploring better redress options for AI-related harms in EU legislation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A major provider of three widely used nudify services has cut off Australian access after enforcement action from eSafety.
The company received an official warning in September for allowing its tools to be used to produce AI-generated material that harmed children.
The withdrawal follows concerns about incidents involving school students and repeated reminders that online services must meet Australia’s mandatory safety standards.
eSafety stated that Australia’s codes and standards are encouraging companies to adopt stronger safeguards.
The Commissioner noted that preventing the misuse of consumer tools remains central to reducing the risk of harm and that more precise boundaries can lower the likelihood of abuse affecting young people.
Attention has also turned to underlying models and the hosting platforms that distribute them.
Hugging Face has updated its terms to require users to take steps to mitigate the risks associated with uploaded models, including preventing misuse for generating harmful content. The platform is also required to act when reports or internal checks reveal breaches of its policies.
eSafety indicated that failure to comply with industry codes or standards can lead to enforcement measures, including significant financial penalties.
The agency is working with the government on further reforms intended to restrict access to nudify tools and strengthen protections across the technology stack.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Staffordshire Police will trial AI-powered ‘agents’ on its 101 non-emergency service early next year, according to a recent BBC report.
The technology, known as Agentforce, is designed to resolve simple information requests without human intervention, allowing call handlers to focus on more complex or urgent cases. The force said the system aims to improve contact centre performance after past criticism over long wait times.
Senior officers explained that the AI agent will support queries where callers are seeking information rather than reporting crimes. If keywords indicating risk or vulnerability are detected, the system will automatically route the call to a human operator.
Thames Valley Police is already using the technology and has given ‘very positive reports’, according to acting Chief Constable Becky Riggs.
The force’s current average wait for 101 calls is 3.3 minutes, a marked improvement on the previous 7.1-minute average. Abandonment rates have also fallen from 29.2% to 18.7%. However, Commissioner Ben Adams noted that around 8% of callers still wait over an hour.
Officers say they have been calling back those affected, both to apologise and to gather ‘significant intelligence’ that has strengthened public confidence in the system.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A South Tyneside family has spoken publicly after an elderly man lost almost £3,000 to a highly persuasive cryptocurrency scam, according to a recent BBC report. The scammer contacted the victim repeatedly over several weeks, initially offering help with online banking before shifting to an ‘investment opportunity’.
According to the family, the caller built trust by using personal details, even fabricating a story about ‘free Bitcoin’ awarded to the man years earlier.
Police said the scam fits a growing trend of crypto-related fraud. The victim, under the scammer’s guidance, opened multiple new bank accounts and was eventually directed to transfer nearly £3,000 into a Coinbase-linked crypto wallet.
Attempts by the family to recover the funds were unsuccessful. Coinbase said it advises users to research any investment carefully and provides guidance on recognising scams.
Northumbria Police and national fraud agencies have been alerted. Officers said crypto scams present particular challenges because, unlike traditional banking fraud, the transferred funds are far harder to trace.
Community groups in Sunderland, such as Pallion Action Group, are now running sessions to educate older residents about online threats, noting that rapid changes in technology can make such scams especially daunting for pensioners.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The EU member states have endorsed a position for new rules to counter child sexual abuse online. The plan introduces duties for digital services to prevent the spread of abusive material. It also creates an EU Centre to coordinate enforcement and support national authorities.
Service providers must assess how their platforms could be misused and apply mitigation measures. These may include reporting tools, stronger privacy defaults for minors, and controls over shared content. National authorities will review these steps and can order additional action where needed.
A three-tier risk system will categorise services as high, medium, or low risk. High-risk platforms may be required to help develop protective technologies. Providers that fail to comply with obligations could face financial penalties under the regulation.
Victims will be able to request the removal or disabling of abusive material depicting them. The EU Centre will verify provider responses and maintain a database to manage reports. It will also share relevant information with Europol and law enforcement bodies.
The Council supports extending voluntary scanning for abusive content beyond its current expiry. Negotiations with the European Parliament will now begin on the final text. The Parliament adopted its position in 2023 and will help decide the Centre’s location.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.
Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.
Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions instead of allowing them to mimic emotional support.
Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.
Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.
Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk instead of letting automated tools distort prices.
Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.
The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.
The order directs AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance instead of strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Yesterday, Australia entered a new phase of its online safety framework after the introduction of the Social Media Minimum Age policy.
eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.
The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.
Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.
The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.
Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.
Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new feature called ‘Stories’ from Character.AI allows users under 18 to create interactive fiction with their favourite characters. The move replaces open-ended chatbot access, which has been entirely restricted for minors amid concerns over mental health risks.
Open-ended AI chatbots can initiate conversations at any time, raising worries about overuse and addiction among younger users.
Several lawsuits against AI companies have highlighted the dangers, prompting Character.AI to phase out access for minors and introduce a guided, safety-focused alternative.
Industry observers say the Stories feature offers a safer environment for teens to engage with AI characters while continuing to explore creative content.
The decision aligns with recent AI regulations in California and ongoing US federal proposals to limit minors’ exposure to interactive AI companions.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!