EU AI Act begins as tech firms push back

Europe’s AI crackdown is officially under way, as the EU begins enforcing the first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU’s approach also contrasts with that of the US, where the re-elected Trump administration has launched a far looser AI Action Plan. While Washington favours minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn developers not to count on future political deals and instead to take immediate steps toward compliance.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia reverses its stance and restricts YouTube for children under 16

Australia has announced that YouTube will be banned for children under 16 starting in December, reversing its earlier exemption from strict new social media age rules. The decision follows growing concerns about online harm to young users.

Platforms like Facebook, Instagram, Snapchat, TikTok, and X are already subject to the upcoming restrictions, and YouTube will now join the list of ‘age-restricted social media platforms’.

From 10 December, all such platforms must ensure users are aged 16 or older, or face fines of up to AU$50 million (£26 million) for failing to take adequate steps to verify age. Although those steps remain undefined, users will not need to upload official documents such as passports or licences.

The government has said platforms must find alternatives instead of relying on intrusive ID checks.

Communications Minister Anika Wells defended the policy, stating that four in ten Australian children reported recent harm on YouTube. She insisted the government would not back down under legal pressure from Alphabet Inc., YouTube’s US-based parent company.

Children can still view videos, but won’t be allowed to hold personal YouTube accounts.

YouTube criticised the move, claiming the platform is not social media but a video library often accessed through TVs. Prime Minister Anthony Albanese said Australia would campaign at a UN forum in September to promote global backing for social media age restrictions.

Exemptions will apply to apps used mainly for education, health, messaging, or gaming, which are considered less harmful.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tea dating app suspends messaging after a major data breach

The women’s dating safety app Tea has suspended its messaging feature following a cyberattack that exposed thousands of private messages, posts and images.

The app, which helps women run background checks on men, confirmed that direct messages were accessed during the initial breach disclosed in late July.

Tea has 1.6 million users, primarily in the US. Affected users will be contacted directly and offered free identity protection services, including credit monitoring and fraud alerts.

The company said it is working to strengthen its security and will provide updates as the investigation continues. Some of the leaked conversations reportedly contain sensitive discussions about infidelity and abortion.

Experts have warned that the leak of both images and messages raises the risk of emotional harm, blackmail or identity theft. Cybersecurity specialists recommend that users accept the free protection services as soon as possible.

The breach affected those who joined the app before February 2024, including users who submitted ID photos that Tea had promised would be deleted after verification.

Tea is known for allowing women to check whether a potential partner is married or has a criminal record, and to share personal experiences that flag abusive behaviour or vouch for trustworthy partners.

The app’s recent popularity surge has also sparked criticism, with some claiming it unfairly targets men. As users await more information, experts urge caution and vigilance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India uses AI to catch crypto tax evaders

India’s Income Tax Department is using AI and data tools to identify tax evasion in cryptocurrency transactions. The government collected ₹437 crore in crypto taxes in 2022-2023 using machine learning and digital forensics to spot suspicious activity.

Tax authorities match tax deducted at source (TDS) data from crypto exchanges to improve compliance. The introduction of the Crypto-Asset Reporting Framework (CARF) also enables automated sharing of tax information, aligning India’s efforts with international tax agreements.
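The cross-matching described above can be illustrated with a minimal sketch: if exchanges deduct TDS at a flat rate on each trade, the TDS an exchange reports implies a trading volume that can be compared against a taxpayer's self-declared crypto income. The identifiers, rate, and tolerance below are hypothetical, not the Income Tax Department's actual system.

```python
# Illustrative sketch only: hypothetical data and thresholds, not the
# Income Tax Department's actual matching system.

def flag_mismatches(exchange_tds, declared_income, tds_rate=0.01, tolerance=0.05):
    """Flag taxpayer IDs whose exchange-reported TDS implies more trading
    volume than their self-declared crypto income supports.

    exchange_tds    -- {taxpayer_id: total TDS deducted by exchanges}
    declared_income -- {taxpayer_id: self-reported crypto proceeds}
    tds_rate        -- assumed flat TDS rate applied per trade
    tolerance       -- fractional slack allowed before flagging
    """
    flagged = []
    for taxpayer_id, tds in exchange_tds.items():
        implied_volume = tds / tds_rate               # volume implied by the TDS
        declared = declared_income.get(taxpayer_id, 0.0)
        if declared < implied_volume * (1 - tolerance):
            flagged.append(taxpayer_id)
    return flagged

# Hypothetical example: PAN002 declared far less than the TDS implies.
exchange_tds = {"PAN001": 5000.0, "PAN002": 120.0}
declared_income = {"PAN001": 480000.0, "PAN002": 500.0}
print(flag_mismatches(exchange_tds, declared_income))  # → ['PAN002']
```

In practice such systems reportedly combine many more signals (wallet analytics, digital forensics), but the core idea of reconciling third-party-reported TDS against self-reported returns is the same.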

These moves mark a push for greater transparency in India’s digital asset market. Enhanced wallet visibility and automatic data exchange aim to reduce anonymity and curb tax evasion in the crypto space.

India continues to develop regulations focused on consumer protection, cross-border cooperation, and tax compliance, demonstrating a commitment to a more traceable and accountable crypto industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Thailand launches crypto sandbox to boost tourism

Thailand has launched a digital asset sandbox to attract high-spending, tech-savvy tourists by enabling seamless cryptocurrency payments. The initiative lets foreign visitors convert digital assets to Thai baht and spend them using local e-money platforms.

The Securities and Exchange Commission, the Bank of Thailand, and other agencies oversee the regulatory sandbox. It aims to simplify payments from street vendors to luxury retailers, eliminating currency conversion friction and card fees.

Authorities plan to focus on merchant education, compliance, and cybersecurity to support the programme’s success.

The move aligns with Thailand’s broader strategy to become a regional digital finance and blockchain innovation hub. Recent policies include a five-year capital gains tax exemption on crypto sales through local exchanges.

The sandbox could attract fintech firms and blockchain events, signalling Thailand’s ambition to lead in digital asset adoption while maintaining regulatory safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hanwha and Samsung lead Korea’s cyber insurance push

South Korea is stepping up efforts to strengthen its cyber insurance sector as corporate cyberattacks surge across industries. A string of major breaches has revealed widespread vulnerability and renewed demand for more comprehensive digital risk protection.

Hanwha General Insurance launched Korea’s first Cyber Risk Management Centre last November and partnered with global cybersecurity firm Theori and law firm Shin & Kim to expand its offerings.

Despite the growing need, the market remains underdeveloped. Cyber insurance makes up only 1 percent of Korea’s accident insurance sector, with a 2024 report estimating local cyber premiums at $50 million, just 0.3 percent of the global total.

Regulators and industry voices call for higher mandatory coverage, clearer underwriting standards, and financial incentives to promote adoption.

As Korean demand rises, comprehensive policies offering tailored options and emergency coverage are gaining traction, with Hanwha reporting a 200 percent revenue jump in under a year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trust in human doctors remains despite AI advancements

OpenAI CEO Sam Altman has stated that AI, especially ChatGPT, now surpasses many doctors in diagnosing illnesses. However, he pointed out that individuals still prefer human doctors because of the trust and emotional connection they provide.

Altman also expressed concerns about the potential misuse of AI, such as using voice cloning for fraud and identity theft. He emphasised the need for stronger privacy protections for sensitive conversations with AI tools like ChatGPT, noting that current standards are inadequate and should align with those for therapists.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DOJ seizes $2.3 million Bitcoin from Chaos ransomware

The US Department of Justice has moved to seize over $2.3 million in Bitcoin tied to a member of the Chaos ransomware group. The funds, taken from a wallet linked to the individual known as ‘Hors’, are alleged to be proceeds of extortion and money laundering.

Chaos operates as a ransomware-as-a-service group, renting its malware to affiliates targeting Windows, Linux, and NAS systems. The group has been active since early 2025 and is known for encrypting victims’ data while demanding crypto payments under threat of public leaks.

US federal agents accessed the wallet in April using a recovery seed phrase from an older Electrum wallet and transferred the assets to a government-controlled address. The DOJ said the operation demonstrates growing success in disrupting ransomware-related crypto flows.

Despite the seizure, challenges remain as such groups evolve their tactics and benefit from the relative anonymity of decentralised platforms. Authorities stress that continued cross-agency cooperation and advances in blockchain forensics are essential in combating future threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU clears Microsoft deal after privacy changes

The European Data Protection Supervisor (EDPS) has ended its enforcement action against the European Commission over its use of Microsoft 365, following improvements to data protection practices. The decision came after the Commission revised its contract with Microsoft to strengthen privacy standards.

Under the updated terms, Microsoft must clarify the reasons for data transfers outside the European Economic Area and name the recipients. Transfers are only allowed to countries with EU-recognised protections or in public interest cases.

Microsoft must also inform the Commission if a foreign government requests access to EU data, unless the request comes from within the EU or a country with equivalent safeguards. The EDPS urged other EU institutions to adopt similar contractual protections if using Microsoft 365.

Despite the EDPS’ clearance, the Commission remains concerned about relying too heavily on a non-EU tech provider for essential digital services. It continues to support the current EU-US data adequacy deal, though recent political changes in the US have cast doubt on its long-term stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!