EU plans new law to tackle online consumer manipulation

The European Commission is preparing to introduce the Digital Fairness Act, a new law intended to strengthen consumer protection online without adding to the regulatory burden on businesses.

Justice Commissioner Michael McGrath described the upcoming legislation as both pro-consumer and pro-business during a speech at the European Retail Innovation Summit, seeking to calm industry concerns about further EU regulation following the Digital Services Act and the Digital Markets Act.

Designed to tackle deceptive practices in the digital space, the law will address issues such as manipulative design tricks known as ‘dark patterns’, influencer marketing, and personalised pricing based on user profiling.

It will also target concerns around addictive service design and virtual currencies in video games—areas where current EU consumer rules fall short. The legislation will be based on last year’s Digital Fairness Fitness Check, which highlighted regulatory gaps in the online marketplace.

McGrath acknowledged the cost of complying with EU-wide consumer protection measures, which can run into millions for businesses.

However, he stressed that the new act would provide legal clarity and ease administrative pressure, particularly for smaller companies, instead of complicating compliance requirements further.

A public consultation will begin in the coming weeks, ahead of a formal legislative proposal expected by mid-2026.

Maria-Myrto Kanellopoulou, head of the Commission’s consumer law unit, promised a thoughtful approach, saying the process would be both careful and thorough to ensure the right balance is struck.

For more information on these topics, visit diplomacy.edu.

EU refuses to soften tech laws for Trump trade deal

The European Union has firmly ruled out dismantling its strict digital regulations in a bid to secure a trade deal with Donald Trump. Henna Virkkunen, the EU’s top official for digital policy, said the bloc remained fully committed to its digital rulebook instead of relaxing its standards to satisfy American demands.

While she welcomed a temporary pause in US tariffs, she made clear that the EU’s regulations were designed to ensure fairness and safety for all companies, regardless of origin, and were not intended as a direct attack on US tech giants.

Tensions have mounted in recent weeks, with Trump officials accusing the EU of unfairly targeting American firms through regulatory means. Executives like Mark Zuckerberg have criticised the EU’s approach, calling it a form of censorship, while the US has continued imposing tariffs on European goods.

Virkkunen defended the tougher obligations placed on large firms like Meta, Apple and Alphabet, explaining that greater influence came with greater responsibility.

She also noted that enforcement actions under the Digital Markets Act and Digital Services Act aim to ensure compliance instead of simply imposing large fines.

Although France has pushed for stronger retaliation, the European Commission has held back from launching direct countermeasures against US tech firms, instead preparing a range of options in case talks fail.

Virkkunen avoided speculation on such moves, saying the EU preferred cooperation to conflict. At the same time, she is advancing a broader tech strategy, including plans for five AI gigafactories, while also considering adjustments to the EU’s AI Act to better support small businesses and innovation.

Acknowledging creative industries’ concerns over generative AI, Virkkunen said new measures were needed to ensure fair compensation for copyrighted material used in AI training instead of leaving European creators unprotected.

The Commission is now exploring licensing models that could strike a balance between enabling innovation and safeguarding rights, reflecting the bloc’s intent to lead in tech policy without sacrificing democratic values or artistic contributions.


Number of AI-driven sex crime victims in Korea continues to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing number of victims, including children under 10, have been targeted as AI tools become ever more easily accessible.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

With over 300,000 pieces of illicit content removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, supporting victims further, and implementing new laws to prevent secondary harm by allowing the removal of personal information alongside explicit images.


IBM pushes towards quantum advantage in two years with breakthrough code

IBM’s Quantum CTO, Oliver Dial, predicts that quantum advantage, where quantum computers outperform classical ones on specific tasks, could be achieved within two years.

The milestone is seen as possible due to advances in error mitigation techniques, which enable quantum computers to provide reliable results despite their inherent noise. While full fault-tolerant quantum systems are still years away, IBM’s focus on error mitigation could bring real-world results soon.

A key part of IBM’s progress is the introduction of the ‘Gross code,’ a quantum error correction method that drastically reduces the number of physical qubits needed per logical qubit, making the engineering of quantum systems much more feasible.

Dial described this development as a game changer, improving both efficiency and practicality, making quantum systems easier to build and test. The Gross code reduces the need for large, cumbersome arrays of qubits, streamlining the path toward more powerful quantum computers.
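The scale of that reduction can be sketched with rough numbers. IBM has described the Gross code as a [[144,12,12]] code: 144 data qubits plus 144 check qubits (288 physical in total) encode 12 logical qubits, versus a conventional surface code, where a single logical qubit at comparable distance needs on the order of 2d² physical qubits. The figures below are approximations for illustration, not an IBM specification:

```python
import math

# Rough qubit-overhead comparison: surface code vs IBM's "Gross code".
# All numbers are illustrative approximations, not IBM specifications.

def surface_code_qubits(distance: int, logical: int) -> int:
    # A distance-d surface code patch uses roughly 2*d**2 physical qubits
    # (data plus measurement qubits) and encodes one logical qubit.
    return 2 * distance**2 * logical

def gross_code_qubits(logical: int) -> int:
    # The [[144,12,12]] Gross code encodes 12 logical qubits per block,
    # using 144 data qubits plus 144 check qubits (288 physical per block).
    blocks = math.ceil(logical / 12)
    return 288 * blocks

# Same code distance (12), same 12 logical qubits:
print(surface_code_qubits(12, 12))  # 3456 physical qubits
print(gross_code_qubits(12))        # 288 physical qubits
```

Under these assumptions, the Gross code cuts the physical-qubit overhead by roughly an order of magnitude, which is the "game changer" Dial refers to.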

Looking ahead, IBM’s roadmap outlines ambitious goals, including building a fully error-corrected system with 200 logical qubits by 2029. Dial stressed the importance of flexibility in the roadmap, acknowledging that the path to these goals could shift but would still lead to the achievement of quantum milestones.

The company’s commitment to these advancements reflects the dedication of the quantum team, many of whom have been working on the project for over a decade.

Despite the excitement and the challenges that remain, IBM’s vision for the future of quantum computing is clear: building the world’s first useful quantum computers.

The company’s ongoing work in quantum computing continues to capture imaginations, with significant steps being taken towards making these systems a reality in the near future.


ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics are questioning how AI systems have access to the formatting of official documents, with accusations that sensitive datasets may be feeding model development.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.


Gemini 2.5 Pro boosts Deep Research tool with smarter AI

Google has upgraded its Deep Research tool with the experimental Gemini 2.5 Pro model, promising major improvements in how users access and process complex information.

Deep Research acts as an AI research assistant capable of scanning hundreds of websites, evaluating content, and producing multi-page reports complete with citations and even podcast-style summaries.

Deep Research was previously powered by Gemini 2.0 Flash; the new model significantly enhances its reasoning, planning, and reporting capabilities. In Google's testing, human evaluators preferred Deep Research's outputs over those generated by OpenAI's equivalent by a ratio greater than 2 to 1.

Users also noted clearer analytical thinking and better synthesis of information across sources.

The Gemini 2.5 Pro upgrade is available now to Gemini Advanced subscribers across web, Android, and iOS platforms.

For those using the free version, the Gemini 2.0 Flash model remains accessible in over 150 countries, continuing Google’s push to offer powerful research tools to a wide user base.


DeepSeek highlights the risk of data misuse

The launch of DeepSeek, a Chinese-developed LLM, has reignited long-standing concerns about AI, national security, and industrial espionage.

While issues like data usage and bias remain central to AI discourse, DeepSeek’s origins in China have introduced deeper geopolitical anxieties. Echoing the scrutiny faced by TikTok, the model has raised fears of potential links to the Chinese state and its history of alleged cyber espionage.

With China and the US locked in a high-stakes AI race, every new model is now a strategic asset. DeepSeek’s emergence underscores the need for heightened vigilance around data protection, especially regarding sensitive business information and intellectual property.

Security experts warn that AI models may increasingly be trained using data acquired through dubious or illicit means, such as large-scale scraping or state-sponsored hacks.

The practice of data hoarding further complicates matters, as encrypted data today could be exploited in the future as decryption methods evolve.

Cybersecurity leaders are being urged to adapt to this evolving threat landscape. Beyond basic data visibility and access controls, there is growing emphasis on adopting privacy-enhancing technologies and encryption standards that can withstand future quantum threats.

Businesses must also recognise the strategic value of their data in an era where the lines between innovation, competition, and geopolitics have become dangerously blurred.


Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after an AI scare threatened his career, screenwriter Ed Bennett-Coles has teamed up with songwriter Jamie Hartman to develop ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.
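ARK's internal design is not public, but the core idea of registering each creative stage can be sketched as a hash-chained ledger: each record stores a fingerprint of the work at that stage and the hash of the previous record, so any later tampering or reordering is detectable. This is a hypothetical illustration, not ARK's actual implementation:

```python
import hashlib
import json
import time

def register_stage(ledger: list, creator: str, artifact: bytes, stage: str) -> dict:
    """Append a tamper-evident record of one creative stage to a ledger.

    Each entry chains the previous entry's hash, so editing or reordering
    any stage invalidates every later record, blockchain-style.
    """
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "creator": creator,
        "stage": stage,
        "artifact_hash": hashlib.sha256(artifact).hexdigest(),
        "prev_hash": prev_hash,
        "timestamp": time.time(),
    }
    # Hash the entry itself so the next record can chain to it.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
register_stage(ledger, "jamie", b"verse 1 draft", "initial concept")
register_stage(ledger, "jamie", b"full demo mix", "final product")
print(len(ledger))  # 2
```

Only the hashes need to be published or anchored on a blockchain; the works themselves stay with the creator.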

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI-generated content to online meat delivery: efficient but soulless. Human artistry, by contrast, resembles a grandfather's trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.


Amazon launches Nova Sonic AI for natural voice interactions

Amazon has unveiled Nova Sonic, a new AI model designed to process and generate human-like speech, positioning it as a rival to OpenAI and Google’s top voice assistants. The company claims it outperforms competitors in speed, accuracy, and cost, and it is reportedly 80% cheaper than GPT-4o.

Already powering Alexa+, Nova Sonic excels in real-time conversation, handling interruptions and noisy environments better than legacy AI assistants.

Unlike older voice models, Nova Sonic can dynamically route requests, fetching live data or triggering external actions when needed. Amazon says it achieves a 4.2% word error rate across multiple languages and responds in just 1.09 seconds, faster than OpenAI’s GPT-4o.

Developers can access it via Bedrock, Amazon’s AI platform, using a new streaming API.

The launch signals Amazon’s push into artificial general intelligence (AGI), AI that mimics human capabilities.

Rohit Prasad, head of Amazon’s AGI division, hinted at future models handling images, video, and sensory data. This follows last week’s preview of Nova Act, an AI for browser tasks, suggesting Amazon is accelerating its AI rollout beyond Alexa.


New AI firm Deep Cogito launches versatile open models

A new San Francisco-based startup, Deep Cogito, has unveiled its first family of AI models, Cogito 1, which can switch between fast-response and deep-reasoning modes instead of being limited to just one approach.

These hybrid models combine the efficiency of standard AI with the step-by-step problem-solving abilities seen in advanced systems like OpenAI’s o1. While reasoning models excel in fields like maths and physics, they often require more computing power, a trade-off Deep Cogito aims to balance.

The Cogito 1 series, built on Meta’s Llama and Alibaba’s Qwen models instead of starting from scratch, ranges from 3 billion to 70 billion parameters, with larger versions planned.

Early tests suggest the top-tier Cogito 70B outperforms rivals like DeepSeek’s reasoning model and Meta’s Llama 4 Scout in some tasks. The models are available for download or through cloud APIs, offering flexibility for developers.

Founded in June 2024 by ex-Google DeepMind product manager Dhruv Malhotra and former Google engineer Drishan Arora, Deep Cogito is backed by investors like South Park Commons.

The company’s ambitious goal is to develop ‘general superintelligence’, AI that surpasses human capabilities rather than merely matching them. For now, the team says it has only scratched the surface of its scaling potential.

For more information on these topics, visit diplomacy.edu.