Number of AI-driven sex crime victims in Korea continues to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing number of victims, including children under 10, were targeted, a trend attributed to the easy accessibility of AI tools.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

Although more than 300,000 pieces of illicit content were removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, increasing support for victims, and introducing new laws that allow personal information to be removed alongside explicit images, a measure aimed at preventing secondary harm.

For more information on these topics, visit diplomacy.edu.

IBM pushes towards quantum advantage in two years with breakthrough code

IBM’s Quantum CTO, Oliver Dial, predicts that quantum advantage, where quantum computers outperform classical ones on specific tasks, could be achieved within two years.

The milestone is seen as possible due to advances in error mitigation techniques, which enable quantum computers to provide reliable results despite their inherent noise. While full fault-tolerant quantum systems are still years away, IBM’s focus on error mitigation could bring real-world results soon.

A key part of IBM’s progress is the introduction of the ‘Gross code,’ a quantum error correction method that drastically reduces the number of physical qubits needed per logical qubit, making the engineering of quantum systems much more feasible.

Dial described the development as a game changer that improves both efficiency and practicality, making quantum systems easier to build and test. The Gross code reduces the need for large, cumbersome arrays of qubits, streamlining the path toward more powerful quantum computers.
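To give a rough sense of why the overhead reduction matters, the figures below compare qubit counts. The numbers are external to this article and should be verified: IBM has described the Gross code as a [[144,12,12]] bivariate bicycle code (144 data plus roughly 144 check qubits encoding 12 logical qubits), while a rotated surface code of comparable distance d needs about 2d² − 1 physical qubits for a single logical qubit. A back-of-the-envelope comparison:

```python
# Rough overhead comparison. Figures are assumptions drawn from
# published descriptions, not from this article: the Gross code is a
# [[144,12,12]] code (144 data + ~144 check qubits, 12 logical qubits);
# a rotated surface code at distance d uses ~2*d**2 - 1 physical qubits
# per logical qubit.

gross_physical = 144 + 144       # data + check qubits in one code block
gross_logical = 12

d = 13                           # comparable code distance (odd, >= 12)
surface_physical = 2 * d**2 - 1  # physical qubits for ONE logical qubit
surface_logical = 1

gross_overhead = gross_physical / gross_logical        # physical per logical
surface_overhead = surface_physical / surface_logical  # physical per logical

print(f"gross code:   {gross_overhead:.0f} physical qubits per logical qubit")
print(f"surface code: {surface_overhead:.0f} physical qubits per logical qubit")
print(f"reduction:    ~{surface_overhead / gross_overhead:.0f}x")
```

Under these assumptions the Gross code cuts the per-logical-qubit cost by roughly an order of magnitude, which is the sense in which it makes the engineering "much more feasible".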

Looking ahead, IBM’s roadmap outlines ambitious goals, including building a fully error-corrected system with 200 logical qubits by 2029. Dial stressed the importance of flexibility in the roadmap, acknowledging that the path to these goals could shift but would still lead to the achievement of quantum milestones.

The company’s commitment to these advancements reflects the dedication of the quantum team, many of whom have been working on the project for over a decade.

Despite the excitement and the challenges that remain, IBM’s vision for the future of quantum computing is clear: building the world’s first useful quantum computers.

The company’s ongoing work in quantum computing continues to capture imaginations, with significant steps being taken towards making these systems a reality in the near future.

ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics are questioning how AI systems have access to the formatting of official documents, with accusations that sensitive datasets may be feeding model development.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.

Gemini 2.5 Pro boosts Deep Research tool with smarter AI

Google has upgraded its Deep Research tool with the experimental Gemini 2.5 Pro model, promising major improvements in how users access and process complex information.

Deep Research acts as an AI research assistant capable of scanning hundreds of websites, evaluating content, and producing multi-page reports complete with citations and even podcast-style summaries.

The tool was previously powered by Gemini 2.0 Flash; the new iteration significantly enhances reasoning, planning, and reporting capabilities. In Google’s testing, human evaluators preferred Deep Research’s outputs over those generated by OpenAI’s equivalent by a ratio greater than 2 to 1.

Users also noted clearer analytical thinking and better synthesis of information across sources.

The Gemini 2.5 Pro upgrade is available now to Gemini Advanced subscribers across web, Android, and iOS platforms.

For those using the free version, the Gemini 2.0 Flash model remains accessible in over 150 countries, continuing Google’s push to offer powerful research tools to a wide user base.

DeepSeek highlights the risk of data misuse

The launch of DeepSeek, a Chinese-developed LLM, has reignited long-standing concerns about AI, national security, and industrial espionage.

While issues like data usage and bias remain central to AI discourse, DeepSeek’s origins in China have introduced deeper geopolitical anxieties. Echoing the scrutiny faced by TikTok, the model has raised fears of potential links to the Chinese state and its history of alleged cyber espionage.

With China and the US locked in a high-stakes AI race, every new model is now a strategic asset. DeepSeek’s emergence underscores the need for heightened vigilance around data protection, especially regarding sensitive business information and intellectual property.

Security experts warn that AI models may increasingly be trained using data acquired through dubious or illicit means, such as large-scale scraping or state-sponsored hacks.

The practice of data hoarding further complicates matters, as encrypted data today could be exploited in the future as decryption methods evolve.

Cybersecurity leaders are being urged to adapt to this evolving threat landscape. Beyond basic data visibility and access controls, there is growing emphasis on adopting privacy-enhancing technologies and encryption standards that can withstand future quantum threats.

Businesses must also recognise the strategic value of their data in an era where the lines between innovation, competition, and geopolitics have become dangerously blurred.

Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after an AI-related career scare, screenwriter Ed Bennett-Coles has teamed up with songwriter Jamie Hartman to develop ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching in summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI-generated content to online meat delivery: efficient but soulless. Human artistry, he says, is more like a grandfather’s trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.

Amazon launches Nova Sonic AI for natural voice interactions

Amazon has unveiled Nova Sonic, a new AI model designed to process and generate human-like speech, positioning it as a rival to OpenAI and Google’s top voice assistants. The company claims it outperforms competitors in speed, accuracy, and cost, and it is reportedly 80% cheaper than GPT-4o.

Already powering Alexa+, Nova Sonic excels in real-time conversation, handling interruptions and noisy environments better than legacy AI assistants.

Unlike older voice models, Nova Sonic can dynamically route requests, fetching live data or triggering external actions when needed. Amazon says it achieves a 4.2% word error rate across multiple languages and responds in just 1.09 seconds, faster than OpenAI’s GPT-4o.
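For context on the 4.2% figure: word error rate (WER) is the standard speech-recognition metric, the word-level edit distance between a transcript and its reference, divided by the reference length. A minimal sketch of the metric itself (illustrative only, not Amazon's implementation):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[-1][-1] / len(ref)

# One wrong word in a 10-word reference -> 10% WER
ref = "the quick brown fox jumps over the lazy sleeping dog"
hyp = "the quick brown fox jumps over the lazy sleepy dog"
print(word_error_rate(ref, hyp))  # 0.1
```

A 4.2% WER means roughly one word in 24 is substituted, inserted, or dropped relative to the reference transcript.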

Developers can access it via Bedrock, Amazon’s AI platform, using a new streaming API.

The launch signals Amazon’s push into artificial general intelligence (AGI), AI that mimics human capabilities.

Rohit Prasad, head of Amazon’s AGI division, hinted at future models handling images, video, and sensory data. This follows last week’s preview of Nova Act, an AI for browser tasks, suggesting Amazon is accelerating its AI rollout beyond Alexa.

New AI firm Deep Cogito launches versatile open models

A new San Francisco-based startup, Deep Cogito, has unveiled its first family of AI models, Cogito 1, which can switch between fast-response and deep-reasoning modes instead of being limited to just one approach.

These hybrid models combine the efficiency of standard AI with the step-by-step problem-solving abilities seen in advanced systems like OpenAI’s o1. While reasoning models excel in fields like maths and physics, they often require more computing power, a trade-off Deep Cogito aims to balance.

The Cogito 1 series, built on Meta’s Llama and Alibaba’s Qwen models instead of starting from scratch, ranges from 3 billion to 70 billion parameters, with larger versions planned.

Early tests suggest the top-tier Cogito 70B outperforms rivals like DeepSeek’s reasoning model and Meta’s Llama 4 Scout in some tasks. The models are available for download or through cloud APIs, offering flexibility for developers.

Founded in June 2024 by ex-Google DeepMind product manager Dhruv Malhotra and former Google engineer Drishan Arora, Deep Cogito is backed by investors like South Park Commons.

The company’s ambitious goal is to develop ‘general superintelligence’, AI that surpasses human capabilities rather than merely matching them. For now, the team says they’ve only scratched the surface of their scaling potential.

Amazon’s Nova Reel can now generate two-minute AI videos

Amazon has enhanced its generative AI video tool, Nova Reel, with an update that allows for the creation of videos up to two minutes long.

The updated model, Nova Reel 1.1, supports multi-shot video generation with a consistent style and accepts detailed prompts of up to 4,000 characters.

A new feature called Multishot Manual gives users more creative control, combining images and short prompts to guide video composition. In this mode, up to 20 shots can be generated from a single 1280 x 720 image and a 512-character prompt, offering finer-tuned outputs.
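The reported limits (4,000-character prompts, 512-character shot prompts, at most 20 shots, a 1280 x 720 reference image) are easy to check client-side before submitting a generation job. A hypothetical pre-flight validator, where the constants mirror the article's figures but the helper itself is illustrative and not part of any AWS SDK:

```python
# Illustrative pre-flight checks for a Nova Reel 1.1 multishot request.
# The limits mirror those reported for the model; the function is a
# hypothetical helper, not an AWS API.
MAX_SHOT_PROMPT_CHARS = 512    # per-shot prompt limit (Multishot Manual)
MAX_SHOTS = 20
IMAGE_SIZE = (1280, 720)       # required reference-image dimensions

def validate_multishot_request(shots: list[str],
                               image_size: tuple[int, int]) -> list[str]:
    """Return a list of constraint violations (empty list means OK)."""
    errors = []
    if len(shots) > MAX_SHOTS:
        errors.append(f"too many shots: {len(shots)} > {MAX_SHOTS}")
    for i, prompt in enumerate(shots):
        if len(prompt) > MAX_SHOT_PROMPT_CHARS:
            errors.append(f"shot {i} prompt too long: {len(prompt)} chars")
    if image_size != IMAGE_SIZE:
        errors.append(f"reference image must be {IMAGE_SIZE}, got {image_size}")
    return errors

print(validate_multishot_request(["pan across a city skyline"] * 20,
                                 (1280, 720)))  # []
```

Validating locally like this avoids burning a (billable) model invocation on a request the service would reject anyway.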

Nova Reel is currently accessible through Amazon Web Services (AWS), including its Bedrock AI development suite, although developers must request access, which is automatically granted.

The model enters a competitive field dominated by OpenAI, Google, and others racing to lead in generative video AI.

Despite its growing capabilities, Amazon has not disclosed how the model was trained or the sources of its training data. Questions around intellectual property remain, but Amazon says it will shield customers from copyright claims through its indemnification policy.

DeepMind blocks staff from joining AI rivals

Google DeepMind is enforcing strict non-compete agreements in the United Kingdom, preventing employees from joining rival AI companies for up to a year. The length of the restriction depends on an employee’s seniority and involvement in key projects.

Some DeepMind staff, including those working on Google’s Gemini AI, are reportedly being paid not to work while their non-competes run. The policy comes as competition for AI talent intensifies worldwide.

Employees have voiced concern that these agreements could stall their careers in a rapidly evolving industry. Some are seeking ways around the restrictions, such as moving to countries with less rigid employment laws.

While DeepMind claims the contracts are standard for sensitive work, critics say they may stifle innovation and mobility. The practice remains legal in the UK, even though similar agreements have been banned in the US.
