South Korean e-commerce firm Coupang has apologised for a major data breach affecting more than 33 million users and announced a compensation package worth 1.69 trillion won. Founder Kim Bom acknowledged the disruption the breach had caused, following public and political backlash over the incident.
Under the plan, affected customers will receive vouchers worth 50,000 won, usable only on Coupang’s own platforms. The company said the measure was intended to compensate users, but the approach has drawn criticism from lawmakers and consumer groups.
Choi Min-hee, a lawmaker from the ruling Democratic Party, criticised the decision in a social media post, arguing that the vouchers were tied to services with limited use. She accused Coupang of attempting to turn the crisis into a business opportunity.
Consumer advocacy groups echoed these concerns, saying the compensation plan trivialised the seriousness of the breach. They argued that limiting compensation to vouchers resembled a marketing strategy rather than meaningful restitution for affected users.
The controversy comes as the National Assembly of South Korea prepares to hold hearings on Coupang. While the company has admitted negligence, it has declined to appear before lawmakers amid scrutiny of its handling of the breach.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Germany’s Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.
Australia’s new rules require companies to remove the profiles of under-16 users and stop new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming and mental health harm rather than relying solely on parental supervision.
European Commission President Ursula von der Leyen said she was inspired by the move, although social media companies and civil liberties groups have criticised it.
Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers warn AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.
Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.
Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.
The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.
Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.
While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Authorities in New York State have approved a new law requiring social media platforms to display warning labels when users engage with features that encourage prolonged use.
Labels will appear when people interact with elements such as infinite scrolling, auto-play, like counters or algorithm-driven feeds. The rule applies whenever these services are accessed from within New York.
Governor Kathy Hochul said the move is intended to safeguard young people against potential mental health harms linked to excessive social media use. Warnings will show the first time a user activates one of the targeted features and will then reappear at intervals.
Concerns about the impact on children and teenagers have prompted wider government action. California is considering similar steps, while Australia has already banned social media for under-16s and Denmark plans to follow. The US surgeon general has also called for clearer health warnings.
Researchers continue to examine how social media use relates to anxiety and depression among young users. Platforms now face growing pressure to balance engagement features with stronger protections instead of relying purely on self-regulation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China’s Tsinghua University has emerged as a central hub in the country’s push to become a global leader in AI. The campus hosts a high level of research activity, with students and faculty working across disciplines related to AI development.
Momentum has been boosted by the success of DeepSeek, an AI startup founded by Tsinghua University alumni. The company reinforced confidence that Chinese teams can compete with leading international laboratories.
The university’s rise is closely aligned with Beijing’s national technology strategy. Government backing has included subsidies, tax incentives, and policy support, as well as public endorsements of AI entrepreneurs affiliated with Tsinghua.
Patent and publication data highlight the scale of output. Tsinghua has filed thousands of AI-related patents and ranks among the world’s most cited institutions in AI research, reflecting China’s rapidly expanding share of global AI innovation.
Despite this growth, the United States continues to lead in influential patents and top-performing models. Analysts expect the gap to narrow, however, as China produces a growing share of elite AI researchers and expands AI education from schools to advanced research.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
India faces a lower risk of AI-driven disruption to white-collar jobs than Western economies, IT Secretary S Krishnan said, arguing that a smaller share of cognitive roles and strong STEM employment will limit the near-term impact.
Rather than replacing workers, artificial intelligence is expected to create jobs through sector-specific applications. Development and deployment of these systems will require many trained professionals.
Human oversight will remain essential as issues such as AI hallucinations limit full automation of cognitive tasks. Productivity gains are expected to support, rather than eliminate, knowledge-based work.
India is positioning itself as a global contributor to applied artificial intelligence solutions. Indigenous AI models under development are expected to support jobs, innovation and long-term economic growth.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Groq has signed a non-exclusive licensing agreement with Nvidia to share its inference technology, aiming to make high-performance, cost-efficient AI processing more widely accessible.
Groq founder Jonathan Ross, President Sunny Madra and other team members will join Nvidia to help develop and scale the licensed technology. Despite the collaboration, Groq will remain an independent company, with Simon Edwards taking over as Chief Executive Officer.
Operations of GroqCloud will continue without interruption, ensuring ongoing services for existing customers. The agreement highlights a growing trend of partnerships in the AI sector, combining innovation with broader access to advanced processing capabilities.
The partnership could speed up AI inference adoption, offering companies more scalable and cost-effective options for deploying AI workloads. Analysts suggest such collaborations are likely to drive competition and innovation in the rapidly evolving AI hardware and software market.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US State Department has imposed a visa ban on former EU Commissioner Thierry Breton and four other individuals, a decision driven by US opposition to European regulation of social media platforms. The move reflects growing tensions between Washington and Brussels over digital governance and free expression.
US officials said the visa ban targets figures linked to organisations involved in content moderation and disinformation research. Those named include representatives from HateAid, the Center for Countering Digital Hate, and the Global Disinformation Index, alongside Breton.
Secretary of State Marco Rubio accused the individuals of pressuring US-based platforms to restrict certain viewpoints. A senior State Department official described Breton as a central figure behind the EU’s Digital Services Act, a law that sets obligations for large online platforms operating in Europe.
Breton rejected the US visa ban, calling it a witch hunt and denying allegations of censorship. European organisations affected by the decision criticised the move as unlawful and authoritarian, while the European Commission said it had sought clarification from US authorities.
France and the European Commission condemned the visa ban and warned of a possible response. EU officials said European digital rules are applied uniformly and are intended to support a safe, competitive online environment.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.
The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.
Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.
Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.
Experts also note that South Korea lacks strong safeguards against malicious litigation, unlike the US, where plaintiffs must prove fault on the part of journalists.
The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes rely on relaying statements without sufficient verification, suggesting that structural reform may be needed rather than rapid, punitive legislation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Amid growing attention on AI, Google DeepMind chief Demis Hassabis has argued that future systems could learn anything humans can.
He suggested that as technology advances, AI may no longer remain confined to single tasks. Instead of specialising narrowly, it could solve different kinds of problems and continue improving over time.
Supporters say rapid progress already shows how powerful the technology has become.
Other experts disagree and warn that human intelligence remains deeply complex. People rely on emotions, personal experience and social understanding when they think, while machines depend on data and rules.
Critics argue that comparing AI with the human mind oversimplifies how intelligence really works, and that even people vary widely in ability.
Elon Musk has supported the idea that AI could eventually learn as much as humans, while repeating his long-standing view that powerful systems must be handled carefully. His backing has intensified the debate, given his influence in the technology world.
The discussion matters because highly capable AI could reshape work, education and creativity, raising questions over safety and control.
For now, AI performs specific tasks extremely well yet cannot think or feel like humans, and no one can say for certain whether true human-level intelligence will ever emerge.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!