White House condemns EU fines on Apple and Meta

The White House has strongly criticised the EU after landmark fines were imposed on Apple and Meta Platforms, describing the penalties as a ‘novel form of economic extortion’ that the US would not tolerate.

The European Commission fined Apple €500 million and Meta €200 million under the Digital Markets Act (DMA), a new law designed to rein in the power of dominant tech giants.

Rather than viewing the DMA as a fair attempt to promote market competition, US officials called it ‘discriminatory’ and claimed it unfairly targets American firms, undermines innovation, and restricts civil liberties.

The White House warned that such extraterritorial measures would be treated as trade barriers and hinted at retaliation.

At the same time, tensions were mounting on another front, with US Treasury Secretary Scott Bessent acknowledging that tariffs between the US and China were unsustainable.

He said both sides must lower their tariffs, currently as high as 145 per cent, instead of expecting unilateral moves, suggesting a potential thaw in the ongoing trade war.

President Trump, while indicating openness to cutting Chinese import duties, also threatened to raise the existing 25 per cent tariff on Canadian car imports. He said the US should focus on building its own vehicles instead of relying on foreign manufacturers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft expands rewards for reporting AI vulnerabilities

Microsoft has announced an expanded bug bounty initiative, offering up to $30,000 for researchers who uncover critical vulnerabilities in AI features within Dynamics 365 and the Power Platform.

The programme aims to strengthen security in enterprise software by encouraging ethical hackers to identify and report risks before cybercriminals can exploit them.

Rather than relying on general severity scales, Microsoft has introduced an AI-specific vulnerability classification system. It highlights prompt injection attacks, data poisoning during training, and techniques like model stealing and training data reconstruction that could expose sensitive information.

Highest payouts are reserved for flaws that allow attackers to access other users’ data or perform privileged actions without their consent.

The company urges researchers to use free trials of its services, such as PowerApps and AI Builder, to identify weaknesses. Detailed product documentation is provided to help participants understand the systems they are testing.

Even reports that don’t qualify for a financial reward can still lead to recognition if they result in improved defences.

The AI bounty initiative is part of Microsoft’s wider commitment to collaborative cybersecurity. With AI becoming more deeply integrated into enterprise software, the company says it is more important than ever to identify vulnerabilities early instead of waiting for security breaches to occur.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ubisoft under fire for forcing online connection in offline games

French video game publisher Ubisoft is facing a formal privacy complaint from European advocacy group noyb for requiring players to stay online even when enjoying single-player games.

The complaint, lodged with Austria’s data protection authority, accuses Ubisoft of violating EU privacy laws by collecting personal data without consent.

Noyb argues that Ubisoft makes players connect to the internet and log into a Ubisoft account unnecessarily, even when they are not interacting with other users.

Instead of limiting data collection to essential functions, noyb claims the company contacts external servers, including Google and Amazon, over 150 times during gameplay. This, they say, reveals a broader surveillance practice hidden beneath the surface.

Ubisoft, known for blockbuster titles like Assassin’s Creed and Far Cry, has not yet explained why such data collection is needed for offline play.

The complainant who examined the traffic found that Ubisoft gathers login and browsing data and uses third-party tools, practices that, under GDPR rules, require explicit user permission. Instead of offering transparency, Ubisoft reportedly failed to justify these invasive practices.

Noyb is calling on regulators to demand deletion of all data collected without a clear legal basis and to fine Ubisoft €92 million. They argue that consumers, who already pay steep prices for video games, should not have to sacrifice their privacy in the process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware decline masks growing threat

A recent drop in reported ransomware attacks might seem encouraging, yet experts warn this is likely misleading. Figures from the NCC Group show a 32% decline in March 2025 compared to the previous month, totalling 600 incidents.

However, this dip is attributed to unusually large-scale attacks in earlier months, rather than an actual reduction in cybercrime. In fact, incidents were up 46% compared with March last year, highlighting the continued escalation in threat activity.

Rather than fading, ransomware groups are becoming more sophisticated. Babuk 2.0 emerged as the most active group in March, though doubts surround its legitimacy. Security researchers believe it may be recycling leaked data from previous breaches, aiming to trick victims instead of launching new attacks.

A tactic like this mirrors behaviours seen after law enforcement disrupted other major ransomware networks, such as LockBit in 2024.

Industrials were the hardest hit, followed by consumer-focused sectors, while North America bore the brunt of geographic targeting.

With nearly half of all recorded attacks occurring in the region, analysts expect North America, especially Canada, to remain a prime target amid rising political tensions and cyber vulnerability.

Meanwhile, cybercriminals are turning to malvertising, malicious code hidden in online advertisements, as a stealthier route of attack. This tactic has gained traction through the misuse of trusted platforms like GitHub and Dropbox, and is increasingly being enhanced with generative AI tools.

Instead of relying solely on technical expertise, attackers now use AI to craft more convincing and complex threats. As these strategies grow more advanced, experts urge organisations to stay alert and prioritise threat intelligence and collaboration to navigate this volatile cyber landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SK Telecom investigates data breach after cyberattack

South Korean telecom leader SK Telecom has confirmed a cyberattack that compromised customer data following a malware infection.

The breach was detected on 19 April, prompting an immediate internal investigation and response. Authorities, including the Korea Internet Security Agency, have been alerted.

Personal information of South Korean customers was accessed during the attack, although the extent of the breach remains under review. In response, SK Telecom is offering a complimentary SIM protection service, hinting at potential SIM swapping risks linked to the leaked data.

The infected systems were quickly isolated and the malware removed. While no group has claimed responsibility, concerns remain over possible state-sponsored involvement, as telecom providers are frequent targets for cyberespionage.

It is currently unknown whether ransomware played a role in the incident. Investigations are ongoing as officials continue to assess the scope and origin of the breach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Baidu rolls out new AI agent Xinxiang for Android

Chinese tech giant Baidu has launched a new AI agent, Xinxiang, aimed at enhancing user productivity by assisting with tasks such as information analysis and travel planning.

The tool is currently available on Android devices, with an iOS version still under review by Apple.

According to Baidu, Xinxiang represents a shift from traditional chatbot interactions towards a more task-focused AI experience, providing streamlined assistance tailored to practical needs.

The move reflects growing competition in China’s rapidly evolving AI market.

However, the launch highlights Baidu’s ambition to stay ahead in AI innovation and offer tools that integrate seamlessly into everyday digital life.

As regulatory reviews continue, the success of Xinxiang may depend on user adoption and the speed at which it becomes available across platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Former OpenAI staff challenge company’s shift to for-profit model

​A group of former OpenAI employees, supported by Nobel laureates and AI experts, has urged the attorneys general of California and Delaware to block the company’s proposed transition from a nonprofit to a for-profit structure.

They argue that such a shift could compromise OpenAI’s founding mission to develop artificial general intelligence (AGI) that benefits all of humanity, potentially prioritising profit over public safety and accountability, not just in the US, but globally.

The coalition, including notable figures like economists Oliver Hart and Joseph Stiglitz, and AI pioneers Geoffrey Hinton and Stuart Russell, expressed concerns that the restructuring would reduce nonprofit oversight and increase investor influence.

They fear this change could lead to diminished ethical safeguards, especially as OpenAI advances toward creating AGI. OpenAI responded by stating that any structural changes would aim to ensure broader public benefit from AI advancements.

The company plans to adopt a public benefit corporation model while maintaining a nonprofit arm to uphold its mission. The final decision rests with the state authorities, who are reviewing the proposed restructuring.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

As part of the deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with over 20 news publishers and 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to avoid criticism related to copyright infringement, having previously faced legal challenges, particularly from the New York Times, over claims that chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI films are now eligible for the Oscar awards

The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.

These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.

Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced using AI to refine his Hungarian accent, while Emilia Perez, a musical that claimed an award, used voice-cloning technology to support its cast.

Such tools can convincingly replicate voices and visual styles, making them attractive to filmmakers instead of relying solely on traditional methods, but not without raising industry-wide concerns.

The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.

Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.

Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes, but can’t deliver the depth required for an Oscar-worthy film.

Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russian hackers target NGOs with fake video calls

Hackers linked to Russia are refining their techniques to infiltrate Microsoft 365 accounts, according to cybersecurity firm Volexity.

Their latest strategy targets non-governmental organisations (NGOs) associated with Ukraine by exploiting OAuth, a protocol used for app authorisation without passwords.

Victims are lured into fake video calls through apps like Signal or WhatsApp and tricked into handing over OAuth codes, which attackers then use to access Microsoft 365 environments.

The campaign, first detected in March, involved messages claiming to come from European security officials proposing meetings with political representatives. Instead of legitimate video links, these messages directed recipients to OAuth code generators.

Once a code was shared, attackers could gain entry into accounts containing sensitive data. Staff at human rights organisations were especially targeted due to their work on Ukraine-related issues.

Volexity attributed the scheme to two threat actors, UTA0352 and UTA0355, though it did not directly connect them to any known Russian advanced persistent threat groups.

A previous attack from the same actors used Microsoft Device Code Authentication, usually reserved for connecting smart devices, instead of traditional login methods. Both campaigns show a growing sophistication in social engineering tactics.

Given the widespread use of Microsoft 365 tools like Outlook and Teams, experts urge organisations to heighten awareness among staff.

Rather than trusting unsolicited messages on encrypted apps, users should remain cautious when prompted to click links or enter authentication codes, as these could be cleverly disguised attempts to breach secure systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!