Microsoft expands rewards for reporting AI vulnerabilities

Microsoft has announced an expanded bug bounty initiative, offering up to $30,000 for researchers who uncover critical vulnerabilities in AI features within Dynamics 365 and the Power Platform.

The programme aims to strengthen security in enterprise software by encouraging ethical hackers to identify and report risks before cybercriminals can exploit them.

Rather than relying on general severity scales, Microsoft has introduced an AI-specific vulnerability classification system. It highlights prompt injection attacks, data poisoning during training, and techniques like model stealing and training data reconstruction that could expose sensitive information.

The highest payouts are reserved for flaws that allow attackers to access other users’ data or perform privileged actions without their consent.

The company urges researchers to use free trials of its services, such as Power Apps and AI Builder, to identify weaknesses. Detailed product documentation is provided to help participants understand the systems they are testing.

Even reports that don’t qualify for a financial reward can still lead to recognition if they result in improved defences.

The AI bounty initiative is part of Microsoft’s wider commitment to collaborative cybersecurity. With AI becoming more deeply integrated into enterprise software, the company says it is more important than ever to identify vulnerabilities early instead of waiting for security breaches to occur.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ubisoft under fire for forcing online connection in offline games

French video game publisher Ubisoft is facing a formal privacy complaint from European advocacy group noyb for requiring players to stay online even when enjoying single-player games.

The complaint, lodged with Austria’s data protection authority, accuses Ubisoft of violating EU privacy laws by collecting personal data without consent.

Noyb argues that Ubisoft makes players connect to the internet and log into a Ubisoft account unnecessarily, even when they are not interacting with other users.

Instead of limiting data collection to essential functions, noyb claims the company contacts external servers, including Google and Amazon, over 150 times during gameplay. This, they say, reveals a broader surveillance practice hidden beneath the surface.

Ubisoft, known for blockbuster titles like Assassin’s Creed and Far Cry, has not yet explained why such data collection is needed for offline play.

The complainant who examined the traffic found that Ubisoft gathers login and browsing data and uses third-party tools, practices that, under GDPR rules, require explicit user permission. Instead of offering transparency, Ubisoft reportedly failed to justify these invasive practices.

Noyb is calling on regulators to demand deletion of all data collected without a clear legal basis and to fine Ubisoft €92 million. They argue that consumers, who already pay steep prices for video games, should not have to sacrifice their privacy in the process.

Ransomware decline masks growing threat

A recent drop in reported ransomware attacks might seem encouraging, yet experts warn this is likely misleading. Figures from the NCC Group show a 32% decline in March 2025 compared to the previous month, totalling 600 incidents.

However, this dip is attributed to unusually large-scale attacks in earlier months, rather than an actual reduction in cybercrime. In fact, incidents were up 46% compared with March last year, highlighting the continued escalation in threat activity.

Rather than fading, ransomware groups are becoming more sophisticated. Babuk 2.0 emerged as the most active group in March, though doubts surround its legitimacy. Security researchers believe it may be recycling leaked data from previous breaches, aiming to trick victims instead of launching new attacks.

This tactic mirrors behaviour seen after law enforcement disrupted other major ransomware networks, such as LockBit in 2024.

Industrials were the hardest hit, followed by consumer-focused sectors, while North America bore the brunt of geographic targeting.

With nearly half of all recorded attacks occurring in the region, analysts expect North America, especially Canada, to remain a prime target amid rising political tensions and cyber vulnerability.

Meanwhile, cybercriminals are turning to malvertising (malicious code hidden in online advertisements) as a stealthier route of attack. This tactic has gained traction through the misuse of trusted platforms like GitHub and Dropbox, and is increasingly being enhanced with generative AI tools.

Instead of relying solely on technical expertise, attackers now use AI to craft more convincing and complex threats. As these strategies grow more advanced, experts urge organisations to stay alert and prioritise threat intelligence and collaboration to navigate this volatile cyber landscape.

BMW partners with DeepSeek for in-car AI features

BMW has announced plans to integrate AI developed by China’s DeepSeek into its vehicles sold in the Chinese market.

The announcement was made by CEO Oliver Zipse during the Shanghai Auto Show, aligning BMW with local brands such as Geely and Zeekr that have adopted similar AI technologies.

The DeepSeek-R1 model has been increasingly used across the Chinese automotive sector to power intelligent cockpit systems, voice controls, and driving assistance.

Geely showcased its ‘Full-Domain AI for Smart Vehicles’, which includes AI-powered chassis control and driver interaction capabilities.

DeepSeek’s influence extends beyond automotive applications, with its technology now used in Chinese courtrooms, healthcare, and customer service.

A successor model, DeepSeek-R2, is expected soon and promises multilingual reasoning and enhanced coding capabilities, rivalling Western counterparts.

SK Telecom investigates data breach after cyberattack

South Korean telecom leader SK Telecom has confirmed a cyberattack that compromised customer data following a malware infection.

The breach was detected on 19 April, prompting an immediate internal investigation and response. Authorities, including the Korea Internet Security Agency, have been alerted.

Personal information of South Korean customers was accessed during the attack, although the extent of the breach remains under review. In response, SK Telecom is offering a complimentary SIM protection service, hinting at potential SIM swapping risks linked to the leaked data.

The infected systems were quickly isolated and the malware removed. While no group has claimed responsibility, concerns remain over possible state-sponsored involvement, as telecom providers are frequent targets for cyberespionage.

It is currently unknown whether ransomware played a role in the incident. Investigations are ongoing as officials continue to assess the scope and origin of the breach.

Baidu rolls out new AI agent Xinxiang for Android

Chinese tech giant Baidu has launched a new AI agent, Xinxiang, aimed at enhancing user productivity by assisting with tasks such as information analysis and travel planning.

The tool is currently available on Android devices, with an iOS version still under review by Apple.

According to Baidu, Xinxiang represents a shift from traditional chatbot interactions towards a more task-focused AI experience, providing streamlined assistance tailored to practical needs.

The move reflects growing competition in China’s rapidly evolving AI market.

The launch also highlights Baidu’s ambition to stay ahead in AI innovation and to offer tools that integrate seamlessly into everyday digital life.

As regulatory reviews continue, the success of Xinxiang may depend on user adoption and the speed at which it becomes available across platforms.

JusticeLink breach leads to arrest in Sydney

A man has been charged following a serious cyberattack on JusticeLink, New South Wales’ largest online court-filing system.

Authorities say more than 9,000 files were illegally downloaded over a two-month period, although no personal data appears to have been compromised. The breach was first detected in March, prompting an immediate shutdown of the suspect’s account.

JusticeLink handles sensitive legal documents for over 400,000 cases annually. The 38-year-old suspect, arrested in Maroubra, Sydney, now faces charges of unauthorised access and misuse of a carriage service to cause harm. Two laptops were seized during the arrest.

Officials have reassured the public that the system is now secure, with no indication that personal information was leaked or found online.

Acting Attorney-General Ron Hoenig confirmed that people under court protection orders were not exposed to heightened risk. The man is expected to appear in Waverley Court on Thursday.

Former OpenAI staff challenge company’s shift to for-profit model

A group of former OpenAI employees, supported by Nobel laureates and AI experts, has urged the attorneys general of California and Delaware to block the company’s proposed transition from a nonprofit to a for-profit structure.

They argue that such a shift could compromise OpenAI’s founding mission to develop artificial general intelligence (AGI) that benefits all of humanity, potentially prioritising profit over public safety and accountability, not just in the US, but globally.

The coalition, including notable figures like economists Oliver Hart and Joseph Stiglitz, and AI pioneers Geoffrey Hinton and Stuart Russell, expressed concerns that the restructuring would reduce nonprofit oversight and increase investor influence.

They fear this change could lead to diminished ethical safeguards, especially as OpenAI advances toward creating AGI. OpenAI responded by stating that any structural changes would aim to ensure broader public benefit from AI advancements.

The company plans to adopt a public benefit corporation model while maintaining a nonprofit arm to uphold its mission. The final decision rests with the state authorities, who are reviewing the proposed restructuring.

OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including The Washington Post, Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

As part of the deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with over 20 news publishers and 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to avoid criticism related to copyright infringement, having previously faced legal challenges, particularly from the New York Times, over claims that chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

Films made with AI are now eligible for Oscars

The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.

These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.

Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced using AI to refine his Hungarian accent, while the award-winning musical Emilia Perez used voice-cloning technology to support its cast.

Such tools can convincingly replicate voices and visual styles, making them an attractive alternative to traditional methods for filmmakers, though their use has raised industry-wide concerns.

The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.

Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.

Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes, but can’t deliver the depth required for an Oscar-worthy film.

Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.
