How AI could quietly sabotage critical software

When Google’s Jules AI agent added a new feature to a live codebase in under ten minutes, it initially seemed like a breakthrough. But the same capabilities that allow AI tools to scan, modify, and deploy code rapidly also introduce new, troubling possibilities—particularly in the hands of malicious actors.

Experts are now voicing concern over the risks posed by hostile agents deploying AI tools with coding capabilities. If weaponised by rogue states or cybercriminals, the tools could be used to quietly embed harmful code into public or private repositories, potentially affecting millions of lines of critical software.

Even a single unnoticed line among hundreds of thousands could introduce backdoors, logic bombs, or data leaks. The danger lies in how easily AI-generated changes can slip past human vigilance.

From modifying update mechanisms to exfiltrating sensitive data or weakening cryptographic routines, the threat is both technical and psychological.
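To make the "single unnoticed line" concern concrete, here is a minimal, illustrative Python sketch (the function names and scenario are hypothetical, not drawn from any real incident). It shows how one quietly swapped line can weaken a cryptographic routine: replacing the operating system's secure random source with a seeded generator makes every "random" session token reproducible by an attacker who knows the seed, while the output still looks random to a reviewer.

```python
import secrets
import random

def make_session_token_safe() -> str:
    # Cryptographically secure: 32 random bytes from the OS CSPRNG.
    return secrets.token_hex(32)

def make_session_token_sabotaged() -> str:
    # One quietly swapped line: a fixed-seed PRNG instead of the CSPRNG.
    # The tokens look identical in format, but anyone who knows the seed
    # can regenerate them exactly.
    rng = random.Random(1337)
    return "".join(f"{rng.randrange(256):02x}" for _ in range(32))

# The sabotaged function deterministically produces the same "random" token.
assert make_session_token_sabotaged() == make_session_token_sabotaged()
```

In a diff touching thousands of lines, the swapped line differs from the safe version by only a few characters, which is precisely the kind of change that escapes human review.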

Developers must catch every mistake; an AI only needs to succeed once. As such tools become more advanced and publicly available, the conversation around safeguards, oversight, and secure-by-design principles is becoming urgent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Croatia urged to embed human rights into AI law

Politiscope recently held an event at the Croatian Journalists’ Association to highlight the human rights risks of AI.

As Croatia begins drafting a national law to implement the EU AI Act, the event aimed to push for stronger protections and transparency instead of relying on vague promises of innovation.

Croatia’s working group is still deciding key elements of the law, such as who will enforce it, making this an important moment for public input.

Experts warned that AI systems could increase surveillance, discrimination, and exclusion. Speakers presented troubling examples, including inaccurate biometric tools and algorithms that deny benefits or profile individuals unfairly.

Campaigners from across Europe, including EDRi, showcased how civil society has already stopped invasive AI tools in places like the Netherlands and Serbia. They argued that ‘values’ embedded in corporate AI systems often lack accountability and harm marginalised groups instead of protecting them.

Rather than presenting AI as a distant threat or a miracle cure, the event focused on current harms and the urgent need for safeguards. Speakers called for a public register of AI use in state institutions, a ban on biometric surveillance in public, and full civil society participation in shaping AI rules.

A panel urged Croatia to go beyond the EU Act’s baseline by embracing more transparent and citizen-led approaches.

Despite having submitted recommendations, Politiscope and other civil society organisations remain excluded from the working group drafting the law. While business groups and unions often gain access through social dialogue rules, CSOs are still sidelined.

Politiscope continues to demand an open and inclusive legislative process, arguing that democratic oversight is essential for AI to serve people instead of controlling them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU says US tech firms censor more

Far more online content is removed under US tech firms’ terms and conditions than under the EU’s Digital Services Act (DSA), according to Tech Commissioner Henna Virkkunen.

Her comments respond to criticism from American tech leaders, including Elon Musk, who have labelled the DSA a threat to free speech.

In an interview with Euractiv, Virkkunen said recent data show that 99% of content removals in the EU between September 2023 and April 2024 were carried out by platforms like Meta and X based on their own rules, not due to EU regulation.

Only 1% of cases involved ‘trusted flaggers’ — vetted organisations that report illegal content to national authorities. Just 0.001% of those reports led to an actual takedown decision by authorities, she added.

The DSA’s transparency rules made those figures available. ‘Often in the US, platforms have more strict rules with content,’ Virkkunen noted.

She gave examples such as discussions about euthanasia and nude artworks, which are often removed under US platform policies but remain online under European guidelines.

Virkkunen recently met with US tech CEOs and lawmakers, including Republican Congressman Jim Jordan, a prominent critic of the DSA and the DMA.

She said the data helped clarify how EU rules actually work. ‘It is important always to underline that the DSA only applies in the European territory,’ she said.

While pushing back against American criticism, Virkkunen avoided direct attacks on individuals like Elon Musk or Mark Zuckerberg. She suggested platform resistance reflects business models and service design choices.

Asked about delays in final decisions under the DSA — including open cases against Meta and X — Virkkunen stressed the need for a strong legal basis before enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AT&T hit by alleged 31 million record breach

A hacker has allegedly leaked data from 31 million AT&T customers, raising fresh concerns over the security of one of America’s largest telecom providers. The data, posted on a major dark web forum in late May 2025, is said to contain 3.1GB of customer information in both JSON and CSV formats.

Instead of isolated details, the breach reportedly includes highly sensitive data: full names, dates of birth, tax IDs, physical and email addresses, device and cookie identifiers, phone numbers, and IP addresses.

Cybersecurity firm DarkEye flagged the leak, warning that the structured formats make the data easy for criminals to exploit.

If verified, the breach would mark yet another major incident for AT&T. In March 2024, the company confirmed that personal information from 73 million users had been leaked.

Just months later, a July breach exposed call records and location metadata for nearly 110 million customers, with blame directed at compromised Snowflake cloud accounts.

AT&T has yet to comment on the latest claims. Experts warn that the combination of tax numbers and device data could enable identity theft, financial scams, and advanced phishing attacks.

For a company already under scrutiny for past security lapses, the latest breach could further damage public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Libra meme coin wallets frozen on Solana

Two wallets tied to the controversial Libra meme coin team have been frozen. Nearly $58 million in USDC on the Solana blockchain is now locked.

The freeze, visible on the Solscan block explorer, affects accounts holding $44.59 million and $13.06 million in USDC, a stablecoin issued by Circle. Major stablecoin issuers like Circle have the authority to blacklist addresses in cases of fraud or legal disputes.
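The blacklisting power mentioned above comes from the token contract itself: centralised stablecoins build an issuer-controlled freeze list into their transfer logic. The toy Python model below illustrates the mechanism only (real USDC is implemented as a smart contract on-chain, not in Python, and all names here are invented for the sketch).

```python
class CentralisedStablecoin:
    """Toy model of an issuer-controlled token ledger.

    Illustrative only: real stablecoin contracts implement this
    logic on-chain, but the control structure is the same.
    """

    def __init__(self, issuer: str):
        self.issuer = issuer
        self.balances: dict[str, int] = {}
        self.blacklist: set[str] = set()

    def set_blacklisted(self, caller: str, address: str, frozen: bool) -> None:
        # Only the issuer may freeze or unfreeze an address.
        if caller != self.issuer:
            raise PermissionError("only the issuer can manage the blacklist")
        (self.blacklist.add if frozen else self.blacklist.discard)(address)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # Frozen addresses can neither send nor receive funds.
        if sender in self.blacklist or recipient in self.blacklist:
            raise PermissionError("address is frozen")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

Because the freeze check sits inside every transfer, a court order to the issuer is enough to immobilise funds without the wallet holder's cooperation, which is exactly what happened to the two Libra-linked wallets.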

The freeze follows a temporary restraining order from a US federal court, requested by Burwick Law amid ongoing litigation. Argentina’s justice department has also been linked to the legal action, connected to the Libra token promoted by Argentine President Javier Milei.

The token’s rapid rise and fall earlier this year sparked accusations of a pump-and-dump scheme.

Despite the legal troubles, Circle has recently filed for an initial public offering on the New York Stock Exchange, aiming for a $6.7 billion valuation. Meanwhile, Argentina’s task force investigating the scandal was disbanded last week.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Victoria’s Secret website hit by cyber attack

Victoria’s Secret’s website has remained offline for three days due to a security incident the company has yet to fully explain. A spokesperson confirmed steps are being taken to address the issue, saying external experts have been called in and some in-store systems were also taken down as a precaution.

Instead of revealing specific details, the retailer has left users with only a holding message on a pink background. It has declined to comment on whether ransomware is involved, when the disruption began, or if law enforcement has been contacted.

The firm’s physical stores continue operating as normal, and payment systems are unaffected, suggesting the breach has hit other digital infrastructure. Still, the shutdown has rattled investors—shares fell nearly seven percent on Wednesday.

With online sales accounting for a third of Victoria’s Secret’s $6 billion annual revenue, the pressure to resolve the situation is high.

The timing has raised eyebrows, as cybercriminals often strike during public holidays like Memorial Day, when IT teams are short-staffed. The attack follows a worrying trend among retailers.

UK giants such as Harrods, Marks & Spencer, and the Co-op have all suffered recent breaches. Experts warn that US chains are becoming the next major targets, with threat groups like Scattered Spider shifting their focus across the Atlantic.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot for Gaming now in testing on Xbox app for iOS and Android

Microsoft has begun rolling out the beta version of its new AI-powered tool, Copilot for Gaming, designed to enhance the gaming experience through personalised assistance. Available now in the Xbox app for iOS and Android, the feature lets users ask game-related questions and receive tailored responses based on their gaming history, achievements, and account data.

The AI assistant can provide tips to improve a gamer’s score, suggest games based on user preferences, and answer account-specific questions like subscription details or recent in-game milestones. It operates on a second screen to avoid disrupting gameplay and uses player activity and Bing search data to craft responses.

Initially available in English for players aged 18 and older, the beta spans over 50 countries, including the US, Canada, Brazil, Serbia, and Japan. Microsoft says more features are on the way, including personalised coaching and deeper in-game support, with plans to expand the rollout to additional regions in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of search: Personalised AI and the privacy crossroads

The rise of personalised AI is poised to radically reshape how we interact with technology, with search engines evolving into intelligent agents that not only retrieve information but also understand and act on our behalf. No longer just a list of links, search is merging into chatbots and AI agents that synthesise information from across the web to deliver tailored answers.

Google and OpenAI have already begun this shift, with services like AI Overview and ChatGPT Search leading a trend that analysts say could cut traditional search volume by 25% by 2026. That transformation is driven by the AI industry’s hunger for personal data.

To offer highly customised responses and assistance, AI systems require in-depth profiles of their users, encompassing everything from dietary preferences to political beliefs. The deeper the personalisation, the greater the privacy risks.

OpenAI, for example, envisions a ‘super assistant’ capable of managing nearly every aspect of your digital life, fed by detailed knowledge of your past interactions, habits, and preferences. Google and Meta are pursuing similar paths, with Mark Zuckerberg even imagining AI therapists and friends that recall your social context better than you do.

As these tools become more capable, they also grow more invasive. Wearable, always-on AI devices equipped with microphones and cameras are on the horizon, signalling an era of ambient data collection.

AI assistants won’t just help answer questions—they’ll book vacations, buy gifts, and even manage your calendar. But with these conveniences comes unprecedented access to our most intimate data, raising serious concerns over surveillance and manipulation.

Policymakers are struggling to keep up. Without a comprehensive federal privacy law, the US relies on a patchwork of state laws and limited federal oversight. Proposals to regulate data sharing, such as forcing Google to hand over user search histories to competitors like OpenAI and Meta, risk compounding the problem unless strict safeguards are enacted.

As AI becomes the new gatekeeper to the internet, regulators face a daunting task: enabling innovation while ensuring that the AI-powered future doesn’t come at the expense of our privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK leads crypto adoption growth in 2025

The United Kingdom has recorded the fastest growth in cryptocurrency adoption globally in 2025. The finding comes from a new report by Gemini, the crypto exchange based in the United States.

The proportion of UK adults holding cryptocurrencies rose to 24% in April, up from 18% a year earlier, marking the sharpest year-on-year increase among the countries surveyed.

The report, based on a survey of more than 7,000 people across Europe, the United States, Singapore, and Australia, shows that Europe is leading the rise in cryptocurrency ownership.

Singapore continues to hold the highest individual rate, with 28% of respondents reporting ownership of cryptocurrencies.

Despite not yet having a national regulatory framework in place, the UK remains attractive to investors. In April, the government published a draft statutory instrument aimed at regulating crypto exchanges and related services.

The Treasury is expected to publish a near-final version of the rules later in 2025 following public consultation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

App Store revenue climbs amid regulatory pressure

Apple’s App Store in the United States generated more than US$10 billion in revenue in 2024, according to estimates from app intelligence firm Appfigures.

This marks a sharp increase from the US$4.76 billion earned in 2020 and reflects the growing importance of Apple’s services business. Developers on the US App Store earned US$33.68 billion in gross revenue last year, receiving US$23.57 billion after Apple’s standard commission.
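The gross and net figures above are internally consistent with Apple's headline 30% commission and the roughly US$10 billion in US App Store revenue cited earlier. A quick back-of-envelope check:

```python
gross = 33.68e9   # developers' gross US App Store revenue, 2024 (Appfigures)
net = 23.57e9     # paid out to developers after Apple's commission

apple_cut = gross - net
effective_rate = apple_cut / gross

# prints: Apple's cut: $10.11B (30.0%)
print(f"Apple's cut: ${apple_cut / 1e9:.2f}B ({effective_rate:.1%})")
```

The implied US$10.11 billion cut at an effective rate of 30.0% lines up with both the commission model under legal scrutiny and the reported revenue figure.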

Globally, the App Store brought in an estimated US$91.3 billion in revenue in 2024. Apple’s dominance in app monetisation continues, with App Store publishers earning an average of 64% more per quarter than their counterparts on Google Play.

In subscription-based categories, the difference is even more pronounced, with iOS developers earning more than three times as much revenue per quarter as those on Android.

Legal scrutiny of Apple’s longstanding 30% commission model has intensified. A US federal judge recently ruled that Apple violated court orders by failing to reform its App Store policies.

While the company maintains that the commission supports its secure platform and vast user base, developers are increasingly pushing back, arguing that the fees are disproportionate to the services provided.

The outcome of these legal and regulatory pressures could reshape how app marketplaces operate, particularly in fast-growing regions like Latin America and Africa, where app revenue is expected to surge in the coming years.

As global app spending climbs toward US$156 billion annually, decisions around payment processing and platform control will have significant financial implications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!