FBI warns public to avoid scanning QR codes on unsolicited packages

The FBI has issued a public warning about a rising scam involving QR codes placed on packages delivered to people who never ordered them.

According to the agency, these codes can lead recipients to malicious websites or prompt them to install harmful software, potentially exposing sensitive personal and financial data.

The scheme is a variation of the so-called brushing scam, in which online sellers send unordered items and use recipients’ names to post fake product reviews. In the new version, QR codes added to the packaging lure recipients to deceptive websites, escalating the scheme from fake reviews to outright fraud.

While the scheme is not as widespread as other fraud attempts, the FBI urges caution. The agency recommends avoiding QR codes from unknown sources, especially those attached to unrequested deliveries.

It also advises consumers to check the web address that appears on screen before tapping any QR code link.
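
For readers who want to inspect a code without tapping it, the payload can be decoded offline first. The sketch below is a minimal illustration in Python using OpenCV’s QR detector; the image path and the allowlist of trusted hosts are hypothetical examples, not FBI guidance.

```python
# Illustrative sketch: decode a QR code offline and review its URL
# before visiting. Requires opencv-python (pip install opencv-python).
from urllib.parse import urlparse

import cv2

# Hypothetical allowlist, for illustration only.
TRUSTED_HOSTS = {"usps.com", "ups.com", "fedex.com"}

def inspect_qr(image_path: str) -> None:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(f"Could not read image: {image_path}")

    # detectAndDecode returns the payload string (empty if no QR is found).
    payload, _, _ = cv2.QRCodeDetector().detectAndDecode(img)
    if not payload:
        print("No QR code detected.")
        return

    print(f"QR payload: {payload}")
    host = urlparse(payload).hostname or ""
    # Naive exact-match check, for illustration; a real checker would
    # also handle subdomains such as 'www.usps.com'.
    if host not in TRUSTED_HOSTS:
        print(f"Caution: '{host}' is not a host you already trust.")

inspect_qr("package_label.jpg")  # hypothetical photo of the label
```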

Authorities have noted broader misuse of QR codes, including cases where criminals place fake codes over legitimate ones in public spaces.

In one recent incident, scammers used QR stickers on parking meters in New York to redirect people to third-party payment pages requesting card details.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Prisons trial AI to forecast conflict and self‑harm risk

UK Justice Secretary Shabana Mahmood has rolled out an AI-driven violence prediction tool across prisons and probation services. One system evaluates inmates’ profiles, factoring in age, past behaviour, and gang ties, to flag those likely to become violent. Flagged prisoners can then be placed under tighter supervision or relocated, with the aim of reducing attacks on staff and fellow inmates.

Another feature actively scans content from seized mobile phones. AI algorithms sift through over 33,000 devices and 8.6 million messages, detecting coded language tied to contraband, violence, or escape plans. When suspicious content is flagged, staff receive alerts for preventive action.
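
The Ministry of Justice has not published how this scanning works internally. As a rough, purely illustrative sketch of the general technique, the Python snippet below flags messages matching watched patterns and raises alerts; the patterns, categories, and device identifiers are invented for the example.

```python
# Toy illustration of pattern-based message triage; not the actual
# Ministry of Justice system, whose methods are not public.
import re
from dataclasses import dataclass

# Invented example patterns; a real deployment would rely on learned
# models and continually updated slang lexicons, not a fixed list.
SUSPICIOUS_PATTERNS = {
    "contraband": re.compile(r"\b(drop|package|sim)\b", re.IGNORECASE),
    "violence": re.compile(r"\b(hit|sort him out)\b", re.IGNORECASE),
}

@dataclass
class Alert:
    device_id: str
    message: str
    categories: list[str]

def triage(device_id: str, messages: list[str]) -> list[Alert]:
    """Return an alert for each message matching any watched category."""
    alerts = []
    for msg in messages:
        cats = [name for name, pat in SUSPICIOUS_PATTERNS.items()
                if pat.search(msg)]
        if cats:
            alerts.append(Alert(device_id, msg, cats))
    return alerts

for alert in triage("device-0001", ["the package lands tuesday"]):
    print(f"[{alert.device_id}] {alert.categories}: {alert.message}")
```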

Rising prison violence and self-harm underscore the urgency of such interventions. Assaults on staff recently topped 10,500 a year, the highest on record, while self-harm incidents approached 78,000. Overcrowding and drug infiltration have intensified operational challenges.

Analysts compare the approach to the ‘pre-crime’ models of science fiction and raise concerns about civil liberties. Without robust governance, predictive tools may replicate biases or punish potential rather than actual behaviour. Transparency, independent audits, and appeals processes are essential to uphold inmates’ rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US court mandates Android app competition, loosens billing rules

The Ninth Circuit Court of Appeals has declared Google’s long-standing dominance over Android app distribution illegal, upholding a prior jury verdict in favour of Epic Games. Google now faces an injunction compelling it to allow rival app stores and alternative billing systems inside the Google Play ecosystem for a three-year period ending in November 2027.

A technical committee jointly selected by Epic and Google will oversee sensitive implementation tasks, including granting competitors approved access to Google’s expansive app catalogue while ensuring minimal security risk. The order also requires that developers not be tied to Google’s billing system for in-app purchases.

Market analysts warn that reduced dependency on Play Store exclusivity and the option to use alternative payment processors could cut Google’s app revenue by as much as $1 billion to $1.5 billion annually. Despite Google’s brand recognition, developers and consumers may shift toward lower-cost alternatives that compete on platform flexibility.

While the ruling aims to restore competition, Google is appealing and has requested additional delays to avoid rapid structural changes. Proponents, including Microsoft, regulators, and Epic Games, hail the decision as a landmark step toward fairer mobile market access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram-powered TON on track for mass adoption

TON, the blockchain natively embedded in Telegram’s app, is emerging as the most practical path to mainstream crypto adoption. With over 900 million users on Telegram and more than 150 million TON accounts created, the platform is delivering Web3 features through a familiar, app-like experience.

Unlike Ethereum or Solana, which require external wallets and technical knowledge, TON integrates features like tipping, staking, and gaming directly into Telegram. Mini apps like Notcoin and Catizen let users access blockchain without dealing with wallets or gas fees.

TON currently processes around 2 million transactions a day and could surpass 10 million daily users by 2027. Growing user fatigue with complex blockchain interfaces positions TON’s simple, mobile-first design to lead the next adoption wave.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity sector sees busy July for mergers

July witnessed a significant surge in cybersecurity mergers and acquisitions (M&A), spearheaded by Palo Alto Networks’ announcement of its definitive agreement to acquire identity security firm CyberArk for an estimated $25 billion.

The transaction, set to be the second-largest cybersecurity acquisition on record, signals Palo Alto’s strategic entry into identity security.

Beyond this significant deal, Palo Alto Networks also completed its purchase of AI security specialist Protect AI. The month saw widespread activity across the sector, including LevelBlue’s acquisition of Trustwave to create the industry’s largest pure-play managed security services provider.

Zurich Insurance Group, Signicat, Limerston Capital, Darktrace, Orange Cyberdefense, SecurityBridge, Commvault, and Axonius all announced or finalised strategic cybersecurity acquisitions.

The deals highlight a strong market focus on AI security, identity management, and expanding service capabilities across various regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users find helpful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box allowing their chat to be discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback would be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. Governments have issued AI action plans, the Biden administration’s among them, but experts say these lack the teeth to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Delta’s personalised flight costs under scrutiny

Delta Air Lines’ recent revelation about using AI to price some airfares is drawing significant criticism. The airline aims to increase AI-influenced pricing to 20 per cent of its domestic flights by late 2025.

While Delta’s president, Glen Hauenstein, noted positive results from their Fetcherr-supplied AI tool, industry observers and senators are voicing concerns. Critics worry that AI-driven pricing, similar to rideshare surge models, could lead to increased fares for travellers and raise serious data privacy issues.

Senators like Ruben Gallego, Mark Warner, and Richard Blumenthal highlighted fears that ‘surveillance pricing’ could utilise extensive personal data to estimate a passenger’s willingness to pay.

Although a Delta spokesperson denied that pricing is individualised based on personal information, AI experts suggest factors like device type and browsing behaviour are likely influencing prices, making them ‘deeply personalised’.
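
Neither Delta nor Fetcherr has disclosed how such signals would be weighted. As a deliberately crude, hypothetical illustration, not any airline’s actual model, the Python sketch below shows how observable booking context could nudge a quoted fare; every signal name and multiplier is invented.

```python
# Hypothetical illustration of context-sensitive fare adjustment.
# The signals, weights, and caps are invented; this does not describe
# any airline's real pricing model.
BASE_FARE = 320.00  # example base fare in USD

# Invented multipliers for observable booking context.
SIGNAL_WEIGHTS = {
    "premium_device": 1.05,   # e.g., newest-model phone
    "repeat_searches": 1.08,  # searched this route several times
    "last_minute": 1.15,      # departure within 48 hours
    "flexible_dates": 0.95,   # willing to shift travel days
}

def quote(base: float, signals: set[str]) -> float:
    price = base
    for s in signals:
        price *= SIGNAL_WEIGHTS.get(s, 1.0)
    # Cap the swing so the example stays within a plausible band.
    return round(min(max(price, base * 0.85), base * 1.30), 2)

print(quote(BASE_FARE, {"premium_device", "last_minute"}))  # higher quote
print(quote(BASE_FARE, {"flexible_dates"}))                 # lower quote
```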

Different travellers could be affected unevenly. Bargain hunters with flexible dates might benefit, but business travellers and last-minute bookers may face higher costs. Other airlines like Virgin Atlantic also use Fetcherr’s technology, indicating a wider industry trend.

Pricing experts like Philip Carls warn that passengers won’t know if they’re getting a fair deal, and proving discrimination, even if unintended by AI, could be almost impossible.

American Airlines’ CEO, Robert Isom, has publicly criticised Delta’s move, stating American won’t copy the practice, though past incidents show airlines can adjust fares based on booking data even without AI.

With dynamic pricing technology already permitted, experts anticipate lawmakers will soon scrutinise AI’s role more closely, potentially leading to new transparency mandates.

For now, travellers can try strategies like using incognito mode, clearing cookies, or employing a VPN to obscure their digital footprint and potentially avoid higher AI-driven fares.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gulf states reframe AI as the ‘new oil’ in post‑petroleum push

Gulf states are actively redefining national strategy by embracing AI as a cornerstone of post-oil modernisation. Saudi Arabia, through its AI platform Humain, a subsidiary of the Public Investment Fund, has committed state resources to build core infrastructure and develop Arabic multimodal models. Concurrently, the UAE is funding its $100 billion MGX initiative and supporting projects like G42 and the Falcon open-source model from Abu Dhabi’s Technology Innovation Institute.

Economic rationale underpins this ambition. Observers suggest that broad AI adoption across GCC sectors, including energy, healthcare, aviation, and government services, could add as much as $150 billion to regional GDP. Yet, concerns persist around workforce limitations, regulatory maturation, and geopolitical complications tied to supply chain dependencies.

AI ambitions have also taken on a geopolitical dimension. Gulf leaders have struck partnerships with US firms to secure advanced AI chips and infrastructure, as seen in high-profile agreements with Nvidia, AMD, and Amazon. Critics caution that hosting major data centres in geopolitically volatile zones introduces physical and strategic risks, especially amid rising regional tension.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global annual turnover, whichever is higher. Smaller firms may face reduced penalties, but enforcement will vary by country.
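
Because the cap is the higher of the two figures, the percentage dominates for large firms. A quick worked example in Python, with turnover figures invented for illustration:

```python
# Maximum fine under the EU AI Act's top penalty tier: the higher of a
# fixed EUR 35 million or 7% of global annual turnover.
def max_fine(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

# For a firm with EUR 200 billion turnover, 7% dominates: EUR 14 billion.
print(f"{max_fine(200e9):,.0f}")  # 14,000,000,000
# For a firm with EUR 100 million turnover, the fixed cap applies.
print(f"{max_fine(100e6):,.0f}")  # 35,000,000
```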

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!