Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology concentrates control over the UK’s information flow by filtering what users see according to algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may make news more convenient to access, it lacks accountability. Regulated journalism must operate under legal frameworks, whereas AI-generated summaries face no such scrutiny, even when errors have real consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use steganography to evade Windows defences

North Korea-linked hacking group APT37 is using malicious JPEG image files to deploy advanced malware on Windows systems, according to Genians Security Centre. The new campaign showcases a more evasive version of RoKRAT malware, which hides payloads in image files through steganography.

These attacks rely on large Windows shortcut files embedded in email attachments or cloud storage links, enticing users with decoy documents while executing hidden code. Once activated, the malware launches scripts to decrypt shellcode and inject it into trusted apps like MS Paint and Notepad.

This fileless strategy makes detection difficult, avoiding traditional antivirus tools by leaving minimal traces. The malware also exfiltrates data through legitimate cloud services, complicating efforts to trace and block the threat.
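The least-significant-bit (LSB) trick behind such payload hiding can be sketched in a few lines. The snippet below is a generic illustration of the technique under simplifying assumptions (raw carrier bytes standing in for image pixel data, a made-up payload); it is not RoKRAT’s actual encoding scheme:

```python
# Generic LSB steganography sketch: a payload is hidden in the lowest
# bit of each carrier byte (e.g. pixel data). Illustrative only.

def embed(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide each payload bit in the LSB of one carrier byte."""
    bits = [(b >> i) & 1 for b in payload for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the carrier's LSBs."""
    payload = bytearray()
    for j in range(n_bytes):
        b = 0
        for i in range(8):
            b |= (carrier[j * 8 + i] & 1) << i
        payload.append(b)
    return bytes(payload)

# Each carrier byte changes by at most 1, so the image looks untouched,
# which is why signature-based scanners struggle to spot the payload.
pixels = bytearray(range(256)) * 4          # stand-in for pixel data
stego = embed(pixels, b"shellcode stub")
print(extract(stego, len(b"shellcode stub")))  # b'shellcode stub'
```

Because the carrier file remains a valid, nearly identical image, the hidden data only becomes code once a separate loader extracts and executes it, which is what makes the fileless stage of the attack hard to detect.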

Researchers stress the urgency for organisations to adopt layered cybersecurity measures, including behavioural monitoring, robust endpoint management and ongoing user education. Defenders must prioritise proactive strategies to protect critical systems as threat actors evolve.

Moflin, Japan’s AI-powered robot pet with a personality

A fluffy, AI-powered robot pet named Moflin is capturing the imagination of consumers in Japan with its unique ability to develop distinct personalities based on how it is ‘raised.’ Developed by Casio, Moflin recognises its owner and learns their preferences through interactions such as cuddling and stroking, boasting over four million possible personality variations.

Priced at ¥59,400, Moflin has become more than just a companion at home, with some owners even taking it along on day trips. To complement the experience, Casio offers additional services, including a specialised salon to clean and maintain the robot’s fur, further enhancing its pet-like feel.

Erina Ichikawa, the lead developer, says the aim was to create a supportive sidekick capable of providing comfort during challenging moments, blending technology with emotional connection in a new way.

Similar AI-powered ‘smart pets’ are emerging in China, where devices like BooBoo are gaining popularity, especially among young people, by offering emotional support and companionship. Valued for easing anxiety and isolation, they form a market projected to reach $42.5 billion by 2033, reflecting shifting social and family dynamics.

VPN use surges in UK as age checks go live

The way UK internet users access adult content has undergone a significant change, with new age-verification rules now in force. Under Ofcom’s directive, anyone attempting to visit adult websites must now prove they are over 18, typically by providing credit card or personal ID details.

The move aims to prevent children from encountering harmful content online, but it has raised serious privacy and cybersecurity concerns.

Experts have warned that entering personal and financial information could expose users to cyber threats. Jake Moore from cybersecurity firm ESET pointed out that the lack of clear implementation standards leaves users vulnerable to data misuse and fraud.

There is growing unease that ID verification systems might inadvertently hand a goldmine of personal data to scammers.

In response, many have started using VPNs to bypass the restrictions, with providers reporting a surge in UK downloads.

VPNs mask user locations, allowing access to blocked content, but free versions often lack the security features of paid services. As demand rises, cybersecurity specialists are urging users to be cautious.

Free VPNs can compromise user data through weak encryption or by selling browsing histories to advertisers. Mozilla and EC-Council have stressed that no-cost VPNs should be avoided unless users fully understand the risks.

FBI warns public to avoid scanning QR codes on unsolicited packages

The FBI has issued a public warning about a rising scam involving QR codes placed on packages delivered to people who never ordered them.

According to the agency, these codes can lead recipients to malicious websites or prompt them to install harmful software, potentially exposing sensitive personal and financial data.

The scheme is a variation of the so-called brushing scam, in which online sellers send unordered items and use recipients’ names to post fake product reviews. In the new version, QR codes are added to the packaging, increasing the risk of fraud by directing users to deceptive websites.

While the scheme is not yet as widespread as other fraud attempts, the FBI urges caution. The agency recommends avoiding QR codes from unknown sources, especially those attached to unrequested deliveries.

It also advises consumers to check the web address that appears before tapping on any QR code link.
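That advice amounts to a simple check on the decoded URL before opening it. The sketch below illustrates the idea in Python; the allow-list and heuristics are hypothetical assumptions for illustration, not an FBI-endorsed tool:

```python
# Illustrative check of a URL decoded from a QR code before visiting it.
# The trusted-host list and rules here are made-up examples.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"amazon.com", "www.amazon.com"}  # hypothetical allow-list

def looks_suspicious(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":      # plain HTTP or unusual schemes
        return True
    host = parsed.hostname or ""
    if host not in TRUSTED_HOSTS:     # destination not recognised
        return True
    return False

print(looks_suspicious("http://amaz0n-rewards.example/claim"))  # True
print(looks_suspicious("https://www.amazon.com/orders"))        # False
```

In practice, the same inspection can be done by eye: most phone cameras display the full URL before opening it, which is the moment to look for look-alike domains and non-HTTPS links.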

Authorities have noted broader misuse of QR codes, including cases where criminals place fake codes over legitimate ones in public spaces.

In one recent incident, scammers used QR stickers on parking meters in New York to redirect people to third-party payment pages requesting card details.

Meta bets on smartglasses to lead future tech

Mark Zuckerberg is boldly pushing to replace the smartphone with smartglasses powered by superintelligent AI. The Meta CEO described a future where wearable devices replace phones, using sight and sound to assist users throughout the day.

Meta is investing heavily, offering up to $100 million to attract top AI talent. Zuckerberg’s idea of ‘personal superintelligence’ merges AI and hardware to offer personalised help and build an Apple-style ecosystem under Meta’s control.

The company’s smartglasses already feature cameras, microphones and speakers, and future models could include built-in screens and AI-generated interfaces.

Other major players are also chasing the next computing shift. Amazon is acquiring a startup that builds AI wearables, while OpenAI’s Sam Altman and former Apple designer Jony Ive are working on a new physical AI device.

These efforts all point to a changing landscape in which mobile screens might no longer dominate.

Apple CEO Tim Cook responded by defending the iPhone’s central role in modern life, though he acknowledged complementary technologies may emerge. While Apple remains dominant, Meta’s advances signal that the competition to define the next computing platform is wide open.

US court mandates Android app competition, loosens billing rules

Long-standing dominance over Android app distribution has been declared illegal by the Ninth Circuit Court of Appeals, reinforcing a prior jury verdict in favour of Epic Games. Google now faces an injunction that compels it to allow rival app stores and alternative billing systems inside the Google Play Store ecosystem for a three-year period ending November 2027.

A technical committee jointly selected by Epic and Google will oversee sensitive implementation tasks, including granting competitors approved access to Google’s expansive app catalogue while ensuring minimal security risk. The order also requires that developers not be tied to Google’s billing system for in-app purchases.

Market analysts warn that reduced dependency on Play Store exclusivity, together with the option to use alternative payment processors, could cut Google’s app revenue by as much as $1 billion to $1.5 billion annually. Despite Google’s brand recognition, developers and consumers may shift toward lower-cost alternatives that compete on platform flexibility.

While the ruling aims to restore competition, Google maintains it is appealing and has requested additional delays to avoid rapid structural changes. Proponents, including Microsoft, regulators, and Epic Games, hail the decision as a landmark step toward fairer mobile market access.

Amazon plans to bring ads to Alexa+ chats

Amazon is exploring ways to insert ads into conversations with its AI assistant Alexa+, according to CEO Andy Jassy. Speaking during the company’s latest earnings call, he described the feature as a potential tool for product discovery and future revenue.

Alexa+ is Amazon’s upgraded digital assistant designed to support more natural, multi-step conversations using generative AI. It is already available to millions of users through Prime subscriptions or as a standalone service.

Jassy said longer interactions open the door for embedded advertising, although the approach has not yet been fully developed. Industry observers see this as part of a wider trend, with companies like Google and OpenAI also weighing ad-based business models.

Alexa+ has received mixed reviews so far, with delays in feature delivery and technical problems such as hallucinations raising concerns. Privacy advocates have warned that ad targeting within personal conversations carries serious privacy risks, given the sensitive data involved.

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users find helpful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box allowing their chat to be discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback will be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments have issued AI action plans, including the Biden administration’s, experts say these lack the enforcement power to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know, and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.
