Google aims for profit with new AI Search

At its annual developer event, Google I/O, Google unveiled a new feature called AI Mode, built directly into its core product, Google Search.

Rather than being a separate app, AI Mode integrates a chatbot into the search engine, allowing users to ask complex, detailed queries and receive direct answers along with curated web links. Google hopes the move will stop users from drifting away to rival AI tools.

The launch follows concerns that Google Search was starting to lose ground. Investors took notice when Apple’s Eddy Cue revealed that Safari searches had dropped for the first time in April, as users began to favour AI-powered alternatives.

The decline triggered a 7% drop in Alphabet’s stock, highlighting just how critical search remains to Google’s dominance. By embedding AI into Search, Google aims to maintain its leadership rather than risk a steady erosion of its user base.

Unlike most AI platforms still searching for profitability, Google’s AI Mode is already positioned to make money. Advertising—long the engine of Google’s revenue—will be introduced into AI Mode, ensuring it generates income just as traditional search does.

While rivals burn through billions running large language models, Google is simply monetising the same way it always has.

AI Mode also helps defend Google’s biggest asset. Rather than seeing AI as a threat, Google embraced it to reinforce Search and protect the advertising revenue it depends on.

Most AI competitors still rely on expensive, unsustainable models, whereas Google is leveraging its existing ecosystem instead of building from scratch. That gives it a major edge in the race for AI dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China blames Taiwan for tech company cyberattack

Chinese authorities have accused Taiwan’s ruling Democratic Progressive Party of backing a cyberattack on a tech company based in Guangzhou.

According to public security officials in the city, an initial police investigation linked the attack to a foreign hacker group allegedly supported by the Taiwanese government.

The unnamed technology firm was reportedly targeted in the incident, with local officials suggesting political motives behind the cyber activity. They claimed the group had not acted independently but with the backing of Taiwan’s Democratic Progressive Party.

Taiwan’s Mainland Affairs Council has not responded to the allegations. The ruling DPP has faced similar accusations before, which it has consistently rejected, often describing such claims as attempts to stoke tension rather than reflect reality.

The development adds to already fragile cross-strait relations, where cyber and political conflicts continue to intensify rather than ease, as both sides trade accusations in an increasingly digital battleground.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Melania Trump’s AI audiobook signals a new era in media creation

Melania Trump has released an audiobook version of her memoir, but the voice readers hear isn’t hers in the traditional sense. Instead, it’s an AI-generated replica, created under her guidance and produced using technology from ElevenLabs.

Announcing the release as ‘The AI Audiobook,’ Trump described the innovation as a step into the future of publishing, highlighting how AI is now entering mainstream media production. The move places AI-generated content in the public spotlight, especially as tech companies like Google and OpenAI roll out advanced tools to create audio, video, and even entire scenes with minimal human input.

While experts note that a complete replacement of voice actors and media professionals is unlikely in the immediate future, Trump’s audiobook represents a notable shift. It aligns with rising interest from television and media companies exploring AI integration to compete with social media creators.

Industry observers suggest this trend could lead to a more interactive form of media. Imagine, for instance, engaging in a two-way conversation with a virtual Melania Trump about her book.

Though this level of interactivity isn’t here yet, it’s on the horizon as companies experiment with AI-generated personalities and digital avatars to enhance viewer engagement and create dynamic experiences. Still, the growth of generative AI sparks concern about job security in creative fields.

While some roles, like voiceover work, are vulnerable to automation, others—especially those requiring human insight and emotional intelligence, like investigative journalism—remain more resistant. Rather than eliminating jobs outright, AI may reshape media employment, demanding hybrid skills that combine traditional storytelling with technological proficiency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers are selling 94 billion stolen cookies on Telegram

Cybercriminals are trading nearly 94 billion stolen browser cookies on Telegram, with over 20% still active and capable of granting direct access to user accounts.

These cookies, essential for keeping users logged in and websites functioning smoothly, are being repurposed as tools for account hijacking, bypassing login credentials and putting personal data at risk. Security experts warn that hundreds of millions of users globally could be exposed.

The data, revealed by cybersecurity firm NordVPN, shows that the theft spans 253 countries, with Brazil, India, Indonesia, Vietnam, and the US among the most affected.

Google services were the prime target, with over 4.5 billion stolen cookies linked to Google accounts, followed by YouTube, Microsoft, and Bing. Many of these cookies contain session IDs and user identifiers, which allow hackers to impersonate users and access their online accounts without detection.
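The mechanics above can be illustrated with a short sketch (hypothetical cookie values, Python standard library only): a session cookie that lacks the `Secure` and `HttpOnly` flags, or that persists on disk, is precisely the kind of artefact infostealers harvest and resell.

```python
from http.cookies import SimpleCookie

def audit_cookie(set_cookie_header: str) -> list[str]:
    """Flag attributes that make a session cookie easier to steal or replay.

    The server identifies a logged-in user by the session ID alone, so
    whoever presents the cookie is treated as that user.
    """
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    warnings = []
    for name, morsel in jar.items():
        if not morsel["secure"]:
            warnings.append(f"{name}: missing Secure (can be sent over plain HTTP)")
        if not morsel["httponly"]:
            warnings.append(f"{name}: missing HttpOnly (readable by scripts and infostealers)")
        if morsel["max-age"] or morsel["expires"]:
            warnings.append(f"{name}: persistent (survives a browser restart on disk)")
    return warnings

# A hypothetical year-long session cookie with no protective flags:
print(audit_cookie("sessionid=abc123; Path=/; Max-Age=31536000"))
```

A well-configured session cookie (`Secure; HttpOnly`, no long expiry) produces no warnings under this check; the long-lived, unflagged example above trips all three.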

The surge in cookie theft marks a 74% increase over the previous year, driven largely by the spread of malware. Redline, Vidar, and LummaC2 are among the most prolific infostealers, collectively responsible for over 60 billion stolen cookies.

These malware strains extract saved data from browsers and often act as gateways for more advanced cyberattacks.

New strains like RisePro, Stealc, Nexus, and Rhadamanthys are also emerging, designed to steal browser credentials and banking data more efficiently.

Many of these stolen cookies are being exchanged on Telegram channels, raising alarm about the app’s misuse. In response, Telegram stated:

‘The sale of private data is expressly forbidden by Telegram’s terms of service and is removed whenever discovered. Moderators empowered with custom AI and machine learning tools proactively monitor public parts of the platform and accept reports to remove millions of pieces of harmful content each year.’

With cookie theft becoming an increasingly common tactic, experts urge users to regularly clear cookies, use secure browsers, and consider additional protective measures to guard their digital identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Florida woman scammed by fake Keanu Reeves in AI-powered romance fraud

A Florida woman, Dianne Ringstaff, shared her painful story after falling victim to an elaborate online scam involving someone impersonating actor Keanu Reeves. The fraud began innocently when she received a message while playing a mobile game, followed by a video call confirming she was speaking with the Hollywood star.

The impostor cultivated a friendship through calls and messages for two and a half years, eventually gaining her trust. Things took a turn when the scammer began pleading for money, claiming Reeves was being sued and targeted by the FBI, which had supposedly frozen his assets.

Vulnerable after personal losses, Ringstaff was persuaded to help, ultimately taking out a home equity loan and selling her car. She sent around $160,000 in total, convinced she was aiding the beloved actor.

Authorities later informed her that not only had she been scammed, but her bank account had been used to funnel money from other victims as well. Devastated, Ringstaff broke down—but is now determined to reclaim her life and raise awareness.

She is speaking out to warn others about the growing threat of AI-powered ‘romance’ scams, where fraudsters use deepfake videos and cloned voices to impersonate celebrities and gain victims’ trust.

‘Don’t be naive,’ she cautions. ‘Do your research and don’t give out personal information unless you truly know who you’re dealing with.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

German court allows Meta to use Facebook and Instagram data

A German court has ruled in favour of Meta, allowing the tech company to use data from Facebook and Instagram to train AI systems. The Cologne court found Meta had not breached EU law and deemed its AI development a legitimate interest.

According to the court, Meta is permitted to process public user data without explicit consent. Judges argued that the aim of training AI systems could not be achieved by other equally effective, less intrusive means.

They noted that Meta plans to use only publicly accessible data and had taken adequate steps to inform users via its mobile apps.

Despite the ruling, the North Rhine-Westphalia Consumer Advice Centre remains critical, raising concerns about legality and user privacy. Privacy group Noyb also challenged the decision, warning it could take further legal action, including a potential class-action lawsuit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI could accelerate and automate future cyberattacks, Malwarebytes warns

A new report by Malwarebytes warns that the rise of agentic AI will significantly increase the frequency, sophistication, and scale of cyberattacks.

Since the launch of ChatGPT in late 2022, threat actors have used generative AI to write malware, craft phishing emails, and execute realistic social engineering schemes.

One notable case from January 2024 involved a finance employee who was deceived into transferring $25 million during a video call with AI-generated deepfakes of company executives.

Criminals have also found ways to bypass safety features in AI models using techniques such as prompt chaining, injection, and jailbreaking to generate malicious outputs.

While generative AI has already lowered the barrier to entry for cybercrime, the report highlights that agentic AI—capable of autonomously executing complex tasks—poses a far greater risk by automating time-consuming attacks like ransomware at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber scams use a three-letter trap

Staying safe from cybercriminals can be surprisingly simple. While AI-powered scams grow more realistic, some signs are still painfully obvious.

If you spot the letters ‘.TOP’ in any message link, it’s best to stop reading and hit delete. That single clue is often enough to expose a scam in progress.

Most malicious texts pose as alerts about road tolls, deliveries or account issues, using trusted brand names to lure victims into clicking fake links.

The worst offender is the ‘.TOP’ top-level domain (TLD), which has become infamous for its role in phishing and scam operations. Although launched in 2014 for premium business use, its low cost and lax oversight quickly made it a favourite among cyber gangs, especially those based in China.

Today, nearly one-third of all .TOP domains are linked to cybercrime — far surpassing the criminal activity seen on mainstream domains like ‘.com’.

Despite repeated warnings and an unresolved compliance notice from internet regulator ICANN, abuse linked to .TOP has only worsened.

Experts warn that it is highly unlikely any legitimate Western organisation would ever use a .TOP domain. If one appears in your messages, the safest option is to delete it without clicking.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Secret passwords could fight deepfake scams

As AI-generated images grow increasingly lifelike, a cyber security expert has warned that families should create secret passwords to guard against deepfake scams.

Cody Barrow, chief executive of EclecticIQ and a former US government adviser, says AI is making it far easier for criminals to impersonate others using fabricated videos or images.

Mr Barrow and his wife now use a private code to confirm each other’s identity if either receives a suspicious message or video.

He believes this precaution, simple enough for anyone regardless of age or digital skills, could soon become essential. ‘It may sound dramatic here in May 2025,’ he said, ‘but I’m quite confident that in a few years, if not months, people will say: I should have done that.’

The warning comes the same week Google launched Veo 3, its AI video generator capable of producing hyper-realistic footage and lifelike dialogue. Its public release has raised concerns about how easily deepfakes could be misused for scams or manipulation.

Meanwhile, President Trump signed the ‘Take It Down Act’ into law, making the creation of deepfake pornography a criminal offence. The bipartisan measure will see prison terms for anyone producing or uploading such content, with First Lady Melania Trump stating it will ‘prioritise people over politics’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram founder Durov to address Oslo Freedom Forum remotely amid legal dispute

Telegram founder Pavel Durov will deliver a livestreamed keynote at the Oslo Freedom Forum, following a French court decision barring him from international travel. The Human Rights Foundation (HRF), which organises the annual event, expressed disappointment at the court’s ruling.

Durov, currently under investigation in France, was arrested in August 2024 on charges related to child sexual abuse material (CSAM) distribution and failure to assist law enforcement.

He was released on €5 million bail but ordered to remain in the country and report to police twice a week. Durov maintains the charges are unfounded and says Telegram complies with law enforcement when possible.

Recently, Durov accused French intelligence chief Nicolas Lerner of pressuring him to censor political voices ahead of elections in Romania. France’s DGSE denies the allegation, saying meetings with Durov focused solely on national security threats.

The claim has sparked international debate, with figures like Elon Musk and Edward Snowden defending Durov’s stance on free speech.

Supporters say the legal action against Durov may be politically motivated and warn it could set a dangerous precedent for holding tech executives accountable for user content. Critics argue Telegram must do more to moderate harmful material.

Despite legal restrictions, HRF says Durov’s remote participation is vital for ongoing discussions around internet freedom and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!