Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly via invite during the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google’s and Microsoft’s browsers but also compete with OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership program to address concerns and build collaborative relationships with news organisations like Forbes and Dow Jones.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Elon Musk’s controversial leadership and the social media company’s ongoing transformation.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest that growing tensions over Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO panel calls for ethics to be core of emerging tech, not an afterthought

At the WSIS+20 High-Level Event in Geneva, UNESCO hosted a session titled ‘Ethics in AI: Shaping a Human-Centred Future in the Digital Age,’ where global experts warned that ethics must be built into the foundation of emerging technologies such as AI, neurotechnology, and quantum computing—not added later as damage control.

UNESCO’s Chief of Bioethics and Ethics of Science and Technology, Dafna Feinholz, stressed that ethical considerations should shape technology development from the start, echoing the organisation’s mission to safeguard human rights and freedoms alongside scientific innovation.

Panellists underscored the tension between individual intentions and institutional realities. Philosopher Mira Wolf-Bauwens argued that while developers often begin with a sense of moral responsibility, corporate pressures quickly override these principles.

Drawing from her work in the quantum sector, she described how companies dilute ethical concerns into mere legal compliance, eroding their original purpose. Neuroscientist and entrepreneur Ryota Kanai echoed this concern, sharing how the rush to commercialise neurotechnology has led to premature products that risk undermining public trust, especially when privacy risks remain poorly understood.

The session also highlighted success stories in ethical governance, such as Thailand’s efforts to implement UNESCO’s AI ethics framework. Chaichana Mitrpant, leading the country’s digital policy agency, described a localised yet uncompromised approach that engaged multiple stakeholders—from regulators to small businesses. The collaborative model helped tailor global ethical guidelines to national realities while maintaining core human values.

Panellists agreed that while regulation plays a role, ethics must remain broader, more agile, and focused on motivation rather than just rule enforcement. With technologies evolving faster than laws can adapt, anticipatory governance, cross-sector collaboration, and inclusive debate were hailed as essential. The session closed with a shared call to action: embedding ethics in every stage of technology development is not just ideal—it’s urgently necessary to build a trustworthy digital future.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

xAI unveils Grok 4 with top benchmark scores

Elon Musk’s AI company, xAI, has launched its latest flagship model, Grok 4, alongside an ultra-premium $300 monthly plan named SuperGrok Heavy.

Grok 4, which competes with OpenAI’s ChatGPT and Google’s Gemini, can handle complex queries and interpret images. It is now integrated more deeply into the social media platform X, which Musk also owns.

Despite recent controversy, including antisemitic responses generated by Grok’s official X account, xAI focused on showcasing the model’s performance.

Musk claimed Grok 4 is ‘better than PhD level’ in all academic subjects and revealed a high-performing version called Grok 4 Heavy, which uses multiple AI agents to solve problems collaboratively.

The models scored strongly on benchmark exams, including a 25.4% score for Grok 4 on Humanity’s Last Exam, outperforming major rivals. With tools enabled, Grok 4 Heavy reached 44.4%, nearly doubling OpenAI’s and Google’s results.

It also achieved a leading score of 16.2% on the ARC-AGI-2 pattern recognition test, nearly double that of Claude Opus 4.

xAI is targeting developers through its API and enterprise partnerships while teasing upcoming tools: an AI coding model in August, a multi-modal agent in September, and video generation in October.

Yet the road ahead may be rocky, as the company works to overcome trust issues and position Grok as a serious rival in the AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google partners with UK government on AI training

The UK government has struck a major partnership with Google Cloud aimed at modernising public services by eliminating ageing IT systems and equipping 100,000 civil servants with digital and AI skills by 2030.

Backed by DSIT, the initiative targets sectors like the NHS and local councils, seeking both operational efficiency and workforce transformation.

Replacing legacy contracts, some of which date back decades, could unlock as much as £45 billion in efficiency savings, say ministers. Google DeepMind will provide technical expertise to help departments adopt emerging AI solutions and accelerate public sector innovation.

Despite these promising aims, privacy campaigners warn that reliance on a US-based tech giant threatens national data sovereignty and may lead to long-term lock-in.

Foxglove’s Martha Dark described the deal as ‘dangerously naive’, with concerns around data access, accountability, public procurement processes and geopolitical risk.

As ministers pursue broader technological transformation, similar partnerships with Microsoft, OpenAI and Meta are underway, reflecting an industry-wide effort to bridge digital skills gaps and bring agile solutions into Whitehall.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy concerns rise over Gemini’s on‑device data access

From 7 July 2025, Google’s Gemini AI will default to accessing your WhatsApp, SMS and call apps, even without Gemini Apps Activity enabled, through an Android OS ‘System Intelligence’ integration.

Google insists the assistant cannot read or summarise your WhatsApp messages; it only performs actions like sending replies and accessing notifications.

Integration occurs at the operating‑system level, granting Gemini enhanced control over third‑party apps, including reading and responding to notifications or handling media.

However, this has prompted criticism from privacy‑minded users, who view it as intrusive data access, even though Google maintains no off‑device content sharing.

Alarmed users quickly disabled the feature via Gemini’s in‑app settings or resorted to more advanced measures, such as removing Gemini with ADB or turning off the Google app entirely.

The controversy highlights growing concerns over how deeply OS‑level AI tools can access personal data, blurring the lines between convenience and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kurbalija: Digital tools are reshaping diplomacy

Once the global stage for peace negotiations and humanitarian accords, Geneva finds itself at the heart of a new kind of diplomacy shaped by algorithms, data flows, and AI. Jovan Kurbalija, Executive Director of Diplo and Head of the Geneva Internet Platform, believes this transformation reflects Geneva’s long tradition of engaging with science, technology, and global governance. He explained this in an interview with Léman Bleu.

Diplo, a Swiss-Maltese foundation, supports diplomats and international professionals as they navigate the increasingly complex landscape of digital governance.

‘Where we once trained them to understand the internet,’ Kurbalija explains, ‘we now help them grasp and negotiate issues around AI and digital tools.’

The foundation not only aids diplomats in addressing cyber threats and data privacy but also equips them with AI-enhanced tools for negotiation, public communication, and consular protection.

According to Kurbalija, digital governance touches everyone. From how our phones are built to how data moves across borders, nearly 50 distinct issues—from cybersecurity and e-commerce to data protection and digital standards—are debated in the corridors of International Geneva. These debates are no longer reserved for specialists because they affect the everyday lives of billions.

Kurbalija draws a fascinating connection between Geneva’s philosophical heritage and today’s technological dilemmas. Writers like Mary Shelley, Voltaire, and Borges, each with ties to Geneva, grappled with themes eerily relevant today: unchecked scientific ambition, the tension between freedom and control, and the challenge of processing vast amounts of knowledge. He dubs this tradition ‘EspriTech de Genève,’ a spirit of intellectual inquiry that still echoes in debates over AI and its impact on society.

AI, Kurbalija warns, is both a marvel and a potential menace.

‘It’s not exactly Frankenstein,’ he says, ‘but without proper governance, it could become one.’

As technology evolves, so must the international mechanisms that ensure it serves humanity rather than endangering it.

Diplomacy, meanwhile, is being reshaped not just in terms of content but in method. Digital tools allow diplomats to engage more directly with the public and make negotiations more transparent. Yet, the rise of social media has its downsides. Public broadcasting of diplomatic proceedings risks undermining the very privacy and trust needed to reach a compromise.

‘Diplomacy,’ Kurbalija notes, ‘needs space to breathe—to think, negotiate, resolve.’

He also cautions against the growing concentration of AI and data power in the hands of a few corporations.

‘We risk having our collective knowledge privatised, commodified, and sold back to us,’ he says.

The antidote? A push for more inclusive, bottom-up AI development that empowers individuals, communities, and nations.

As Geneva continues its historic role in shaping the future, Kurbalija’s message is clear: managing technology wisely is not just a diplomatic challenge—it’s a global necessity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Over 2.3 million users hit by Chrome and Edge extension malware

A stealthy browser hijacking campaign has infected over 2.3 million users through Chrome and Edge extensions that appeared safe and even displayed Google’s verified badge.

According to cybersecurity researchers at Koi Security, the campaign, dubbed RedDirection, involves 18 malicious extensions offering legitimate features like emoji keyboards and VPN tools, while secretly tracking users and backdooring their browsers.

One of the most popular extensions — a colour picker developed by ‘Geco’ — continues to be available on the Chrome and Edge stores with thousands of positive reviews.

While it works as intended, the extension also hijacks sessions, records browsing activity, and sends data to a remote server controlled by attackers.

What makes the campaign more insidious is how the malware was delivered. The extensions began as clean, useful tools, but malicious code was quietly added during later updates.

Due to how Google and Microsoft handle automatic extension updates, most users received the spyware without taking any action or clicking anything.

Koi Security’s Idan Dardikman describes the campaign as one of the largest browser-hijacking operations documented to date. Users are advised to uninstall any affected extensions, clear browser data, and monitor accounts for unusual activity.

Despite the serious breach, Google and Microsoft have not responded publicly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered imposter poses as US Secretary of State Rubio

An imposter posing as US Secretary of State Marco Rubio used an AI-generated voice and text messages to contact high-ranking officials, including foreign ministers, a senator, and a state governor.

The messages, sent through SMS and the encrypted app Signal, triggered an internal warning across the US State Department, according to a classified cable dated 3 July.

The individual created a fake Signal account using the name ‘Marco.Rubio@state.gov’ and began contacting targets in mid-June.

At least two received AI-generated voicemails, while others were encouraged to continue the chat via Signal. US officials said the aim was likely to gain access to sensitive information or compromise official accounts.

The State Department confirmed it is investigating the breach and has urged all embassies and consulates to remain alert. While no direct cyber threat was found, the department warned that shared information could still be exposed if targets were deceived.

A spokesperson declined to provide further details for security reasons.

The incident appears linked to a broader wave of AI-driven disinformation. A second operation, possibly tied to Russian actors, reportedly targeted Gmail accounts of journalists and former officials.

The FBI has warned of rising cases of ‘smishing’ and ‘vishing’ involving AI-generated content.

Experts now warn that deepfakes are becoming harder to detect, as the technology advances faster than defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fraudsters exploit dormant Bitcoin addresses to steal data

Analysts at BitMEX Research have revealed a new scam aimed at early Bitcoin holders, particularly those with dormant wallets dating back to 2011. Attackers use Bitcoin’s OP_RETURN field to send false transactions and messages to deceive owners into sharing sensitive data.

One high-profile victim is the ‘1Feex’ wallet, known for holding around 80,000 BTC stolen from the Mt. Gox hack.

Scammers created a fake Salomon Brothers site claiming that wallets are abandoned unless owners prove ownership with signed messages or personal documents. The site bears no genuine link to the original financial firm or its former executives.

Crypto community members recommend a safer approach: moving a small amount of Bitcoin to demonstrate wallet activity instead of risking the full balance. BitMEX urges users to avoid interacting with fake sites or sharing personal data.

The scam exemplifies growing sophistication in crypto fraud, with losses exceeding $2.1 billion in just the first half of 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!