Bye Bye Google AI hides unwanted AI results in Search

Google is pushing AI deeper into its services, with AI Overviews already reaching billions of users and AI Mode now added to Search. Chrome is also being rebranded as an AI-first browser.

Not all users welcome these changes. Concerns remain about accuracy, intrusive design and Google’s growing control over how information is displayed. Unlike other features, AI elements in Search cannot be turned off directly, leaving users reliant on third-party solutions.

One such solution is the new ‘Bye Bye, Google AI’ extension, which hides AI-generated results and unwanted blocks such as sponsored links, shopping sections and discussion forums.

The extension works across Chromium-based browsers, though it relies on CSS and may break when Google updates its interface.
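The hiding mechanism itself is simple: inject a stylesheet that sets `display: none` on the unwanted result blocks. A minimal sketch of the approach, with hypothetical placeholder selectors (Google's real class names are undocumented and change frequently, which is exactly why such extensions break):

```python
# Sketch of the CSS-hiding technique a blocker like 'Bye Bye, Google AI'
# relies on: one display:none rule per unwanted block. The selectors are
# invented placeholders, not Google's actual markup.

HYPOTHETICAL_SELECTORS = [
    "#ai-overview",                 # AI Overview panel (placeholder)
    "div[data-block='sponsored']",  # sponsored links (placeholder)
    "div[data-block='shopping']",   # shopping section (placeholder)
]

def build_hide_stylesheet(selectors):
    """Emit one display:none rule per unwanted block."""
    return "\n".join(f"{sel} {{ display: none !important; }}" for sel in selectors)

print(build_hide_stylesheet(HYPOTHETICAL_SELECTORS))
```

Because the filter depends on selectors like these matching Google's current markup, any interface update that renames a container silently defeats it.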

The debate reflects wider unease about AI in Search.

While Google claims AI in Search improves the user experience, critics argue it risks spreading false information and keeping traffic within Google’s ecosystem rather than directing users to original publishers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The strategic shift toward open-source AI

The release of DeepSeek’s open-source reasoning model in January 2025, followed by the Trump administration’s July endorsement of open-source AI as a national priority, has marked a turning point in the global AI race, writes Jovan Kurbalija in his blog ‘The strategic imperative of open source AI’.

What once seemed an ideological stance is now being reframed as a matter of geostrategic necessity. Despite their historical reliance on proprietary systems, China and the United States have embraced openness as the key to competitiveness.

Kurbalija adds that history offers a clear lesson: open systems tend to prevail. Just as TCP/IP defeated OSI in the 1980s and Linux outpaced costly proprietary operating systems in the 1990s, today’s open-source AI models are challenging closed platforms. Companies like Meta and DeepSeek have positioned their tools as the new foundations of innovation, while proprietary players such as OpenAI are increasingly seen as constrained by their closed architectures.

The advantages of open-source AI are not only philosophical but practical. Open models evolve faster through global collaboration, lower costs by sharing development across vast communities, and attract younger talent motivated by purpose and impact.

They are also more adaptable, making them easier to integrate into industry, education, and governance. Importantly, breakthroughs in efficiency show that smaller, smarter models can now rival giant proprietary systems, further broadening access.

The momentum is clear. Open-source AI is emerging as the dominant paradigm. Like the internet protocols and operating systems that shaped previous digital eras, openness is proving both more ethical and more strategically effective than closed alternatives. As researchers, governments, and companies increasingly adopt this approach, open-source AI could become the backbone of the next phase of the digital world.

Persistent WordPress malware campaign hides as fake plugin to evade detection

A new malware campaign targets WordPress sites, utilising steganography and persistent backdoors to maintain unauthorised admin access. It relies on two components that work in tandem to retain control.

The attack begins with malicious files disguised as legitimate WordPress components. These files are heavily obfuscated, create administrator accounts with hardcoded credentials, and bypass traditional detection tools. This design ensures attackers retain access even after security teams respond.

Researchers say the malware exploits WordPress plugin infrastructure and user management functions to set up redundant access points. It then communicates with command-and-control servers, exfiltrating system data and administrator credentials to attacker-controlled endpoints.

This campaign can allow threat actors to inject malicious code, redirect site visitors, steal sensitive data, or deploy additional payloads. Its persistence and stealth tactics make it difficult to detect, leaving websites vulnerable for long periods.

The main component poses as a fake plugin called ‘DebugMaster Pro’ with realistic metadata. Its obfuscated code creates admin accounts, contacts external servers, and hides its activity from known admin IPs via an allow-list.
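Defenders typically triage such plugins heuristically, flagging files that combine obfuscation with admin-account creation. A minimal sketch of pattern-based scanning, with illustrative regexes (not a real detector, and not this campaign's actual indicators):

```python
import re

# Heuristic triage sketch: flag PHP plugin source that combines the traits
# reported in campaigns like this one -- obfuscation primitives, programmatic
# administrator creation, and hardcoded credentials. Patterns are illustrative.

SUSPICIOUS = {
    "obfuscation": re.compile(r"\beval\s*\(|base64_decode\s*\("),
    "admin_creation": re.compile(r"wp_insert_user|add_role\s*\(\s*['\"]administrator"),
    "hardcoded_credentials": re.compile(r"user_pass['\"]?\s*=>\s*['\"]\w+"),
}

def scan_plugin_source(source: str) -> list[str]:
    """Return the names of suspicious patterns found in a plugin file."""
    return [name for name, pat in SUSPICIOUS.items() if pat.search(source)]

sample = "eval(base64_decode($p)); wp_insert_user(['user_login'=>'dbg','user_pass'=>'x9q2']);"
print(scan_plugin_source(sample))
```

Flagged files still need manual review: legitimate plugins occasionally match individual patterns, so single hits are weak signals on their own.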

Musk escalates legal battle with new lawsuit against OpenAI

Elon Musk’s xAI has sued OpenAI, alleging a coordinated and unlawful campaign to steal its proprietary technology. According to the complaint, OpenAI targeted former xAI staff to obtain source code, training methods, and data centre strategies.

The lawsuit claims OpenAI recruiter Tifa Chen offered large packages to engineers who then allegedly uploaded xAI’s source code to personal devices. Notable incidents include Xuechen Li confessing to code theft and Jimmy Fraiture allegedly transferring confidential files via AirDrop repeatedly.

Legal experts note the case centres on employee poaching and the definition of xAI’s ‘secret sauce,’ including GPU racking, vendor contracts, and operational playbooks.

Liability may depend on whether OpenAI knowingly directed recruiters, while the company could defend itself by showing independent creation with time-stamped records.

xAI is seeking damages, restitution, and injunctions requiring OpenAI to remove its materials and destroy models built using them. The lawsuit is Musk’s latest legal action against OpenAI, following a recent antitrust case against Apple over alleged market dominance.

Google unveils new Gemini Robotics models

Google has unveiled two new robotics models, Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, designed to help robots better perceive, plan, and act in complex environments. The models aim to enable more capable robots to complete multi-step tasks efficiently and transparently.

Gemini Robotics 1.5 converts visual information and instructions into actions, letting robots think before acting and explain their reasoning. Gemini Robotics-ER 1.5 acts as a high-level planner, reasoning about the physical world and using tools like Google Search to support decisions.

Together, the models form an ‘agentic’ framework. ER 1.5 orchestrates a robot’s activities, while Robotics 1.5 carries them out, enabling the machines to tackle semantically complex tasks. The pairing strengthens generalisation across diverse environments and longer missions.
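The planner/executor split described above can be pictured as a simple loop: the high-level model decomposes a goal into steps, and the low-level model acts on each step while reporting its reasoning. All names here are illustrative stand-ins, not the Gemini Robotics API:

```python
# Toy sketch of an agentic planner/executor pairing. plan() stands in for a
# high-level orchestrator (the role Gemini Robotics-ER 1.5 plays) and
# execute() for a low-level action model (the role of Gemini Robotics 1.5).
# The decomposition and "reasoning" are placeholders, not real model output.

def plan(task: str) -> list[str]:
    """Decompose a comma-separated goal into ordered steps."""
    return [f"step {i + 1}: {part.strip()}" for i, part in enumerate(task.split(","))]

def execute(step: str) -> str:
    """Act on one step and report the reasoning behind the action."""
    return f"done ({step}) -- reasoning: matched instruction to motor primitive"

def run_agentic_loop(task: str) -> list[str]:
    """Orchestrate: plan the task, then execute each step in order."""
    return [execute(step) for step in plan(task)]

for log in run_agentic_loop("locate cup, grasp cup, place on shelf"):
    print(log)
```

The design point is the separation of concerns: the planner can reason with external tools and long horizons, while the executor stays close to perception and control.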

Google said Gemini Robotics-ER 1.5 is now available to developers through the Gemini API in Google AI Studio, while Gemini Robotics 1.5 is currently open to select partners. Both models advance robots’ reasoning, spatial awareness, and multi-tasking capabilities.

Spotify launches new policies on AI and music spam

Spotify announced new measures to address AI risks in music, aiming to protect artists’ identities and preserve trust on the platform. The company said AI can boost creativity but also enable harmful content like impersonations and spam that exploit artists and cut into royalties.

A new impersonation policy has been introduced, clarifying that AI-generated vocal clones of artists are only permitted with explicit authorisation. Spotify is strengthening processes to block fraudulent uploads and mismatches, giving artists quicker recourse when their work is misused.

The platform will launch a new spam filter this year to detect and curb manipulative practices like mass uploads and artificially short tracks. The system will be deployed cautiously, with updates added as new abuse tactics emerge, in order to safeguard legitimate creators.

In addition, Spotify will back an industry standard for AI disclosures in music credits, allowing artists and rights holders to show how AI was used in production. The company said these steps reflect its commitment to protecting artists, ensuring transparency, and safeguarding fair royalties as AI reshapes the music industry.

AI SHIELD unveiled to protect financial AI systems

Ant International has introduced AI SHIELD, a security framework to protect AI systems used in financial services. The toolkit aims to reduce risks such as fraud, bias, and misuse in AI applications like fraud detection, payment authorisation, and customer chatbots.

At the centre of AI SHIELD is the AI Security Docker, which applies safeguards throughout development and deployment. The framework includes authentication of AI agents, continuous monitoring to block threats in real time, and ongoing adversarial testing.

Ant said the system will support over 100 million merchants and 1.8 billion users worldwide across services like Alipay+, Antom, Bettr, and WorldFirst. It will also defend against deepfake attacks and account takeovers, with the firm claiming its EasySafePay 360 tool can cut such incidents by 90%.

The initiative is part of Ant’s wider role in setting industry standards, including its work with Google on the Agent Payments Protocol, which defines how AI agents transact securely with user approval.

Tech giants warn Digital Markets Act is failing

Apple and Google have urged the European Union to revisit its Digital Markets Act, arguing the law is damaging users and businesses.

Apple said the rules have forced delays to new features for European customers, including live translation on AirPods and improvements to Apple Maps. It warned that competition requirements could weaken security and slow innovation without boosting the EU economy.

Google raised concerns that its search results must now prioritise intermediary travel sites, leading to higher costs for consumers and fewer direct sales for airlines and hotels. It added that AI services may arrive in Europe up to a year later than elsewhere.

Both firms stressed that enforcement should be more consistent and user-focused. The European Commission is reviewing the Act, with formal submissions under consideration.

CISA warns of advanced campaign exploiting Cisco appliances in federal networks

US cybersecurity officials have issued an emergency directive after hackers breached a federal agency by exploiting critical flaws in Cisco appliances. CISA warned the campaign poses a severe risk to government networks.

Experts told CNN they believe the hackers are state-backed and operating out of China, raising alarm among officials. CISA said hundreds of potentially compromised devices are in use across the federal government, and issued a directive requiring agencies to rapidly assess the scope of the breach.

Cisco confirmed it was urgently alerted to the breaches by US government agencies in May and quickly assigned a specialised team to investigate. The company provided advanced detection tools, worked intensely to analyse compromised environments, and examined firmware from infected devices.

Cisco stated that the attackers exploited multiple zero-day flaws and employed advanced evasion techniques. It suspects a link to the ArcaneDoor campaign reported in early 2024.

CISA has withheld details about which agencies were affected or the precise nature of the breaches, underscoring the gravity of the situation. Investigations are currently underway to contain the ongoing threat and prevent further exploitation.

UK government considers supplier aid after JLR cyberattack

Jaguar Land Rover (JLR) is recovering from a disruptive cyberattack, gradually bringing its systems back online. The company is focused on rebuilding its operations, aiming to restore confidence and momentum as key digital functions come back online.

JLR said it has boosted its IT processing capacity for invoicing to clear its payment backlog. The Global Parts Logistics Centre is also resuming full operations, restoring parts distribution to retailers.

The financial system used for processing vehicle wholesales has been restored, allowing the company to resume car sales and registration. JLR is collaborating with the UK’s NCSC and law enforcement to ensure a secure restart of operations.

Production remains suspended at JLR’s three UK factories in Halewood, Solihull, and Wolverhampton. The company typically produces around 1,000 cars a day, but staff have been instructed to stay at home since the August cyberattack.

The government is considering support packages for the company’s suppliers, some of whom are under financial pressure. A group identifying itself as Scattered Lapsus$ Hunters has claimed responsibility for the incident.
