Persistent WordPress malware campaign hides as fake plugin to evade detection

A new malware campaign is targeting WordPress sites, using steganography and persistent backdoors to maintain unauthorised admin access. The malware comprises two components that work together to keep attackers in control.

The attack begins with malicious files disguised as legitimate WordPress components. These files are heavily obfuscated, create administrator accounts with hardcoded credentials, and bypass traditional detection tools. This redundancy ensures attackers can retain access even after security teams respond.

Researchers say the malware exploits WordPress plugin infrastructure and user management functions to set up redundant access points. It then communicates with command-and-control servers, exfiltrating system data and administrator credentials to attacker-controlled endpoints.

This campaign can allow threat actors to inject malicious code, redirect site visitors, steal sensitive data, or deploy additional payloads. Its persistence and stealth tactics make it difficult to detect, leaving websites vulnerable for long periods.

The main component poses as a fake plugin called ‘DebugMaster Pro’, complete with realistic metadata. Its obfuscated code creates admin accounts, contacts external servers, and hides itself from known administrator IPs so that site owners do not notice its activity.
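For site operators triaging a suspected infection, a useful first check is whether any administrator accounts or plugins exist that they never created, since the campaign relies on exactly those footholds. The sketch below is illustrative only: it assumes WP-CLI (the `wp` command) is available on the server, and the EXPECTED_ADMINS and EXPECTED_PLUGINS allowlists are hypothetical placeholders rather than indicators published by the researchers.

```python
# Hypothetical triage sketch: flag unexpected WordPress administrator
# accounts and unrecognised plugins using WP-CLI. Assumes the `wp`
# binary is installed and the script runs from the WordPress root.
# The allowlists below are placeholders, not values from the report.
import json
import subprocess

EXPECTED_ADMINS = {"site_owner"}                  # accounts you actually created
EXPECTED_PLUGINS = {"akismet", "wordpress-seo"}   # plugins you actually installed

def wp(*args):
    """Run a WP-CLI command and parse its JSON output."""
    out = subprocess.run(
        ["wp", *args, "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

# List all administrator accounts and flag any that are not expected.
admins = wp("user", "list", "--role=administrator",
            "--fields=user_login,user_registered")
for user in admins:
    if user["user_login"] not in EXPECTED_ADMINS:
        print(f"Unexpected admin: {user['user_login']} "
              f"(registered {user['user_registered']})")

# List installed plugins and flag any that are not on the allowlist.
plugins = wp("plugin", "list", "--fields=name,status")
for plugin in plugins:
    if plugin["name"] not in EXPECTED_PLUGINS:
        print(f"Unrecognised plugin: {plugin['name']} ({plugin['status']})")
```

Any flagged account or plugin is only a lead for manual review, not proof of compromise; researchers also recommend comparing plugin files against clean copies from the official repository.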

Spotify launches new policies on AI and music spam

Spotify announced new measures to address AI risks in music, aiming to protect artists’ identities and preserve trust on the platform. The company said AI can boost creativity but also enable harmful content like impersonations and spam that exploit artists and cut into royalties.

A new impersonation policy has been introduced, clarifying that AI-generated vocal clones of artists are only permitted with explicit authorisation. Spotify is strengthening processes to block fraudulent uploads and mismatches, giving artists quicker recourse when their work is misused.

The platform will launch a new spam filter this year to detect and curb manipulative practices like mass uploads and artificially short tracks. The system will be deployed cautiously, with updates added as new abuse tactics emerge, in order to safeguard legitimate creators.

In addition, Spotify will back an industry standard for AI disclosures in music credits, allowing artists and rights holders to show how AI was used in production. The company said these steps reflect its commitment to protecting artists, ensuring transparency, and preserving fair royalties as AI reshapes the music industry.

AI SHIELD unveiled to protect financial AI systems

Ant International has introduced AI SHIELD, a security framework to protect AI systems used in financial services. The toolkit aims to reduce risks such as fraud, bias, and misuse in AI applications like fraud detection, payment authorisation, and customer chatbots.

At the centre of AI SHIELD is the AI Security Docker, which applies safeguards throughout development and deployment. The framework includes authentication of AI agents, continuous monitoring to block threats in real time, and ongoing adversarial testing.

Ant said the system will support over 100 million merchants and 1.8 billion users worldwide across services like Alipay+, Antom, Bettr, and WorldFirst. It will also defend against deepfake attacks and account takeovers, with the firm claiming its EasySafePay 360 tool can cut such incidents by 90%.

The initiative is part of Ant’s wider role in setting industry standards, including its work with Google on the Agent Payments Protocol, which defines how AI agents transact securely with user approval.

Tech giants warn Digital Markets Act is failing

Apple and Google have urged the European Union to revisit its Digital Markets Act, arguing the law is damaging users and businesses.

Apple said the rules have forced delays to new features for European customers, including live translation on AirPods and improvements to Apple Maps. It warned that competition requirements could weaken security and slow innovation without boosting the EU economy.

Google raised concerns that its search results must now prioritise intermediary travel sites, leading to higher costs for consumers and fewer direct sales for airlines and hotels. It added that AI services may arrive in Europe up to a year later than elsewhere.

Both firms stressed that enforcement should be more consistent and user-focused. The European Commission is reviewing the Act, with formal submissions under consideration.

OpenAI unveils ChatGPT Pulse for proactive updates

OpenAI has introduced a preview of ChatGPT Pulse, a feature designed to deliver proactive and personalised updates to Pro users on mobile. Instead of waiting for users to ask questions, Pulse researches chat history, feedback, and connected apps to deliver daily insights.

The updates appear as visual cards covering relevant topics, which users can scan quickly or expand for detail. Integrations with Gmail and Google Calendar are available, enabling suggestions such as drafting meeting agendas, recommending restaurants for trips, or reminding users about birthdays.

These integrations are optional and can be switched off at any time.

Pulse is built to prioritise usefulness over screen time, offering updates that expire daily unless saved or added to chat history. Early trials with students highlighted the importance of simple feedback to refine results, and users can guide what appears by curating topics or rating suggestions.

OpenAI plans to refine the feature further before expanding its availability beyond Pro users.

CISA warns of advanced campaign exploiting Cisco appliances in federal networks

US cybersecurity officials have issued an emergency directive after hackers breached a federal agency by exploiting critical flaws in Cisco appliances. CISA warned the campaign poses a severe risk to government networks.

Experts told CNN they believe the hackers are state-backed and operating out of China, raising alarm among officials. CISA said hundreds of compromised devices are reportedly in use across the federal government, and it has issued a directive requiring agencies to rapidly assess the scope of the breach.

Cisco confirmed it was urgently alerted to the breaches by US government agencies in May and quickly assigned a specialised team to investigate. The company provided advanced detection tools, worked intensely to analyse compromised environments, and examined firmware from infected devices.

Cisco stated that the attackers exploited multiple zero-day flaws and employed advanced evasion techniques. It suspects a link to the ArcaneDoor campaign reported in early 2024.

CISA has withheld details about which agencies were affected or the precise nature of the breaches, underscoring the gravity of the situation. Investigations are currently underway to contain the ongoing threat and prevent further exploitation.

UK government considers supplier aid after JLR cyberattack

Jaguar Land Rover (JLR) is recovering from a disruptive cyberattack, gradually bringing its systems back online. The company is focused on rebuilding operations and restoring confidence and momentum as key digital functions return.

JLR said it has boosted its IT processing capacity for invoicing to clear its payment backlog. The Global Parts Logistics Centre is also resuming full operations, restoring parts distribution to retailers.

The financial system used for processing vehicle wholesales has been restored, allowing the company to resume car sales and registrations. JLR is collaborating with the UK’s National Cyber Security Centre (NCSC) and law enforcement to ensure a secure restart of operations.

Production remains suspended at JLR’s three UK factories in Halewood, Solihull, and Wolverhampton. The company typically produces around 1,000 cars a day, but staff have been instructed to stay at home since the August cyberattack.

The government is considering support packages for the company’s suppliers, some of whom are under financial pressure. A group identifying itself as Scattered Lapsus$ Hunters has claimed responsibility for the incident.

Google and Flo Health settle health data privacy suit for $56 million

Google has agreed to pay $48 million, and Flo Health, maker of a menstrual tracking app, has agreed to pay $8 million, to resolve claims that the app shared users’ health data without their consent.

The lawsuit alleged that Flo used third-party tools to transmit personal information, including menstruation and pregnancy details, to companies like Google, Meta, and analytics firm Flurry.

The class-action case, filed in 2021 by plaintiff Erica Frasco and later consolidated with similar complaints, accused Flo of violating privacy laws by allowing user data to be intercepted via embedded software development kits (SDKs).

Google’s settlement, disclosed this week, covers users who entered reproductive health data into the app between November 2016 and February 2019.

While neither Flo nor Google admitted wrongdoing, the settlement avoids the uncertainty of a trial. A notice to claimants stated the resolution helps sidestep the costs and risks of prolonged litigation.

Meta, a co-defendant, opted to go to trial and was found liable in August for violating California’s Invasion of Privacy Act. A judge recently rejected Meta’s attempt to overturn that verdict.

According to The Record, the case has drawn significant attention from privacy advocates and the tech industry, highlighting the potential legal risks of data-sharing practices tied to ad-tracking technology.

Brazil to host massive AI-ready data centre by RT-One

RT-One plans to build Latin America’s largest AI data centre after securing land in Uberlândia, Minas Gerais, Brazil. The US$1.2bn project will span more than one million square metres, with 300,000 square metres reserved as protected green space.

The site will support high-performance computing, sovereign cloud services, and AI workloads, launching with 100MW capacity and scaling to 400MW. It will run on 100% renewable energy and utilise advanced cooling systems to minimise its environmental impact.

RT-One states that the project will prepare Brazil to compete globally, generate skilled jobs, and train new talent for the digital economy. A wide network of partners, including Hitachi, Siemens, WEG, and Schneider Electric, is collaborating on the development, aiming to ensure resilience and sustainability at scale.

The project is expected to stimulate regional growth, with jobs, training programmes, and opportunities for collaboration between academia and industry. Local officials, including the mayor of Uberlândia, attended the launch event to underline government support for the initiative.

Once complete, the Uberlândia facility will provide sovereign cloud capacity, high-density compute, and AI-ready infrastructure for Brazil and beyond. RT-One says the development will position the city as a hub for digital innovation and strengthen Latin America’s role in the global AI economy.

Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move brings more parental controls and restrictions to protect younger users on Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.
