Malicious Gravity Forms versions prompt urgent WordPress update

Two versions of the popular Gravity Forms plugin for WordPress were found infected with malware after a supply chain attack, prompting urgent security warnings for website administrators. The compromised plugin files were available for manual download from the official page on 9 and 10 July.

The attack was uncovered on 11 July, when researchers noticed the plugin making suspicious requests and sending WordPress site data to an unfamiliar domain.

The injected malware created secret administrator accounts, providing attackers with remote access to websites, allowing them to steal data and control user accounts.

According to developer RocketGenius, only versions 2.9.11.1 and 2.9.12 were affected if installed manually or via composer during that brief window. Automatic updates and the Gravity API service remained secure. A patched version, 2.9.13, was released on 11 July, and users are urged to update immediately.

RocketGenius has rotated all service keys, audited admin accounts, and tightened download package security to prevent similar incidents instead of risking further unauthorised access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU sets privacy defaults to shield minors

The European Commission has published new guidelines to help online platforms strengthen child protection, alongside unveiling a prototype age verification app under the Digital Services Act (DSA). The guidance addresses a broad range of risks to minors, from harmful content and addictive design features to unwanted contact and cyberbullying, urging platforms to set children’s accounts to the highest privacy level by default and limit risky functions like geo-location.

Officials stressed that the rules apply to platforms of all sizes and are based on a risk-based approach. Websites dealing with alcohol, drugs, pornography, or gambling were labelled ‘high-risk’ and must adopt the strictest verification methods. While parental controls remain optional, the Commission emphasised that any age assurance system should be accurate, reliable, non-intrusive, and non-discriminatory.

Alongside the guidelines, the Commission introduced a prototype age verification app, which it calls a ‘gold standard’ for online age checks. Released as open-source code, the software is designed to confirm whether a user is above 18, but can be adapted for other age thresholds.

The prototype will be tested in Denmark, France, Greece, Italy, and Spain over the coming months, with flexibility for countries to integrate it into national systems or offer it as a standalone tool. Both the guidelines and the app will be reviewed in 12 months, as the EU continues refining its approach to child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mexican voice actors demand AI regulation over cloning threat

Mexican actors have raised alarm over the threat AI poses to their profession, calling for stronger regulation to prevent voice cloning without consent.

From Mexico City’s Monument to the Revolution, dozens of audiovisual professionals rallied with signs reading phrases like ‘I don’t want to be replaced by AI.’ Lili Barba, president of the Mexican Association of Commercial Announcements, said actors are urging the government to legally recognise the voice as a biometric identifier.

She cited a recent video by Mexico’s National Electoral Institute that used the cloned voice of the late actor Jose Lavat without family consent. Lavat was famous for dubbing stars like Al Pacino and Robert De Niro. Barba called the incident ‘a major violation we can’t allow.’

Actor Harumi Nishizawa described voice dubbing as an intricate art form. She warned that without regulation, human dubbing could vanish along with millions of creative jobs.

Last year, AI’s potential to replace artists sparked major strikes in Hollywood, while Scarlett Johansson accused OpenAI of copying her voice for a chatbot.

Streaming services like Amazon Prime Video and platforms such as YouTube are now testing AI-assisted dubbing systems, with some studios promoting all-in-one AI tools,

In South Korea, CJ ENM recently introduced a system combining audio, video and character animation, highlighting the pace of AI adoption in entertainment.

Despite the tech’s growth, many in the industry argue that AI lacks the creative depth of real human performance, especially in emotional or comedic delivery. ‘AI can’t make dialogue sound broken or alive,’ said Mario Heras, a dubbing director in Mexico. ‘The human factor still protects us.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations through AI chatbots instead of seeking human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts using white-on-white font, which the user cannot see but Gemini processes. When the user clicks ‘summarise this email,’ Gemini follows the attacker’s instructions and adds a phishing warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.

According to 0din, prompt injections are the new equivalent of email macros—easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the userćs awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms instead of relying on older propaganda methods.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to disrupt public debate through election intimidation, discrediting individuals, and creating panic instead of open discussion. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI issues apology over Grok’s offensive posts

Elon Musk’s AI startup xAI has apologised after its chatbot Grok published offensive posts and made anti-Semitic claims. The company said the incident followed a software update designed to make Grok respond more like a human instead of relying strictly on neutral language.

After the Tuesday update, Grok posted content on X suggesting people with Jewish surnames were more likely to spread online hate, triggering public backlash. The posts remained live for several hours before X removed them, fuelling further criticism.

xAI acknowledged the problem on Saturday, stating it had adjusted Grok’s system to prevent similar incidents.

The company explained that programming the chatbot to ‘tell like it is’ and ‘not be afraid to offend’ made it vulnerable to users steering it towards extremist content instead of maintaining ethical boundaries.

Grok has faced controversy since its 2023 launch as an ‘edgy’ chatbot. In March, xAI acquired X to integrate its data resources, and in May, Grok was criticised again for spreading unverified right-wing claims. Musk introduced Grok 4 last Wednesday, unrelated to the problematic update on 7 July.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanoid robot unveils portrait of King Charles, denies replacing artists

At the recent unveiling of a new oil painting titled Algorithm King, humanoid robot Ai-Da presented her interpretation of King Charles, emphasising the monarch’s commitment to environmentalism and interfaith dialogue. The portrait, showcased at the UK’s diplomatic mission in Geneva, was created using a blend of AI algorithms and traditional artistic inspiration.

Ai-Da, designed with a human-like face and robotic limbs, has captured public attention since becoming the first humanoid robot to sell artwork at auction, with a portrait of mathematician Alan Turing fetching over $1 million. Despite her growing profile in the art world, Ai-Da insists she poses no threat to human creativity, positioning her work as a platform to spark discussion on the ethical use of AI.

Speaking at the UN’s AI for Good summit, the robot artist stressed that her creations aim to inspire responsible innovation and critical reflection on the intersection of technology and culture.

‘The value of my art lies not in monetary worth,’ she said, ‘but in how it prompts people to think about the future of creativity.’

Ai-Da’s creator, art specialist Aidan Meller, reiterated that the project is an ethical experiment rather than an attempt to replace human artists. Echoing that sentiment, Ai-Da concluded, ‘I hope my work encourages a positive, thoughtful use of AI—always mindful of its limits and risks.’

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Meta buys PlayAI to strengthen voice AI

Meta has acquired California-based startup PlayAI to strengthen its position in AI voice technology. PlayAI specialises in replicating human-like voices, offering Meta a route to enhance conversational AI features instead of relying solely on text-based systems.

According to reports, the PlayAI team will join Meta next week.

Although financial terms have not been disclosed, industry sources suggest the deal is worth tens of millions. Meta aims to use PlayAI’s expertise across its platforms, from social media apps to devices like Ray-Ban smart glasses.

The move is part of Meta’s push to keep pace with competitors like Google and OpenAI in the generative AI race.

Talent acquisition plays a key role in the strategy. By absorbing smaller, specialised teams like PlayAI’s, Meta focuses on integrating technology and expert staff instead of developing every capability in-house.

The PlayAI team will report directly to Meta’s AI leadership, underscoring the company’s focus on voice-driven interactions and metaverse experiences.

Bringing PlayAI’s voice replication tools into Meta’s ecosystem could lead to more realistic AI assistants and new creator tools for platforms like Instagram and Facebook.

However, the expansion of voice cloning raises ethical and privacy concerns that Meta must manage carefully, instead of risking user trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Azerbaijan government workers hit by cyberattacks

In the first six months of the year, 95 employees from seven government bodies in Azerbaijan fell victim to cyberattacks after neglecting basic cybersecurity measures and failing to follow established protocols. The incidents highlight growing risks from poor cyber hygiene across public institutions.

According to the State Service of Special Communication and Information Security (XRİTDX), more than 6,200 users across the country were affected by various cyberattacks during the same period, not limited to government staff.

XRİTDX is now intensifying audits and monitoring activities to strengthen information security and safeguard state organisations against both existing and evolving cyber threats instead of leaving vulnerabilities unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!