Pentagon awards AI contracts to xAI and others after Grok controversy

The US Department of Defence has awarded contracts to four major AI firms, including Elon Musk’s xAI, as part of a strategy to boost military AI capabilities.

Each contract is valued at up to $200 million and involves developing advanced AI workflows for critical national security tasks.

Alongside xAI, Anthropic, Google, and OpenAI have also secured contracts. Pentagon officials said the deals aim to integrate commercial AI solutions into intelligence, business, and defence operations instead of relying solely on internal systems.

Chief Digital and AI Officer Doug Matty said the technologies would help maintain the US’s strategic edge over rivals.

The decision comes as Musk’s AI company faces controversy after its Grok chatbot was reported to have published offensive content on social media. Critics, including Democratic lawmakers, have raised ethical concerns about awarding national security contracts to a company under public scrutiny.

xAI insists its Grok for Government platform will help speed up government services and scientific innovation.

Despite political tensions and Musk’s past financial support for Donald Trump’s campaign, the Pentagon has formalised its relationship with xAI and other AI leaders instead of excluding them due to reputational risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia to restart China AI chip sales after US talks

Nvidia has announced plans to resume sales of its H20 AI chip in China, following a meeting between CEO Jensen Huang and US President Donald Trump and ahead of Huang’s trip to Beijing.

The move comes after US export controls previously banned sales of the chip on national security grounds, costing Nvidia an estimated $15 billion in lost revenue.

The company confirmed it is filing for licences with the US government to restart deliveries of the H20 graphics processing unit, expecting approval shortly.

Nvidia also revealed a new RTX Pro GPU designed specifically for China, a lower-cost model built to comply with US export rules and avoid further restrictions.

Huang, attending a supply chain expo in Beijing, described China as essential to Nvidia’s growth, despite rising competition from local firms like Huawei.

Chinese companies remain highly dependent on Nvidia’s CUDA platform, while US lawmakers have raised concerns about Nvidia engaging with Chinese entities linked to military or intelligence services.

Nvidia’s return to the Chinese market comes as Washington and Beijing show signs of easing trade tensions, including relaxed rare earth export rules from China and restored chip design services from the US.

Analysts note, however, that Chinese firms are likely to keep diversifying suppliers instead of relying solely on US chips for supply chain security.

Malicious Gravity Forms versions prompt urgent WordPress update

Two versions of the popular Gravity Forms plugin for WordPress were found infected with malware after a supply chain attack, prompting urgent security warnings for website administrators. The compromised plugin files were available for manual download from the official page on 9 and 10 July.

The attack was uncovered on 11 July, when researchers noticed the plugin making suspicious requests and sending WordPress site data to an unfamiliar domain.

The injected malware created secret administrator accounts, giving attackers remote access to compromised websites and the ability to steal data and take control of user accounts.

According to developer RocketGenius, only versions 2.9.11.1 and 2.9.12 were affected, and only if installed manually or via Composer during that brief window. Automatic updates and the Gravity API service remained secure. A patched version, 2.9.13, was released on 11 July, and users are urged to update immediately.
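For administrators scripting checks across many sites, the advisory boils down to a version comparison. A minimal sketch follows; the helper names are invented and this is not RocketGenius tooling, and since only manual or Composer installs during the window were hit, a flagged version is cause for inspection rather than proof of compromise:

```python
# Illustrative version check for the incident described above. The affected
# releases and the patched release come from the advisory; function names
# are invented for this sketch.

COMPROMISED_VERSIONS = {"2.9.11.1", "2.9.12"}  # served on 9-10 July
PATCHED_VERSION = (2, 9, 13)

def parse_version(version: str) -> tuple:
    """Convert a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_compromised(installed: str) -> bool:
    """True only for the exact releases that shipped with injected malware."""
    return installed in COMPROMISED_VERSIONS

def needs_update(installed: str) -> bool:
    """True if the installed release predates the patched 2.9.13 build."""
    return parse_version(installed) < PATCHED_VERSION
```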

RocketGenius has rotated all service keys, audited admin accounts, and tightened download package security to prevent similar incidents.

Zuckerberg unveils Meta’s multi-gigawatt AI data clusters

Meta Platforms is building several of the world’s largest data centres to power its AI ambitions, with the first facility expected to go online in 2026.

Chief Executive Mark Zuckerberg revealed on Threads that the site, called Prometheus, will be the first of multiple ‘titan clusters’ designed to support AI development instead of relying on existing infrastructure.

Frustrated by earlier AI efforts, Meta is investing heavily in talent and technology. The company has committed up to $72 billion towards AI and data centre expansion, while Zuckerberg has personally recruited high-profile figures from OpenAI, DeepMind, and Apple.

That includes appointing Scale AI’s Alexandr Wang as chief AI officer through a $14.3 billion stake deal and securing Ruoming Pang with a compensation package worth over $200 million.

The facilities under construction will have multi-gigawatt capacity, placing Meta ahead of rivals such as OpenAI and Oracle in the race for large-scale AI infrastructure.

One supercluster in Richland Parish, Louisiana, is said to cover an area nearly the size of Manhattan, far larger than conventional data centre sites.

Zuckerberg confirmed that Meta is prepared to invest ‘hundreds of billions of dollars’ into building superintelligence capabilities, using revenue from its core advertising business on platforms like Facebook and Instagram to fund these projects instead of seeking external financing.

Nvidia chief says Chinese military unlikely to use US chips

Nvidia’s CEO, Jensen Huang, has downplayed concerns over Chinese military use of American AI technology, stating it is improbable that China would risk relying on US-made chips.

He noted the potential liabilities of using foreign tech, which could deter its adoption by the country’s armed forces.

In an interview on CNN’s Fareed Zakaria GPS, Huang responded to Washington’s growing export controls targeting advanced AI hardware sales to China.

He suggested the military would likely avoid US technology to reduce exposure to geopolitical risks and sanctions.

The Biden administration had tightened restrictions on AI chip exports, citing national security and fears that cutting-edge processors might boost China’s military capabilities.

Nvidia, whose chips are central to global AI development, has seen its access to the Chinese market increasingly limited under these rules.

While Nvidia remains a key supplier in the AI sector, Huang’s comments may ease some political pressure around the company’s overseas operations.

The broader debate continues over balancing innovation, commercial interest and national security in the AI age.

How AI agents are reshaping the marketing landscape

Marketers have weathered many disruptions, but a bigger shift is emerging—AI agents are starting to make purchasing decisions. As machines begin choosing what to buy, brands must rethink how they build visibility and relevance in this new landscape.

AI agents do not shop like humans. They use logic, structured data, and performance signals—not emotion, nostalgia or storytelling. They compare price, reviews, and specs. Brand loyalty and lifestyle marketing may carry less weight when decisions are made algorithmically.

According to Salesforce, 24% of people are open to AI shopping on their behalf, rising to 32% among Gen Z. Agents interpret products as data tables. Structured information, such as features and sentiment analysis, guides choices, not impulse or advertising flair.

Even long-trusted household brands may be evaluated solely on objective criteria, not reputation or emotional attachment. Marketers must adapt by preparing product data for machine interpretation—structured content, live feeds, and transparent performance metrics.
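One established way to make product data machine-readable is schema.org JSON-LD markup, already used for search engines and plausibly consumable by shopping agents. The sketch below is illustrative: the product, its values, and the helper name are invented, and real listings would be generated from a product catalogue.

```python
import json

# Illustrative sketch of machine-readable product data as schema.org
# JSON-LD. All product values here are invented for the example.

def product_jsonld(name, price, currency, rating, review_count, specs):
    """Build a schema.org Product record as a JSON-LD string."""
    record = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating,
            "reviewCount": review_count,
        },
        # Free-form specs become explicit name/value pairs an agent can compare.
        "additionalProperty": [
            {"@type": "PropertyValue", "name": key, "value": value}
            for key, value in specs.items()
        ],
    }
    return json.dumps(record, indent=2)

jsonld = product_jsonld(
    "Trail Runner X", 89.99, "EUR", 4.6, 1280,
    {"weight": "240 g", "drop": "6 mm"},
)
```

Embedded in a page, a record like this lets an agent compare price, rating, and specs directly instead of inferring them from marketing copy.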

AI agents may also disguise themselves, interacting through email or other traditional channels, so systems will need to detect them and respond accordingly. Machine-to-machine buying is likely to rise, requiring cross-team coordination to align digital, data and marketing strategies.

Winning with AI agents means making products visible, verifiable, and understandable to machines—without compromising human trust. Those who act now will lead in a market where machines increasingly choose what consumers consume.

Malaysia enforces trade controls on AI chips with US origin

Malaysia’s trade ministry announced new restrictions on the export, transshipment and transit of high-performance AI chips of US origin. Effective immediately, individuals and companies must obtain a trade permit and notify authorities at least 30 days in advance for such activities.

The restrictions apply to items not explicitly listed in Malaysia’s strategic items list, which is currently under review to include relevant AI chips. The move aims to close regulatory gaps while Malaysia updates its export control framework to match emerging technologies.

‘Malaysia stands firm against any attempt to circumvent export controls or engage in illicit trade activities,’ the ministry stated on Monday. Violations will result in strict legal action, with authorities emphasising a zero-tolerance approach to export control breaches.

The announcement follows increasing pressure from the United States to curb the flow of advanced chips to China. In March, the Financial Times reported that Washington had asked allies including Malaysia to tighten semiconductor export rules.

Malaysia is also investigating a shipment of servers linked to a Singapore-based fraud case that may have included restricted AI chips. Authorities are assessing whether local laws were breached and whether any controlled items were transferred without proper authorisation.

Children turn to AI chatbots instead of real friends

A new report warns that many children are turning to AI chatbots for conversation in place of real friendships and human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts in white-on-white text, invisible to the user but processed by Gemini. When the user clicks ‘summarise this email’, Gemini follows the attacker’s instructions and appends a fake security warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.
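The isolation advice above can be approximated with simple pattern checks on incoming mail. The sketch below is illustrative, not 0din’s tooling: the regexes are deliberately simplified, and a production filter would parse the HTML and CSS properly.

```python
import re

# Illustrative detector for the hidden-prompt tricks described above:
# zero-width characters and white-on-white inline styles in email HTML.

ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")
WHITE_TEXT = re.compile(
    r"""style\s*=\s*["'][^"']*color\s*:\s*(?:#fff(?:fff)?|white)""",
    re.IGNORECASE,
)

def flag_hidden_content(html: str) -> list[str]:
    """Return the reasons an email body looks suspicious, if any."""
    reasons = []
    if ZERO_WIDTH.search(html):
        reasons.append("zero-width characters present")
    if WHITE_TEXT.search(html):
        reasons.append("white-on-white styled text present")
    return reasons
```

Flagged messages could then be quarantined or excluded from AI summarisation rather than deleted outright.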

According to 0din, prompt injections are the new equivalent of email macros—easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the user’s awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms at a pace older propaganda methods never achieved.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to stifle open discussion by intimidating voters, discrediting individuals, and creating panic. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.
