X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Musk’s controversial leadership and the ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia nears $4 trillion milestone as AI boom continues

Nvidia has made financial history by nearly reaching a $4 trillion market valuation, a milestone highlighting investor confidence in AI as a powerful economic force.

Shares briefly peaked at $164.42 before closing slightly lower at $162.88, just under the record threshold. The rise underscores Nvidia’s position as the leading supplier of AI chips amid soaring demand from major tech firms.

Led by CEO Jensen Huang, the company now holds a market value larger than the economies of Britain, France, or India.

Nvidia’s growth has helped lift the Nasdaq to new highs, aided in part by improved market sentiment following Donald Trump’s softened stance on tariffs.

However, trade barriers with China continue to pose risks, including export restrictions that cost Nvidia $4.5 billion in the first quarter of 2025.

Despite those challenges, Nvidia secured a major AI infrastructure deal in Saudi Arabia during Trump’s visit in May. Innovations such as the next-generation Blackwell GPUs and ‘real-time digital twins’ have helped maintain investor confidence.

The company’s stock has risen over 21% in 2025, far outpacing the Nasdaq’s 6.7% gain. Nvidia chips are also being used by the US administration as leverage in global tech diplomacy.

While competition from Chinese AI firms like DeepSeek briefly knocked $600 billion off Nvidia’s valuation, Huang views rivalry as essential to progress. With the growing demand for complex reasoning models and AI agents, Nvidia remains at the forefront.

Still, the fast pace of AI adoption raises concerns about job displacement, with firms like Ford and JPMorgan already reporting workforce impacts.

xAI unveils Grok 4 with top benchmark scores

Elon Musk’s AI company, xAI, has launched its latest flagship model, Grok 4, alongside an ultra-premium $300 monthly plan named SuperGrok Heavy.

Grok 4, which competes with OpenAI’s ChatGPT and Google’s Gemini, can handle complex queries and interpret images. It is now integrated more deeply into the social media platform X, which Musk also owns.

Despite recent controversy, including antisemitic responses generated by Grok’s official X account, xAI focused on showcasing the model’s performance.

Musk claimed Grok 4 is ‘better than PhD level’ in all academic subjects and revealed a high-performing version called Grok 4 Heavy, which uses multiple AI agents to solve problems collaboratively.

The models scored strongly on benchmark exams, including a 25.4% score for Grok 4 on Humanity’s Last Exam, outperforming major rivals. With tools enabled, Grok 4 Heavy reached 44.4%, nearly doubling OpenAI’s and Google’s results.

It also achieved a leading score of 16.2% on the ARC-AGI-2 pattern recognition test, nearly double that of Claude Opus 4.

xAI is targeting developers through its API and enterprise partnerships while teasing upcoming tools: an AI coding model in August, a multi-modal agent in September, and video generation in October.

Yet the road ahead may be rocky, as the company works to overcome trust issues and position Grok as a serious rival in the AI arms race.

Asia emerges as global hub for telco‑powered AI infrastructure

Asia‑Pacific telecom operators are rapidly building sovereign AI factories and high‑performance data centres optimised for AI workloads. They are doing so by retrofitting existing facilities with NVIDIA GPUs and leveraging their fibre networks and systems‑management expertise.

Major Southeast‑Asian telcos, including Singtel (RE: AI), Indonesia’s Indosat Ooredoo Hutchison, Vietnam’s FPT, Malaysia’s YTL, and India’s Tata Communications, are pioneering cloud‑based AI platforms tailored to local enterprise needs. These investments often mirror national AI strategies focused on data sovereignty and regional self‑sufficiency.

Operators are pursuing a hybrid strategy: partnering with hyperscalers like AWS and Azure for scale while building local infrastructure to avoid vendor lock‑in, cost volatility, and compliance risks. Examples include SoftBank and KDDI in Japan, KT in South Korea, Viettel in Vietnam, and Kazakhtelecom in Central Asia.

This telco‑led, on‑premises AI infrastructure boom marks a significant shift in global AI deployment, transforming operators from mere connectivity providers into essential sovereign AI enablers.

Google partners with UK government on AI training

The UK government has struck a major partnership with Google Cloud aimed at modernising public services by eliminating ageing IT systems and equipping 100,000 civil servants with digital and AI skills by 2030.

Backed by DSIT, the initiative targets sectors like the NHS and local councils, seeking both operational efficiency and workforce transformation.

Replacing legacy contracts, some of which date back decades, could unlock as much as £45 billion in efficiency savings, say ministers. Google DeepMind will provide technical expertise to help departments adopt emerging AI solutions and accelerate public sector innovation.

Despite these promising aims, privacy campaigners warn that reliance on a US-based tech giant threatens national data sovereignty and may lead to long-term lock-in.

Foxglove’s Martha Dark described the deal as ‘dangerously naive’, with concerns around data access, accountability, public procurement processes and geopolitical risk.

As ministers pursue broader technological transformation, similar partnerships with Microsoft, OpenAI and Meta are underway, reflecting an industry-wide effort to bridge digital skills gaps and bring agile solutions into Whitehall.

AI interviews leave job candidates in the dark

An increasing number of startups are now using AI to conduct video job interviews, often without making this clear to applicants. Senior software developers are finding themselves unknowingly engaging with automated systems instead of human recruiters.

Applicants are typically asked to submit videos responding to broad interview prompts, including examples and case studies, often without time constraints or human engagement.

These asynchronous interviews are then processed by AI systems that evaluate responses using natural language processing, facial cues and tone to assign scores.

Critics argue that this approach shifts the burden of labour onto job seekers, while employers remain unaware of the hidden costs and flawed metrics. There is also concern about the erosion of dignity in hiring, with candidates treated as data points rather than individuals.

Although AI offers potential efficiencies, the current implementation risks deepening dysfunctions in recruitment by prioritising speed over fairness, transparency and candidate experience. Until the technology is used more thoughtfully, experts advise job seekers to avoid such processes altogether.

AI industry warned of looming financial collapse

Despite widespread popularity and unprecedented investment, OpenAI may be facing a deepening financial crisis. Since launching ChatGPT, the company has lost billions yearly, including an estimated $5 billion in 2024 alone.

Tech critic Ed Zitron argues that the AI industry is heading towards a ‘subprime AI crisis’, comparing the sector’s inflated valuations and spiralling losses to the subprime mortgage collapse in 2007. Startups like OpenAI and Anthropic continue to operate at huge losses.

Companies relying on AI infrastructure are already feeling the squeeze. Anysphere, which uses Anthropic’s models, recently raised prices sharply, angering users; the company blamed costs passed down from its infrastructure provider.

To manage exploding demand, OpenAI has also introduced tiered pricing and restricted services for free users, raising concerns that access to AI tools will soon be locked behind expensive paywalls. With 800 million weekly users, any future revenue strategy could alienate a large part of its global base.

Zitron believes these conditions cannot sustain long-term growth and will ultimately damage revenues and public trust. The industry, he warns, may be building its future on unstable ground.

AI and big data to streamline South Korea’s drug evaluation processes

The Ministry of Food and Drug Safety (MFDS) of South Korea is modernising its drug review and evaluation processes by incorporating AI, big data, and other emerging technologies.

The efforts are being spearheaded by the ministry’s National Institute for Food and Drug Safety Evaluation (NIFDS).

Starting next year, NIFDS plans to apply AI to assist with routine tasks such as preparing review data.

The initial focus will be synthetic chemical drugs, gradually expanding to other product categories.

‘Initial AI applications will focus on streamlining repetitive tasks,’ said Jeong Ji-won, head of the Pharmaceutical and Medical Device Research Department at NIFDS.

‘The AI system is being developed internally, and we are evaluating its potential for real-world inspection scenarios. A phased approach is necessary due to the large volume of data required,’ Jeong added.

In parallel, NIFDS is exploring using big data in various regulatory activities.

One initiative involves applying big data analytics to enhance risk assessments during overseas GMP inspections. ‘Standardisation remains a challenge due to varying formats across facilities,’ said Sohn Kyung-hoon, head of the Drug Research Division.

‘Nonetheless, we’re working to develop a system that enhances the efficiency of inspections without relying on foreign collaborations.’ Efforts also include building domain-specific Korean-English translation models for safety documentation.

The institute also integrates AI into pharmaceutical manufacturing oversight and is developing public data utilisation frameworks. These efforts include systems for analysing adverse drug reaction reports and standardising data inputs.

NIFDS is actively researching new analysis methods and safety protocols regarding impurity control.

‘We’re prioritising research on impurities such as NDMA,’ Sohn noted. Simultaneous detection methods are being tailored for smaller manufacturers.

New categorisation techniques are also being developed to monitor previously untracked substances.

On the biologics front, NIFDS aims to finalise its mRNA vaccine evaluation technology by year-end.

The five-year project supports the national strategy for improving infectious disease preparedness in South Korea, including work on delivery mechanisms and material composition.

‘This initiative is part of our broader strategy to improve preparedness for future infectious disease outbreaks,’ said Lee Chul-hyun, head of the Biologics Research Division.

Evaluation protocols for antibody drugs are still in progress. However, indirect support is being provided through guidelines and benchmarking against international cases. Separately, the Herbal Medicine Research Division is upgrading its standardised product distribution model.

The current use-based system will shift to a field-based one next year, extending to the pharmaceuticals, functional foods, and cosmetics sectors.

‘We’re refining the system to improve access and quality control,’ said Hwang Jin-hee, head of the division. Collaboration with regional research institutions remains a key component of this work.

NIFDS currently offers 396 standardised herbal medicines. The institute continues to develop new reference materials annually as part of its evolving strategy.

AI scam targets donors with fake orphan images

Cambodian authorities have warned the public about increasing online scams using AI-generated images to deceive donors. The scams often show fabricated scenes of orphaned children or grieving families, with QR codes attached to collect money.

One Facebook account, ‘Khmer Khmer’, was named in an investigation by the Anti-Cyber Crime Department for spreading false stories and deepfake images to solicit charity donations. These included claims of a wife unable to afford a coffin and false fundraising efforts near the Thai border.

The department confirmed that AI-generated realistic visuals are designed to manipulate emotions and lure donations. Cambodian officials continue investigations and have promised legal action if evidence of criminal activity is confirmed.

Authorities reminded the public to remain cautious and to only contribute to verified and officially recognised campaigns. While AI’s ability to create realistic content has many uses, it also opens the door to dangerous forms of fraud and misinformation when abused.

Privacy concerns rise over Gemini’s on‑device data access

From 7 July 2025, Google’s Gemini AI will default to accessing your WhatsApp, SMS and call apps, even without Gemini Apps Activity enabled, through an Android OS-level ‘System Intelligence’ integration.

Google insists the assistant cannot read or summarise your WhatsApp messages; it only performs actions like sending replies and accessing notifications.

Integration occurs at the operating‑system level, granting Gemini enhanced control over third‑party apps, including reading and responding to notifications or handling media.

However, this has prompted criticism from privacy‑minded users, who view it as intrusive data access, even though Google maintains no off‑device content sharing.

Alarmed users quickly turned off the feature via Gemini’s in‑app settings or resorted to more advanced measures, like removing Gemini with ADB or turning off the Google app entirely.
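For readers curious about the ADB route mentioned above, the removal is typically a single command run from a computer with the device connected over USB debugging. This is a sketch under assumptions: the package name below reflects Gemini’s former Bard branding and may differ by device, region or app version.

```shell
# Removes the Gemini app for the current user profile only (not a full
# system wipe); it can be restored later from the Play Store.
# Assumed package name — verify with `adb shell pm list packages | grep google`.
adb shell pm uninstall --user 0 com.google.android.apps.bard
```

Because `--user 0` scopes the uninstall to the primary user, this is reversible, unlike modifying system partitions.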

The controversy highlights growing concerns over how deeply OS‑level AI tools can access personal data, blurring the lines between convenience and privacy.
