Nvidia nears $4 trillion milestone as AI boom continues

Nvidia has neared a historic $4 trillion market valuation, a milestone that highlights investor confidence in AI as a powerful economic force.

Shares briefly peaked at $164.42 before closing slightly lower at $162.88, just under the record threshold. The rise underscores Nvidia’s position as the leading supplier of AI chips amid soaring demand from major tech firms.

Led by CEO Jensen Huang, the company now holds a market value larger than the economies of Britain, France, or India.

Nvidia’s growth has helped lift the Nasdaq to new highs, aided in part by improved market sentiment following Donald Trump’s softened stance on tariffs.

However, trade barriers with China continue to pose risks, including export restrictions that cost Nvidia $4.5 billion in the first quarter of 2025.

Despite those challenges, Nvidia secured a major AI infrastructure deal in Saudi Arabia during Trump’s visit in May. Innovations such as the next-generation Blackwell GPUs and ‘real-time digital twins’ have helped maintain investor confidence.

The company’s stock has risen over 21% in 2025, far outpacing the Nasdaq’s 6.7% gain. Nvidia chips are also being used by the US administration as leverage in global tech diplomacy.

While competition from Chinese AI firms like DeepSeek briefly knocked $600 billion off Nvidia’s valuation, Huang views rivalry as essential to progress. With the growing demand for complex reasoning models and AI agents, Nvidia remains at the forefront.

Still, the fast pace of AI adoption raises concerns about job displacement, with firms like Ford and JPMorgan already reporting workforce impacts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI unveils Grok 4 with top benchmark scores

Elon Musk’s AI company, xAI, has launched its latest flagship model, Grok 4, alongside an ultra-premium $300 monthly plan named SuperGrok Heavy.

Grok 4, which competes with OpenAI’s ChatGPT and Google’s Gemini, can handle complex queries and interpret images. It is now integrated more deeply into the social media platform X, which Musk also owns.

Despite recent controversy, including antisemitic responses generated by Grok’s official X account, xAI focused on showcasing the model’s performance.

Musk claimed Grok 4 is ‘better than PhD level’ in all academic subjects and revealed a high-performing version called Grok 4 Heavy, which uses multiple AI agents to solve problems collaboratively.

The models scored strongly on benchmark exams, including a 25.4% score for Grok 4 on Humanity’s Last Exam, outperforming major rivals. With tools enabled, Grok 4 Heavy reached 44.4%, nearly doubling OpenAI’s and Google’s results.

It also achieved a leading score of 16.2% on the ARC-AGI-2 pattern recognition test, nearly double that of Claude Opus 4.

xAI is targeting developers through its API and enterprise partnerships while teasing upcoming tools: an AI coding model in August, a multi-modal agent in September, and video generation in October.

Yet the road ahead may be rocky, as the company works to overcome trust issues and position Grok as a serious rival in the AI arms race.


AI fluency is the new office software skill

As tools like ChatGPT, Copilot, and other generative AI systems become embedded in daily workflows, employers increasingly prioritise a new skill: AI fluency.

Much like proficiency in office software became essential in the past, knowing how to collaborate effectively with AI is now a growing requirement across industries.

But interacting with AI isn’t always intuitive. Many users encounter generic or unhelpful responses from chatbots and assume the technology is limited. In reality, AI systems rely heavily on the context they are given, and that’s where users come in.

Rather than treating AI as a search engine, it helps to see it as a partner that needs guidance. A vague prompt like ‘write a proposal’ is unlikely to produce meaningful results. A better approach provides background, direction, and clear expectations.

One practical framework is CATS: context, angle, task, and style.

Context sets the stage. It includes your role, the situation, the audience, and constraints. For example, ‘I’m a nonprofit director writing a grant proposal for an environmental education program in urban schools’ offers much more to work with than a general request.

Angle defines the perspective. You can ask the AI to act as a peer reviewer, a mentor, or even a sceptical audience member. These roles help shape the tone and focus of the response.

Task clarifies the action you want. Instead of asking for help with a presentation, try ‘Suggest three ways to improve my opening slide for an audience of small business owners.’

Style determines the format and tone. Whether you need a formal report, a friendly email, or an outline in bullet points, specifying the style helps the AI deliver a more relevant output.
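As an illustrative sketch, the four CATS elements can be assembled into one structured prompt. The `build_prompt` helper and its field labels below are invented for this example, not part of any vendor’s API:

```python
def build_prompt(context: str, angle: str, task: str, style: str) -> str:
    """Assemble a prompt from the four CATS elements: context, angle, task, style."""
    return "\n".join([
        f"Context: {context}",
        f"Angle: respond as {angle}.",
        f"Task: {task}",
        f"Style: {style}",
    ])

# Example drawn from the grant-proposal scenario above.
prompt = build_prompt(
    context=("I'm a nonprofit director writing a grant proposal for an "
             "environmental education program in urban schools."),
    angle="a sceptical grant reviewer",
    task="Suggest three ways to strengthen my opening paragraph.",
    style="Formal tone, bullet points.",
)
print(prompt)
```

Even without a helper function, writing the four elements out in this order is a quick way to check a prompt covers all of them before sending it.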

Beyond prompts, users can also practise context engineering: managing the environment around the prompt. This includes uploading relevant documents, building on previous chats, or setting parameters through instructions, all of which tailor responses more closely to your needs.

Think of prompting as a conversation, not a one-shot command. If the initial response isn’t ideal, clarify, refine, or build on it. Ask follow-up questions, adjust your instructions, or carry the useful elements into a new thread to develop further.
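Treating the exchange as a conversation rather than a one-shot command can be sketched as a growing message list, the pattern most chat interfaces use internally. The role names and placeholder reply here are illustrative and not tied to any particular provider’s API:

```python
# Each turn is appended to a shared history, so every follow-up builds on
# the context of earlier turns instead of starting from scratch.
history = [
    {"role": "user", "content": "Draft a short email inviting staff to a training session."},
    {"role": "assistant", "content": "(first draft returned by the model)"},
]

def refine(thread: list, follow_up: str) -> list:
    """Append a follow-up instruction; the full thread accompanies the next request."""
    thread.append({"role": "user", "content": follow_up})
    return thread

refine(history, "Make the tone friendlier and mention that attendance is optional.")
refine(history, "Shorten it to three sentences.")

# The whole history, not just the latest message, is what gives the model context.
print(len(history))
```

The practical point is that each refinement is cheap: a short follow-up inherits all the context already in the thread, so there is no need to restate the original request.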

That said, it’s essential to stay critical. AI systems can mimic natural conversation, but don’t truly understand the information they provide. Human oversight remains crucial. Always verify outputs, especially in professional or high-stakes contexts.

Ultimately, AI tools are powerful collaborators—but only when paired with clear guidance and human judgment. Provide the correct input, and you’ll often find the output exceeds expectations.


Privacy concerns rise over Gemini’s on‑device data access

From 7 July 2025, Google’s Gemini AI will default to accessing your WhatsApp, SMS and call apps, even without Gemini Apps Activity enabled, through an Android OS ‘System Intelligence’ integration.

Google insists the assistant cannot read or summarise your WhatsApp messages; it only performs actions like sending replies and accessing notifications.

Integration occurs at the operating‑system level, granting Gemini enhanced control over third‑party apps, including reading and responding to notifications or handling media.

However, this has prompted criticism from privacy‑minded users, who view it as intrusive data access, even though Google maintains no off‑device content sharing.

Alarmed users quickly disabled the feature via Gemini’s in‑app settings or resorted to more advanced measures, such as removing Gemini with ADB or turning off the Google app entirely.

The controversy highlights growing concerns over how deeply OS‑level AI tools can access personal data, blurring the lines between convenience and privacy.


LG’s Exaone Path 2.0 uses AI to transform genetic testing

LG AI Research has introduced Exaone Path 2.0, an upgraded AI model designed to analyse pathology images for disease diagnosis, significantly reducing the time required for genetic testing.

The new model, unveiled Wednesday, can reportedly process pathology images in under a minute, a sharp contrast with conventional genetic testing methods that often take more than two weeks.

According to LG, the AI system offers enhanced accuracy in detecting genetic mutations and gene expression patterns by learning from detailed image patches and full-slide pathology data.

Developed by LG AI Research, a division of the LG Group, Exaone Path 2.0 is trained on over 10,000 whole-slide images (WSIs) and multiomics pairs, enabling it to integrate structural information with molecular biology insights. The company said it has achieved a 78.4 percent accuracy rate in predicting genetic mutations.

The model has also been tailored for specific applications in oncology, including lung and colorectal cancers, where it can help clinicians identify patient groups most likely to benefit from targeted therapies.

LG AI Research is collaborating with Professor Hwang Tae-hyun and his team at Vanderbilt University Medical Centre in the US to further its application in real-world clinical settings.

Their shared goal is to develop a multimodal medical AI platform that can support precision medicine directly within clinical environments.

Hwang, a key contributor to the US government’s Cancer Moonshot program and founder of the Molecular AI Initiative at Vanderbilt, emphasised that the aim is to create AI tools usable by clinicians in active medical practice, rather than limiting innovation to the lab.

In addition to oncology, LG AI Research plans to extend its multimodal AI initiatives into transplant rejection, immunology, and diabetes.

It is also collaborating with the Jackson Laboratory to support Alzheimer’s research and working with Professor Baek Min-kyung’s team at Seoul National University on next-generation protein structure prediction.


Activision pulls game after PC hacking reports

Activision has removed Call of Duty: WWII from the Microsoft Store and PC Game Pass following reports that hackers exploited a serious vulnerability in the game. Only the PC versions from Microsoft’s platforms are affected, while the game remains accessible via Steam and consoles.

The decision came after several players reported their computers being hijacked during gameplay. Streamed footage showed remote code execution attacks, where malicious code was deployed through the game to seize control of victims’ devices.

An outdated and insecure build of the game, which had previously been patched elsewhere, was uploaded to Microsoft’s platforms. Activision has yet to restore access and continues to investigate the issue.

Call of Duty: WWII was only added to Game Pass in June. The vulnerability highlights the dangers of pushing old game builds without sufficient review, exposing users to significant cybersecurity risks.


M&S urges UK firms to report cyberattacks

Marks & Spencer has called for a legal obligation requiring UK companies to report major cyberattacks to national authorities. Chairman Archie Norman told parliament that two serious cyberattacks on prominent firms in recent months had gone unreported.

He argued that underreporting leaves a significant gap in cybersecurity knowledge, and that requiring companies to report material incidents to the National Cyber Security Centre would not amount to excessive regulation.

The retailer was hit in April by what is believed to be a ransomware attack involving DragonForce, with links to the Scattered Spider hacking group.

The breach forced a seven-week suspension of online clothing orders, costing the business around £300 million in lost operating profit.

M&S had doubled its cyber insurance cover last year, though processing the claim may take 18 months.

General counsel Nick Folland added that companies must be prepared to operate manually, using pen and paper, when systems go down.


Over 2.3 million users hit by Chrome and Edge extension malware

A stealthy browser hijacking campaign has infected over 2.3 million users through Chrome and Edge extensions that appeared safe and even displayed Google’s verified badge.

According to cybersecurity researchers at Koi Security, the campaign, dubbed RedDirection, involves 18 malicious extensions offering legitimate features like emoji keyboards and VPN tools, while secretly tracking users and backdooring their browsers.

One of the most popular extensions — a colour picker developed by ‘Geco’ — continues to be available on the Chrome and Edge stores with thousands of positive reviews.

While it works as intended, the extension also hijacks sessions, records browsing activity, and sends data to a remote server controlled by attackers.

What makes the campaign more insidious is how the malware was delivered. The extensions began as clean, useful tools, but malicious code was quietly added in later updates.

Because Google and Microsoft push extension updates automatically, most users received the spyware without clicking anything or taking any action.

Koi Security’s Idan Dardikman describes the campaign as one of the largest documented. Users are advised to uninstall any affected extensions, clear browser data, and monitor accounts for unusual activity.

Despite the serious breach, Google and Microsoft have not responded publicly.


Grok AI chatbot suspended in Turkey following court order

A Turkish court has issued a nationwide ban on Grok, the AI chatbot developed by Elon Musk’s company xAI, following recent developments involving the platform.

The ruling, delivered on Wednesday by a criminal court in Ankara, instructed Turkey’s telecommunications authority to block access to the chatbot across the country. The decision came after public filings under Turkey’s internet law prompted a judicial review.

Grok, which is integrated into the X platform (formerly Twitter), recently rolled out an update to make the system more open and responsive. The update has sparked broader global discussions about the challenges of moderating AI-generated content in diverse regulatory environments.

In a brief statement, X acknowledged the situation and confirmed that appropriate content moderation measures had been implemented in response. The ban places Turkey among many countries examining the role of generative AI tools and the standards that govern their deployment.


AI-powered imposter poses as US Secretary of State Rubio

An imposter posing as US Secretary of State Marco Rubio used an AI-generated voice and text messages to contact high-ranking officials, including foreign ministers, a senator, and a state governor.

The messages, sent through SMS and the encrypted app Signal, triggered an internal warning across the US State Department, according to a classified cable dated 3 July.

The individual created a fake Signal account using the name ‘Marco.Rubio@state.gov’ and began contacting targets in mid-June.

At least two received AI-generated voicemails, while others were encouraged to continue the chat via Signal. US officials said the aim was likely to gain access to sensitive information or compromise official accounts.

The State Department confirmed it is investigating the breach and has urged all embassies and consulates to remain alert. While no direct cyber threat was found, the department warned that shared information could still be exposed if targets were deceived.

A spokesperson declined to provide further details for security reasons.

The incident appears linked to a broader wave of AI-driven disinformation. A second operation, possibly tied to Russian actors, reportedly targeted Gmail accounts of journalists and former officials.

The FBI has warned of rising cases of ‘smishing’ and ‘vishing’ involving AI-generated content.

Experts now warn that deepfakes are becoming harder to detect, as the technology advances faster than defences.
