AI controversy surrounds Will Smith’s comeback shows

Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.

Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.

Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.

The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.

Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three ΑΙ chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gmail accounts targeted in phishing wave after Google data leak

Hackers linked to the ShinyHunters group have compromised Google’s Salesforce systems, leading to a data leak that puts Gmail and Google Cloud users at risk of phishing attacks.

Google confirmed that customer and company names were exposed, though no passwords were stolen. Attackers are now exploiting the breach with phishing schemes, including fake account resets and malware injection attempts through outdated access points.

With Gmail and Google Cloud serving around 2.5 billion users worldwide, both companies and individuals could be targeted. Early reports on Reddit describe callers posing as Google staff warning of supposed account breaches.

Google urges users to strengthen protections by running its Security Checkup, enabling Advanced Protection, and switching to passkeys instead of passwords. The company emphasised that its staff never initiates unsolicited password resets by phone or email.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud’s new AI tools expand enterprise threat protection

Following last week’s announcements on AI-driven cybersecurity, Google Cloud has unveiled further tools at its Security Summit 2025 aimed at protecting enterprise AI deployments and boosting efficiency for security teams.

The updates build on prior innovations instead of replacing them, reinforcing Google’s strategy of integrating AI directly into security operations.

Vice President and General Manager Jon Ramsey highlighted the growing importance of agentic approaches as AI agents operate across increasingly complex enterprise environments.

Building on the previous rollout, Google now introduces Model Armor protections, designed to shield AI agents from prompt injections, jailbreaking, and data leakage, enhancing safeguards without interrupting existing workflows.

Additional enhancements include the Alert Investigation agent, which automates event enrichment and analysis while offering actionable recommendations.

By combining Mandiant threat intelligence feeds with Google’s Gemini AI, organisations can now detect and respond to incidents across distributed agent networks more rapidly and efficiently than before.

SecOps Labs and updated SOAR dashboards provide early access to AI-powered threat detection experiments and comprehensive visualisations of security operations.

These tools allow teams to continue scaling agentic AI security, turning previous insights into proactive, enterprise-ready protections for real-world deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Energy and government sectors in Poland face mounting hacktivist threats

Poland has become the leading global target for politically and socially motivated cyberattacks, recording over 450 incidents in the second quarter of 2025, according to Spain’s Industrial Cybersecurity Center.

The report ranked Poland ahead of Ukraine, the UK, France, Germany, and other European states in hacktivist activity. Government institutions and the energy sector were among the most targeted, with organisations supporting Ukraine described as especially vulnerable.

ZIUR’s earlier first-quarter analysis had warned of a sharp rise in attacks against state bodies across Europe. Pro-Russian groups were identified as among the most active, increasingly turning to denial-of-service campaigns to disrupt critical operations.

Europe accounted for the largest share of global hacktivism in the second quarter, with more than 2,500 successful denial-of-service attacks recorded between April and June, underlining the region’s heightened exposure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

The 2023–2030 Cyber Security Strategy designed by the Government of Australia aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations vulnerable as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by the cyber defence unit in Estonia, this network would mobilise unconventional talent, retirees, hobbyist hackers, and students, to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mount Fuji eruption simulated in an AI video for Tokyo

Residents of Tokyo have been shown a stark warning of what could happen if Mount Fuji erupts.

The metropolitan government released a three-minute AI-generated video depicting the capital buried in volcanic ash to raise awareness and urge preparation.

The simulation shows thick clouds of ash descending on Shibuya and other districts about one to two hours after an eruption, with up to 10 centimetres expected to accumulate. Unlike snow, volcanic ash does not melt away but instead hardens, damages powerlines, and disrupts communications once wet.

The video also highlights major risks to transport. Ash on train tracks, runways, and roads would halt trains, ground planes, and make driving perilous.

Two-wheel vehicles could become unusable under even modest ashfall. Power outages and shortages of food and supplies are expected as shops run empty, echoing the disruption seen after the 2011 earthquake.

Officials advise people to prepare masks, goggles, and at least three days of emergency food. The narrator warns that because no one knows when Mount Fuji might erupt, daily preparedness in Japan is vital to protect health, infrastructure, and communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia launches Spectrum-XGS to build global AI factories

American technology company Nvidia has unveiled Spectrum-XGS Ethernet, a new networking technology designed to connect multiple data centres into unified giga-scale AI factories.

With AI demand skyrocketing, single facilities are hitting limits in power and capacity, creating the need for infrastructure that can operate across cities, nations and continents.

Spectrum-XGS extends Nvidia’s Spectrum-X Ethernet platform, introducing what the company calls a ‘scale-across’ approach, alongside scale-up and scale-out models.

Integrating advanced congestion control, latency management, and telemetry nearly doubles the performance of the Nvidia Collective Communications Library, allowing geographically distributed data centres to function as one large AI cluster.

Early adopters like CoreWeave are preparing to link their facilities using the new system. According to Nvidia, the technology offers 1.6 times greater bandwidth density than traditional Ethernet and features Spectrum-X switches and ConnectX-8 SuperNICs, optimised for hyperscale AI operations.

The company argues that the approach will define the next phase of AI infrastructure, enabling super-factories to manage millions of GPUs while improving efficiency and lowering operational costs.

Nvidia CEO Jensen Huang described the development as part of the AI industrial revolution, highlighting that Spectrum-XGS can unify data centres into global networks that act as vast, giga-scale AI super-factories.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Celebrity Instagram hack fuels Solana meme coin scam

The Instagram accounts of Adele, Future, Tyla, and Michael Jackson were hacked late Thursday to promote an unauthorised meme coin. Posts showed an AI image of the Future with a ‘FREEBANDZ’ coin, falsely suggesting ties to the rapper.

The token, launched on the Solana platform Pump.fun, surged briefly to nearly $900,000 in market value before collapsing by 98% after its creator dumped 700 million tokens. The scheme netted more than $49,000 in Solana for the perpetrator, suspected of being behind the account hijackings.

None of the affected celebrities has issued a statement, while Future’s Instagram account remains deactivated. The hack continues a trend of using celebrity accounts for crypto pump-and-dump schemes. Previous cases involved the UFC, Barack Obama, and Elon Musk.

Such scams are becoming increasingly common, with attackers exploiting the visibility of major social media accounts to drive short-lived token gains before leaving investors with losses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot