Seemingly conscious AI may cause psychological problems and AI psychosis

Microsoft’s AI chief and DeepMind co-founder, Mustafa Suleyman, has warned that society is unprepared for AI systems that convincingly mimic human consciousness, cautioning that ‘seemingly conscious’ AI could lead the public to treat machines as sentient.

Suleyman highlighted potential risks including demands for AI rights, welfare, and even AI citizenship. Since the launch of ChatGPT in 2022, AI developers have increasingly designed systems to act ‘more human’.

Experts caution that such technology could intensify mental health problems and distort perceptions of reality. The phenomenon, known as AI psychosis, sees users forming intense emotional attachments to chatbots or coming to believe AI is conscious or divine.

Suleyman called for clear boundaries in AI development, emphasising that these systems should be tools for people rather than digital persons. He urged careful management of human-AI interaction without calling for a halt to innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

Australia’s 2023–2030 Cyber Security Strategy aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations vulnerable as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by Estonia’s cyber defence unit, this network would mobilise unconventional talent (retirees, hobbyist hackers, and students) to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

Mount Fuji eruption simulated in an AI video for Tokyo

Residents of Tokyo have been shown a stark warning of what could happen if Mount Fuji erupts.

The metropolitan government released a three-minute AI-generated video depicting the capital buried in volcanic ash to raise awareness and urge preparation.

The simulation shows thick clouds of ash descending on Shibuya and other districts about one to two hours after an eruption, with up to 10 centimetres expected to accumulate. Unlike snow, volcanic ash does not melt away; once wet, it hardens, damages power lines, and disrupts communications.

The video also highlights major risks to transport. Ash on train tracks, runways, and roads would halt trains, ground planes, and make driving perilous.

Two-wheeled vehicles could become unusable under even modest ashfall. Power outages and shortages of food and supplies are expected as shops run empty, echoing the disruption seen after the 2011 earthquake.

Officials advise people to prepare masks, goggles, and at least three days of emergency food. The narrator warns that because no one knows when Mount Fuji might erupt, daily preparedness in Japan is vital to protect health, infrastructure, and communities.

How to spot AI-generated videos with simple visual checks

Mashable offers a hands-on guide to help users detect AI-generated videos by observing subtle technical cues. Key warning signs include mismatched lip movements and speech, where voices are dubbed over real footage and audio isn’t perfectly aligned with mouth motions.

Users are also advised to look for visual anomalies such as unnatural blurs, distorted shadows or odd lighting effects that seem inconsistent with natural environments. Deepfake videos can show slight flickers around faces or uneven reflections that betray their artificial origin.

Blinking, or the lack of it, can also be revealing. AI-generated faces often fail to replicate natural blinking patterns, displaying either no blinking at all or an irregular frequency.
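The blinking cue can be sketched in code. The following is a minimal, illustrative check that assumes per-frame eye-aspect-ratio (EAR) values have already been extracted by a facial-landmark detector; the closed-eye threshold and the ‘normal’ blinks-per-minute range are rough assumptions for demonstration, not validated detector settings.

```python
# Illustrative blink-frequency check on a list of per-frame eye-aspect-ratio
# (EAR) values. Assumes EAR extraction has already been done elsewhere;
# all thresholds below are hypothetical.

def count_blinks(ear_values, closed_threshold=0.2):
    """Count blinks: each run of frames with EAR below the threshold is one blink."""
    blinks = 0
    eyes_closed = False
    for ear in ear_values:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False
    return blinks

def blink_rate_suspicious(ear_values, fps=30, normal_range=(8, 30)):
    """Flag clips whose blinks-per-minute fall outside a typical human range."""
    minutes = len(ear_values) / fps / 60
    if minutes == 0:
        return True
    per_minute = count_blinks(ear_values) / minutes
    return not (normal_range[0] <= per_minute <= normal_range[1])
```

A clip with no blinks over a full minute would be flagged, while one with roughly 8–30 blinks per minute would pass; real tools combine such cues rather than relying on any single one.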

Viewers should also note unnatural head or body movements that do not align with speech or emotional expression, such as stiff postures or awkward gestures.

Experts stress that these artefacts are increasingly well concealed as generators improve, making deepfakes harder to detect by eye alone. They recommend combining observation with source verification, such as tracing a video back to reputable outlets or running reverse image searches, for more robust protection.
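Reverse image search works by comparing perceptual fingerprints rather than exact bytes. The sketch below shows one such fingerprint, average hashing (aHash), on frames represented as plain 2D grayscale lists; it illustrates the general idea only, and is not the algorithm any particular search service uses.

```python
# Illustrative "average hash" (aHash) perceptual fingerprint.
# Frames are plain 2D lists of grayscale values here; a real pipeline
# would decode actual video frames first.

def downsample(frame, size=8):
    """Shrink a 2D grayscale frame to size x size by block averaging."""
    h, w = len(frame), len(frame[0])
    out = []
    for i in range(size):
        row = []
        for j in range(size):
            block = [frame[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def average_hash(frame, size=8):
    """64-bit hash: each bit is 1 if a downsampled pixel exceeds the mean brightness."""
    small = downsample(frame, size)
    pixels = [p for row in small for p in row]
    mean = sum(pixels) / len(pixels)
    return sum(1 << k for k, p in enumerate(pixels) if p > mean)

def hamming(h1, h2):
    """Number of differing bits; small distances suggest near-duplicate images."""
    return bin(h1 ^ h2).count("1")
```

A lightly altered copy of a frame yields a hash close to the original’s, while an unrelated image yields a distant one, which is how near-duplicate matching can trace recycled or doctored footage.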

Ultimately, better detection tools and digital media literacy are essential to maintaining trust in online content.

Celebrity Instagram hack fuels Solana meme coin scam

The Instagram accounts of Adele, Future, Tyla, and Michael Jackson were hacked late Thursday to promote an unauthorised meme coin. Posts showed an AI-generated image of Future alongside a ‘FREEBANDZ’ coin, falsely suggesting ties to the rapper.

The token, launched on the Solana platform Pump.fun, surged briefly to nearly $900,000 in market value before collapsing by 98% after its creator dumped 700 million tokens. The scheme netted more than $49,000 in Solana for the perpetrator, who is also suspected of being behind the account hijackings.

None of the affected celebrities has issued a statement, while Future’s Instagram account remains deactivated. The hack continues a trend of using celebrity accounts for crypto pump-and-dump schemes. Previous cases involved the UFC, Barack Obama, and Elon Musk.

Such scams are becoming increasingly common, with attackers exploiting the visibility of major social media accounts to drive short-lived token gains before leaving investors with losses.

Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. Dismissing such content as ‘not real’ fails to address the damage its very existence causes.

The legal system of Hong Kong struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims risk harm before distribution is even proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.

Google enhances AI Mode with personalised dining suggestions

Google has expanded its AI Mode in Search to 180 additional countries and territories, introducing new agentic features to help users make restaurant reservations. The service remains limited to English and is not yet available in the European Union.

The update enables users to specify their dining preferences and constraints, allowing the system to scan multiple platforms and present real-time availability. Once a choice is made, users are directed to the restaurant’s booking page.

Partners supporting the service include OpenTable, Resy, SeatGeek, StubHub, Booksy, Tock, and Ticketmaster. The feature is part of Google’s Search Labs experiment, available to subscribers of Google AI Ultra in the United States.

AI Mode also tailors suggestions based on previous searches and introduces a Share function, letting users share restaurant options or planning results with others, with the option to delete links.

Russia pushes mandatory messaging app Max on all new devices

Russia will require all new mobile phones and tablets sold from September to come preloaded with a government-backed messenger called Max. Developed by Kremlin-controlled tech firm VK, the app offers messaging, video calls, mobile payments, and access to state services.

Authorities claim Max is a safe alternative to Western apps, but critics warn it could act as a state surveillance tool. The platform is reported to collect financial data, purchase history, and location details, all accessible to security services.

Journalist Andrei Okun described Max as a ‘Digital Gulag’ designed to control daily life and communications.

The move is part of Russia’s broader push to replace Western platforms. New restrictions have already limited calls on WhatsApp and Telegram, and officials hinted that WhatsApp may face a ban.

Telegram remains widely used but is expected to face greater pressure as the Kremlin directs officials to adopt Max.

VK says Max has already attracted 18 million downloads, though parts of the app remain in testing. From 2026, Russia will also require smart TVs to come preloaded with a state-backed service offering free access to government channels.

Surge in seat belt offences seen by AI monitors

AI-enabled cameras in Devon and Cornwall have detected 6,000 people failing to wear seat belts over the past year. That figure is 50 percent higher than the number penalised for using mobile phones while driving, police confirmed.

Road safety experts warn that the long-standing culture of belting up may be fading among newer generations of drivers. Geoff Collins of Acusensus noted a rise in non-compliance and said stronger legal penalties could help reverse the trend.

Current UK law imposes a £100 fine for not wearing a seat belt, with no points added to a driver’s licence. Campaigners now urge the government to make such offences endorsable, potentially adding penalty points and risking licence loss.
