How to spot AI-generated videos with simple visual checks

Mashable offers a hands-on guide to help users detect AI-generated videos by observing subtle technical cues. Key warning signs include mismatches between lip movements and speech, as when a voice is dubbed over real footage and the audio is not perfectly aligned with mouth motions.

Users are also advised to look for visual anomalies such as unnatural blurs, distorted shadows or odd lighting effects that seem inconsistent with natural environments. Deepfake videos can show slight flickers around faces or uneven reflections that betray their artificial origin.

Blinking, or the lack thereof, can also be revealing. AI-generated faces often fail to replicate natural blinking patterns, showing either no blinking at all or blinks at irregular intervals.
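
The blinking cue can even be checked mechanically. Below is a minimal sketch, assuming OpenCV and MediaPipe are installed, that counts blinks with the widely used eye-aspect-ratio (EAR) heuristic; the landmark indices and the 0.2 threshold are conventional illustrative values, not figures from the guide.

```python
# Minimal sketch: count blinks in a video via the eye-aspect-ratio heuristic.
# Library choice (OpenCV + MediaPipe) and thresholds are illustrative assumptions.
import cv2
import mediapipe as mp

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # MediaPipe FaceMesh landmark ids
EAR_THRESHOLD = 0.2                        # below this, treat the eye as closed

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply during a blink."""
    d = lambda a, b: ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    p1, p2, p3, p4, p5, p6 = pts
    return (d(p2, p6) + d(p3, p5)) / (2 * d(p1, p4))

def count_blinks(video_path):
    blinks, closed = 0, False
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                continue
            lm = res.multi_face_landmarks[0].landmark
            ear = eye_aspect_ratio([lm[i] for i in LEFT_EYE])
            if ear < EAR_THRESHOLD and not closed:
                blinks, closed = blinks + 1, True
            elif ear >= EAR_THRESHOLD:
                closed = False
    cap.release()
    return blinks  # people on camera typically blink about 15-20 times per minute
```

A per-minute blink rate far outside the typical 15-20 range is one more reason for suspicion, though, as the guide notes, no single cue is conclusive on its own.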

Viewers should also note unnatural head or body movements that do not align with speech or emotional expression, such as stiff postures or awkward gestures.

Experts stress that these cues are increasingly well-engineered, making deepfakes harder to detect visually. For more robust protection, they recommend pairing visual observation with source verification, such as tracing the video back to reputable outlets or running reverse image searches.
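
Reverse image searches operate on still images, so a practical first step is exporting a handful of frames from the suspect clip. A minimal sketch, assuming OpenCV and a local video file (the file name and the one-frame-per-second sampling rate are illustrative):

```python
# Minimal sketch: export one frame per second from a video so the stills can
# be uploaded to a reverse image search. Paths and sampling rate are
# illustrative assumptions.
import cv2

def export_frames(video_path, out_prefix="frame", every_sec=1.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
    step = max(1, int(fps * every_sec))
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# e.g. export_frames("suspect_clip.mp4"), then upload the JPEGs to a reverse image search
```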

Ultimately, better detection tools and digital media literacy are essential to maintaining trust in online content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Celebrity Instagram hack fuels Solana meme coin scam

The Instagram accounts of Adele, Future, Tyla, and Michael Jackson were hacked late Thursday to promote an unauthorised meme coin. Posts showed an AI-generated image of Future alongside a ‘FREEBANDZ’ coin, falsely suggesting ties to the rapper.

The token, launched on the Solana platform Pump.fun, surged briefly to nearly $900,000 in market value before collapsing by 98% after its creator dumped 700 million tokens. The scheme netted the perpetrator, who is also suspected of being behind the account hijackings, more than $49,000 in Solana.

None of the affected celebrities has issued a statement, while Future’s Instagram account remains deactivated. The hack continues a trend of using celebrity accounts for crypto pump-and-dump schemes. Previous cases involved the UFC, Barack Obama, and Elon Musk.

Such scams are becoming increasingly common, with attackers exploiting the visibility of major social media accounts to drive short-lived token gains before leaving investors with losses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.

Hong Kong’s legal system struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims can suffer harm from the moment such images are created, even before any distribution is proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google enhances AI Mode with personalised dining suggestions

Google has expanded its AI Mode in Search to 180 additional countries and territories, introducing new agentic features to help users make restaurant reservations. The service remains limited to English and is not yet available in the European Union.

The update enables users to specify their dining preferences and constraints, allowing the system to scan multiple platforms and present real-time availability. Once a choice is made, users are directed to the restaurant’s booking page.

Partners supporting the service include OpenTable, Resy, SeatGeek, StubHub, Booksy, Tock, and Ticketmaster. The feature is part of Google’s Search Labs experiment, available to subscribers of Google AI Ultra in the United States.

AI Mode also tailors suggestions based on previous searches and introduces a Share function, letting users share restaurant options or planning results with others, with the option to delete links.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia pushes mandatory messaging app Max on all new devices

Russia will require all new mobile phones and tablets sold from September to come preloaded with a government-backed messenger called Max. Developed by Kremlin-controlled tech firm VK, the app offers messaging, video calls, mobile payments, and access to state services.

Authorities claim Max is a safe alternative to Western apps, but critics warn it could act as a state surveillance tool. The platform is reported to collect financial data, purchase history, and location details, all accessible to security services.

Journalist Andrei Okun described Max as a ‘Digital Gulag’ designed to control daily life and communications.

The move is part of Russia’s broader push to replace Western platforms. New restrictions have already limited calls on WhatsApp and Telegram, and officials hinted that WhatsApp may face a ban.

Telegram remains widely used but is expected to face greater pressure as the Kremlin directs officials to adopt Max.

VK says Max has already attracted 18 million downloads, though parts of the app remain in testing. From 2026, Russia will also require smart TVs to come preloaded with a state-backed service offering free access to government channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Surge in seat belt offences seen by AI monitors

AI-enabled cameras in Devon and Cornwall have caught 6,000 people failing to wear seat belts over the past year, a figure 50 percent higher than the number penalised for using mobile phones while driving, police confirmed.

Road safety experts warn that the long-standing culture of belting up may be fading among newer generations of drivers. Geoff Collins of Acusensus noted a rise in non-compliance and said stronger legal penalties could help reverse the trend.

Current UK law imposes a £100 fine for not wearing a seat belt, with no points added to a driver’s licence. Campaigners now urge the government to make such offences endorsable, potentially adding penalty points and risking licence loss.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft limits certain companies’ access to its early warning system after SharePoint hacks

Microsoft has limited certain Chinese companies’ access to its early warning system for cybersecurity vulnerabilities following suspicions about their involvement in recent SharePoint hacking attempts.

The decision restricts the sharing of proof-of-concept code, which mimics genuine malicious software. While valuable for cybersecurity professionals strengthening their systems, the code can also be misused by hackers.

The restrictions follow Microsoft’s observation of exploitation attempts targeting SharePoint servers in July. Concerns arose that a member of the Microsoft Active Protections Program may have repurposed early warnings for offensive activity.

Microsoft maintains that it regularly reviews participants and suspends those violating contracts, including prohibitions on participating in cyber attacks.

Beijing has denied involvement in the hacking, while Microsoft has refrained from disclosing which companies were affected or details of the ongoing investigation.

Analysts note that balancing collaboration with international security partners and preventing information misuse remains a key challenge for global cybersecurity programmes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Students seek emotional support from AI chatbots

College students are increasingly turning to AI chatbots for emotional support, prompting concern among mental health professionals. A 2025 report ranked ‘therapy and companionship’ as the top use case for generative AI, particularly among younger users.

Studies by MIT and OpenAI show that frequent AI use can lower social confidence and increase avoidance of face-to-face interaction. On campuses, digital mental health platforms now supplement counselling services, offering tools that identify at-risk students and provide basic support.

Experts warn that chatbot companionship may create emotional habits that lack grounding in reality and hinder social skill development. Counsellors advocate for educating students on safe AI use and suggest universities adopt tools that flag risky engagement patterns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europol warns that the $50,000 Qilin reward is fake

Europol has warned that a reported $50,000 reward for information on two members of the Qilin ransomware group is fake. The message, circulating on Telegram, claimed the suspects, known as Haise and XORacle, coordinate affiliates and manage extortion operations.

Europol clarified that it does not operate a Telegram channel and that the message does not originate from its official accounts, which are active on Instagram, LinkedIn, X, Bluesky, YouTube, and Facebook.

Qilin, also known as Agenda, has been active since 2022 and, in 2025, listed over 400 victims on its leak website, including media and pharmaceutical companies.

Recent attacks, such as the one targeting Inotiv, demonstrate the group’s ongoing threat. Analysts note that cybercriminals often circulate false claims to undermine competitors, mislead affiliates, or sow distrust within rival gangs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok chatbot leaks spark major AI privacy concerns

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.

The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.

The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.

The incident pressures AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Without such fixes, users may hesitate to use chatbots at all, fearing their conversations could resurface online.
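
Blocking indexing is a standard web control rather than novel technology: a shared-conversation page can be served with a noindex directive that tells crawlers to skip it. Below is a minimal sketch using Flask; the /share/<chat_id> route and the app itself are hypothetical illustrations, not xAI’s actual implementation.

```python
# Minimal sketch of the safeguard described above: serve shared-chat pages
# with an X-Robots-Tag header so search engines do not index them.
# The /share/<chat_id> route and this Flask app are hypothetical, not xAI's code.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/share/<chat_id>")
def shared_chat(chat_id):
    resp = make_response(f"Shared conversation {chat_id}")
    # Ask crawlers not to index this page or follow links from it.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```

A robots.txt Disallow rule could stop crawling of the whole /share/ path, but the header approach is more reliable, since a disallowed URL can still end up indexed if it is linked from elsewhere.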

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!