Beer deliveries falter after Asahi cyber crisis

A ransomware attack by the Qilin group has crippled Asahi Group Holdings, Japan’s leading brewer, halting production across most of its 30 factories. More than 27GB of stolen Asahi data has appeared online, while the outage has forced the company to process orders manually with handwritten notes and faxes.

The attack has slashed shipments to 10-20% of normal capacity, disrupting supplies of its popular Super Dry beer.

Small businesses, like Tokyo’s Ben Thai restaurant, are left with dwindling stocks, some down to just a few bottles. Retail giants such as 7-Eleven, FamilyMart, and Lawson warn of shortages affecting not only beer but also Asahi’s soft drinks and bottled teas.

Liquor store owners, grappling with limited deliveries, fear disruptions could persist for weeks given Asahi’s roughly 40% share of Japan’s beer market.

Experts point to Japan’s outdated legacy systems and shortage of cybersecurity expertise as key vulnerabilities, making firms like Asahi prime targets. Recent attacks on Japan Airlines and the port of Nagoya highlight a growing trend.

Japan’s high-trust business culture further emboldens hackers, who often demand ransoms from unprepared organisations.

The government’s Active Cyber Defense Law aims to strengthen protections by enhancing information sharing and empowering proactive counterattacks. Chief Cabinet Secretary Yoshimasa Hayashi confirmed an ongoing investigation into the Asahi breach.

However, small vendors and customers face ongoing uncertainty, with no clear timeline for full recovery of Japan’s beloved brews.

Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

Google cautions Australia on youth social media ban proposal

US tech giant Google, which also owns YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia, saying the company contributed over $53 billion to the economy in 2024, while YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.

Fake VPN apps linked to banking malware, security experts warn

Security researchers have issued urgent warnings about VPN applications that appear legitimate but secretly distribute banking trojans such as Klopatra, hidden inside apps like Mobdro Pro IPTV + VPN.

The apps masquerade as trustworthy privacy tools, but once installed they can steal credentials, exfiltrate data or give attackers backdoor access to devices. Victims may initially notice nothing amiss.

Among the apps flagged, some were available on major app platforms, increasing the risk exposure. Analysts recommend users immediately uninstall any unfamiliar VPN apps, scan devices with a reputable security tool and change banking passwords if suspicious activity is detected.

Developers and platform operators are urged to strengthen vetting of privacy tool submissions. Given that VPNs are inherently powerful (encrypting traffic, accessing network functions), any malicious behaviour can escalate rapidly.
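
For readers wondering how such apps can even be identified, below is a minimal Kotlin sketch for Android that lists every installed package declaring a service guarded by the BIND_VPN_SERVICE permission, which any app offering real VPN functionality must do. The helper name listVpnCapableApps is our own illustrative assumption, not part of any vendor tooling, and the check is a rough heuristic rather than a malware detector.

```kotlin
import android.content.Context
import android.content.pm.PackageManager

// Returns the package names of installed apps that declare a service guarded by
// the BIND_VPN_SERVICE permission, i.e. apps capable of acting as an Android VPN.
// Note: on Android 11+ the calling app needs package-visibility permissions
// (e.g. QUERY_ALL_PACKAGES) to see every installed package.
fun listVpnCapableApps(context: Context): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_SERVICES)
        .filter { pkg ->
            pkg.services?.any { svc ->
                svc.permission == "android.permission.BIND_VPN_SERVICE"
            } == true
        }
        .map { it.packageName }
}
```

On its own, a check like this only surfaces candidates for a user to review; real security tooling would combine it with reputation and behaviour data.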

Google faces UK action over market dominance

Google faces new regulatory scrutiny in the UK after the competition watchdog designated it with strategic market status under a new digital markets law. The ruling could change how users select search engines and how Google ranks online content.

The Competition and Markets Authority said Google controls more than 90 percent of UK searches, giving it a position of unmatched influence. The designation enables the regulator to propose targeted measures to ensure fair competition, with consultations expected later in 2025.

Google argued that tighter restrictions could slow innovation, claiming its search tools contributed £118 billion to the UK economy in 2023. The company warned that new rules might hinder product development during rapid AI advancement.

The move adds to global scrutiny of the tech giant, which faces significant fines and court cases in the US and EU over advertising and app store practices. The CMA’s decision marks the first major use of its new powers to regulate digital platforms holding strategic market status.

Fake VPN app drains bank accounts across Europe

Cybersecurity experts are urging Android users to uninstall a fake VPN app capable of stealing banking details and draining accounts. The malware, hidden inside a Mobdro Pro IPTV + VPN app, has already infected more than 3,000 devices across Europe.

The app promises free access to films and live sports, but installs Klopatra, a sophisticated malware designed to gain complete control of a device. Once downloaded, it tricks users into granting access through Android’s Accessibility Services, enabling attackers to read screens and perform actions remotely.

Researchers at Cleafy, the firm that uncovered the operation, said attackers can use the permissions to operate phones as if they were the real owners. The firm believes the campaign originated in Turkey and estimates that around 1,000 people have fallen victim to the scam.

Cybersecurity analysts stress that the attack represents a growing trend in banking malware, where accessibility features are exploited to bypass traditional defences and gain near-total control of infected devices.
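
To illustrate the mechanism the researchers describe, the following Kotlin sketch reads Android’s ENABLED_ACCESSIBILITY_SERVICES setting and flags enabled services whose packages are not on an allow-list supplied by the caller. It is a simplified, hypothetical check of our own, not Cleafy’s detection method; the function name and the allowedPackages parameter are illustrative assumptions.

```kotlin
import android.content.Context
import android.provider.Settings

// Reads the colon-separated list of enabled accessibility services
// ("package/ServiceClass" entries) and returns those whose package is not
// on a caller-supplied allow-list of expected services.
fun findUnexpectedAccessibilityServices(
    context: Context,
    allowedPackages: Set<String>  // e.g. packages of assistive apps the user actually installed
): List<String> {
    val enabled = Settings.Secure.getString(
        context.contentResolver,
        Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES
    ) ?: return emptyList()

    return enabled.split(':')
        .filter { it.isNotBlank() }
        .filter { entry -> entry.substringBefore('/') !in allowedPackages }
}
```

Accessibility Services are legitimate for assistive apps, so their presence alone proves nothing; the point is simply that an unexpected entry is worth investigating.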

Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.

Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and reduce exposure to harmful features like addictive designs and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.

AI cameras boost wildfire detection in Minnesota

Xcel Energy has deployed the first AI-driven wildfire-detection cameras in Minnesota to improve early warning for grass and forest fires. The technology aims to protect communities, natural resources, and power infrastructure while strengthening the grid’s resilience.

The first two Pano AI camera systems have been installed in Mankato and Clear Lake, with 38 planned for higher-risk areas. The cameras provide continuous 360-degree scanning and use AI to detect smoke, enabling rapid alerts to local fire agencies.

Pano AI technology combines high-definition imaging, satellite data, and human verification to locate fires in real time. Fire departments gain access to live terrain intelligence, including hard-to-monitor areas, helping shorten response times and improve firefighter safety.

More than 1,200 wildfires have burned nearly 49,000 acres in Minnesota so far this year. Xcel Energy already uses Pano AI cameras in Colorado and Texas, where the technology has proven effective in identifying fires early and containing their spread.

The initiative is part of Xcel Energy’s Minnesota Wildfire Mitigation Program, which combines advanced technologies, modernised infrastructure, and vegetation management to reduce risks. The company is working with communities and agencies to strengthen prevention and response efforts.

OpenAI joins dialogue with the EU on fair and transparent AI development

US AI company OpenAI has met with the European Commission to discuss competition in the rapidly expanding AI sector.

The meeting focused on how large technology firms such as Apple, Microsoft and Google shape access to digital markets through their operating systems, app stores and search engines.

During the discussion, OpenAI highlighted that such platforms significantly influence how users and developers engage with AI services.

The company encouraged regulators to keep innovation and consumer choice as priorities as the industry grows, noting that collaboration between large and small players can help maintain a balanced ecosystem.

The question is complicated by OpenAI’s own partnerships with several leading technology companies. Microsoft, a key investor, has integrated ChatGPT into Windows 11’s Copilot, while Apple recently added ChatGPT support to Siri as part of its Apple Intelligence features.

Therefore, OpenAI’s engagement with regulators is part of a broader dialogue about maintaining open and competitive markets while fostering cooperation across the industry.

Although the European Commission has not announced any new investigations, the meeting reflects ongoing efforts to understand how AI platforms interact within the broader digital economy.

OpenAI and other stakeholders are expected to continue contributing to discussions to ensure transparency, fairness and sustainable growth in the AI ecosystem.
