Cyberattack halts Asahi beer production in Japan

Japanese beer maker Asahi Group Holdings has halted production at its main plant following a cyberattack that caused major system failures. Orders, shipments, and call centres were suspended across the company’s domestic operations, affecting most of its 30 breweries in Japan.

Asahi said it is still investigating the cause, believed to be a ransomware infection. The company confirmed there was no external leakage of personal information or employee data, but did not provide a timeline for restoring operations.

The suspension has raised concerns over possible shortages, as beer can only be stockpiled in limited quantities because of its freshness requirements. Restaurants and retailers are expected to feel pressure if shipments remain disrupted.

The impact has also spread to other beverage companies such as Kirin and Sapporo, which share transport networks. Industry observers warn that supply chain delays could ripple across the food and drinks sectors in Japan.

In South Korea, the effect remains limited for now. Lotte Asahi Liquor, the official importer, declined to comment, but industry officials noted that if the disruption continues, import schedules could also be affected.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cybercriminals abandon Kido extortion attempt amid public backlash

Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and claimed to have deleted it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, but Kido did not pay.

Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.

The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.

Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC its own infrastructure was not compromised.

Kido confirmed the incident and said it is working with external specialists and the authorities. With no ransom paid and Radiant abandoning its attempt, the hackers appear to have lost money on the operation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cryptocurrency mining banned on Abu Dhabi farms

Abu Dhabi’s Agriculture and Food Safety Authority (ADAFSA) has reaffirmed its ban on cryptocurrency mining on farms to promote sustainable land use. Such activities fall outside the permitted economic uses of farmland, which are strictly limited to agriculture and livestock production.

The authority aims to protect the emirate’s agricultural sustainability and biosecurity.

Inspections revealed multiple farms misusing agricultural land for cryptocurrency mining, violating regulations designed to preserve farmland for its intended purpose. ADAFSA considers these activities detrimental to the core objectives of farming.

Consequently, the authority has vowed to take decisive action against non-compliant farms to uphold its policies. Violators face severe penalties, including an AED 100,000 fine, doubled for repeat offences, alongside suspension of all farm support services.

Additional measures include electricity disconnection and confiscation of mining equipment, which is then referred to relevant authorities for further legal action. These steps ensure compliance with agricultural regulations.

ADAFSA calls on farm owners and workers to adhere to approved agricultural practices to maintain access to support programmes. The authority enforces these measures to protect Abu Dhabi’s agricultural sustainability and prevent practices that harm its environmental and economic goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, with some praising its technical achievement and others worrying that it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gate Group secures MiCA license in Malta

Gate Group’s Malta-based subsidiary, Gate Technology Ltd, has secured a MiCA license from the Malta Financial Services Authority. The license authorises crypto asset trading and custody services.

Founder Dr. Han underscored compliance as central to operations, praising Malta’s progressive regulatory framework. The move aligns with Gate Group’s focus on transparency and user safety across Europe.

Securing the MiCA license enables Gate Europe to initiate EU passporting for broader regional expansion. CEO Giovanni Cunti outlined plans to strengthen compliance while offering secure, professional services.

Gate Group holds regulatory approvals in jurisdictions like Italy, Hong Kong, and Dubai. Malta’s transparent regulations and innovative environment make it an ideal European base. The company seeks to foster sustainable growth in the region’s crypto ecosystem.

Establishing a foothold in Malta positions Gate Group to leverage the country’s role as a crypto hub. Continued investment will support the local digital economy, ensuring long-term development and regulatory adherence in Europe’s crypto market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram head explains why ads feel like eavesdropping

Instagram head Adam Mosseri has denied long-standing rumours that the platform secretly listens to private conversations to deliver targeted ads. In a video he described as ‘myth busting’, Mosseri said Instagram does not use the phone’s microphone to eavesdrop on users.

He argued that such surveillance would not only be a severe breach of privacy but would also quickly drain phone batteries and trigger visible microphone indicators.

Instead, Mosseri outlined four reasons why adverts may appear suspiciously relevant: online searches and browsing history, the influence of friends’ online behaviour, rapid scrolling that leaves subconscious impressions, and plain coincidence.

According to Mosseri, Instagram users may mistake targeted advertising for surveillance because algorithms incorporate browsing data from advertisers, friends’ interests, and shared patterns across users.

He stressed that the perception of being overheard is often the result of ad targeting mechanics rather than eavesdropping.

Despite his explanation, Mosseri admitted the rumour is unlikely to disappear. Many viewers of his video remained sceptical, with some comments suggesting his denial only reinforced their suspicions about how social media platforms operate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft boosts productivity with AI-powered subscriptions

Microsoft has enhanced its Microsoft 365 subscriptions by deeply integrating Copilot, its AI assistant, into apps like Word, Excel, and Outlook. A new Microsoft 365 Premium plan, priced at £19.99 monthly, combines advanced AI features with productivity tools.

The plan targets professionals, entrepreneurs, and families seeking to streamline tasks efficiently.

Microsoft 365 Personal and Family subscribers gain higher usage limits for Copilot features like image generation and deep research at no extra cost. Copilot Chat, now available across these apps, assists with drafting, analysis, and automation.

These updates aim to embed AI seamlessly into daily workflows.

Meanwhile, Microsoft’s Frontier programme offers subscribers access to experimental AI tools, such as Office Agent, enhancing productivity. A global student offer provides free Microsoft 365 Personal for a year.

Fresh icons for Word, Excel, and other apps highlight Microsoft’s AI-driven evolution. Secure workplace AI use, backed by enterprise data protection, ensures compliance and safety. These innovations establish Microsoft 365 as a leader in AI-powered productivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft evolves Sentinel into agentic defence platform

Microsoft is transforming Sentinel from a traditional SIEM into a unified defence platform for the agentic AI era. It now incorporates features such as a data lake, semantic graphs and a Model Context Protocol (MCP) server to enable intelligent agents to reason over security data.

Sentinel’s enhancements allow defenders to combine structured and semi-structured data into vectorised, graph-based relationships. On that foundation, AI agents grounded in Security Copilot and custom tools can automate triage, correlate alerts, reason about attack paths, and initiate response actions while retaining human oversight.
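As a rough, purely illustrative sketch of the graph-based correlation idea described above (this is not Sentinel’s actual API, and the alerts, entities, and values below are invented), the Python snippet links alerts to the entities they mention and walks the resulting graph to surface a possible attack path:

```python
# Hypothetical illustration of graph-based alert correlation.
# NOT Microsoft Sentinel's API: all alerts and entities here are invented.
import networkx as nx

graph = nx.Graph()

# Alerts from different detections, tagged with the entities they involve.
alerts = {
    "A1-phishing-email": ["user:alice", "host:wks-017"],
    "A2-suspicious-login": ["user:alice", "ip:203.0.113.5"],
    "A3-lateral-movement": ["host:wks-017", "host:srv-db-02"],
}

# Connect each alert node to the entity nodes it mentions.
for alert, entities in alerts.items():
    for entity in entities:
        graph.add_edge(alert, entity)

# Two alerts are related if a chain of shared entities links them;
# the chain itself is a candidate attack path for an agent (or analyst) to review.
path = nx.shortest_path(graph, "A1-phishing-email", "A3-lateral-movement")
print(" -> ".join(path))
```

A production platform such as Sentinel would, per Microsoft’s description, operate over far richer vectorised and semantic relationships and expose them to AI agents through the MCP server, but the underlying idea of reasoning over linked security entities is the same.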

The platform supports extensibility through open agent APIs, enabling partners and organisations to deploy custom agents through the MCP server.

Microsoft also adds protections for AI agents, such as prompt-injection resilience, task adherence controls, PII guardrails, and identity controls for agent estates. The evolution aims to shift cybersecurity from reactive to predictive operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their characters and fictional universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this autumn, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!