Opal’s global rollout spans 15 countries with major feature upgrades

Google is expanding access to Opal, its no-code AI mini-app builder. Introduced two months ago within Google Labs, the tool enables users to create AI-powered mini-apps through natural language prompts, eliminating the need for coding.

According to Megan Li, Senior Product Manager at Google Labs, the expansion follows strong early engagement from creators. Users can access Opal at opal.withgoogle.com and join its builder community through Discord.

New debugging features aim to make workflows more transparent and efficient. Users can now run workflows step by step in a visual editor or adjust specific steps in the console, with real-time error reporting.

Performance upgrades have been introduced to speed up app creation, and new parallel execution capabilities let independent workflow steps run simultaneously. The rollout covers India, Canada, Japan, South Korea, Vietnam, Indonesia, Brazil, Singapore, Colombia, El Salvador, Costa Rica, Panama, Honduras, and Argentina.

Meanwhile, Google DeepMind has launched Gemini 2.5 Computer Use, a specialised model capable of interacting with user interfaces. Available in preview through the Gemini API, it can be accessed via Google AI Studio and Vertex AI Studio.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

Oracle systems targeted in unverified data theft claims, Google warns

Google has warned that hackers are emailing company executives, claiming to have stolen sensitive data from Oracle business applications. The group behind the campaign identifies itself as affiliated with the Cl0p ransomware gang.

In a statement, Google said the attackers target executives at multiple organisations with extortion emails linked to Oracle’s E-Business Suite. The company stated that it lacks sufficient evidence to verify the claims or confirm whether any data has been taken.

Neither Cl0p nor Oracle responded to requests for comment. Google did not provide additional information about the scale or specific campaign targets.

The Cl0p ransomware gang has been involved in several high-profile extortion cases, often using claims of data theft to pressure organisations into paying ransoms, even when breaches remain unverified.

Google advised recipients to treat such messages cautiously and report any suspicious emails to security teams while investigations continue.

NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends blocklists for breached or commonly used passwords, salted password hashing, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.

Password length remains essential. Short passwords are easily cracked, so users should be allowed to create longer passphrases; NIST recommends rejecting only extremely long passwords that would slow down hashing.

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees to adapt accordingly. Clear communication of the changes will be key to ensuring compliance.
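The checks described above can be sketched in a few lines. This is an illustrative example, not NIST's reference implementation: the blocklist contents, the `is_acceptable` function name, and the exact length limits are assumptions for the sketch.

```python
# Hypothetical sketch of NIST-style password acceptance: length over
# complexity, a blocklist of breached/common passwords, and no forced
# symbol or uppercase rules. Limits and names are illustrative only.

BLOCKLIST = {"password", "123456", "qwerty", "letmein"}  # stand-in for a breached-password list

MIN_LENGTH = 8    # minimum length instead of composition rules
MAX_LENGTH = 128  # cap only so very long inputs cannot slow hashing

def is_acceptable(password: str) -> bool:
    """Accept any sufficiently long password that is not on the blocklist."""
    if not MIN_LENGTH <= len(password) <= MAX_LENGTH:
        return False
    if password.lower() in BLOCKLIST:
        return False
    return True  # no mandatory symbols or resets, per the updated guidance

print(is_acceptable("correct horse battery staple"))  # long passphrase -> True
print(is_acceptable("qwerty"))                        # common password -> False
```

Note that the blocklist check replaces complexity rules entirely: a long, memorable passphrase passes, while a short or breached password fails regardless of which character classes it contains.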

New Gmail phishing attack hides malware inside fake PDFs

Researchers have uncovered a phishing toolkit disguised as a PDF attachment to bypass Gmail’s defences. Known as MatrixPDF, the technique blurs document text, embeds prompts, and uses hidden JavaScript to redirect victims to malicious sites.

The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.

A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.
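Because both variations depend on script and action objects embedded in the PDF itself, defenders can flag files containing those markers before they reach a reader. The sketch below is a simple defensive illustration, not Gmail's actual filter; the token list and function name are assumptions, and real scanners must also handle compressed and obfuscated object streams.

```python
# Illustrative defensive check: flag PDFs whose raw bytes contain the
# action/script tokens that attacks like MatrixPDF rely on. The token
# list is a minimal assumption; production scanners go much deeper.

SUSPICIOUS_TOKENS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch"]

def suspicious_pdf(data: bytes) -> list[str]:
    """Return the suspicious PDF tokens found in the raw file bytes."""
    return [t.decode() for t in SUSPICIOUS_TOKENS if t in data]

# A benign minimal PDF fragment vs. one carrying an auto-run JavaScript action.
clean = b"%PDF-1.7\n1 0 obj << /Type /Catalog >> endobj"
rigged = (b"%PDF-1.7\n1 0 obj << /Type /Catalog /OpenAction "
          b"<< /S /JavaScript /JS (this.getURL(untrustedURL)) >> >> endobj")

print(suspicious_pdf(clean))   # []
print(suspicious_pdf(rigged))  # ['/JavaScript', '/JS', '/OpenAction']
```

A hit on `/OpenAction` or `/JavaScript` does not prove malice, since some legitimate forms use scripting, but it is a cheap signal for quarantining attachments for closer inspection.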

Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.

Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.

Cyberattack halts Asahi beer production in Japan

Japanese beer maker Asahi Group Holdings has halted production at its main plant following a cyberattack that caused major system failures. Orders, shipments, and call centres were suspended across the company’s domestic operations, affecting most of its 30 breweries in Japan.

Asahi said it is still investigating the cause, believed to be a ransomware infection. The company confirmed there was no external leakage of personal information or employee data, but did not provide a timeline for restoring operations.

The suspension has raised concerns over possible shortages, as beer has a short shelf life due to freshness requirements. Restaurants and retailers are expected to feel pressure if shipments remain disrupted.

The impact has also spread to other beverage companies such as Kirin and Sapporo, which share transport networks. Industry observers warn that supply chain delays could ripple across the food and drinks sectors in Japan.

In South Korea, the effect remains limited for now. Lotte Asahi Liquor, the official importer, declined to comment, but industry officials noted that if the disruption continues, import schedules could also be affected.

Cybercriminals abandon Kido extortion attempt amid public backlash

Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and claimed to delete it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, but Kido did not pay.

Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.

The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.

Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC its own infrastructure was not compromised.

Kido confirmed the incident and said it is working with external specialists and authorities. With no ransom paid and Radiant abandoning its attempt, the hackers appear to have lost money on the operation.

Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, sparking criticism of its safety. Such incidents raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Four new Echo devices debut with Amazon’s next-gen Alexa+

Amazon has unveiled four new Echo devices powered by Alexa+, its next-generation AI assistant. The lineup includes Echo Dot Max, Echo Studio, Echo Show 8, and Echo Show 11, all designed for personalised, ambient AI-driven experiences. Buyers will automatically gain access to Alexa+.

At the core are the new AZ3 and AZ3 Pro chips, which feature AI accelerators, powering advanced models for speech, vision, and ambient interaction. The Echo Dot Max, priced at $99.99, features a two-speaker system with triple the bass, while the Echo Studio, priced at $219.99, adds spatial audio and Dolby Atmos.

The Echo Show 8 and Echo Show 11 introduce HD displays, enhanced audio, and intelligent sensing capabilities. Both feature 13-megapixel cameras that adapt to lighting and personalise interactions. The Echo Show 8 will cost $179.99, while the Echo Show 11 is priced at $219.99.

Beyond hardware, Alexa+ brings deeper conversational skills and more intelligent daily support, spanning home organisation, entertainment, health, wellness, and shopping. Amazon also introduced the Alexa+ Store, a platform for discovering third-party services and integrations.

The Echo Dot Max and Echo Studio will launch on October 29, while the Echo Show 8 and Echo Show 11 arrive on November 12. Amazon positions the new portfolio as a leap toward making ambient AI experiences central to everyday living.
