NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends using blocklists for breached or commonly used passwords, implementing hashed storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.

Password length remains essential: short strings are easily cracked, so users should be allowed to create long passphrases. NIST recommends capping length only where extremely long inputs would slow down hashing.
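
A minimal sketch of what such a policy could look like in code. The length limits, blocklist file format, and scrypt parameters below are illustrative assumptions, not values NIST prescribes:

```python
import hashlib
import os

# Illustrative limits: NIST favours long passphrases, so the cap exists
# only to keep extremely long inputs from slowing down hashing.
MIN_LENGTH = 8
MAX_LENGTH = 128

def load_blocklist(path: str) -> set[str]:
    """Load breached or commonly used passwords, one per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f}

def is_acceptable(password: str, blocklist: set[str]) -> bool:
    """Accept any sufficiently long password not on the blocklist.

    Note what is absent: no mandatory symbols, digits, or case rules,
    in line with the guidance described above.
    """
    if not MIN_LENGTH <= len(password) <= MAX_LENGTH:
        return False
    return password.lower() not in blocklist

def store(password: str) -> str:
    """Store a salted, memory-hard hash, never the plain password.

    scrypt is used here as one option; bcrypt or Argon2 would serve
    the same purpose.
    """
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return f"{salt.hex()}:{digest.hex()}"
```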

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees to adapt accordingly. Clear communication of the changes will be key to ensuring compliance.

New Gmail phishing attack hides malware inside fake PDFs

Researchers have uncovered a phishing toolkit, known as MatrixPDF, that disguises attacks as PDF attachments to bypass Gmail’s defences. The technique blurs document text, embeds fake prompts, and uses hidden JavaScript to redirect victims to malicious sites.

The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.

A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.
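
As a defensive illustration, a crude triage script can flag PDFs containing the object keys associated with embedded scripts or auto-open actions. The keys and the raw-byte heuristic below are a simplification; production scanners parse the PDF object tree, and compressed or obfuscated streams would evade this check:

```python
import sys

# PDF name objects commonly tied to active content. A clean result here
# is no guarantee of safety, since streams can be compressed or encoded.
SUSPICIOUS_KEYS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch"]

def triage_pdf(path: str) -> list[str]:
    """Return which suspicious keys appear in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [key.decode() for key in SUSPICIOUS_KEYS if key in data]

if __name__ == "__main__":
    hits = triage_pdf(sys.argv[1])
    print(f"Flagged for review: {', '.join(hits)}" if hits else "No markers found.")
```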

Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.

Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.

Cyberattack halts Asahi beer production in Japan

Japanese beer maker Asahi Group Holdings has halted production at its main plant following a cyberattack that caused major system failures. Orders, shipments, and call centres were suspended across the company’s domestic operations, affecting most of its 30 breweries in Japan.

Asahi said it is still investigating the cause, believed to be a ransomware infection. The company confirmed there was no external leakage of personal information or employee data, but did not provide a timeline for restoring operations.

The suspension has raised concerns over possible shortages, as beer cannot be stockpiled for long due to freshness requirements. Restaurants and retailers are expected to feel pressure if shipments continue to be disrupted.

The impact has also spread to other beverage companies such as Kirin and Sapporo, which share transport networks. Industry observers warn that supply chain delays could ripple across the food and drinks sectors in Japan.

In South Korea, the effect remains limited for now. Lotte Asahi Liquor, the official importer, declined to comment, but industry officials noted that if the disruption continues, import schedules could also be affected.

Cybercriminals abandon Kido extortion attempt amid public backlash

Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and claim to have deleted it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, but Kido did not pay.

Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.

The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.

Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC its infrastructure was not compromised.

Kido confirmed the incident and said it is working with external specialists and authorities. With no ransom paid and Radiant abandoning its attempt, the hackers appear to have lost money on the operation.

Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, sparking criticism of its safety. Such incidents raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.
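
Neither company has published implementation details, but routing of this kind is often described as a classifier gate in front of model selection. The sketch below is purely hypothetical, with a keyword check standing in where a trained moderation classifier would sit, and all model names invented for illustration:

```python
# Hypothetical routing sketch; every name and tier here is illustrative.
SENSITIVE_KEYWORDS = {"suicide", "self-harm", "self harm", "eating disorder"}

def looks_sensitive(message: str) -> bool:
    """Stand-in for a trained moderation classifier."""
    text = message.lower()
    return any(keyword in text for keyword in SENSITIVE_KEYWORDS)

def route(message: str) -> str:
    """Choose a model tier for the incoming message."""
    if looks_sensitive(message):
        return "safety-tuned-model"  # the 'more capable' tier described above
    return "default-model"
```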

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Four new Echo devices debut with Amazon’s next-gen Alexa+

Amazon has unveiled four new Echo devices powered by Alexa+, its next-generation AI assistant. The lineup includes Echo Dot Max, Echo Studio, Echo Show 8, and Echo Show 11, all designed for personalised, ambient AI-driven experiences. Buyers will automatically gain access to Alexa+.

At the core are the new AZ3 and AZ3 Pro chips with built-in AI accelerators, which power advanced models for speech, vision, and ambient interaction. The Echo Dot Max, priced at $99.99, features a two-speaker system with triple the bass of its predecessor, while the Echo Studio, priced at $219.99, adds spatial audio and Dolby Atmos support.

The Echo Show 8 and Echo Show 11 introduce HD displays, enhanced audio, and intelligent sensing capabilities. Both feature 13-megapixel cameras that adapt to lighting and personalise interactions. The Echo Show 8 will cost $179.99, while the Echo Show 11 is priced at $219.99.

Beyond hardware, Alexa+ brings deeper conversational skills and more intelligent daily support, spanning home organisation, entertainment, health, wellness, and shopping. Amazon also introduced the Alexa+ Store, a platform for discovering third-party services and integrations.

The Echo Dot Max and Echo Studio will launch on October 29, while the Echo Show 8 and Echo Show 11 arrive on November 12. Amazon positions the new portfolio as a leap toward making ambient AI experiences central to everyday living.

Qwen3-Omni tops Hugging Face as China’s open AI challenge grows

Alibaba’s Qwen3-Omni multimodal AI system has quickly risen to the top of Hugging Face’s trending model list, challenging closed systems from OpenAI and Google. The series unifies text, image, audio, and video processing in a single model, signalling the rapid growth of Chinese open-source AI.

Qwen3-Omni-30B-A3B currently leads Hugging Face’s list, followed by the image-editing model Qwen-Image-Edit-2509. Alibaba’s cloud division describes Qwen3-Omni as the first fully integrated multimodal AI framework built for real-world applications.
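
For readers who want to check the standings themselves, the huggingface_hub Python client can list models programmatically. A minimal sketch, sorted by downloads since the site computes its trending ranking separately:

```python
from huggingface_hub import list_models

# Most-downloaded models under the Qwen organisation. Download counts
# only approximate the trending list described above.
for model in list_models(author="Qwen", sort="downloads", limit=10):
    print(f"{model.id}  (downloads: {model.downloads})")
```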

Self-reported benchmarks suggest Qwen3-Omni outperforms Qwen2.5-Omni-7B, OpenAI’s GPT-4o, and Google’s Gemini-2.5-Flash in audio recognition, comprehension, and video understanding tasks.

Open-source dominance is growing, with Alibaba’s models taking half the top 10 spots on Hugging Face rankings. Tencent, DeepSeek, and OpenBMB filled most of the remaining positions, leaving IBM as the only Western representative.

The ATOM Project warned that US leadership in AI could erode as open models from China gain adoption. It argued that China’s approach draws businesses and researchers away from American systems, which have become increasingly closed.

Gemini’s image model powers Google’s new Mixboard platform

Google has launched Mixboard, an experimental AI tool designed to help users explore, refine, and expand ideas both textually and visually. The platform is powered by the Gemini 2.5 Flash model and is now available for free in beta to users in the United States.

Mixboard provides an open canvas where users can begin with pre-built templates or custom prompts to create project boards. It can be used for tasks such as planning events, home decoration, or organising inspirational images, presenting an overall mood for a project.

Users can upload their own images or generate new ones by describing what they want to see. The tool supports iterative editing, allowing minor tweaks or combining visuals into new compositions through Google’s Nano Banana image model.

Quick actions, such as regenerating an image, let users explore variations with a single click. The tool also generates text based on context from images placed on the board, helping tie visuals to written ideas.

Google says Mixboard is part of its push to make Gemini more useful for creative work. Since the launch of Nano Banana in August, the Gemini app has overtaken ChatGPT to rank first in the US App Store.

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the link provided by the AP or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.
