NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends blocklists for breached or commonly used passwords, hashed password storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.

Password length remains essential. Short strings are easy to crack, so users should be allowed to create longer passphrases. NIST advises capping length only at the extreme end, where very long passwords would slow down hashing.
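A minimal sketch of how these recommendations might be applied, using Python's standard library. The blocklist contents and the PBKDF2 iteration count are illustrative assumptions; the length bounds of 8 and 64 characters follow NIST's published guidance on memorised secrets.

```python
import hashlib
import os

# Illustrative blocklist of breached/common passwords; in practice this
# would be a large list sourced from known compromises.
BLOCKLIST = {"password", "123456", "qwerty"}

MIN_LENGTH = 8   # NIST's minimum for user-chosen passwords
MAX_LENGTH = 64  # cap only to keep hashing cost bounded

def validate(password: str) -> bool:
    """Accept any password that is long enough and not blocklisted."""
    if not MIN_LENGTH <= len(password) <= MAX_LENGTH:
        return False
    return password.lower() not in BLOCKLIST

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only a salted, slow hash, never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

print(validate("password"))                      # False: blocklisted
print(validate("correct horse battery staple"))  # True: long passphrase
```

Note that there are no complexity checks here: under the updated guidance, length plus a blocklist replaces mandatory symbols and mixed case.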

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees to adapt accordingly. Clear communication of the changes will be key to ensuring compliance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Gmail phishing attack hides malware inside fake PDFs

Researchers have uncovered a phishing toolkit disguised as a PDF attachment to bypass Gmail’s defences. Known as MatrixPDF, the technique blurs document text, embeds prompts, and uses hidden JavaScript to redirect victims to malicious sites.

The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.

A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.

Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.

Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.

Cyberattack halts Asahi beer production in Japan

Japanese beer maker Asahi Group Holdings has halted production at its main plant following a cyberattack that caused major system failures. Orders, shipments, and call centres were suspended across the company’s domestic operations, affecting most of its 30 breweries in Japan.

Asahi said it is still investigating the cause, believed to be a ransomware infection. The company confirmed there was no external leakage of personal information or employee data, but did not provide a timeline for restoring operations.

The suspension has raised concerns over possible shortages, as beer has a short shelf life and limited storage capacity due to freshness requirements. Restaurants and retailers are expected to feel pressure if shipments continue to be disrupted.

The impact has also spread to other beverage companies such as Kirin and Sapporo, which share transport networks. Industry observers warn that supply chain delays could ripple across the food and drinks sectors in Japan.

In South Korea, the effect remains limited for now. Lotte Asahi Liquor, the official importer, declined to comment, but industry officials noted that if the disruption continues, import schedules could also be affected.

Germany invests €1.6 billion in AI but profits remain uncertain

In 2025 alone, €1.6 billion is being committed to AI in Germany as part of its AI action plan.

The budget, managed by the Federal Ministry of Research, Technology and Space, has grown more than twentyfold since 2017, underlining Berlin’s ambition to position the country as a European hub for AI.

However, experts warn that the financial returns remain uncertain. Rainer Rehak of the Weizenbaum Institute argues that AI lacks a clear business model, calling the current trend an ‘investment game’ fuelled by speculation.

He cautioned that if real profits do not materialise, the sector could face a bubble similar to past technology hype cycles. Even OpenAI chief Sam Altman has warned of unsustainable levels of investment in AI.

Germany faces significant challenges in computing capacity. A study by the eco Internet Industry Association found that the country’s infrastructure may only expand to 3.7 gigawatts by 2030, while demand from industry could exceed 12 gigawatts.

Deloitte forecasts a capacity gap of around 50% within five years, with the US already maintaining more than twenty times Germany’s capacity. Without massive new investments in data centres, Germany risks lagging further behind.

Some analysts believe the country needs a different approach. Professor Oliver Thomas of Osnabrück University argues that while large-scale AI models are struggling to find profitability, small and medium-sized enterprises could unlock practical applications.

He advocates for speeding up the cycle from research to commercialisation, ensuring that AI is integrated into industry more quickly.

Germany has a history of pioneering research in fields such as computer technology, MP3, and virtual and augmented reality, but much of the innovation was commercialised abroad.

Thomas suggests focusing less on ‘made in Germany’ AI models and more on leveraging existing technologies from global providers, while maintaining digital sovereignty through strong policy frameworks.

Looking ahead, experts see AI becoming deeply integrated into the workplace. AI assistants may soon handle administrative workflows, organise communications, and support knowledge-intensive professions.

Small teams equipped with these tools could generate millions in revenue, reshaping the country’s economic landscape.

Germany’s heavy spending signals a long-term bet on AI. But with questions about profitability, computing capacity, and competition from the US, the path forward will depend on whether investments can translate into sustainable business models and practical use cases across the economy.

Cybercriminals abandon Kido extortion attempt amid public backlash

Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and claimed to have deleted it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, but Kido did not pay.

Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.

The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.

Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC its infrastructure was not compromised.

Kido confirmed the incident and stated that it is working with external specialists and authorities. With no ransom paid and Radiant abandoning its attempt, the hackers appear to have lost money on the operation.

Adobe Premiere debuts free mobile app for iPhone users

US software company Adobe has launched a free version of its Premiere video-editing software for iPhone, bringing professional-level tools to mobile creators. The app is now available worldwide in Apple’s App Store, with an Android release still in development.

The new mobile Premiere app allows users to edit videos on a multi-track timeline, enhance audio with AI-powered sound effects, and create studio-quality voiceovers. It also offers millions of free multimedia assets, including images, fonts, stickers, and audio files.

Projects can be exported directly to platforms like Instagram, TikTok, and YouTube Shorts, with the app automatically adjusting video sizes for each platform.

Users can start editing on the iPhone app and then transfer their projects to Premiere Pro on a desktop for more advanced refinements. Adobe has also integrated its generative AI, enabling features such as backdrop expansion, image-to-video conversion, and custom AI stickers.

While the app is free, upgrades are available for additional storage and generative credits.

The launch highlights Adobe’s push to make professional editing more accessible to streamers, podcasters, and vloggers.

By blending mobile flexibility with cross-platform collaboration, the company aims to empower creators to produce high-quality content anytime and anywhere.

Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, prompting criticism of its safety controls. Such incidents raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.

Portugal to bring AI into bureaucracy to save time

The Portuguese government is preparing to bring AI into public administration to accelerate licensing procedures and cut delays, according to State Reform Minister Gonçalo Matias.

Speaking at a World Tourism Day conference in Tróia, he said AI can play a key role in streamlining decision-making while maintaining human oversight at the final stage.

Matias explained that the reform will reallocate staff from routine tasks to work of higher value, while introducing a system of prior notifications.

Under the plan, citizens and businesses in Portugal will be allowed to begin most activities without a licence, with tacit approval granted if the administration fails to respond within set deadlines.

The minister said the reforms will be tied to strict accountability measures, emphasising a ‘trust contract’ between citizens, businesses and the public administration. He argued the initiative will not only speed up processes but also foster greater efficiency and responsibility across government services.

OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, some praising its technical achievement while others worry it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Instagram head explains why ads feel like eavesdropping

Instagram head Adam Mosseri has denied long-standing rumours that the platform secretly listens to private conversations to deliver targeted ads. In a video he described as ‘myth busting’, Mosseri said Instagram does not use the phone’s microphone to eavesdrop on users.

He argued that such surveillance would not only be a severe breach of privacy but would also quickly drain phone batteries and trigger visible microphone indicators.

Instead, Mosseri outlined four reasons why adverts may appear suspiciously relevant: online searches and browsing history, the influence of friends’ online behaviour, rapid scrolling that leaves subconscious impressions, and plain coincidence.

According to Mosseri, Instagram users may mistake targeted advertising for surveillance because algorithms incorporate browsing data from advertisers, friends’ interests, and shared patterns across users.

He stressed that the perception of being overheard is often the result of ad targeting mechanics rather than eavesdropping.

Despite his explanation, Mosseri admitted the rumour is unlikely to disappear. Many viewers of his video remained sceptical, with some comments suggesting his denial only reinforced their suspicions about how social media platforms operate.
