NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends blocklists that screen out breached or commonly used passwords, salted and hashed password storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.

Password length remains essential: short strings are easily cracked, so users should be allowed to create long passphrases. NIST recommends capping only extremely long passwords, which can slow down hashing.
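As a rough sketch, the core of such a policy fits in a few lines of Python; the length limits and blocklist file below are illustrative assumptions rather than NIST-mandated values:

```python
# Sketch of a NIST-style password check: enforce length, screen a blocklist,
# and skip composition rules (symbols, digits, uppercase) entirely.
MIN_LENGTH = 8      # assumed minimum; NIST favours long passphrases
MAX_LENGTH = 256    # assumed cap; rejects only inputs long enough to slow hashing

def load_blocklist(path: str) -> set[str]:
    """Load breached or commonly used passwords, one per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f}

def validate_password(candidate: str, blocklist: set[str]) -> tuple[bool, str]:
    if len(candidate) < MIN_LENGTH:
        return False, "Too short: try a longer passphrase."
    if len(candidate) > MAX_LENGTH:
        return False, "Too long to hash efficiently."
    if candidate.lower() in blocklist:
        return False, "This password appears in breach data; choose another."
    return True, "OK"
```

Rate limiting and hashed storage sit behind the login endpoint itself, typically handled by a dedicated library rather than hand-rolled code.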

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees on the new approach. Clear communication of the changes will be key to ensuring compliance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Gmail phishing attack hides malware inside fake PDFs

Researchers have uncovered a phishing toolkit that disguises attacks as ordinary PDF attachments to bypass Gmail’s defences. Known as MatrixPDF, the toolkit blurs document text, overlays deceptive prompts, and uses hidden JavaScript to redirect victims to malicious sites.

The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.

A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.

Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.

Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, sparking criticism of its safety. Such incidents raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europe urged to seize AI opportunity through action

Europe faces a pivotal moment to lead in AI, potentially boosting GDP by over €1.2 trillion, according to Google’s Kent Walker. Urgent action is needed to close the gap between ambition and implementation.

Complex EU regulations, with over 100 new digital rules since 2019, hinder businesses, costing an estimated €124 billion annually. Simplifying these, as suggested by Mario Draghi’s report, could unlock €450 billion in AI-driven growth.

Walker argues that focused, balanced policies should address real-world AI impacts without stifling progress.

Skilling Europe’s workforce is crucial for AI adoption, with only 14% of EU firms using generative AI compared to 83% in China. Google’s initiatives, like its €15 million AI Opportunity Fund, support digital training. Public-private partnerships can scale these efforts, creating new job categories.

Scaling AI demands secure, dependable tools and ongoing momentum. Google’s AlphaFold and GNoME fuel advances in biology and materials science, while partnerships with European companies safeguard data sovereignty. Joint efforts will help Europe lead globally in AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok 4 launches on Azure with advanced reasoning features

Microsoft has announced that Grok 4, the latest large language model from Elon Musk’s xAI, is now available in Azure AI Foundry. The collaboration aims to deliver frontier-level reasoning capabilities with enterprise-grade safety and control.

Grok 4 features a 128,000-token context window, integrated web search, and native tool use. According to Microsoft, it excels at first-principles reasoning, handling complex tasks in science, maths, and logic. The model was trained on xAI’s Colossus supercomputer.

Azure says the model can analyse long documents, code repositories, and academic texts simultaneously, reducing the need to split inputs. It also incorporates external data for real-time responses, though Microsoft cautions that outputs should be verified against reliable sources.

The platform includes Azure AI Content Safety by default, and Microsoft stresses responsible use with ongoing monitoring. Pricing starts at $5.50 per million input tokens and $27.50 per million output tokens.
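At those rates, a per-request cost estimate is simple arithmetic. The sketch below assumes a hypothetical workload of 100,000 input tokens (comfortably inside the 128,000-token context window) and 4,000 output tokens:

```python
# Cost estimate at the listed Grok 4 rates on Azure AI Foundry.
INPUT_RATE = 5.50 / 1_000_000    # USD per input token
OUTPUT_RATE = 27.50 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical job: a 100k-token document analysed into a 4k-token summary.
print(f"${estimate_cost(100_000, 4_000):.2f}")  # $0.66
```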

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Spotify removes 75 million tracks in AI crackdown

Spotify has confirmed that it removed 75 million tracks in the past year as part of a crackdown on AI-generated spam, deepfakes, and fake artist uploads. The purge, equivalent to almost half of its total catalogue, highlights the scale of the problem facing music streaming.

Executives say they are not banning AI outright. Instead, the company is targeting misuse, such as cloned voices of real artists without permission, fake profiles, and mass-uploaded spam designed to siphon royalties.

New measures include a music spam filter, stricter rules on vocal deepfakes, and tools allowing artists to flag impersonation before publication. Spotify is also testing a disclosure system built on DDEX, the music industry’s metadata standard, so creators can indicate whether and how AI was used in their work.

Despite the scale of removals, Spotify insists AI music engagement remains minimal and has not significantly impacted human artists’ revenue. The platform now faces the challenge of balancing innovation with transparency, while protecting both listeners and musicians.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Claude Sonnet 4.5 expands developer options with rollbacks and longer-running agents

Anthropic has released Claude Sonnet 4.5, featuring a suite of upgrades to coding, automation, and creative work. The update improves Claude Code, extends Computer Use, and introduces experimental tools intended to boost productivity and support real-world applications.

Claude Code now features checkpoints, allowing developers to roll back projects to earlier versions. The Claude API has also been expanded to support longer-running agents that can generate files such as slides, spreadsheets, and documents directly within chats.
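For illustration, a request to the model through Anthropic’s Messages API might look like the sketch below; the model identifier string is an assumption for this example, not taken from the announcement:

```python
# Minimal sketch of calling the model via Anthropic's Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed identifier for illustration
    max_tokens=2048,
    messages=[{"role": "user",
               "content": "Outline a quarterly report as a structured document."}],
)
print(response.content[0].text)
```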

The model’s Computer Use function has been strengthened, enabling agents to operate applications for up to 30 hours autonomously. Anthropic says Claude Sonnet 4.5 built a Slack-style app with 11,000 lines of code in one session.

A new feature, Imagine with Claude, focuses on generating creative software. The system produced a Shakespeare-themed desktop with customised scripts and performance schedules from a single prompt, highlighting its versatility.

Anthropic has maintained steady pricing for free and premium users, positioning Sonnet 4.5 as its most practical and feature-rich release yet, combining reliability with expanded creative and developer-friendly tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Semicon Coalition unites EU on chip strategy and autonomy

European ministers have signed the Declaration of the Semicon Coalition, calling for a revised EU Chips Act 2.0 to boost semiconductor resilience, innovation, and competitiveness. The declaration outlines five priorities: collaboration, investment, skills, sustainability, and global partnerships.

The coalition, launched by the Netherlands in March, includes Austria, Belgium, Finland, France, Germany, Italy, Poland, and Spain. Other EU states joined today in Brussels, where Dutch minister Vincent Karremans presented the declaration to the European Commission.

Over fifty leading European and international semiconductor players have endorsed the declaration. This support strengthens momentum for placing end-markets at the core of the EU’s semiconductor strategy and aligns with Mario Draghi’s report on competitiveness.

The priorities include aligning EU and national funding, accelerating approvals for strategic projects, building a skilled talent pipeline, and promoting circular, energy-efficient manufacturing. International partnerships will also be deepened while safeguarding European strategic autonomy.

Minister Karremans said the strategy demonstrates Europe’s response to global tensions and its commitment to boosting semiconductor capacity, research funding, and readiness for demand in AI, automotive, energy, and defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!