Top cybersecurity vendors double down on AI-powered platforms

The cybersecurity market is consolidating as AI reshapes defence strategies. Platform-based solutions replace point tools to cut complexity, counter AI threats, and ease skill shortages. IDC predicts that security spending will rise 12% in 2025 and reach $377 billion by 2028.

Vendors embed AI agents, automation, and analytics into unified platforms. Palo Alto Networks’ Cortex XSIAM reached $1 billion in bookings, and its $25 billion CyberArk acquisition expands into identity management. Microsoft blends Azure, OpenAI, and Security Copilot to safeguard workloads and data.

Cisco integrates AI across networking, security, and observability, bolstered by its acquisition of Splunk. CrowdStrike rebounds from its 2024 outage with Charlotte AI, while Cloudflare shifts its focus from delivery to AI-powered threat prediction and optimisation.

Fortinet’s platform spans networking and security, strengthened by Suridata’s SaaS posture tools. Zscaler boosts its Zero Trust Exchange with Red Canary’s MDR tech. Broadcom merges Symantec and Carbon Black, while Check Point pushes its AI-driven Infinity Platform.

Identity stays central, with Okta leading access management and teaming with Palo Alto on integrated defences. The companies aim to platformise, integrate AI, and automate their operations to dominate an increasingly complex cyberthreat landscape.

OpenAI’s GPT-5 faces backlash for dull tone

OpenAI’s GPT-5 launched last week to immense anticipation, with CEO Sam Altman likening it to the iPhone’s Retina display moment. Marketing promised state-of-the-art performance across multiple domains, but early user reactions suggested a more incremental step than a revolution.

Many expected transformative leaps, yet the improvements were mainly in cost, speed, and reliability. GPT-5’s switch system, which automatically routes each query to the most suitable model, was new, but its writing style drew criticism for being robotic and less nuanced.
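
OpenAI has not published how the switch works, but as a rough sketch of the idea, a router can send short, simple queries to a fast model and harder ones to a slower reasoning model (the model names and the difficulty heuristic below are hypothetical, not OpenAI’s actual logic):

```python
# Toy illustration of a query router in the spirit of GPT-5's 'switch' system.
# Model names and the difficulty heuristic are invented for this sketch.

FAST_MODEL = "fast-chat-model"            # cheap, low latency
REASONING_MODEL = "deep-reasoning-model"  # slower, better on hard problems

def route(query: str) -> str:
    """Pick a model based on a crude estimate of query difficulty."""
    hard_markers = ("prove", "step by step", "debug", "optimise", "derive")
    looks_hard = len(query.split()) > 40 or any(m in query.lower() for m in hard_markers)
    return REASONING_MODEL if looks_hard else FAST_MODEL

print(route("What is the capital of France?"))                             # fast-chat-model
print(route("Prove the sum of two even numbers is even, step by step."))  # deep-reasoning-model
```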

Social media buzzed with memes mocking its mistakes, from miscounting letters in ‘blueberry’ to inventing US states. OpenAI quickly reinstated GPT-4o for users who missed its warmer tone, underlining a disconnect between expectations and delivery.

Expert reviews mirrored public sentiment. Gary Marcus called GPT-5 ‘overhyped and underwhelming’, while others saw modest benchmark gains. Coding was the standout, with the model topping leaderboards and producing functional, if simple, applications.

OpenAI emphasised GPT-5’s practical utility and reduced hallucinations, aiming for steadiness over spectacle. While it may not wow casual users, its coding abilities, enterprise appeal, and affordability position it to generate revenue in the fiercely competitive AI market.

Seedbox.AI backs re-training AI models to boost Europe’s competitiveness

Germany’s Seedbox.AI is betting on re-training large language models (LLMs) rather than competing to build them from scratch. Co-founder Kai Kölsch believes this approach could give Europe a strategic edge in AI.

The Stuttgart-based startup adapts models like Google’s Gemini and Meta’s Llama for medical chatbots and real estate assistant applications. Kölsch compares Europe’s role in AI to improving a car already on the road, rather than reinventing the wheel.

A significant challenge, however, is access to specialised chips and computing power. The European Union is building an AI factory in Stuttgart, Germany, which Seedbox hopes will expand its capabilities in multilingual AI training.

Kölsch warns that splitting the planned EU gigafactories too widely will limit their impact. He also calls for delaying the AI Act, arguing that regulatory uncertainty discourages established companies from innovating.

Europe’s AI sector also struggles with limited venture capital compared to the United States. Kölsch notes that while the money exists, it is often channelled into safer investments abroad.

Talent shortages compound the problem. Seedbox is hiring, but top researchers are lured by Big Tech salaries, far above what European firms typically offer. Kölsch says talent inevitably follows capital, making EU funding reform essential.

Google launches small AI model for mobiles and IoT

Google has released Gemma 3 270M, an open-source AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.

Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.

The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.

Its design includes quantisation-aware training, allowing it to run in low-precision formats such as INT4, which cuts memory use and improves speed on mobile processors without requiring extensive computational power.
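
As a minimal sketch of what running such a model locally might look like, assuming the weights are published on Hugging Face under an identifier such as google/gemma-3-270m-it (the identifier and generation settings below are assumptions, not details from the announcement):

```python
# Minimal sketch: running a ~270M-parameter instruction-tuned model locally with
# Hugging Face transformers. The model identifier is assumed; check the official
# release for the exact name, licence terms, and recommended settings.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m-it",  # assumed identifier for the instruction-tuned 270M variant
)

prompt = "Summarise in one sentence: the project meeting has moved to 3pm on Friday."
result = generator(prompt, max_new_tokens=48)
print(result[0]["generated_text"])
```

A model this small can also be fine-tuned quickly on narrow, device-specific tasks, which is the kind of on-device personalisation the release highlights.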

Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.

Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.

Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.

Researchers explore brain signals to restore speech for disabled patients

Researchers have developed a brain-computer interface (BCI) that can decode ‘inner speech’ in patients with severe paralysis, potentially enabling faster and more comfortable communication.

The system, tested by a team led by Stanford University’s Frank Willett, records brain activity from the motor cortex using microelectrode arrays smaller than a baby aspirin, translating neural patterns into words via machine learning.

Unlike earlier BCIs that rely on attempted speech, which can be slow or tiring, the new approach focuses on silent imagined speech. Tests with four participants showed that inner speech produces clear, consistent brain signals, though at a smaller scale than attempted speech.

While accuracy is lower, the findings suggest that future systems could restore rapid communication through thought alone.

Privacy concerns have been addressed through methods that prevent unintended decoding. Current BCIs can be trained to ignore inner speech, and a ‘password’ approach for next-generation devices ensures decoding begins only when a specific imagined phrase is used.

Such safeguards are designed to avoid accidental capture of thoughts the user never intended to express.
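
As an illustration of the ‘password’ idea only (this is not the research team’s actual pipeline), a decoder can be gated so that it discards everything until a chosen unlock phrase is recognised:

```python
# Illustrative sketch of gating an inner-speech decoder behind an imagined
# 'password' phrase, as described above. decode() is a stand-in, not the
# researchers' machine-learning model.

PASSWORD = "open sesame"  # hypothetical unlock phrase chosen by the user

def decode(neural_window) -> str:
    """Stand-in for a trained decoder that maps neural activity to text."""
    # A real system would run a model over microelectrode-array features;
    # here we simply pass a simulated label through.
    return neural_window["simulated_text"]

class GatedDecoder:
    """Emits decoded text only after the unlock phrase has been detected."""
    def __init__(self):
        self.armed = False

    def process(self, neural_window):
        text = decode(neural_window)
        if not self.armed:
            self.armed = text.strip().lower() == PASSWORD
            return None  # ignore everything imagined before the password
        return text

gate = GatedDecoder()
stream = [
    {"simulated_text": "private thought"},
    {"simulated_text": "open sesame"},
    {"simulated_text": "I would like some water"},
]
for window in stream:
    print(gate.process(window))  # None, None, 'I would like some water'
```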

The technology remains in early development and is subject to strict regulation.

Researchers are now exploring improved, wireless hardware and additional brain regions linked to language and hearing, aiming to enhance decoding accuracy and make the systems more practical in everyday life.

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfer, deletion, takedown procedures and transparency reporting.

These changes will take effect on 15 September 2025 without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, after Claude was found giving outdated voting information, a banner pointing users to TurboVote was added, ensuring they saw only accurate, non-partisan updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
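
Anthropic has not published how these classifiers work, but as a purely illustrative sketch, real-time scanning can be thought of as a lightweight policy check run over each exchange, with flagged content routed to special handling (the categories, keywords, and actions below are hypothetical):

```python
# Illustrative sketch of a real-time policy check. The categories, keyword lists,
# and actions are invented for this example and are not Anthropic's classifiers.
from dataclasses import dataclass

POLICY_KEYWORDS = {
    "election_info": ["where do i vote", "polling place"],
    "self_harm": ["hurt myself", "end my life"],
}

@dataclass
class Flag:
    category: str
    matched: str

def scan(message: str) -> list:
    """Return policy flags raised by a single user message."""
    lowered = message.lower()
    return [Flag(cat, kw)
            for cat, kws in POLICY_KEYWORDS.items()
            for kw in kws if kw in lowered]

def handle(message: str) -> str:
    flags = scan(message)
    if any(f.category == "self_harm" for f in flags):
        return "respond_with_support_resources"   # handle with care rather than refusal
    if any(f.category == "election_info" for f in flags):
        return "attach_authoritative_voting_banner"
    return "respond_normally"

print(handle("Where do I vote on Tuesday?"))  # attach_authoritative_voting_banner
```

In production such checks would be trained classifiers rather than keyword lists, backed by analysts reviewing aggregate patterns of misuse, as the article notes.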

Age checks slash visits to top UK adult websites

Adult site traffic in the UK has fallen dramatically since new age verification rules came into force on 25 July under the Online Safety Act.

Figures from analytics firm Similarweb show Pornhub lost more than one million visitors in just two weeks, with traffic falling by 47%. XVideos saw a similar drop, while OnlyFans traffic fell by more than 10%.

The rules require adult websites to make it harder for under-18s to access explicit material, leading some users to turn to smaller and less regulated sites instead of compliant platforms. Pornhub said the trend mirrored patterns seen in other countries with similar laws.

The clampdown has also triggered a surge in virtual private network (VPN) downloads in the UK, as the tools can hide a user’s location and help bypass restrictions.

Ofcom estimates that 14 million people in the UK watch pornography and has proposed age checks using credit cards, photo ID, or AI analysis of selfies.

Critics argue that instead of improving safety, the measures may drive people towards more extreme or illicit material on harder-to-monitor parts of the internet, including the dark web.

Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, criticism followed reports that private ChatGPT conversations had appeared in Google search results, exposing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.
