BT launches cyber training as small businesses struggle with threats

Cyber attacks aren’t just a problem for big-name brands. Small and medium businesses are increasingly in the crosshairs, according to new research from BT and Be the Business.

Two in five SMEs have never provided cyber security training to their staff, despite a sharp increase in attacks. In the past year alone, 42% of small firms and 67% of medium-sized companies reported breaches.

Phishing remains the most common threat, affecting 85% of businesses. But more advanced tactics are spreading fast, including ransomware and ‘quishing’ scams — where fake QR codes are used to steal data.

Recovering from a breach is costly. Micro and small businesses spend nearly £8,000 on average to recover from their most serious incident. The figure excludes reputational damage and long-term disruption.

To help tackle the issue, BT has launched a new training programme with Be the Business. The course offers practical, low-cost cyber advice designed for companies without dedicated IT support.

The programme focuses on real-world threats, including AI-driven scams, and offers guidance on steps like password hygiene, two-factor authentication, and safe software practices.
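One of the recommended steps, two-factor authentication, typically relies on time-based one-time passwords (TOTP, RFC 6238): the server and the user's authenticator app share a secret and independently derive the same short-lived code. A minimal sketch of that derivation (toy shared secret, not production code):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Server and authenticator app share the secret; both compute the same code.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59))  # → 287082 (RFC 6238 test vector)
```

Because the code changes every 30 seconds, a phished password alone is not enough to log in, which is why 2FA is such a cheap, high-value defence for small firms.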

Although 69% of SME leaders are now exploring AI tools to help defend their systems, 18% also list AI as one of their top cyber threats — a sign of both potential and risk.

Experts warn that basic precautions still matter most. With free and affordable training options now widely available, small firms have more tools than ever to improve their cyber defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism and the difficulty of detecting coded prompts make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to block offensive material at generation time.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU races to catch up in quantum tech amid cybersecurity fears

The European Union is ramping up efforts to lead in quantum computing, but cybersecurity experts warn that the technology could upend digital security as we know it.

In a new strategy published Wednesday, the European Commission admitted that Europe trails the United States and China in commercialising quantum technology, despite its strong academic presence. The bloc is now calling for more private investment to close the gap.

Quantum computing offers revolutionary potential, from drug discovery to defence applications. But its power poses a serious risk: it could break today’s internet encryption.

Current digital security relies on public key cryptography — complex maths that conventional computers can’t solve. But quantum machines could one day easily break these codes, making sensitive data readable to malicious actors.

Experts fear a ‘store now, decrypt later’ scenario, where adversaries collect encrypted data now and crack it once quantum capabilities mature. That could expose government secrets and critical infrastructure.
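The risk is easiest to see in miniature. Schemes like RSA rest on the hardness of factoring: anyone who factors the public modulus can rebuild the private key and read stored ciphertexts. The toy example below uses deliberately tiny primes so classical trial division succeeds instantly; real 2048-bit moduli resist this, but Shor's algorithm on a sufficiently large quantum computer would not be stopped by the larger size:

```python
# Toy RSA: n = p*q with tiny primes so classical factoring succeeds instantly.
p, q = 61, 53
n, e = p * q, 17                      # public key (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # legitimate private exponent
ciphertext = pow(42, e, n)            # "harvested" message: 42, encrypted

# Attacker factors n (trivial here, classically infeasible at real key sizes)...
f = next(i for i in range(2, n) if n % i == 0)
d2 = pow(e, -1, (f - 1) * (n // f - 1))  # ...and rebuilds the private key
print(pow(ciphertext, d2, n))         # → 42: the stored ciphertext is readable
```

This is exactly the 'store now, decrypt later' scenario in miniature: the ciphertext can sit harvested for years and only needs the factorisation, whenever it arrives, to become plaintext.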

The EU is also concerned about losing control over homegrown tech companies to foreign investors. While Europe leads in quantum research output, it only receives 5% of global private funding. In contrast, the US and China attract over 90% combined.

To address the threat, European cybersecurity agencies have published a roadmap for transitioning to post-quantum cryptography. The aim is to secure critical infrastructure by 2030 — a deadline shared by the US, UK, and Australia.

IBM recently said it could release a workable quantum computer by 2029, highlighting the urgency of the challenge. Experts stress that replacing encryption is only part of the task. The broader transition will affect billions of systems, requiring enormous technical and logistical effort.

Governments are already reacting. Some EU states have imposed export restrictions on quantum tech, fearing their communications could be exposed. Despite the risks, European officials say the worst-case scenarios are not inevitable, but doing nothing is not an option.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek gains business traction despite security risks

Chinese AI company DeepSeek is gaining traction in global markets despite growing concerns about national security.

While government bans remain in place across several countries, businesses are turning to DeepSeek’s models for their low cost and solid performance, with the models often ranking just behind OpenAI’s ChatGPT and Google’s Gemini in traffic and market share.

DeepSeek’s appeal lies in its efficiency. With advanced engineering techniques like its ‘mixture-of-experts’ system, the company has reduced computing costs by activating fewer parameters without a noticeable drop in performance.
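In the general form described in the research literature, a mixture-of-experts layer routes each token to only a few expert sub-networks, so most of the model's parameters sit idle on any given token. A minimal NumPy sketch of top-k routing (illustrative shapes and gating, not DeepSeek's actual architecture):

```python
import numpy as np

def moe_layer(x, experts, gate_w, k=2):
    """Route input x to the top-k of len(experts) expert weight matrices."""
    scores = x @ gate_w                          # one gating score per expert
    top = np.argsort(scores)[-k:]                # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                     # softmax over the chosen experts
    # Only k experts run; the remaining expert parameters stay idle for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
y = moe_layer(rng.normal(size=d), experts, gate_w, k=2)
# 2 of 16 experts active: roughly 1/8 of the expert parameters used per token.
```

The compute saving is the ratio k / n_experts, which is how a model can carry a large total parameter count while paying for only a fraction of it at inference time.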

Training costs have reportedly been as low as $5.6 million — a fraction of what rivals like Anthropic spend. As a result, DeepSeek’s models are now available across major platforms, including AWS, Azure, Google Cloud, and even open-source repositories like GitHub and Hugging Face.

However, the way DeepSeek is accessed matters. While companies can safely self-host the models in private environments, using the mobile app or website means sending data to Chinese servers, a key reason for widespread bans on public-sector use.

Individual consumers often lack the technical control enterprises enjoy, making their data more vulnerable to foreign access.

Despite the political tension, demand continues to grow. US firms are exploring DeepSeek as a cost-saving alternative, and its models are being deployed in industries from telecoms to finance.

Even Perplexity, an American AI firm, has used DeepSeek R1 to power a research tool hosted entirely on Western servers. DeepSeek’s open-source edge and rapid technical progress are helping it close the gap with much larger AI competitors — quietly but significantly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s AI chatbots are designed to initiate conversations and enhance user engagement

Meta is training AI-powered chatbots that can remember previous conversations, send personalised follow-up messages, and actively re-engage users without needing a prompt.

Internal documents show that the company aims to keep users interacting longer across platforms like Instagram and Facebook by making bots more proactive and human-like.

Under the project code-named ‘Omni’, contractors from the firm Alignerr are helping train these AI agents using detailed personality profiles and memory-based conversations.

These bots are developed through Meta’s AI Studio — a no-code platform launched in 2024 that lets users build customised digital personas, from chefs and designers to fictional characters. A bot can send a follow-up only after a user initiates a conversation, may send just one, and must do so within a 14-day window.
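The reported constraint — the user must message first, at most one follow-up, and only within 14 days — is simple eligibility logic. A hypothetical sketch (function and field names invented for illustration; not Meta's implementation):

```python
from datetime import datetime, timedelta

FOLLOW_UP_WINDOW = timedelta(days=14)

def may_send_follow_up(last_user_message, follow_ups_sent, now):
    """Hypothetical check mirroring the reported rule: the user must have
    messaged first, at most one follow-up is allowed, and it must fall
    within 14 days of the user's last message."""
    if last_user_message is None:        # the bot may not open the conversation
        return False
    if follow_ups_sent >= 1:             # one follow-up only
        return False
    return now - last_user_message <= FOLLOW_UP_WINDOW

t0 = datetime(2025, 1, 1)
print(may_send_follow_up(t0, 0, t0 + timedelta(days=10)))  # → True
print(may_send_follow_up(t0, 0, t0 + timedelta(days=15)))  # → False
print(may_send_follow_up(t0, 1, t0 + timedelta(days=2)))   # → False
```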

Bots must match their assigned personality and reference earlier interactions, offering relevant and light-hearted responses while avoiding emotionally charged or sensitive topics unless the user brings them up. Meta says the feature is being tested and rolled out gradually.

The company hopes it will not only improve user retention but also serve as a response to what CEO Mark Zuckerberg calls the ‘loneliness epidemic’.

With revenue from generative AI tools projected to reach up to $3 billion in 2025, Meta’s focus on more prolonged and engaging chatbot interactions appears to be as strategic as it is social.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

Community Notes, a user-driven fact-checking system expanded under Elon Musk, is meant to provide context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems like Grok or third-party large language models to submit notes via API. Each AI-generated note will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S eyes full online recovery by August after cyberattack

Marks & Spencer (M&S) expects its full online operations to be restored within four weeks, following a cyber attack that struck in April. Speaking at the retailer’s annual general meeting, CEO Stuart Machin said the company aims to resolve the majority of the incident’s impact by August.

The cyberattack, attributed to human error, forced M&S to suspend online sales and disrupted supply chain operations, including its Castle Donington distribution centre. The breach also compromised customer personal data and is expected to result in a £300 million hit to the company’s profit.

April marked the beginning of a multi-month recovery process, with M&S confirming by May that the breach involved a supply chain partner. By June, the financial and operational damage became clear, with limited online services restored and key features like click-and-collect still unavailable.

The e-commerce platform in Great Britain is now partially operational, but services such as next-day delivery remain offline. Machin stated that recovery is progressing steadily, with the goal of full functionality within weeks.

Julius Cerniauskas, CEO of web intelligence firm Oxylabs, highlighted the growing risks of social engineering in cyber incidents. He noted that while technical defences are improving, attackers continue to exploit human vulnerabilities to gain access.

Cerniauskas described the planned recovery timeline as a ‘solid achievement’ but warned that long-term reputational effects could persist. ‘It’s not a question of if you’ll be targeted – but when,’ he said, urging firms to bolster both human and technical resilience.

Executive pay may also be impacted by the incident. According to the Evening Standard, chairman Archie Norman said incentive compensation would reflect any related performance shortfalls. Norman added that systems are gradually returning online and progress is being made each week.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Billing software firm hit by ransomware attack

Healthcare billing platform Horizon Healthcare RCM has confirmed it suffered a ransomware attack in which threat actors stole sensitive data before encrypting its systems. The cybercriminal group, suspected to be affiliated with LockBit, reportedly demanded a ransom, which the company is believed to have paid to prevent public exposure of the stolen data.

The breach occurred in June 2024 and affected Horizon’s cloud-based revenue-cycle management platform. Although the company has not disclosed how many clients were impacted, it has notified healthcare providers using its services and is working with cybersecurity experts to assess the full scope of the incident.

Security analysts believe the attackers exfiltrated significant data, including protected health information, before deploying ransomware. While systems were eventually restored, concerns remain over long-term privacy risks and potential regulatory consequences for affected healthcare organisations.

Ransomware attacks on third-party vendors pose significant risks to the healthcare sector. Experts stress the importance of vendor risk assessments, data encryption, and secure system configurations to limit exposure.

As ransomware actors increasingly target supply-chain providers, proactive monitoring and resilience strategies are becoming essential for safeguarding critical data infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare’s new tool lets publishers charge AI crawlers

Cloudflare, which powers 20% of the web, has launched a new marketplace called Pay per Crawl, aiming to redefine how website owners interact with AI companies.

The platform allows publishers to set a price for AI crawlers to access their content instead of allowing unrestricted scraping or blocking. Website owners can decide to charge a micropayment for each crawl, permit free access, or block crawlers altogether, gaining more control over their material.
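Cloudflare has described the scheme in terms of the long-dormant HTTP 402 Payment Required status code: a paywalled crawler gets 402 until it agrees to the price. The sketch below is a hypothetical illustration of the three per-crawler outcomes (the status codes are standard HTTP; the policy format and prices are invented, not Cloudflare's API):

```python
# Hypothetical per-crawler policy for a Pay per Crawl-style scheme.
# Status codes are standard HTTP; the policy dict is invented for illustration.
POLICY = {
    "SearchBot":  {"action": "allow"},
    "AIScraper":  {"action": "charge", "price_usd": 0.002},
    "BadCrawler": {"action": "block"},
}

def handle_crawl(user_agent, payment_offered):
    """Return the HTTP status a crawler would receive under the site's policy."""
    rule = POLICY.get(user_agent, {"action": "block"})
    if rule["action"] == "allow":
        return 200                     # free access
    if rule["action"] == "charge":
        # 402 Payment Required until the crawler agrees to the micropayment
        return 200 if payment_offered else 402
    return 403                         # blocked outright

print(handle_crawl("AIScraper", payment_offered=False))  # → 402
print(handle_crawl("AIScraper", payment_offered=True))   # → 200
print(handle_crawl("BadCrawler", payment_offered=True))  # → 403
```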

Over the past year, Cloudflare introduced tools for publishers to monitor and block AI crawlers, laying the groundwork for the marketplace. Major publishers like Conde Nast, TIME and The Associated Press have joined Cloudflare in blocking AI crawlers by default, supporting a permission-based approach.

The company also now blocks AI bots by default on all new sites, requiring site owners to grant access.

Cloudflare’s data reveals that AI crawlers scrape websites far more aggressively than traditional search engines, often without sending equivalent referral traffic. For example, OpenAI’s crawler scraped sites 1,700 times for every referral, compared to Google’s 14 times.

As AI agents evolve to gather and deliver information directly, publishers who rely on site visits for revenue face new challenges.

Pay per Crawl could offer a new business model for publishers in an AI-driven world. Cloudflare envisions a future where AI agents operate with a budget to access quality content programmatically, helping users synthesise information from trusted sources.

For now, both publishers and AI companies need Cloudflare accounts to set crawl rates, with Cloudflare managing payments. The company is also exploring stablecoins as a possible payment method in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why AI won’t replace empathy at work

AI is increasingly being used to improve how organisations measure and support employee performance and well-being.

According to Dr Serena Huang, founder of Data with Serena and author of The Inclusion Equation, AI provides insights that go far beyond traditional annual reviews or turnover statistics.

AI tools can detect early signs of burnout, identify high-potential staff, and even flag overly controlling management styles. More importantly, they offer the potential to personalise development pathways based on employee needs and aspirations.

Huang emphasises, however, that ethical use is vital. Transparency and privacy must remain central to ensure AI empowers rather than surveils workers. Far from making human skills obsolete, Huang argues that AI increases their value.

With machines handling routine analysis, people are free to focus on complex challenges and relationship-building—critical skills in sales, leadership, and team dynamics. AI can assist, but it is emotional intelligence and empathy that truly drive results.

To ensure data-driven efforts align with business goals, Huang urges companies to ask better questions. Understanding what challenges matter to stakeholders helps ensure that any AI deployment addresses real-world needs. Regular check-ins and progress reviews help maintain alignment.

Rather than fear AI as a job threat, Huang encourages individuals to embrace it as a tool for growth. Staying curious and continually learning can ensure workers remain relevant in an evolving market.

She also highlights the strategic advantage of prioritising employee well-being. Companies that invest in mental health, work-life balance, and inclusion enjoy higher productivity and retention.

With younger workers placing a premium on wellness and values, businesses that foster a caring culture will attract top talent and stay competitive. Ultimately, Huang sees AI not as a replacement for people, but as a catalyst for more human-centric, data-informed workplaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!