DeepSeek gains business traction despite security risks

Chinese AI company DeepSeek is gaining traction in global markets despite growing concerns about national security.

While government bans remain in place in several countries, businesses are turning to DeepSeek’s models for their low cost and strong performance — models that often rank just behind OpenAI’s ChatGPT and Google’s Gemini in traffic and market share.

DeepSeek’s appeal lies in its efficiency. With advanced engineering techniques like its ‘mixture-of-experts’ system, the company has reduced computing costs by activating fewer parameters without a noticeable drop in performance.
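The core gating idea behind a mixture-of-experts layer can be sketched in a few lines of Python. The following toy top-k router is a general illustration of the technique — only the highest-scoring experts run for each input, so compute scales with the number of active experts rather than the total — and is not DeepSeek’s actual architecture; all names and numbers here are invented:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gates, top_k=2):
    """Run only the top_k highest-scoring experts on input x.

    `experts` and `gates` are parallel lists: gates[i] scores how
    relevant expert i is to x, and experts[i] is the (expensive)
    computation. Cost scales with top_k, not len(experts).
    """
    scores = softmax([g(x) for g in gates])
    chosen = sorted(range(len(experts)),
                    key=lambda i: scores[i], reverse=True)[:top_k]
    weight = sum(scores[i] for i in chosen)
    # Combine only the chosen experts, renormalising their gate scores.
    output = sum(scores[i] / weight * experts[i](x) for i in chosen)
    return output, chosen

# Four toy experts; only two of them run per input.
experts = [lambda x, k=k: k * x for k in (1.0, 2.0, 3.0, 4.0)]
gates = [lambda x, k=k: -abs(x - k) for k in (1.0, 2.0, 3.0, 4.0)]

out, used = moe_forward(2.2, experts, gates)
```

With four experts and `top_k=2`, half the expert computation is skipped on every input, which is the source of the cost savings the article describes — production systems use far larger expert counts and learned gating networks.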

Training costs have reportedly been as low as $5.6 million — a fraction of what rivals like Anthropic spend. As a result, DeepSeek’s models are now available across major platforms, including AWS, Azure, Google Cloud, and even open-source repositories like GitHub and Hugging Face.

However, the way DeepSeek is accessed matters. While companies can safely self-host the models in private environments, using the mobile app or website means sending data to Chinese servers, a key reason for widespread bans on public-sector use.

Individual consumers often lack the technical control enterprises enjoy, making their data more vulnerable to foreign access.

Despite the political tension, demand continues to grow. US firms are exploring DeepSeek as a cost-saving alternative, and its models are being deployed in industries from telecoms to finance.

Even Perplexity, an American AI firm, has used DeepSeek R1 to power a research tool hosted entirely on Western servers. DeepSeek’s open-source edge and rapid technical progress are helping it close the gap with much larger AI competitors — quietly but significantly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s AI chatbots are designed to initiate conversations and enhance user engagement

Meta is training AI-powered chatbots that can remember previous conversations, send personalised follow-up messages, and actively re-engage users without needing a prompt.

Internal documents show that the company aims to keep users interacting longer across platforms like Instagram and Facebook by making bots more proactive and human-like.

Under the project code-named ‘Omni’, contractors from the firm Alignerr are helping train these AI agents using detailed personality profiles and memory-based conversations.

These bots are developed through Meta’s AI Studio — a no-code platform launched in 2024 that lets users build customised digital personas, from chefs and designers to fictional characters. Only after a user initiates a conversation can a bot send one follow-up, and only within a 14-day window.

Bots must match their assigned personality and reference earlier interactions, offering relevant and light-hearted responses while avoiding emotionally charged or sensitive topics unless the user brings them up. Meta says the feature is being tested and rolled out gradually.

The company hopes it will not only improve user retention but also serve as a response to what CEO Mark Zuckerberg calls the ‘loneliness epidemic’.

With revenue from generative AI tools projected to reach up to $3 billion in 2025, Meta’s focus on longer, more engaging chatbot interactions appears to be as strategic as it is social.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

These notes, a user-driven fact-checking system expanded under Elon Musk, are meant to provide context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems like Grok or third-party large language models to submit notes via API. Each AI-generated comment will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S eyes full online recovery by August after cyberattack

Marks & Spencer (M&S) expects its full online operations to be restored within four weeks, following a cyber attack that struck in April. Speaking at the retailer’s annual general meeting, CEO Stuart Machin said the company aims to resolve the majority of the incident’s impact by August.

The cyberattack, attributed to human error, forced M&S to suspend online sales and disrupted supply chain operations, including its Castle Donington distribution centre. The breach also compromised customer personal data and is expected to result in a £300 million hit to the company’s profit.

April marked the beginning of a multi-month recovery process, with M&S confirming by May that the breach involved a supply chain partner. By June, the financial and operational damage became clear, with limited online services restored and key features like click-and-collect still unavailable.

The e-commerce platform in Great Britain is now partially operational, but services such as next-day delivery remain offline. Machin stated that recovery is progressing steadily, with the goal of full functionality within weeks.

Julius Cerniauskas, CEO of web intelligence firm Oxylabs, highlighted the growing risks of social engineering in cyber incidents. He noted that while technical defences are improving, attackers continue to exploit human vulnerabilities to gain access.

Cerniauskas described the planned recovery timeline as a ‘solid achievement’ but warned that long-term reputational effects could persist. ‘It’s not a question of if you’ll be targeted – but when,’ he said, urging firms to bolster both human and technical resilience.

Executive pay may also be impacted by the incident. According to the Evening Standard, chairman Archie Norman said incentive compensation would reflect any related performance shortfalls. Norman added that systems are gradually returning online and progress is being made each week.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Billing software firm hit by ransomware attack

Healthcare billing platform Horizon Healthcare RCM has confirmed it suffered a ransomware attack, where threat actors stole sensitive data before encrypting its systems. The cybercriminal group, suspected to be affiliated with LockBit, reportedly demanded a ransom, which the company is believed to have paid to prevent public exposure of the stolen data.

The breach occurred in June 2024 and affected Horizon’s cloud-based revenue-cycle management platform. Although the company has not disclosed how many clients were impacted, it has notified healthcare providers using its services and is working with cybersecurity experts to assess the full scope of the incident.

Security analysts believe the attackers exfiltrated significant data, including protected health information, before deploying ransomware. While systems were eventually restored, concerns remain over long-term privacy risks and potential regulatory consequences for affected healthcare organisations.

Ransomware attacks on third-party vendors pose significant risks to the healthcare sector. Experts stress the importance of vendor risk assessments, data encryption, and secure system configurations to limit exposure.

As ransomware actors increasingly target supply-chain providers, proactive monitoring and resilience strategies are becoming essential for safeguarding critical data infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare’s new tool lets publishers charge AI crawlers

Cloudflare, which powers 20% of the web, has launched a new marketplace called Pay per Crawl, aiming to redefine how website owners interact with AI companies.

The platform allows publishers to set a price for AI crawlers to access their content instead of allowing unrestricted scraping or blocking. Website owners can decide to charge a micropayment for each crawl, permit free access, or block crawlers altogether, gaining more control over their material.

Over the past year, Cloudflare introduced tools for publishers to monitor and block AI crawlers, laying the groundwork for the marketplace. Major publishers like Condé Nast, TIME and The Associated Press have joined Cloudflare in blocking AI crawlers by default, supporting a permission-based approach.

The company also now blocks AI bots by default on all new sites, requiring site owners to grant access.

Cloudflare’s data reveals that AI crawlers scrape websites far more aggressively than traditional search engines, often without sending equivalent referral traffic. For example, OpenAI’s crawler scraped sites 1,700 times for every referral, compared to Google’s 14 times.
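The imbalance Cloudflare describes can be expressed as a simple crawl-to-referral ratio: crawls divided by the referral visits a bot sends back. A minimal sketch reproducing the reported ratios — the raw counts below are invented purely to yield the 1,700 and 14 figures from the article:

```python
def crawl_to_referral_ratio(crawls, referrals):
    """Crawls per referral; higher means the bot takes more than it sends back."""
    if referrals == 0:
        return float("inf")  # scraping with no traffic returned at all
    return crawls / referrals

# Hypothetical raw counts chosen to reproduce the ratios Cloudflare
# reported: OpenAI at 1,700 crawls per referral versus Google at 14.
bots = {"openai": (1_700_000, 1_000), "google": (14_000, 1_000)}
ratios = {name: crawl_to_referral_ratio(c, r) for name, (c, r) in bots.items()}
```

By this measure, the AI crawler in the example extracts over a hundred times more content per visitor returned than a traditional search engine — the asymmetry Pay per Crawl is designed to price.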

As AI agents evolve to gather and deliver information directly, it raises challenges for publishers who rely on site visits for revenue.

Pay per Crawl could offer a new business model for publishers in an AI-driven world. Cloudflare envisions a future where AI agents operate with a budget to access quality content programmatically, helping users synthesise information from trusted sources.

For now, both publishers and AI companies need Cloudflare accounts to set crawl rates, with Cloudflare managing payments. The company is also exploring stablecoins as a possible payment method in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why AI won’t replace empathy at work

AI is increasingly being used to improve how organisations measure and support employee performance and well-being.

According to Dr Serena Huang, founder of Data with Serena and author of The Inclusion Equation, AI provides insights that go far beyond traditional annual reviews or turnover statistics.

AI tools can detect early signs of burnout, identify high-potential staff, and even flag overly controlling management styles. More importantly, they offer the potential to personalise development pathways based on employee needs and aspirations.

Huang emphasises, however, that ethical use is vital. Transparency and privacy must remain central to ensure AI empowers rather than surveils workers. Far from making human skills obsolete, Huang argues that AI increases their value.

With machines handling routine analysis, people are free to focus on complex challenges and relationship-building—critical skills in sales, leadership, and team dynamics. AI can assist, but it is emotional intelligence and empathy that truly drive results.

To ensure data-driven efforts align with business goals, Huang urges companies to ask better questions. Understanding what challenges matter to stakeholders helps ensure that any AI deployment addresses real-world needs. Regular check-ins and progress reviews help maintain alignment.

Rather than fear AI as a job threat, Huang encourages individuals to embrace it as a tool for growth. Staying curious and continually learning can ensure workers remain relevant in an evolving market.

She also highlights the strategic advantage of prioritising employee well-being. Companies that invest in mental health, work-life balance, and inclusion enjoy higher productivity and retention.

With younger workers placing a premium on wellness and values, businesses that foster a caring culture will attract top talent and stay competitive. Ultimately, Huang sees AI not as a replacement for people, but as a catalyst for more human-centric, data-informed workplaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qantas cyber attack sparks customer alert

Qantas is investigating a major data breach that may have exposed the personal details of up to six million customers.

The breach affected a third-party platform used by the airline’s contact centre to store sensitive data, including names, phone numbers, email addresses, dates of birth and frequent flyer numbers.

The airline discovered unusual activity on 30 June and responded by immediately isolating the affected system. While the full scope of the breach is still being assessed, Qantas expects the volume of stolen data to be significant.

However, it confirmed that no passwords, PINs, credit card details or passport numbers were stored on the compromised platform.

Qantas has informed the Australian Federal Police, the Australian Cyber Security Centre and the Office of the Australian Information Commissioner. CEO Vanessa Hudson apologised to customers and urged anyone concerned to call a dedicated support line. She added that airline operations and safety remain unaffected.

The incident follows recent cyber attacks on Hawaiian Airlines, WestJet and major UK retailers, reportedly linked to a group known as Scattered Spider. The breach adds to a growing list of Australian organisations targeted in 2025, in what privacy authorities describe as a worsening trend.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tinder trials face scans to verify profiles

Tinder is trialling a facial recognition feature to boost user security and crack down on fraudulent profiles. The pilot is currently underway in the US, after initial launches in Colombia and Canada.

New users are now required to take a short video selfie during sign-up, which will be matched against profile photos to confirm authenticity. The app also compares the scan with other accounts to catch duplicates and impersonations.

Verified users receive a profile badge, and Tinder stores a non-reversible encrypted face map to aid in detection. The company claims all facial data is deleted when accounts are removed.

The update follows a sharp rise in catfishing and romance scams, with over 64,000 cases reported in the US last year alone. Other measures introduced in recent years include photo verification, ID checks and location-sharing tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI suite expands to help teachers plan and students learn

Google has unveiled a major expansion of its Gemini AI tools tailored for classroom use, launching over 30 features to support teachers and students. These updates include personalised AI-powered lesson planning, content generation, and interactive study guides.

Teachers can now create custom AI tutors, known as ‘Gems’, to assist students with specific academic needs using their own teaching materials. Google’s AI reading assistant is also gaining real-time support features through the Read Along tool in Classroom, enhancing literacy development for younger users.

Students and teachers will benefit from wider access to Google Vids, the company’s video creation app, enabling them to create instructional content and complete multimedia assignments.

Additional features aim to monitor student progress, manage AI permissions, improve data security, and streamline classroom content delivery using new Class tools.

By placing AI directly into the hands of educators, Google aims to offer more engaging and responsive learning, while keeping its tools aligned with classroom goals and policies. The rollout continues Google’s bid to take the lead in the evolving AI-driven edtech space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!