TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to Media Matters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed they originated from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before Media Matters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism and the difficulty of detecting coded prompts make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta pursues two AI paths with internal tension

Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.

While Zuckerberg is doubling down on superintelligence, even launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.

The new lab, led by Scale AI founder Alexandr Wang, marks Zuckerberg’s ambition to accelerate progress in large language models — a move triggered by disappointment in Meta’s recent Llama performance.

Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.

LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he promotes openness for diversity and democratic access, Zuckerberg’s recent memo did not mention open-source principles.

Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.

Meta is pursuing both visions — supporting LeCun’s research arm, FAIR, and investing in a new, more centralised superintelligence effort. The company has offered massive compensation packages to OpenAI researchers, with some reportedly offered up to $100 million.

Whether Meta continues balancing both philosophies or chooses one outright could determine the direction of its AI legacy.

DeepSeek gains business traction despite security risks

Chinese AI company DeepSeek is gaining traction in global markets despite growing concerns about national security.

While government bans remain in place across several countries, businesses are turning to DeepSeek’s models for their low cost and solid performance; the models often rank just behind OpenAI’s ChatGPT and Google’s Gemini in traffic and market share.

DeepSeek’s appeal lies in its efficiency. With advanced engineering techniques like its ‘mixture-of-experts’ system, the company has cut computing costs by activating only a subset of the model’s parameters for each input, without a noticeable drop in performance.
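
The idea behind mixture-of-experts can be shown in a toy sketch (illustrative only, not DeepSeek’s actual architecture): a router scores many ‘expert’ sub-networks, but only the top few run per input, so most parameters stay idle and compute costs drop.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMoELayer:
    """Toy mixture-of-experts layer: a router scores all experts,
    but only the top-k experts actually run for a given input."""

    def __init__(self, n_experts=8, dim=16, top_k=2):
        self.top_k = top_k
        self.router = rng.normal(size=(dim, n_experts))           # gating weights
        self.experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]

    def forward(self, x):
        scores = x @ self.router                                   # one score per expert
        top = np.argsort(scores)[-self.top_k:]                     # pick the top-k experts
        weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over winners
        # Only the top-k expert matrices are multiplied; the rest stay
        # idle, which is where the compute savings come from.
        out = sum(w * (x @ self.experts[i]) for w, i in zip(weights, top))
        return out, top

layer = TinyMoELayer()
out, active = layer.forward(rng.normal(size=16))
print(f"active experts: {sorted(active.tolist())} of {len(layer.experts)}")
```

Here only 2 of 8 experts execute per input; production systems apply the same routing per token across hundreds of experts.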

Training costs have reportedly been as low as $5.6 million — a fraction of what rivals like Anthropic spend. As a result, DeepSeek’s models are now available across major platforms, including AWS, Azure, Google Cloud, and even open-source repositories like GitHub and Hugging Face.

However, the way DeepSeek is accessed matters. While companies can safely self-host the models in private environments, using the mobile app or website means sending data to Chinese servers, a key reason for widespread bans on public-sector use.

Individual consumers often lack the technical control enterprises enjoy, making their data more vulnerable to foreign access.

Despite the political tension, demand continues to grow. US firms are exploring DeepSeek as a cost-saving alternative, and its models are being deployed in industries from telecoms to finance.

Even Perplexity, an American AI firm, has used DeepSeek R1 to power a research tool hosted entirely on Western servers. DeepSeek’s open-source edge and rapid technical progress are helping it close the gap with much larger AI competitors — quietly but significantly.

Meta’s AI chatbots are designed to initiate conversations and enhance user engagement

Meta is training AI-powered chatbots that can remember previous conversations, send personalised follow-up messages, and actively re-engage users without needing a prompt.

Internal documents show that the company aims to keep users interacting longer across platforms like Instagram and Facebook by making bots more proactive and human-like.

Under the project code-named ‘Omni’, contractors from the firm Alignerr are helping train these AI agents using detailed personality profiles and memory-based conversations.

These bots are developed through Meta’s AI Studio — a no-code platform launched in 2024 that lets users build customised digital personas, from chefs and designers to fictional characters. A bot can send one follow-up only after a user initiates a conversation, and only within a 14-day window.
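
The reported constraint — one follow-up, only after a user-initiated conversation, and only within 14 days — amounts to a simple eligibility check. A hypothetical sketch of that rule (not Meta’s actual code):

```python
from datetime import datetime, timedelta

FOLLOW_UP_WINDOW = timedelta(days=14)

def can_send_follow_up(last_user_message, follow_ups_sent, now):
    """Return True only if the user spoke first, no follow-up has been
    sent yet, and at most 14 days have passed since their message."""
    if last_user_message is None:    # the bot may never start the thread
        return False
    if follow_ups_sent >= 1:         # at most one follow-up per conversation
        return False
    return now - last_user_message <= FOLLOW_UP_WINDOW

now = datetime(2025, 7, 1)
print(can_send_follow_up(datetime(2025, 6, 20), 0, now))  # True: within the window
print(can_send_follow_up(datetime(2025, 6, 1), 0, now))   # False: window expired
print(can_send_follow_up(None, 0, now))                   # False: user never wrote
```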

Bots must match their assigned personality and reference earlier interactions, offering relevant and light-hearted responses while avoiding emotionally charged or sensitive topics unless the user brings them up. Meta says the feature is being tested and rolled out gradually.

The company hopes it will not only improve user retention but also serve as a response to what CEO Mark Zuckerberg calls the ‘loneliness epidemic’.

With revenue from generative AI tools projected to reach up to $3 billion in 2025, Meta’s focus on more prolonged and engaging chatbot interactions appears to be as strategic as it is social.

X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

Community Notes, the user-driven fact-checking system expanded under Elon Musk, provides context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems like Grok or third-party large language models to submit notes via API. Each AI-generated note will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.

Amazon reaches one million warehouse robots

Amazon has reached a major milestone with over one million robots now operating in its warehouses.

The one millionth robot, recently deployed to a facility in Japan, marks 13 years since the tech giant began introducing automation through its acquisition of Kiva Systems in 2012.

The robotic presence is fast approaching parity with Amazon’s human workforce, according to The Wall Street Journal. Robots now assist in around 75% of the company’s global deliveries.

The company continues to upgrade its robotic fleet, recently unveiling Vulcan — a dual-armed model equipped with a suction grip and a sense of touch to handle items more delicately.

Amazon is also introducing DeepFleet, a new generative AI model built using Amazon SageMaker.

Designed to optimise robotic movement within fulfilment centres, DeepFleet is expected to improve fleet speed by 10%. The model is trained on Amazon’s operational data, making it highly tailored to the company’s logistical network.

The expansion comes as Amazon opens next-generation fulfilment centres that feature ten times more robots than its existing warehouse designs. The first of these facilities opened in late 2024 in Shreveport, Louisiana, signalling a shift toward even greater automation.

More European cities move to replace Microsoft software as part of digital sovereignty efforts

Following similar moves by Denmark and the German state of Schleswig-Holstein, the city of Lyon—France’s third-largest city and a major economic centre—has initiated a migration from Microsoft Windows and Office to a suite of open-source alternatives, including Linux, OnlyOffice, NextCloud, and PostgreSQL.

This transition is part of Lyon’s broader strategy to strengthen digital sovereignty and reduce reliance on foreign technology providers. As with other European initiatives, the decision aligns with wider EU discussions about data governance and digital autonomy. Concerns over control of sensitive data and long-term sustainability have contributed to increased interest in open-source solutions.

Although Microsoft has publicly affirmed its commitment to supporting EU customers regardless of political context, some European public authorities continue to explore alternatives that allow for local control over software infrastructure and data hosting.

In line with the European Commission’s 2025 State of the Digital Decade report—which notes that Europe has yet to fully leverage the potential of open-source technologies—Lyon aims to enhance both transparency and control over its digital systems.

Lyon’s migration also supports regional economic development. Its collaboration platform, Territoire Numérique Ouvert (Open Digital Territory), is being co-developed with local digital organisations and will be hosted in regional data centres. The project provides secure, interoperable tools for communication, office productivity, and document collaboration.

The city has begun gradually replacing Windows with Linux and Microsoft Office with OnlyOffice across municipal workstations. OnlyOffice, developed by Latvia-based Ascensio System SIA, is an open-source productivity suite distributed under the GNU Affero General Public License. While it shares a similar open-source ethos with LibreOffice, the suite Denmark chose to replace Microsoft Office, the two are not directly related.

Lyon reportedly anticipates cost savings through extended hardware lifespans, a reduction in electronic waste, and improved environmental sustainability. Over half of the public contracts for this project have been awarded to companies based in the Auvergne-Rhône-Alpes region, with all awarded to French firms—highlighting a preference for local procurement.

Training for approximately 10,000 civil servants began in June 2025. The initiative is being monitored as a potential model for other municipalities aiming to enhance digital resilience and reduce dependency on proprietary software ecosystems.

Grammarly invests in email with Superhuman acquisition

Grammarly announced on Tuesday that it has acquired email client Superhuman to expand its AI capabilities within its productivity suite.

Financial details of the deal were not disclosed by either company. Superhuman, founded by Rahul Vohra, Vivek Sodera and Conrad Irwin, has raised over $114 million from investors such as a16z and Tiger Global, and was most recently valued at $825 million.

Grammarly CEO Shishir Mehrotra said the acquisition will enable the company to bring enhanced AI collaboration to millions more professionals, adding that email is not just another app but a crucial platform where users spend significant time.

Superhuman’s CEO Rahul Vohra and his team are joining Grammarly, promising to invest further in improving the Superhuman experience and building AI agents that collaborate across everyday communication tools.

Recently, Superhuman introduced AI-powered features like scheduling, replies and email categorisation. Grammarly aims to leverage the technology to build smarter AI agents for email, which remains a top use case for its customers.

The move follows Grammarly’s acquisition of productivity software Coda last year and the promotion of Shishir Mehrotra to CEO.

In May, Grammarly secured $1 billion from General Catalyst through a non-dilutive investment, repaid by a capped percentage of revenue generated using the funds instead of equity.

The Superhuman deal further signals Grammarly’s commitment to integrating AI deeply into professional communication.

Cloudflare’s new tool lets publishers charge AI crawlers

Cloudflare, which powers 20% of the web, has launched a new marketplace called Pay per Crawl, aiming to redefine how website owners interact with AI companies.

The platform allows publishers to set a price for AI crawlers to access their content, instead of choosing between unrestricted scraping and outright blocking. Website owners can charge a micropayment for each crawl, permit free access, or block crawlers altogether, gaining more control over their material.
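
Conceptually, each crawl request resolves to one of the three outcomes above. A hypothetical policy table (illustrative only, not Cloudflare’s API; the bot names and price are invented) might map a crawler’s identity to an HTTP-style decision, with 402 ‘Payment Required’ signalling a chargeable crawl:

```python
# Hypothetical per-crawler policy table -- illustrative, not Cloudflare's API.
POLICIES = {
    "GoodBot":    {"action": "allow"},                # free access
    "PaidBot":    {"action": "charge", "usd": 0.01},  # micropayment per crawl
    "BlockedBot": {"action": "block"},
}

def respond_to_crawl(user_agent):
    """Map a crawler's user agent to an HTTP-style decision:
    200 serve, 402 request payment, 403 block (unknown bots default to block)."""
    policy = POLICIES.get(user_agent, {"action": "block"})
    if policy["action"] == "allow":
        return 200, "content served"
    if policy["action"] == "charge":
        return 402, f"payment required: ${policy['usd']:.2f} per crawl"
    return 403, "crawler blocked"

for bot in ("GoodBot", "PaidBot", "UnknownBot"):
    print(bot, respond_to_crawl(bot))
```

Defaulting unknown crawlers to ‘block’ mirrors Cloudflare’s permission-based stance of denying AI bots unless the site owner opts them in.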

Over the past year, Cloudflare introduced tools for publishers to monitor and block AI crawlers, laying the groundwork for the marketplace. Major publishers like Condé Nast, TIME and The Associated Press have joined Cloudflare in blocking AI crawlers by default, supporting a permission-based approach.

The company also now blocks AI bots by default on all new sites, requiring site owners to grant access.

Cloudflare’s data reveals that AI crawlers scrape websites far more aggressively than traditional search engines, often without sending equivalent referral traffic. For example, OpenAI’s crawler scraped sites 1,700 times for every referral, compared to Google’s 14 times.

As AI agents evolve to gather and deliver information directly, the shift raises challenges for publishers who rely on site visits for revenue.

Pay per Crawl could offer a new business model for publishers in an AI-driven world. Cloudflare envisions a future where AI agents operate with a budget to access quality content programmatically, helping users synthesise information from trusted sources.

For now, both publishers and AI companies need Cloudflare accounts to set crawl rates, with Cloudflare managing payments. The company is also exploring stablecoins as a possible payment method in the future.

Qantas cyber attack sparks customer alert

Qantas is investigating a major data breach that may have exposed the personal details of up to six million customers.

The breach affected a third-party platform used by the airline’s contact centre to store sensitive data, including names, phone numbers, email addresses, dates of birth and frequent flyer numbers.

The airline discovered unusual activity on 30 June and responded by immediately isolating the affected system. While the full scope of the breach is still being assessed, Qantas expects the volume of stolen data to be significant.

However, it confirmed that no passwords, PINs, credit card details or passport numbers were stored on the compromised platform.

Qantas has informed the Australian Federal Police, the Australian Cyber Security Centre and the Office of the Australian Information Commissioner. CEO Vanessa Hudson apologised to customers and urged anyone concerned to call a dedicated support line. She added that airline operations and safety remain unaffected.

The incident follows recent cyber attacks on Hawaiian Airlines, WestJet and major UK retailers, reportedly linked to a group known as Scattered Spider. The breach adds to a growing list of Australian organisations targeted in 2025, in what privacy authorities describe as a worsening trend.
