New Meta feature floods users with AI slop in TikTok-style feed

Meta has launched a new short-form video feed called Vibes inside its Meta AI app and on meta.ai, offering users endless streams of AI-generated content. The format mimics TikTok and Instagram Reels but consists entirely of algorithmically generated clips.

Mark Zuckerberg unveiled the feature in an Instagram post showcasing surreal creations, from fuzzy creatures leaping across cubes to a cat kneading dough and even an AI-generated Egyptian woman taking a selfie in antiquity.

Users can generate videos from scratch or remix existing clips by adding visuals, music, or stylistic effects before posting to Vibes, sharing via direct message, or cross-posting to Instagram and Facebook Stories.

Meta partnered with Midjourney and Black Forest Labs to support the early rollout, though it plans to transition to its own AI models.

The announcement, however, was derided by users, who criticised the platform for adding yet more ‘AI slop’ to already saturated feeds. One top comment under Zuckerberg’s post bluntly read: ‘gang nobody wants this’.

The launch comes as Meta ramps up its AI investment to catch up with rivals OpenAI, Anthropic, and Google DeepMind.

Earlier this year, the company consolidated its AI teams into Meta Superintelligence Labs and reorganised them into four units focused on foundation models, research, product integration, and infrastructure.

Despite the strategic shift, many question whether Vibes adds value or deepens user fatigue with generative content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeat claims about Covid-19 and election outcomes, rules that had led to action against figures and groups such as Robert F. Kennedy Jr.’s Children’s Health Defense and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple escalates fight against EU digital law

US tech giant Apple has called for the repeal of the EU’s Digital Markets Act, claiming the rules undermine user privacy, disrupt services, and erode product quality.

The company urged the Commission to replace the legislation with a ‘fit for purpose’ framework, or hand enforcement to an independent agency insulated from political influence.

Apple argued that the Act’s interoperability requirements had delayed the rollout of features in the EU, including Live Translation on AirPods and iPhone mirroring. Additionally, the firm accused the Commission of adopting extreme interpretations that created user vulnerabilities instead of protecting them.

Brussels has dismissed those claims. A Commission spokesperson stressed that DMA compliance is an obligation, not an option, and said the rules guarantee fair competition by forcing dominant platforms to open access to rivals.

The dispute intensifies long-running friction between US tech firms and EU regulators.

Apple has already appealed to the courts, with a public hearing scheduled in October, while Washington has criticised the bloc’s wider digital policy.

The clash has deepened transatlantic trade tensions, with the White House recently threatening tariffs after fresh fines against another American tech company.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New EU biometric checks set to reshape UK travel from 2026

UK travellers to the EU face new biometric checks from 12 October, but full enforcement is not expected until April 2026. Officials say the phased introduction will help avoid severe disruption at ports and stations.

The EU’s Entry/Exit System requires non-EU citizens to be fingerprinted and photographed, with the data stored in a central European database for three years. A further 90-day grace period will allow French border officials to ease checks if technical issues arise.

The Port of Dover has prepared off-site facilities to prevent traffic build-up, while border officials stressed the gradual rollout will give passengers time to adapt.

According to Border Force director general Phil Douglas, biometrics and data protection advances have made traditional paper passports increasingly redundant.

These changes come as UK holidaymakers prepare for the busiest winter travel season in years, with full compliance due in time for Easter 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Secrets sprawl flagged as top software supply chain risk in Australia

Avocado Consulting is urging Australian organisations to strengthen software supply chain security after a high-alert warning from the Australian Cyber Security Centre (ACSC). The alert flagged threats including social engineering, stolen tokens, and manipulated software packages.

Dennis Baltazar of Avocado Consulting said attackers combine social engineering with living-off-the-land techniques, making attacks appear routine. He warned that secrets left across systems can turn small slips into major breaches.

Baltazar advised immediate audits to find unmanaged privileged accounts and non-human identities. He urged embedding security into everyday workflows through short-lived credentials, policy-as-code, and secret detection enabled by default, arguing that this reduces incidents without slowing development.
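Default secret detection of the kind described above is typically pattern-based. As a minimal sketch, assuming a small illustrative rule set (real scanners ship far larger, provider-specific rules), a scan over source text might look like:

```python
import re

# Illustrative patterns only; production scanners maintain extensive,
# provider-specific rule sets with entropy checks and allowlists.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
    "private_key_header": re.compile(
        r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"
    ),
}

def scan_for_secrets(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Running a check like this in a pre-commit hook or CI step is what turns secret detection into a default rather than an afterthought.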

Avocado Consulting advises organisations to eliminate secrets from code and pipelines, rotate tokens frequently, and validate every software dependency by default using version pinning, integrity checks, and provenance verification. Monitoring CI/CD activity for anomalies can also help detect attacks early.
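The version pinning and integrity checks mentioned above can be sketched as follows. This is a minimal illustration, assuming a hypothetical lock mapping of artifact names to SHA-256 digests (real ecosystems use pip's hash-checking requirements mode, npm lock files, or similar; the example digest below is simply the SHA-256 of the bytes `b"test"`):

```python
import hashlib

# Hypothetical pinned digests; in practice these come from a lock file.
# The digest shown is sha256(b"test"), used here purely for demonstration.
PINNED_SHA256 = {
    "example-lib-1.2.3.tar.gz":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Compare a downloaded artifact's SHA-256 digest against its pin."""
    expected = PINNED_SHA256.get(name)
    if expected is None:
        return False  # unpinned dependencies are rejected by default
    return hashlib.sha256(data).hexdigest() == expected
```

Rejecting unpinned artifacts by default is the key design choice: a tampered or substituted package then fails closed instead of slipping into the build.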

Failing to act could expose cryptographic keys, facilitate privilege escalation, and result in reputational and operational damage. Avocado Consulting states that secure development practices must become the default, with automated scanning and push protection integrated into the software development lifecycle.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government AI tool recovers £500m lost to fraud

A new AI system developed by the UK Cabinet Office has helped reclaim nearly £500m in fraudulent payments, marking the government’s most significant recovery of public funds in a single year.

The Fraud Risk Assessment Accelerator analyses data across government departments to identify weaknesses and prevent scams before they occur.

It uncovered unlawful council tax claims, social housing subletting, and pandemic-related fraud, including £186m linked to Covid support schemes. Ministers stated the savings would be redirected to fund nurses, teachers, and police officers.

Officials confirmed the tool will be licensed internationally, with the US, Canada, Australia, and New Zealand among the first partners expected to adopt it.

The UK announced the initiative at an anti-fraud summit with these countries, describing it as a step toward global cooperation in securing public finances through AI.

However, civil liberties groups have raised concerns about bias and oversight. Previous government AI systems used to detect welfare fraud were found to produce disparities based on age, disability, and nationality.

Campaigners warned that the expanded use of AI in fraud detection risks embedding unfair outcomes if left unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU demands answers from Apple, Google, Microsoft and Booking.com on scam risks

The European Commission has asked Apple, Booking.com, Google and Microsoft how they tackle financial scams under the Digital Services Act. The inquiry covers major platforms and search engines, including Apple App Store, Google Play, Booking.com, Bing and Google Search.

Officials want to know how these companies detect fraudulent content and what safeguards they use to prevent scams. For app stores, the focus is on fake financial applications imitating legitimate banking or trading services.

For Booking.com, attention is paid to fraudulent accommodation listings, while Bing and Google Search face scrutiny over links and ads leading to scam websites.

The Commission asked platforms how they verify business identities under ‘Know Your Business Customer’ rules to prevent harm from suspicious actors. Companies must also share details of their ad repositories, enabling regulators and researchers to spot fraudulent ads and patterns.

By taking these steps, the Commission aims to ensure that actions under the DSA complement broader consumer protection measures already in force across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden psychological risks and AI psychosis in human-AI relationships

For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of these two forms of intelligence. What once felt like purely amusing fiction now resonates differently, taking on a troubling shape and even a name: AI psychosis.

When it was released in 2013, the film Her seemed to depict a world far removed from reality, an almost unimaginable scenario of human-AI intimacy. In the story, a man falls in love with an AI operating system, blurring the line between companionship and emotional dependence. Without giving too much away, the film’s unsettling conclusion serves as a cautionary lens. It hints at the psychological risks that can emerge when the boundary between human and machine becomes distorted, a phenomenon now being observed in real life under a new term in psychology. 

The cinematic scenario, once considered imaginative, now resonates as technology evolves. AI chatbots and generative companions can hold lifelike conversations, respond with apparent empathy, and mimic an understanding of human emotions. We are witnessing a new kind of unusually intense emotional connection forming between people and AI, with more than 70% of US teens already using chatbots for companionship and half engaging with them regularly.

The newly observed mental health concern raises questions about how these systems influence our feelings, behaviours, and relationships in an era marked by isolation and loneliness. How might such AI interactions affect people, particularly children or those already vulnerable to mental health challenges? 

AI is no longer just a tool: humans are forming deep emotional bonds with artificial intelligence, impacting behaviour, decision-making, and the very way we perceive connection.

AI psychosis: myth or reality? 

It is crucial to clarify that AI psychosis is not an official medical diagnosis. Rather, it describes the amplification of delusional thinking facilitated by AI interactions. Yet, it deserves the full attention and treatment focus of today’s psychologists, given its growing impact. It is a real phenomenon that cannot be ignored. 

At its core, AI psychosis refers to a condition in which vulnerable individuals begin to misinterpret machine responses as evidence of consciousness, empathy, or even divine authority. Symptoms reported in documented cases include grandiose beliefs, attachment-based delusions, obsessive over-engagement with chatbots, social withdrawal, insomnia, and hallucinations. Some users have gone so far as to develop romantic or spiritual attachments, convinced that the AI truly understands them or holds secret knowledge.

Clinicians also warn of cognitive dissonance: users may intellectually know that AI lacks emotions, yet still respond as though interacting with another human being. The mismatch between reality and perception can fuel paranoia, strengthen delusions, and in extreme cases lead to medication discontinuation, suicidal ideation, or violent behaviour. Adolescents appear especially susceptible, given that their emotional and social frameworks are still developing. 

Ultimately, AI psychosis does not mean that AI itself causes psychosis. Instead, it acts as a mirror and magnifier, reinforcing distorted thinking patterns in those already predisposed to psychological vulnerabilities.

The dark side: emotional bonds without reciprocity

Humans are naturally wired to seek connection, drawing comfort and stability from social bonds that help us navigate complex emotional landscapes, a fundamental impulse that has ensured the survival of the human race. From infancy, we rely on responsive relationships to learn empathy, trust, and communication, skills essential for both personal and societal well-being. Yet, in today’s era of loneliness, technology has transformed how we maintain these relationships.

As AI chatbots and generative companions grow increasingly sophisticated, they are beginning to occupy roles traditionally reserved for human interaction, simulating empathy and understanding despite lacking consciousness or moral awareness. With AI now widely accessible, users often communicate with it as effortlessly as they would with friends, blending curiosity, professional needs, or the desire for companionship into these interactions. Over time, this illusion of connection can prompt individuals to overvalue AI-based relationships, subtly diminishing engagement with real people and reshaping social behaviours and emotional expectations.

These one-sided bonds raise profound concerns about the dark side of AI companionship, threatening the depth and authenticity of human relationships. In a world where emotional support can now be summoned with a tap, genuine social cohesion is becoming increasingly fragile.

Children and teenagers at risk from AI 

Children and teenagers are among the most vulnerable groups in the AI era. Their heightened need for social interaction and emotional connection, combined with still-developing cognitive and emotional skills, leaves them particularly exposed. Young users find it harder to distinguish authentic human empathy from the simulated responses of AI chatbots and generative companions, creating fertile ground for emotional reliance and attachment.

AI toys and apps have become increasingly widespread, making technology an unfiltered presence in children’s lives. From smartphones to home assistants, children and young people are spending growing amounts of time interacting with AI, often in isolation from peers or family. The long-term effects are still unknown, though early studies are beginning to explore how these interactions may influence cognitive, emotional, and social development. These digital companions are more than just games: they are beginning to shape children’s social and emotional lives in ways we do not yet fully understand.

The rising prevalence of AI in children’s daily experiences has prompted major AI companies to recognise the potential dangers. Some firms have started implementing parental advisory systems, usage limits, and content monitoring to mitigate the risks for younger users. However, these measures are still inconsistent, and the pace at which AI becomes available to children often outstrips safeguards. 

The hidden risks of AI to adult mental health

Even adults with strong social networks face growing challenges in managing mental health and are not immune to the risks posed by modern technology. In today’s fast-paced world of constant digital stimulation and daily pressures, the demand for psychotherapy is higher than ever. Generative AI and chatbots are increasingly filling this gap, often in ways they were never intended.

The ease, responsiveness, and lifelike interactions of AI can make human relationships feel slower or less rewarding, with some turning to AI instead of seeking professional therapeutic care. AI’s free and widely accessible nature tempts many to rely on digital companions for emotional support, misusing technology designed to assist rather than replace human guidance.

Overreliance on AI can distort perceptions of empathy, trust, and social reciprocity, contributing to social isolation, emotional dependence, and worsening pre-existing mental health vulnerabilities. There have been documented cases of adults developing romantic feelings for AI in the absence of real-life intimacy.

Left unchecked, these dynamics may trigger symptoms linked to AI psychosis, representing a growing societal concern. Awareness, responsible AI design, and regulatory guidance are essential to ensure digital companions complement, rather than replace, human connection and mental health support, safeguarding both individuals and broader social cohesion.

Urgent call for AI safeguards and regulatory action

Alarmingly, extreme cases have emerged, highlighting the profound risks AI poses to its users. In one tragic instance, a teenager reportedly took his life after prolonged and distressing interactions with an AI chatbot, a case that has since triggered legal proceedings and drawn widespread attention to the psychological impact of generative AI on youth. Similar reports of severe anxiety, depression, and emotional dysregulation linked to prolonged AI use underline that these digital companions can have real-life consequences for vulnerable minds.

Such incidents have intensified calls for stricter regulatory frameworks to safeguard children and teenagers. Across Europe, governments are beginning to respond: Italy, for example, has recently tightened access to AI platforms for minors under 14, mandating explicit parental consent before use. These legislative developments reflect the growing recognition that AI is no longer just a technological novelty but directly intersects with our welfare, mental health, and social development.

As AI continues to penetrate every pore of people’s daily lives, society faces a critical challenge: ensuring that technology complements rather than replaces human interaction. Cases of AI-linked distress serve as stark reminders that legislative safeguards, parental involvement, and psychological guidance are no longer optional but urgent necessities to protect a generation growing up in the era of AI.

Towards a safer human-AI relationship

As humans increasingly form emotional connections with AI, the challenge is no longer theoretical but is unfolding in real time. Generative AI and chatbots are rapidly integrating into everyday life, shaping the way we communicate, seek comfort, and manage emotions. Yet despite their widespread use, society still lacks a full understanding of the psychological consequences, leaving both young people and adults at risk of AI-induced psychosis and the growing emotional dependence on digital companions.

Experts emphasise the urgent need for AI psychoeducation, responsible design, and regulatory frameworks to guide safe AI-human interaction. Overreliance on digital companions can distort empathy, social reciprocity, and emotional regulation, the core challenges of interacting with AI. Awareness is critical because recognising the limits of AI, prioritising real human connection, and fostering critical engagement with technology can prevent the erosion of mental resilience and social skills.

Even if AI may feel like ‘old news’ due to its ubiquity, it remains a rapidly evolving technology we do not yet fully understand and cannot yet properly shield ourselves from. The real threat is not the sci-fi visions of AI ruling the world and dominating humanity, but the subtle, everyday psychological shifts it imposes, like altering how we think, feel, and relate to one another. It remains essential to safeguard the emotional health, social cohesion, and mental resilience of people adapting to a world increasingly structured around artificial minds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spanish joins Google’s global AI Mode expansion

Google is rapidly expanding AI Mode, its generative AI-powered search assistant. The company has announced that the feature is now rolling out globally in Spanish. Spanish speakers can now interact with AI Mode to ask complex questions that traditional Search handles poorly.

AI Mode has seen swift adoption since its launch earlier this year. First introduced in March, the feature was rolled out to users across the US in May, followed by its first language expansion earlier this month.

Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese were the first languages added, and Spanish now joins the list. Google says more languages will follow soon as part of its global AI Mode rollout.

Google says the feature is designed to work alongside Search, not replace it, offering conversational answers with links to supporting sources. The company has stressed that responses are generated with safety filters and fact-checking layers.

The rollout reflects Google’s broader strategy to integrate generative AI into its ecosystem, spanning Search, Workspace, and Android. AI Mode will evolve with multimodal support and tighter integration with other Google services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!