YouTube enlists users to rate videos as AI slop in content quality push

YouTube has introduced a new pop-up survey asking viewers to rate whether videos feel like ‘AI slop’, with users able to score content on a scale from ‘not at all’ to ‘extremely’ sloppy.

The feature began appearing on 17 March 2026 and marks a shift in approach, with YouTube now enlisting its audience directly to help identify low-quality, AI-generated content.

The move adds a third layer of detection on top of YouTube’s existing automated and human review systems, both of which have struggled to keep pace with the flood of AI-generated uploads.

Research found that roughly 21% of the first 500 videos recommended to a brand-new YouTube account were identified as AI slop, with a further 33% falling into a broader category of repetitive, low-substance content.

Combating this was named a 2026 priority by YouTube CEO Neal Mohan in his annual letter to the platform.

The survey has not been without controversy.

Critics on social media have pointed out that viewer-labelled ‘slop’ data could be fed into Google’s Veo video generation models, potentially training future AI to avoid the very patterns humans flag as low quality. That prospect raises questions about whether YouTube is crowdsourcing content moderation or, inadvertently, AI improvement.

YouTube has not clarified how the feedback data will be used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok disinformation study raises concerns over AI content and EU regulation

A new study by Science Feedback indicates that TikTok has a higher proportion of misleading content than other major platforms operating in the EU.

The analysis covered France, Poland, Slovakia and Spain, assessing content across multiple thematic areas including health, politics and climate.

Findings suggest that approximately one in four posts on TikTok contained misleading elements, placing the platform ahead of competitors such as Facebook, YouTube and X. Health-related narratives were the most prominent category, reflecting broader patterns observed across digital ecosystems.

Researchers describe disinformation as a persistent feature embedded within platform structures, rather than an isolated occurrence.

The study also highlights a growing presence of AI-generated content, particularly in video formats, where synthetic material accounted for a significant share of misleading posts. Despite existing platform policies, most identified content lacked clear labelling.

The regulatory context remains under development.

While the Digital Services Act integrates voluntary commitments from the EU disinformation code, it does not impose mandatory requirements for identifying AI-generated material.

Ongoing debates therefore focus on transparency, accountability and the evolving responsibilities of digital platforms within the European information environment.

Bitcoin moves closer to quantum resistance with BIP-360

BTQ Technologies has deployed Bitcoin Improvement Proposal BIP-360 on its Bitcoin Quantum Testnet v0.3.0, marking the first live test of the proposal. The upgrade introduces a quantum-resistant transaction model, Pay-to-Merkle-Root, designed to strengthen Bitcoin’s long-term security.

BIP-360 focuses on mitigating a vulnerability linked to Taproot’s key-path spending mechanism, which can expose public keys on-chain. Such exposure may become a risk if future quantum computers become capable of deriving private keys from exposed public keys, for example by running Shor’s algorithm.
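
The core idea, keeping spend conditions hidden behind a hash commitment until spend time, can be illustrated with a minimal Merkle-commitment sketch. This is not BIP-360’s actual consensus logic, and the function names here are our own; it only shows how an output can commit to a set of spending scripts via a single root, with one script revealed and proven at spend time:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each script, then pair-and-hash upwards to a single root.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Collect the sibling hash at each level for the chosen leaf.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (sibling hash, sibling-is-left flag)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_leaf(root, leaf, proof):
    # Recompute the root from the revealed script and its proof path.
    node = h(leaf)
    for sib, sib_left in proof:
        node = h(sib + node) if sib_left else h(node + sib)
    return node == root
```

Until the spend, only the 32-byte root appears on-chain, so there is no public key for a quantum attacker to target in advance.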

The testnet adds new consensus rules, post-quantum signatures, and full transaction lifecycle testing. Faster one-minute block times and adjusted fee structures have been introduced to accommodate larger and more complex signatures.
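
To see why post-quantum signatures force larger blocks and adjusted fees, consider a toy Lamport one-time signature, one of the simplest hash-based (and thus quantum-resistant) schemes. This sketch is illustrative only, not the scheme used on the testnet; a single signature weighs in at 8 KB, versus 64 bytes for a Schnorr signature:

```python
import hashlib
import secrets

def keygen():
    # 256 pairs of 32-byte secrets, one pair per bit of the message hash.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(msg):
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret from each pair, chosen by the corresponding hash bit.
    return [sk[i][bit] for i, bit in enumerate(_bits(msg))]

def verify(pk, msg, sig):
    # Hash each revealed secret and compare against the published key.
    return all(hashlib.sha256(sig[i]).digest() == pk[i][bit]
               for i, bit in enumerate(_bits(msg)))
```

Since every signature reveals 256 secrets of 32 bytes each, on-chain size grows by two orders of magnitude, which is the pressure behind the testnet’s fee and block-time adjustments.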

Growing global attention on quantum threats adds urgency to the development. US, EU, and Canadian authorities are setting timelines for migration to post-quantum cryptography.

EU scrutiny intensifies over Broadcom VMware licensing dispute

Broadcom is facing increased regulatory pressure in the EU following a formal antitrust complaint concerning changes to VMware licensing practices.

The complaint highlights growing tensions between large technology providers and European cloud infrastructure firms.

The filing, submitted by Cloud Infrastructure Services Providers in Europe, raises concerns that revised licensing models could significantly alter market dynamics.

European providers argue that the changes may limit flexibility, increase costs, and affect their ability to compete effectively in the cloud services sector.

At the centre of the dispute lies the broader issue of market concentration and control over critical digital infrastructure.

Industry stakeholders suggest that restrictive licensing conditions could reshape access to essential virtualisation technologies, which underpin a wide range of cloud and enterprise services across the EU.

Regulatory attention is expected to focus on whether such practices align with EU competition rules, particularly regarding fair access and market neutrality.

The case emerges at a time when European policymakers are intensifying oversight of dominant technology firms and seeking to strengthen digital sovereignty across strategic sectors.

Malaysia tightens rules on data centres

Malaysia has quietly restricted new data centre approvals to projects linked to AI, signalling a strategic shift in its digital economy. Authorities confirmed that approvals for non-AI developments have been halted for nearly two years.

The policy reflects mounting pressure on energy and water resources as demand for data centres accelerates. Officials aim to ensure infrastructure supports high-value AI projects rather than lower-impact investments.

Rapid growth has positioned Malaysia as a key regional hub, attracting major global technology firms. Concerns remain over whether the country risks hosting infrastructure without building local innovation capacity.

Leaders say future efforts will focus on balancing investment with domestic benefits and energy sustainability. Plans include expanding power supply and strengthening national AI capabilities to secure long-term gains.

Amazon upgrades Alexa with AI features

Amazon is rolling out an AI upgrade to its Alexa assistant, aiming to make interactions more conversational and responsive. The new version is designed to follow conversational context and respond more naturally.

The update comes as Amazon seeks to compete with advanced AI chatbots that have gained popularity in recent years. Critics have argued that smart speakers have fallen behind newer AI tools.

Users in the UK are expected to notice more personalised and proactive responses from the upgraded assistant, with personalisation drawing on customers’ personal data. The service will be included with Prime subscriptions or offered as a standalone monthly option.

Analysts say the update could help Amazon gather even more user data and improve engagement by picking up on customers’ habits through conversations. However, questions remain about whether the changes will drive revenue or revive interest in smart speakers.

AI standards and regulation struggle to keep pace with global innovation

Global efforts to regulate AI are accelerating, but innovation continues to outpace formal rules. Policymakers and industry leaders are increasingly turning to standards to help bridge compliance gaps.

At the AI Standards Hub Global Summit, experts highlighted how technical standards support responsible AI development. These tools are seen as essential for scaling AI safely while regulatory frameworks continue to evolve.

Differences across regions remain significant, with the EU relying on formal regulation and the US leaning on flexible standards. This fragmented landscape is raising concerns over compliance costs and barriers to cross-border deployment.

Experts stress that standards must evolve alongside AI while aligning with global frameworks and enforcement efforts. Without coordination, inconsistencies could limit innovation and weaken trust in AI systems.

Calls are growing for shared definitions, measurable benchmarks and stronger international cooperation. Stakeholders argue that aligning standards with regulation will be critical for future AI governance.

Quantum cryptography pioneers win top computing prize

Two researchers have been awarded the Turing Award for pioneering work in quantum cryptography. Their research laid the foundations for a new form of secure communication based on quantum physics.

The method, developed in the 1980s, enables encryption keys that cannot be copied without detection. Any attempt to intercept the data alters its physical properties, revealing interference.
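
The interception-detection property can be shown with a toy classical simulation of a BB84-style protocol, the kind of 1980s scheme the article describes. This is a sketch under simplifying assumptions (ideal channel, no noise), with invented function names: an eavesdropper who measures in a randomly chosen basis and resends disturbs roughly a quarter of the sifted key bits, which the legitimate parties can detect by comparing a sample:

```python
import random

def measure(bit, prep_basis, meas_basis, rng):
    # Same basis: the outcome is deterministic.
    # Mismatched basis: quantum measurement yields a random outcome.
    return bit if prep_basis == meas_basis else rng.randint(0, 1)

def bb84(n, eavesdrop, seed=0):
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n):
        bit = rng.randint(0, 1)          # Alice's key bit
        alice_basis = rng.randint(0, 1)  # Alice's preparation basis
        bob_basis = rng.randint(0, 1)    # Bob's measurement basis
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            # Eve's measurement collapses the state before resending to Bob.
            bit_after_eve = measure(bit, alice_basis, eve_basis, rng)
            result = measure(bit_after_eve, eve_basis, bob_basis, rng)
        else:
            result = measure(bit, alice_basis, bob_basis, rng)
        if alice_basis == bob_basis:  # sifting: keep only matching-basis rounds
            sifted += 1
            if result != bit:
                errors += 1
    return errors, sifted
```

Without an eavesdropper the sifted key is error-free; with one, errors appear at a rate Alice and Bob can measure, revealing the interference the article refers to.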

Experts say the approach could become vital as quantum computing advances. Traditional encryption methods may become vulnerable as computing power increases.

The award highlights the growing importance of secure data transmission in a digital world. Researchers believe quantum cryptography could play a central role in encrypting and protecting future communications.

Meta data processing ruled unlawful in Germany

A Berlin court has ruled that Meta unlawfully processed personal data through its Facebook platform, including information belonging to non-users. Judges found the ‘Find Friends’ feature lacked a valid legal basis for handling third-party data.

The court determined that Meta acted as a data controller and could not rely on consent, contract or legitimate interests to justify the processing. Non-users had no reasonable expectation that their data would be collected or stored.

The German judges also ruled that personalised advertising based on platform data breached GDPR rules. The processing was not considered necessary for providing a social media service and lacked a lawful basis.

However, the court accepted that sensitive personal data entered by users could be processed with explicit consent under the GDPR. The ruling is under appeal and may shape future enforcement of the EU data protection law.

EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.
