Denmark moves to ban social media for under-15s amid child safety concerns

Joining a broader European trend, Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a wider debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI tools make Facebook Reels more engaging than ever

Facebook enhances how users find and share Reels, focusing on personalisation and social interaction.

The platform’s new recommendation engine learns user interests faster, presenting more relevant and up-to-date content. Video viewing time in the US has risen over 20% year-on-year, reflecting the growing appeal of both short and long-form clips.

The update introduces new ‘friend bubbles’ showing which Reels or posts friends have liked, allowing users to start private chats instantly. The feature encourages more spontaneous conversation and discovery through shared interests.

Facebook’s ‘Save’ option has also been simplified, letting users collect favourite posts and Reels in one place while improving future recommendations.

AI now plays a larger role in content exploration, offering suggested searches on certain Reels to help users find related topics without leaving the player. By combining smarter algorithms with stronger social cues, Facebook aims to make video discovery more meaningful and community-driven.

Further personalisation tools are expected to follow as the platform continues refining its Reels experience.

Sanders warns AI could erase 100 million US jobs

Senator Bernie Sanders has warned that AI and automation could eliminate nearly 100 million US jobs within the next decade unless stronger worker protections are introduced.

The warning comes in a report titled The Big Tech Oligarchs’ War Against Workers, which claims that companies such as Amazon, Walmart, JPMorgan Chase, and UnitedHealth already use AI to reduce their workforces while rewarding executives with multimillion-dollar pay packages.

According to the findings, nearly 90% of US fast-food workers, two-thirds of accountants, and almost half of truck drivers could see their jobs replaced by automation. Sanders argues that technological progress should enhance people’s lives rather than displace workers.

His proposals include introducing a 32-hour workweek without loss of pay, a ‘robot tax’ for companies that replace human labour, and giving workers a share of profits and board representation.

New report finds IT leaders unprepared for evolving cyber threats

A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and the Asia-Pacific region reported rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.

Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.

The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.

The financial stakes are high. Over 80% of respondents reported spending at least six figures during just one hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet 30% of businesses do not test their recovery plans annually, despite these risks.

11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.

Scammers use AI to fake British boutiques

Fraudsters are using AI-generated images and back stories to pose as British family businesses, luring shoppers into buying cheap goods from Asia. Websites claiming to be long-standing local boutiques have been linked to warehouses in China and Hong Kong.

Among them is C’est La Vie, which presented itself as a Birmingham jeweller run by a couple called Eileen and Patrick. The supposed owners appeared in highly convincing AI-generated photos, while customers later discovered their purchases were shipped from China.

Victims described feeling cheated after receiving poor-quality jewellery and clothes that bore no resemblance to the advertised items. More than 500 complaints on Trustpilot accuse such companies of exploiting fabricated stories to appear authentic.

Consumer experts at Which? warn that AI tools now enable scammers to create fake brands at an unprecedented scale. The ASA has called on social media platforms to act, as many victims were targeted through Facebook ads.

AI tools reshape how Gen Z approaches buying cars

Gen Z drivers are increasingly turning to AI tools to help them decide which car to buy. A new Motor Ombudsman survey of 1,100 UK drivers finds that over one in four Gen Z drivers would rely on AI guidance when purchasing a vehicle, compared with 12% of Gen X drivers and just 6% of Baby Boomers.

Younger drivers view AI as a neutral and judgment-free resource. Nearly two-thirds say it helps them make better decisions, while over half appreciate the ability to ask unlimited questions. Many see AI as a fast and convenient way to access information during car-buying.

Three-quarters of Gen Z respondents believe AI could help them estimate price ranges, while 60% think it would improve their haggling skills. Around four in ten say it would help them assess affordability and running costs, a sentiment less common among Millennials and Gen Xers.

Confidence levels also vary across generations. About 86% of Gen Z and 87% of Millennials say they would feel more assured if they used AI before making a purchase, compared with 39% of Gen Xers and 40% of Boomers, many of whom remain indifferent to its influence.

Almost half of drivers say they would take AI-generated information at face value. Gen Z is the most trusting, while older generations remain cautious. The Motor Ombudsman urges buyers to treat AI as a complement to trusted research and retailer checks.

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.
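The differential-testing idea mentioned above can be sketched simply: run the original and the patched versions of a routine on the same inputs and flag any behavioural divergence, so a patch can be checked for unintended regressions. The sketch below is a minimal illustration of that general technique, not CodeMender's actual implementation or API; all function names are hypothetical.

```python
def original_parse(s: str) -> int:
    # Hypothetical vulnerable routine: crashes on blank input.
    return int(s)

def patched_parse(s: str) -> int:
    # Hypothetical patched routine: handles blank input gracefully.
    if not s.strip():
        return 0
    return int(s)

def differential_test(f, g, inputs):
    """Run both implementations on each input and collect divergences.

    Exceptions are normalised to ("error", <exception name>) so that a
    crash in one version and a value in the other counts as a divergence.
    """
    divergences = []
    for x in inputs:
        try:
            a = f(x)
        except Exception as e:
            a = ("error", type(e).__name__)
        try:
            b = g(x)
        except Exception as e:
            b = ("error", type(e).__name__)
        if a != b:
            divergences.append((x, a, b))
    return divergences

# Numeric strings should behave identically; only the blank inputs
# (the deliberately fixed cases) should diverge.
inputs = [str(n) for n in range(100)] + ["", "  "]
diffs = differential_test(original_parse, patched_parse, inputs)
print(diffs)
```

In a real pipeline the inputs would come from a fuzzer rather than a fixed list, and an expected divergence (the fixed crash) would be distinguished from an unexpected one (a regression).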

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

EU digital laws simplified by CEPS Task Force to boost innovation

The Centre for European Policy Studies (CEPS) Task Force, titled ‘Next Steps for EU Law and Regulation for the Digital World’, aims to refine and simplify the EU’s digital rulebook.

This rulebook now covers key legislation, including the Digital Markets Act (DMA), Digital Services Act (DSA), GDPR, Data Act, AI Act, Data Governance Act (DGA), and Cyber Resilience Act (CRA).

While these laws position Europe as a global leader in digital regulation, they also create complexity, overlaps, and legal uncertainty.

The Task Force focuses on enhancing coherence, efficiency, and consistency across digital acts while maintaining strong protections for consumers and businesses.

The CEPS Task Force emphasises targeted reforms to reduce compliance burdens, especially for SMEs, and strengthen safeguards.

It also promotes procedural improvements, including robust impact assessments, independent ex-post evaluations, and the adoption of RegTech solutions to streamline compliance and make regulation more adaptive.

Between November 2025 and January 2026, the Task Force will hold four workshops addressing: alignment of the DMA with competition law, fine-tuning the DSA, improving data governance, enhancing GDPR trust, and ensuring AI Act coherence.

The findings will be published in a Final Report in March 2026, outlining a simpler, more agile EU digital regulatory framework that fosters innovation, reduces regulatory burdens, and upholds Europe’s values.

Brazil advances first national cybersecurity law

Brazil is preparing to pass its first national cybersecurity law, aiming to centralise oversight and strengthen protection for citizens and companies. The Cybersecurity Legal Framework would establish a new National Cybersecurity Authority to coordinate defence efforts across government and industry.

The legislation comes after a series of high-profile cyberattacks disrupted hospitals and exposed millions of personal records, highlighting gaps in Brazil’s digital defences. The authority would create nationwide standards, replacing fragmented rules currently managed by individual ministries and agencies.

Under the bill, public procurement will require compliance with official security standards, and suppliers will share responsibility for incidents. Companies meeting the rules could be listed as trusted providers, potentially boosting competitiveness in both public and private sectors.

The framework also includes incentives: financing through the National Public Security Fund and priority for locally developed technologies. While the bill still awaits approval in Congress, its adoption would make Brazil one of Latin America’s first countries with a comprehensive cybersecurity law.

Deloitte’s AI blunder: A costly lesson in consultancy business

Deloitte has agreed to refund the Australian government the full $440,000 fee after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes attributed to a Federal Court case on the Robodebt scheme and fictitious academic references.

The incident underscores the challenges of deploying AI in critical government consultancy work without sufficient human oversight, and it raises questions about the credibility of policy decisions influenced by flawed reports.

In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.

The situation has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis, and it is reviewing oversight mechanisms to prevent a recurrence. The report’s inaccuracies had previously shaped discussions on welfare compliance, shaking public trust in the consultancy services used for critical policymaking.

The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.
