Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with TikTok-style short-form video feeds.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Unapproved AI tools boom in UK workplaces

Microsoft research reveals that 71% of UK employees use unapproved AI tools at work, with 51% doing so weekly. Organisations face heightened data privacy and cybersecurity risks as sensitive information enters unregulated platforms.

Despite these dangers, awareness remains low, as only 32% express concern over data privacy and 29% over IT system vulnerabilities.

Workers favour Shadow AI for its simplicity, with 41% citing familiarity from personal use and 28% noting the absence of approved alternatives at their firms. Common applications include drafting communications (49%), creating reports or presentations (40%), and handling finance tasks (22%).

Generative AI assistants now permeate the workforce, saving an average of 7.75 hours weekly per user, equivalent to 12.1 billion hours annually across the economy, valued at £208 billion.
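As a rough consistency check on those headline figures, the back-of-envelope sketch below works out what they imply. The 52-week year, linear scaling, and the derived workforce size and per-hour value are inferences for illustration, not numbers from the Microsoft report.

```python
# Back-of-envelope check of the reported time-savings figures.
# Assumption (not from the report): savings accrue evenly over a 52-week year.
hours_per_user_per_week = 7.75
weeks_per_year = 52

hours_per_user_per_year = hours_per_user_per_week * weeks_per_year  # ~403 hours

total_hours_saved = 12.1e9    # reported annual hours saved across the economy
total_value_gbp = 208e9       # reported value of that time

implied_users = total_hours_saved / hours_per_user_per_year     # ~30 million people
implied_value_per_hour = total_value_gbp / total_hours_saved    # ~£17 per hour

print(f"Implied user base: {implied_users / 1e6:.1f} million")
print(f"Implied value per saved hour: £{implied_value_per_hour:.2f}")
```

The implied figures, roughly 30 million users and about £17 per saved hour, are broadly in line with the size of the UK workforce and typical hourly pay, which suggests the economy-wide totals are a straightforward linear extrapolation of the per-user estimate.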

Sector leaders in IT, telecoms, sales, media, marketing, architecture, engineering, and finance report the highest adoption rates. Employees plan to redirect saved time towards better work-life balance (37%), skill development (31%), and more fulfilling tasks (28%).

Darren Hardman, CEO of Microsoft UK and Ireland, urges businesses to prioritise enterprise-grade tools that blend productivity with robust safeguards.

Optimism about AI has climbed, with 57% of staff feeling excited or confident, up from 34% in January 2025. Familiarity grows too, as confusion over starting points drops from 44% to 36%, and clarity on organisational AI strategies rises from 24% to 43%.

Frontier firms leading in adoption report that their employees are twice as likely to be thriving, aligning with global trends in which 82% of leaders deem 2025 a pivotal year for AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Study links higher screen time to weaker learning results in children

A study by researchers from Toronto’s Hospital for Sick Children and St. Michael’s Hospital has found a correlation between increased screen time before age eight and lower scores in reading and mathematics.

Published in the Journal of the American Medical Association, the study followed over 3,000 Ontario children from 2008 to 2023, comparing reported screen use with their EQAO standardised test results.

Lead author Dr Catherine Birken said each additional hour of daily screen use was associated with about a 10 per cent lower likelihood of meeting provincial standards in reading and maths.
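The study’s statistical model is not described here, so the snippet below is purely a hedged illustration of what a roughly 10 per cent per-hour association would imply if the effect compounded across additional hours; the compounding assumption is ours, not the researchers’.

```python
# Illustrative only: assumes the ~10% per-hour association compounds multiplicatively,
# which the study summary above does not confirm.
per_hour_factor = 0.90  # ~10% lower likelihood per additional daily hour of screen time

for extra_hours in (1, 2, 3, 4):
    relative_likelihood = per_hour_factor ** extra_hours
    print(f"{extra_hours} extra hour(s): ~{relative_likelihood:.0%} of the baseline likelihood")
```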

The research did not distinguish between different types of screen activity and relied on parental reports of screen use; as an observational study, it shows association rather than causation.

Experts suggest the findings align with previous research showing that extensive screen exposure can affect focus and reduce time spent on beneficial activities such as face-to-face interaction or outdoor play.

Dr Sachin Maharaj from the University of Ottawa noted that screens may condition children’s attention spans in ways that make sustained learning more difficult.

While some parents, such as Surrey’s Anne Whitmore, impose limits to balance digital exposure and development, Birken stressed that the study was not intended to assign blame.

She said encouraging balanced screen habits should be a shared effort among parents, educators and health professionals, with an emphasis on quality content and co-viewing as recommended by the Canadian Paediatric Society.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google cautions Australia on youth social media ban proposal

US tech giant Google, which also owns YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia and its contribution of over $53 billion to the economy in 2024, noting that YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands safeguards economic security through Nexperia intervention

The Dutch Minister of Economic Affairs has invoked the Goods Availability Act in response to serious governance issues at semiconductor manufacturer Nexperia.

The measure, announced on 30 September 2025, seeks to ensure the continued availability of the company’s products in the event of an emergency. Nexperia, headquartered in Nijmegen, will be allowed to maintain its normal production activities.

The decision follows recent indications of significant management deficiencies and actions within Nexperia that could jeopardise the safeguarding of vital technological knowledge and capacity in the Netherlands and across Europe.

Authorities view these capabilities as essential for economic security, as Nexperia supplies chips to the automotive and consumer electronics industries.

Under the order, the Minister of Economic Affairs may block or reverse company decisions considered harmful to Nexperia’s long-term stability or to the preservation of Europe’s semiconductor value chain.

The Dutch government described the use of the Goods Availability Act as exceptional, citing the urgency and scale of the governance concerns.

Officials emphasised that the action applies only to Nexperia and does not target other companies, sectors, or countries. The decision may be contested through the courts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Security experts warn of fake VPN apps linked to banking malware

Security researchers have issued urgent warnings about VPN applications that appear legitimate but secretly distribute banking trojans such as Klopatra, hidden inside lures like the Mobdro Pro IPTV + VPN app.

The apps masquerade as trustworthy privacy tools, but once installed they can steal credentials, exfiltrate data or give attackers backdoor access to devices. Victims may initially notice nothing amiss.

Among the apps flagged, some were available on major app platforms, increasing the risk exposure. Analysts recommend users immediately uninstall any unfamiliar VPN apps, scan devices with a reputable security tool and change banking passwords if suspicious activity is detected.

Developers and platform operators are urged to strengthen vetting of privacy tool submissions. Because VPN apps are inherently powerful, encrypting traffic and handling all of a device’s network connections, any malicious behaviour can escalate rapidly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft restores services after major outage

Microsoft users around the world faced major disruptions on Thursday after a network configuration error caused a temporary outage across Microsoft 365, Teams, Outlook and Azure. The issue interrupted access to core productivity tools in the middle of the US workday.

The misconfiguration affected data routing in the US but also caused interruptions in Europe, Africa and the Middle East. Microsoft said traffic rebalancing restored normal service after several hours of monitoring.

The outage briefly left businesses without access to Word, Excel, PowerPoint and OneDrive, creating frustration among workers reliant on Microsoft’s cloud ecosystem. Analysts noted the incident was minor compared with the widespread 2024 outage linked to CrowdStrike software.

By Thursday evening, Microsoft confirmed that all affected systems were stable and that a review was underway to prevent recurrence. The company said it remains committed to improving reliability across its global network infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Fake VPN app drains bank accounts across Europe

Cybersecurity experts are urging Android users to uninstall a fake VPN app capable of stealing banking details and draining accounts. The malware, hidden inside a Mobdro Pro IPTV + VPN app, has already infected more than 3,000 devices across Europe.

The app promises free access to films and live sports, but installs Klopatra, a sophisticated strain of malware designed to gain complete control of a device. Once downloaded, it tricks users into granting access through Android’s Accessibility Services, enabling attackers to read screens and perform actions remotely.

Researchers at Cleafy, the firm that uncovered the operation, said attackers can use the permissions to operate phones as if they were the real owners. The firm believes the campaign originated in Turkey and estimates that around 1,000 people have fallen victim to the scam.

Cybersecurity analysts stress that the attack represents a growing trend in banking malware, where accessibility features are exploited to bypass traditional defences and gain near-total control of infected devices.
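As a practical illustration of the defensive angle, the sketch below, an assumed workflow rather than guidance from Cleafy, uses Android’s adb tool to list which apps currently hold accessibility-service access on a connected device, so that an unfamiliar sideloaded ‘IPTV + VPN’ entry would stand out.

```python
# Minimal sketch: audit which apps are enabled as Android accessibility services.
# Requires adb on PATH and a device connected with USB debugging enabled.
import subprocess

def enabled_accessibility_services() -> list[str]:
    """Return component names stored in the enabled_accessibility_services setting."""
    result = subprocess.run(
        ["adb", "shell", "settings", "get", "secure", "enabled_accessibility_services"],
        capture_output=True, text=True, check=True,
    )
    raw = result.stdout.strip()
    # The setting is a colon-separated list such as
    # "com.example.app/com.example.app.MyAccessibilityService", or "null" when empty.
    return [entry for entry in raw.split(":") if entry and entry != "null"]

if __name__ == "__main__":
    services = enabled_accessibility_services()
    if not services:
        print("No accessibility services are currently enabled.")
    for service in services:
        package = service.split("/")[0]
        print(f"Accessibility access granted to: {package}")
```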

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. The chatbot itself added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.
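Neither Musk nor xAI has detailed how the detection would work. Purely as a hedged illustration, one family of techniques from the deepfake-detection literature examines how spectral energy is distributed in a frame, since generated imagery often shows atypical high-frequency behaviour; the metric and threshold below are illustrative assumptions, not xAI’s method.

```python
# Toy illustration of frequency-domain artefact analysis on a single greyscale frame.
# NOT xAI's approach: the metric and the 0.05 cut-off are arbitrary illustrative choices.
import numpy as np

def high_frequency_ratio(frame: np.ndarray) -> float:
    """Share of spectral energy outside the central (low-frequency) half of the spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame.astype(float))))
    h, w = spectrum.shape
    low = spectrum[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    return 1.0 - low / spectrum.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A smooth synthetic frame stands in for real footage in this demo.
    frame = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)
    ratio = high_frequency_ratio(frame)
    print(f"High-frequency energy share: {ratio:.3f}",
          "(flagged as unusual)" if ratio < 0.05 else "")
```

In practice, detectors of this kind are trained on large labelled datasets rather than relying on a single hand-picked statistic, and would be combined with the provenance checks described above.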

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and to reduce their exposure to harmful elements such as addictive design and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!