YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over data uploads. Critics fear YouTube’s tool could invite hackers. Past scandals over AI-generated content have already hurt creator trust.

Users on X have dubbed it a ‘digital ID dragnet’. Many are switching platforms or tweaking their content to avoid being flagged. WebProNews reports that creators are demanding opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta leads booming AI smart glasses market in first half of 2025

According to Counterpoint Research, global shipments of smart glasses more than doubled in the first half of 2025, fuelled by soaring demand for AI-powered models.

AI-powered glasses accounted for 78% of shipments, outpacing basic audio-enabled smart frames.

Meta led the market with over 73% share, driven primarily by the success of its Ray-Ban AI glasses. Rising competition came from Chinese firms including Huawei, RayNeo and Xiaomi, with Xiaomi emerging as a surprise contender with its new AI glasses.

Analysts attribute the surge to growing consumer interest in AI-integrated wearable tech, with Meta and Xiaomi’s latest releases generating strong sales momentum.

Competition is expected to intensify as companies such as Alibaba and ByteDance enter the space in the second half of the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Staff welcome AI but call for clear boundaries

New research shows that most workers are open to using AI tools at work, but resist the idea of being managed by them. Workers are far more positive about AI recommending skills or collaborating alongside them.

The Workday study found that while 82% of organisations are expanding AI agent use, only 30% of employees feel comfortable being overseen by such systems.

Nine in ten respondents believe AI can boost productivity, yet nearly half fear it could erode critical thinking and add to workloads. Trust in the technology grows with experience, with 95% of regular users expressing confidence compared with 36% of those new to AI.

Sensitive functions such as hiring, finance, and legal work remain areas where human oversight is preferred. Many see AI as a partner that complements judgement and empathy rather than replacing them entirely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Santander expands AI-first strategy with OpenAI

Santander is accelerating its AI-first transformation through a new partnership with OpenAI, aiming to embed intelligent technology into every part of the bank.

Over the past two months, ChatGPT Enterprise has been rolled out to nearly 15,000 employees across Europe and the Americas, with plans to double that number by year-end. The move forms part of a broader ambition to become an AI-native institution where all decisions and processes are data-driven.

The bank plans to roll out a mandatory AI training programme for all staff from 2026, focused on responsible use, and expects to scale agentic AI to enable fully conversational banking.

Santander says its AI initiatives saved over €200 million last year. In Spain alone, speech analytics now handles 10 million calls annually, automatically updating CRM records and freeing more than 100,000 work hours. Developer productivity has risen by up to 30% on some tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New study accuses AI browsers of harvesting sensitive data

A new study from researchers in the UK and Italy found that popular AI-powered browsers collect and share sensitive personal data, often in ways that may breach privacy laws.

The team tested ten well-known AI assistants, including ChatGPT, Microsoft’s Copilot, Merlin AI, Sider, and TinaMind, using public websites and private portals like health and banking services.

All but Perplexity AI showed evidence of gathering private details, from medical records to social security numbers, and transmitting them to external servers.

The investigation revealed that some tools continued tracking user activity even during private browsing, sending full web page content, including confidential information, to their systems.

Sometimes, prompts and identifying details, like IP addresses, were shared with analytics platforms, enabling potential cross-site tracking and targeted advertising.

Researchers also found that some assistants profiled users by age, gender, income, and interests, tailoring their responses across multiple sessions.

According to the report, such practices likely violate American health privacy laws and the European Union’s General Data Protection Regulation.

Privacy policies for some AI browsers admit to collecting names, contact information, payment data, and more, and sometimes storing information outside the EU.

The study warns that users cannot be sure how their browsing data is handled once gathered, raising concerns about transparency and accountability in AI-enhanced browsing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West Midlands to train 2.3 million adults in AI skills

All adults in the West Midlands will be offered free training on using AI in daily life, work and community activities. Mayor Richard Parker confirmed the £10m initiative, designed to reach 2.3 million residents, as part of a wider £30m skills package.

A newly created AI Academy will lead the programme, working with tech companies, education providers and community groups. The aim is to equip people with everyday AI know-how and the advanced skills needed for digital and data-driven jobs.

Parker said AI should become as fundamental as English or maths and warned that failure to prioritise training would risk deepening a skills divide. The programme will sit alongside other £10m projects focused on bespoke business training and a more inclusive skills system.

The West Midlands Combined Authority (WMCA), established in 2016, covers Birmingham, Coventry, Wolverhampton and 14 other local authority areas in the UK. Officials say the AI drive is central to the region’s Growth Plan and its ambition to become the UK’s leading hub for AI skills.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eBay uses AI to attract more marketplace sellers

eBay is introducing a new AI-powered feature to help sellers respond to buyer questions, continuing its AI strategy to streamline selling. More than 10 million sellers have used these tools to create over 200 million listings, with about 500,000 AI-assisted listings generated daily.

The company has launched several AI tools over the past two years, including generative video, listing assistants, bulk upload features and photo background enhancements.

Executives see AI as a way to increase seller retention, expand inventory, and drive buyer traffic, particularly in a competitive market where Amazon, Etsy, and Poshmark offer similar capabilities.

While adoption is optional, eBay tests features with its seller community, making adjustments based on feedback to ensure tone and presentation feel authentic. The company views AI as essential to maintaining its place at the forefront of online marketplaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rollout of GPT-5 proves bumpier than expected

OpenAI’s highly anticipated GPT-5 has had a rough debut, with users reporting that it felt less capable than its predecessor, GPT-4o.

The culprit? A malfunctioning real-time router that failed to select the most appropriate model for user queries.

In response, Sam Altman acknowledged the issue and assured users that GPT-5 would ‘seem smarter starting today’.

To ease the transition, OpenAI is restoring access to GPT-4o for Plus subscribers and doubling rate limits to encourage experimentation and feedback gathering.

Beyond technical fixes, the incident has sparked broader debate within the AI community about balancing innovation with emotional resonance. Some users lament GPT-5’s colder tone and tighter alignment, even as developers strive for safer, more responsible AI behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk threatens legal action against Apple over AI app rankings

Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.

Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.

Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.

The dispute highlights growing tensions as AI companies compete for prominence on major platforms.

Apple and Musk’s xAI have not yet responded to requests for comment.

The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Small language models gain ground in AI translation

Small language models are emerging as a serious challenger to large, general-purpose AI in translation, offering faster turnaround, lower costs, and greater accuracy for specific industries and language pairs.

Straker, an ASX-listed language technology firm, claims its Tiri model family can outperform larger systems by focusing on domain-specific understanding and terminology rather than broad coverage.

Tiri delivers higher contextual accuracy by training on carefully curated translation memories and sector-specific data, reducing the need for expensive human post-editing. The models also consume less computing power, benefiting the finance, healthcare and legal industries.

Straker integrates human feedback directly into its workflows to ensure ongoing improvements and maintain client trust.

The company is expanding its technology into enterprise automation by integrating with the AI workflow platform n8n.

It adds Straker’s Verify tool to a network of over 230,000 users, allowing automated translation checks, real-time quality scores, and seamless escalation to human linguists. Further integrations with platforms like Microsoft Teams are planned.

Straker recently reported record profitability and secured a price target upgrade from broker Ord Minnett. The firm believes the future of AI translation lies not in scale but in specialised models that deliver translations that are both fluent and accurate in context.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!