EU pressures Meta over alleged smart glasses privacy breaches

Lawmakers in the European Parliament are pressing the European Commission for clarity after reports that Meta’s smart glasses recorded people in intimate moments without their knowledge.

Concerns intensified when Swedish outlets reported that Ray-Ban AI glasses captured and uploaded sensitive footage in violation of strict consent requirements under the EU’s General Data Protection Regulation.

The reports indicate that personal data from EU users was sent to Sama, a third-party contractor, in Kenya for human review. Annotators working there said they viewed images of individuals changing clothes and believed the recordings were taken without consent.

They added that Meta’s attempts to blur faces or apply other safeguards failed often enough to expose identifiable material instead of ensuring proper anonymisation.

EU privacy law requires clear information and consent before collecting and processing personal data, and additional safeguards when exporting data to countries without recognised adequacy status.

Kenya is still negotiating such recognition with the Commission, meaning contractual protections would be necessary.

The Irish Data Protection Commission, responsible for Meta’s GDPR oversight, has been contacted amid questions about whether Meta complied with EU requirements.

Lawmakers also want the Commission to examine whether proposed changes in the Digital Omnibus package could dilute privacy protections rather than strengthen them.

Critics argue the reforms might ease data-use rules for AI training at a moment when allegations about Meta’s smart glasses have intensified scrutiny of the EU’s broader digital policy agenda.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK launches consultation on possible social media ban for under-16s

Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.

Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.

The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.

Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.


AI helps scientists translate thoughts into speech and images

Breakthroughs in AI and neuroscience are bringing researchers closer to translating human thoughts into words, offering new communication tools for people living with paralysis or severe speech disorders. Experiments with implanted brain electrodes have enabled patients to produce sentences simply by imagining speech.

Machine learning systems analyse neural signals captured from small electrode arrays placed in speech-related brain regions, converting activity into text at increasing speed and accuracy. Recent trials achieved communication rates approaching practical conversation while also capturing tone, rhythm and emotional expression.
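To make the decoding idea concrete, here is a toy, entirely hypothetical sketch: synthetic 'electrode' readings are matched to the nearest phoneme template. Real systems use far more channels and learned neural-network decoders; every number here is invented for illustration.

```python
# Toy illustration only: synthetic "electrode" readings matched to the
# nearest phoneme template. Real decoders use hundreds of channels and
# learned models; everything here is made up for the sketch.
import random

random.seed(0)

# Pretend each phoneme produces a characteristic mean firing rate
# on each of four electrode channels.
TEMPLATES = {"a": [5, 1, 1, 2], "b": [1, 5, 2, 1], "k": [2, 1, 5, 5]}

def record(phoneme):
    """Simulate one noisy electrode reading for a phoneme."""
    return [m + random.gauss(0, 0.5) for m in TEMPLATES[phoneme]]

def decode(signal):
    """Pick the phoneme whose template is closest (least squares) to the signal."""
    return min(TEMPLATES, key=lambda p: sum((s - t) ** 2
                                            for s, t in zip(signal, TEMPLATES[p])))

decoded = [decode(record(p)) for p in "abkka"]
```

With well-separated templates and modest noise, the nearest-template rule recovers the intended sequence; the engineering challenge in real trials is that genuine neural signals are far noisier and far higher-dimensional.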

Scientists have begun detecting ‘inner speech’, identifying silent counting or imagined phrases without physical attempts to speak. Findings suggest thinking and speaking rely on overlapping neural networks, although spontaneous thoughts remain difficult to decode reliably.

Beyond language, researchers are reconstructing images, music and sensory experiences from brain scans using generative AI models. Studies analysing visual and auditory processing reveal how different brain regions encode perception, opening possibilities for studying hallucinations, dreams and animal cognition.

Technology companies, including Neuralink, are pushing brain-computer interfaces toward commercial use, though current systems sample only a tiny fraction of the brain’s billions of neurons. Experts believe widespread applications such as natural speech restoration or even brain-to-brain communication may emerge within the next two decades, alongside growing ethical debates around privacy and mental autonomy.


Does politeness improve AI responses?

Research suggests that being polite to AI chatbots such as ChatGPT does not reliably improve accuracy, despite widespread belief to the contrary. Experiments testing flattery, encouragement and even insults found inconsistent results across different large language models.

Experts in the US say many prompt-engineering myths have faded as AI systems have improved. Minor wording changes, such as adding ‘please’ or ‘thank you’, are unlikely to influence mainstream generative AI tools consistently.

Computer scientists argue that users should treat AI as a tool rather than a person. Techniques that do work include asking for multiple options, providing concrete examples and requesting step-by-step clarification before generating a final response.
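As a hypothetical illustration (not any chatbot's official API), the techniques the researchers recommend can be baked into a reusable prompt template rather than relying on politeness:

```python
# Hypothetical sketch: a prompt builder applying the techniques the
# article says do work -- multiple options, concrete examples, and
# step-by-step reasoning before the final answer.

def build_prompt(task: str, examples: list[str], n_options: int = 3) -> str:
    """Assemble a prompt that asks for several options, supplies concrete
    examples, and requests step-by-step reasoning first."""
    lines = [
        f"Task: {task}",
        f"Give me {n_options} distinct options.",
        "Work through the problem step by step before the final answer.",
    ]
    for i, example in enumerate(examples, 1):
        lines.append(f"Concrete example {i}: {example}")
    return "\n".join(lines)

prompt = build_prompt(
    "Summarise this privacy policy for a general audience",
    ["Keep it under 100 words", "Flag any data-sharing clauses"],
)
```

The point is that the useful levers are structural (options, examples, explicit reasoning requests), not social niceties appended to the text.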

Researchers also warn that role playing can reduce accuracy when a question has one correct answer, potentially increasing hallucinations. For creative tasks, however, role play and iterative questioning can still be effective when used carefully.


Samsung settles Texas lawsuit over smart TV data collection

Samsung has settled a lawsuit with the Texas Attorney General over allegations that its smart TVs collected viewing data without users’ informed consent.

Texas Attorney General Ken Paxton filed the suit last December, accusing Samsung of using Automated Content Recognition (ACR) technology to capture screenshots of what consumers were watching and using that information for targeted advertising.

As part of the settlement, Samsung must halt any collection or processing of ACR viewing data without first obtaining the express consent of Texas consumers.

The company is also required to update its smart TVs with clear, conspicuous disclosure and consent screens, replacing what a court had previously identified as ‘dark patterns’ requiring over 200 clicks to access privacy settings.

Samsung stated that it does not believe its Viewing Information Services system violated any regulations, but agreed to strengthen its privacy disclosures. Paxton noted that other smart TV manufacturers, including Sony, LG, Hisense, and TCL Technologies, have not yet made similar changes in response to ongoing lawsuits.


Claws become the new trend in local agentic AI

A new expression has entered the AI vocabulary, with ‘claws’ becoming the latest term to capture the industry’s imagination.

The term refers to a growing family of open-source personal assistants designed to run locally on consumer hardware, often on Apple’s compact Mac mini rather than on cloud-based servers.

These assistants can access calendars, email accounts, coding tools, browsers and external model APIs, enabling them to carry out complex digital tasks autonomously.

Interest increased after AI researcher Andrej Karpathy described his experiments with claws, prompting broader attention across online communities.

Many users have begun adopting the tools as lightweight agentic systems capable of handling real work, from scheduling meetings to writing software overnight by linking to models from providers such as OpenAI.

The name originated with Clawdbot, which was recently rebranded as OpenClaw and became a prominent example in Silicon Valley.

A wave of variants, including NanoClaw, ZeroClaw and IronClaw, has followed, marking a surge in locally run assistants that appeal to users seeking greater autonomy, privacy and experimentation.

Growing enthusiasm for claws highlights a wider shift towards agentic AI running directly on personal devices.

Whether these systems become mainstream or remain a niche developer trend, they show how quickly the AI landscape can evolve and how new concepts often spread long before they fully mature.


Quantum-safe security upgrades SIM and eSIM cards

Thales has successfully demonstrated a world-first capability that prepares 5G networks for the era of quantum computing. The test proved that SIM and eSIM cards can be remotely upgraded to support post-quantum cryptography, boosting security without disrupting services or user experience.

The breakthrough highlights the potential of crypto-agile networks to evolve securely as quantum threats emerge.

Replacing millions of devices is impractical, so Thales enables operators to deploy quantum-safe algorithms directly to existing devices. Remote upgrades preserve data and connectivity while instantly boosting security, keeping 5G networks resilient and trusted.

The demonstration reinforces Thales’ leadership in post-quantum cryptography, with dedicated research teams developing quantum-resistant methods and contributing to international standards, including NIST initiatives.

Operators can now protect long-term investments, secure critical services, and prepare for the next generation of quantum computing without operational disruptions.

Thales’ approach offers a practical roadmap for telecoms to adopt quantum-safe security today, ensuring continuity, trust, and resilience across mobile networks as digital threats evolve.


Qualcomm unveils AI-focused wearable chip

Qualcomm has unveiled its Snapdragon Wear Elite chip at MWC 2026 in Barcelona, positioning it for a new wave of AI-driven wearable devices. The company said the processor is aimed at pins, pendants, and potentially display-free smart glasses.

Built on a 3nm process, the chip includes both an eNPU for low-power AI tasks and a Hexagon NPU for heavier on-device processing. Qualcomm said the platform can handle up to two billion parameters locally, supporting more advanced AI features without relying on the cloud.
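For context, a back-of-the-envelope calculation (our own arithmetic, not a Qualcomm figure) of the memory needed just to hold two billion parameters at common precisions:

```python
# Rough sizing sketch: bytes needed to store a 2-billion-parameter model
# at typical precisions. Our arithmetic only, not vendor data.

PARAMS = 2_000_000_000

def model_size_gb(params: int, bytes_per_param: float) -> float:
    """Raw weight storage in GiB for a given per-parameter precision."""
    return params * bytes_per_param / 1024**3

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: {model_size_gb(PARAMS, bpp):.2f} GiB")
```

Even at 4-bit quantisation the weights alone approach a gibibyte, which is why an on-device parameter ceiling like this is a meaningful specification for memory-constrained wearables.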

The Snapdragon Wear Elite is designed to sit alongside the existing W5 Plus rather than replace it. Qualcomm added that the chip improves power efficiency, with GPS tracking using 40 per cent less power and fast charging that delivers around 50 per cent of battery capacity in 10 minutes.

Connectivity features include satellite support, 5G, ultra wideband and Bluetooth 6.0. Qualcomm signalled that longer battery life and on-device AI performance will be central to the next generation of wearable AI gadgets.


Central bank in Russia cracks down on crypto-enabled pyramid schemes

Russia’s central bank reports that two-thirds of pyramid scheme operators use crypto, with funds sent to over 4,600 fraudster-controlled wallets in 2025. Authorities identified 7,087 online scams last year, most of which used crypto and money mules to collect illicit funds.

Officials highlighted that these schemes typically operate without physical offices, engaging victims via social media, chat apps, and phone calls. Nearly 1,500 firms offered fake crypto investments, and 84% of scammers used cryptocurrency to raise funds, up from 77% in 2024.

The central bank has blocked 21,500 web pages and social media posts linked to fraudulent operators.

The government is fast-tracking regulations, warning that only licensed firms can offer investments to Russian retail investors. Authorities plan to continue monitoring sophisticated online schemes and enhance public awareness to combat crypto-enabled fraud.

Crypto markets remain active, with Bitcoin trading at $66,566, up 3.8%, and Ethereum at $1,990, up more than 6% in the past 24 hours.


Finance ministry in South Korea pledges reform for public crypto management

South Korea’s finance minister, Koo Yun-cheol, has pledged urgent reforms to how government agencies manage digital assets following high-profile failures in state custody.

Recent incidents revealed that police and tax authorities mishandled seized cryptocurrency, highlighting weaknesses in oversight and security practices. Authorities will review current management methods and implement measures to prevent future losses.

Operational risks around securing crypto in public institutions have become increasingly apparent. A notable case involved Seoul police in Gangnam losing access to 22 BTC, worth around $1.4 million, after failing to retain private keys and allowing a third-party firm to manage the assets.

Prosecutors are now investigating potential bribery linked to the case.

The government says it holds only digital assets acquired through lawful enforcement, such as seizures for unpaid taxes or criminal cases. The reforms aim to strengthen security, improve operational controls, and restore confidence in the public sector’s handling of crypto amid growing scrutiny.
