Amelia brings heads-up guidance to Amazon couriers

Amazon has unveiled 'Amelia', AI-powered smart glasses for delivery drivers with a built-in display and camera, paired with a vest featuring a photo button. The glasses are now being piloted with hundreds of drivers across more than a dozen delivery partners.

Designed for last-mile efficiency, Amelia automatically shuts down when the vehicle starts moving to prevent distraction, includes a hardware kill switch for the camera and microphone, and aims to save about 30 minutes per 8–10-hour shift by streamlining repetitive tasks.

Initial availability is planned for the US market and the rest of North America before global expansion, with Amazon emphasizing that Amelia is custom-built for drivers, though consumer versions aren’t ruled out. Pilots involve real routes and live deliveries to customers.

Amazon also showcased a warehouse robotic arm to sort parcels faster and more safely, as well as an AI orchestration system that ingests real-time and historical data to predict bottlenecks, propose fixes, and keep fulfillment operations running smoothly.

The move joins a broader push into wearables from Big Tech. Unlike Meta’s consumer-oriented Ray-Ban smart glasses, Amelia targets enterprise use, promising faster package location, fewer taps, and tighter integration with Amazon’s delivery workflow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba pushes unified AI with Quark Chat and wearables

Quark, Alibaba’s consumer AI app, has launched an AI Chat Assistant powered by Qwen3 models, merging real-time search with conversational reasoning so users can ask by text or voice, get answers, and trigger actions from a single interface.

On iOS and Android, users can tap 'assistant' in the AI Super Box or swipe right to open the chat, then use prompts to summarise pages, draft replies, or pull sources, with results easily shared to friends, Stories, or outside the app.

Beyond Q&A, the assistant adds deep search, photo-based problem-solving, and AI writing, while supporting multimodal tasks like photo editing, AI camera, and phone calls. Forthcoming MCP integrations will expand agent execution across Alibaba services.

Quark AI Glasses opened pre-sale in China on October 24 via Tmall with a list price of 4,699 RMB before coupons or memberships, deliveries starting in phases from December, and 1 RMB reservations available on JD.com and Douyin.

Powered by Qwen for hands-free assistance, translation, and meeting transcription, the glasses emphasise lightweight ergonomics, long battery life, and quality imaging, with bundles, accessories, and prescription lens options to broaden fit and daily use.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU warns Meta and TikTok over transparency failures

The European Commission has found that Meta and TikTok violated key transparency obligations under the EU’s Digital Services Act (DSA). According to preliminary findings, both companies failed to provide adequate data access to researchers studying public content on their platforms.

The Commission said Facebook, Instagram, and TikTok imposed ‘burdensome’ conditions that left researchers with incomplete or unreliable data, hampering efforts to investigate the spread of harmful or illegal content online.

Meta faces additional accusations of breaching the DSA's rules on user reporting and complaints. The Commission said the 'Notice and Action' systems on Facebook and Instagram were not user-friendly and contained 'dark patterns': manipulative design choices that discouraged users from reporting problematic content.

Moreover, Meta allegedly failed to give users sufficient explanations when their posts or accounts were removed, undermining transparency and accountability requirements set by the law.

Both companies have the opportunity to respond before the Commission issues final decisions. However, if the findings are confirmed, Meta and TikTok could face fines of up to 6% of their global annual revenue.

The EU executive also announced new rules, effective next week, that will expand data access for ‘vetted’ researchers, allowing them to study internal platform dynamics and better understand how large social media platforms shape online information flows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake targeting Irish presidential candidate sparks election integrity warning

Irish presidential candidate Catherine Connolly condemned a deepfake AI video that falsely announced her withdrawal from the race. The clip, designed to resemble an RTÉ News broadcast, spread online before being reported and removed from major social media platforms.

Connolly said the video was a disgraceful effort to mislead voters and damage democracy. Her campaign team filed a complaint with the Irish Electoral Commission and requested that all copies be clearly labelled as fake.

Experts at Dublin City University identified slight distortions in speech and lighting as signs of AI manipulation. They warned that the rapid spread of synthetic videos underscores weak content moderation by online platforms.

Connolly urged the public not to share the clip and to respond through civic participation. Authorities are monitoring digital interference as Ireland prepares for its presidential vote on Friday.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces EU’s toughest platform rules after 120 million users

OpenAI’s ChatGPT could soon face the EU’s strictest platform regulations under the Digital Services Act (DSA) after surpassing 120 million monthly users in Europe, a milestone that places the chatbot above the 45 million-user threshold that triggers heightened oversight.

The DSA imposes stricter obligations on major platforms such as Meta, TikTok, and Amazon, requiring greater transparency, risk assessments, and annual fees to fund EU supervision.

The European Commission confirmed it has begun assessing ChatGPT’s eligibility for ‘very large online platform’ status, which would bring the total number of regulated platforms to 26.

OpenAI reported that its ChatGPT search function alone had 120.4 million monthly active users across the EU in the six months ending 30 September 2025. Globally, the chatbot now counts around 700 million weekly users.

If designated under the DSA, ChatGPT would be required to curb illegal and harmful content more rigorously and demonstrate how its algorithms handle information, marking the EU’s most direct regulatory test yet for generative AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zuckerberg to testify in landmark trial over social media’s harm to youth

A US court has ruled that Mark Zuckerberg, CEO of Meta, must appear and testify in a high-stakes trial over social media’s effects on children and adolescents. The case, brought by parents and school districts, alleges that platforms contributed to mental health harms by deploying addictive algorithms and weak moderation in an effort to retain user engagement.

The plaintiffs argue that platforms including Facebook, Instagram, TikTok and Snapchat failed to protect young users, particularly through weak parental controls and design choices that encourage harmful usage patterns. They contend that the executives and companies neglected risks in favour of growth and profits.

Meta had argued that such platforms are shielded from liability under US federal law (Section 230) and that high-level executives should not be dragged into testimony. But the judge rejected those defences, saying that hearing directly from executives is integral to assessing accountability and proving claims of negligence.

Legal experts say the decision marks an inflection point: social media’s architecture and leadership may now be put under the microscope in ways previously reserved for sectors like tobacco and pharmaceuticals. The trial could set a precedent for how tech chief executives are held personally responsible for harms tied to platform design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU sets new rules for cloud sovereignty framework

The European Commission has launched its Cloud Sovereignty Framework to assess the independence of cloud services. The initiative defines clear criteria and scoring methods for evaluating how providers meet EU sovereignty standards.

Under the framework, the Sovereign European Assurance Level, or SEAL, will rank services by compliance. Assessments cover strategic, legal, operational, and technological aspects, aiming to strengthen data security and reduce reliance on foreign systems.

Officials say the framework will guide both public authorities and private companies in choosing secure cloud options. It also supports the EU’s broader goal of achieving technological autonomy and protecting sensitive information.

The Commission’s move follows growing concern over extra-EU data transfers and third-country surveillance. Industry observers view it as a significant step toward Europe’s ambition for trusted, sovereign digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU states split over children’s social media rules

European leaders remain divided over how to restrict children’s use of social media platforms. While most governments agree stronger protections are needed, there is no consensus on enforcement or age limits.

Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.

France and Denmark back full bans for children below 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.

Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!