Amelia brings heads-up guidance to Amazon couriers

Amazon has unveiled ‘Amelia’, AI-powered smart glasses for delivery drivers with a built-in display and camera, paired with a vest featuring a photo button. The system is now being piloted with hundreds of drivers across more than a dozen delivery partners.

Designed for last-mile efficiency, Amelia automatically shuts down when the vehicle starts moving to prevent distraction, includes a hardware kill switch for the camera and microphone, and aims to save about 30 minutes per 8–10-hour shift by streamlining repetitive tasks.

Initial availability is planned for the US and the rest of North America before a global expansion, with Amazon emphasising that Amelia is custom-built for drivers, though consumer versions aren’t ruled out. Pilots involve real routes and live deliveries to customers.

Amazon also showcased a warehouse robotic arm to sort parcels faster and more safely, as well as an AI orchestration system that ingests real-time and historical data to predict bottlenecks, propose fixes, and keep fulfillment operations running smoothly.

The move joins a broader push into wearables from Big Tech. Unlike Meta’s consumer-oriented Ray-Ban smart glasses, Amelia targets enterprise use, promising faster package location, fewer taps, and tighter integration with Amazon’s delivery workflow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba pushes unified AI with Quark Chat and wearables

Quark, Alibaba’s consumer AI app, has launched an AI Chat Assistant powered by Qwen3 models, merging real-time search with conversational reasoning so users can ask by text or voice, get answers, and trigger actions from a single interface.

On iOS and Android, users can tap ‘assistant’ in the AI Super Box or swipe right to open the chat, then use prompts to summarise pages, draft replies, or pull sources, with results easily shared to friends, Stories, or outside the app.

Beyond Q&A, the assistant adds deep search, photo-based problem-solving, and AI writing, while supporting multimodal tasks like photo editing, AI camera, and phone calls. Forthcoming MCP integrations will expand agent execution across Alibaba services.

Quark AI Glasses opened for pre-sale in China on October 24 via Tmall at a list price of 4,699 RMB before coupon or membership discounts, with deliveries starting in phases from December and 1 RMB reservations available on JD.com and Douyin.

Powered by Qwen for hands-free assistance, translation, and meeting transcription, the glasses emphasise lightweight ergonomics, long battery life, and quality imaging, with bundles, accessories, and prescription lens options to broaden fit and daily use.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines Japan’s AI Blueprint for inclusive economic growth

A new Japan Economic Blueprint released by OpenAI sets out how AI can power innovation, competitiveness, and long-term prosperity across the country. The plan estimates that AI could add more than ¥100 trillion to Japan’s economy and raise GDP by up to 16%.

Centred on inclusive access, infrastructure, and education, the Blueprint calls for equal AI opportunities for citizens and small businesses, national investment in semiconductors and renewable energy, and expanded lifelong learning to build an adaptive workforce.

AI is already reshaping Japanese industries from manufacturing and healthcare to education and public administration. Factories reduce inspection costs, schools use ChatGPT Edu for personalised teaching, and cities from Saitama to Fukuoka employ AI to enhance local services.

OpenAI suggests that Japan’s focus on ethical and human-centred innovation could make the country a model for responsible AI governance. By aligning digital and green priorities, the report envisions technology driving creativity, equality, and shared prosperity across generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces EU’s toughest platform rules after 120 million users

OpenAI’s ChatGPT could soon face the EU’s strictest platform regulations under the Digital Services Act (DSA), after surpassing 120 million monthly users in Europe.

The milestone places OpenAI’s chatbot well above the 45 million-user threshold that triggers heightened oversight.

The DSA imposes stricter obligations on major platforms such as Meta, TikTok, and Amazon, requiring greater transparency, risk assessments, and annual fees to fund EU supervision.

The European Commission confirmed it has begun assessing whether ChatGPT qualifies for ‘very large online platform’ status, a designation that would bring the total number of regulated platforms to 26.

OpenAI reported that its ChatGPT search function alone had 120.4 million monthly active users across the EU in the six months ending 30 September 2025. Globally, the chatbot now counts around 700 million weekly users.

If designated under the DSA, ChatGPT would be required to curb illegal and harmful content more rigorously and demonstrate how its algorithms handle information, marking the EU’s most direct regulatory test yet for generative AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia moves to classify crypto as marital property

A Russian lawmaker has proposed recognising crypto as marital property to clarify asset ownership in divorce cases. The bill, introduced by Igor Antropenko of the United Russia party, seeks to amend Articles 34 and 36 of the Family Code to classify crypto acquired during marriage as joint property.

Digital assets obtained before marriage or through gifts would remain individually owned.

The proposal aims to address what Antropenko described as ‘risks to property rights’ arising from the current legal ambiguity surrounding digital currencies. It has been sent to Prime Minister Mikhail Mishustin and Central Bank Chairwoman Elvira Nabiullina for review.

The explanatory note highlights the constitutional obligation to protect property rights and cites the growing use of crypto among Russian citizens for investment and savings.

Russia’s move mirrors South Korea’s approach, where courts already recognise cryptocurrencies as divisible marital assets. Under Article 839-2 of Korea’s Civil Act, spouses can request investigations into hidden crypto holdings and either liquidate or divide tokens directly.

Blockchain transparency has made digital asset tracking easier than tracing cash, closing loopholes in asset concealment during divorce.

The proposal comes as Russia’s crypto activity hit $376.3 billion between July 2024 and June 2025, overtaking all European markets. Growing use of DeFi, stablecoins, and plans for a national crypto bank show increasing state involvement in digital finance.

Legal recognition of crypto as property would bring family law in line with this broader regulatory shift.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto hiring snaps back as AI cools

Tech firms led crypto’s hiring rebound, adding over 12,000 roles since late 2022, according to A16z’s State of Crypto 2025. Finance and consulting contributed 6,000, offsetting talent pulled into AI after ChatGPT’s debut. On net, crypto gained 1,000 positions as workers rotated in from tech, fintech, and education.

The recovery tracks a market turn: crypto capitalisation topping US$4T and new Bitcoin highs. A friendlier US policy stance on stablecoins and digital-asset oversight buoyed sentiment. Institutions from JPMorgan to BlackRock and Fidelity widened offerings beyond pilots.

Hiring is diversifying beyond developers toward compliance, infrastructure, and product. Firms are moving from proofs of concept to production systems with clearer revenue paths. Result: broader role mix and steadier talent pipelines.

A16z contrasts AI centralisation with crypto’s open ethos. OpenAI and Anthropic dominate AI-native revenue; the big cloud providers hold most of the infrastructure share; NVIDIA leads in GPUs. Crypto advocates pitch blockchains as a counterweight via verifiable compute and open rails.

Utility signals are maturing, too. Stablecoins settled around US$9T in 12 months, up 87% year over year. That is over half of Visa’s annual volume and five times PayPal’s.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan to achieve full Internet access for all citizens by 2027

Kazakhstan aims to provide Internet access to its entire population by 2027 as part of the national ‘Affordable Internet’ project.

Deputy Prime Minister and Minister of AI and Digital Development Zhaslan Madiyev outlined the country’s digital transformation goals during a government session, highlighting plans to eliminate digital inequality and expand broadband connectivity.

Over one trillion tenge has been invested in telecommunications over the past three years, bringing average Internet speeds to 94 Mbps. By 2027, Kazakhstan expects to achieve 100% Internet coverage, speeds above 100 Mbps, and fibre-optic access for 90% of rural settlements.

Currently, 84% of villages have mobile Internet, and 2,606 are connected to main fibre-optic lines.

The plan includes 4G coverage for 92% of settlements, 5G deployment in 20 cities, and 4G connectivity across 40,000 km of highways. Satellite Internet will reach 504 remote villages by 2025.

Madiyev also noted Kazakhstan’s strategic role in global data transit, with projects such as the Caspian Sea undersea fibre-optic line aiming to raise its share of international traffic from 1.5% to 5% by 2027.

The initiative supports Kazakhstan’s ambition to become a regional IT hub by 2030, with the number of IT racks set to grow from 4,000 to 20,000 and at least nine Tier III-IV data centres planned.

The country has also launched the National Supercomputer Center ‘alem.cloud’ and the ‘Al-Farabium’ tech cluster to strengthen its digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is the world ready for AI to rule justice?

AI is creeping into almost every corner of our lives, and it seems the justice system’s turn has finally come. As technology reshapes the way we work, communicate, and make decisions, its potential to transform legal processes is becoming increasingly difficult to ignore. The justice system, however, is one of the most ethically sensitive and morally demanding fields in existence. 

For AI to play a meaningful role there, it must go beyond algorithms and data. It needs to understand the principles of fairness, context, and morality that guide every legal judgement. And perhaps more challengingly, it must do so within a system that has long been deeply traditional and conservative, one that values precedent and human reasoning above all else. Yet, from courts to prosecutors to lawyers, AI promises speed, efficiency, and smarter decision-making, but can it ever truly replace the human touch?

AI is reshaping the justice system with unprecedented efficiency, but true progress depends on whether humanity is ready to balance innovation with responsibility and ethical judgement.

AI in courts: Smarter administration, not robot judges… yet

Courts across the world are drowning in paperwork, delays, and endless procedural tasks, challenges that are well within AI’s capacity to solve efficiently. From classifying cases and managing documentation to identifying urgent filings and analysing precedents, AI systems are beginning to serve as silent assistants within courtrooms. 

The German judiciary, for example, has already shown what this looks like in practice. AI tools such as OLGA and Frauke have helped categorise thousands of cases, extract key facts, and even draft standardised judgments in air passenger rights claims, cutting processing times by more than half. For a system long burdened by backlogs, such efficiency is revolutionary.

Still, the conversation goes far beyond convenience. Justice is not a production line; it is built on fairness, empathy, and the capacity to interpret human intent. Even the most advanced algorithm cannot grasp the nuance of remorse, the context of equality, or the moral complexity behind each ruling. The question is whether societies are ready to trust machine intelligence to participate in moral reasoning.

The final, almost utopian scenario would be a world where AI itself serves as a judge: unbiased, tireless, and immune to human error or emotion. Yet even as this vision fascinates technologists, legal experts across Europe, including the European Commission and the OECD, stress that such a future must remain purely theoretical. Human judges, they argue, must always stay at the heart of justice: AI may assist in the process, but it must never be the one to decide it. The idea is not to replace judges but to help them navigate the overwhelming sea of information that modern justice generates.

Courts may soon become smarter, but true justice still depends on something no algorithm can replicate: the human conscience. 

AI for prosecutors: Investigating with superhuman efficiency

Prosecutors today are also sifting through thousands of documents, recordings, and messages for every major case. AI can act as a powerful investigative partner, highlighting connections, spotting anomalies, and bringing clarity to complex cases that would take humans weeks to unravel. 

Especially in criminal law, cases can involve terabytes of documents, evidence that humans can hardly process within tight legal deadlines or between hearings, yet which must be reviewed thoroughly. AI tools can sift through these massive datasets, flag inconsistencies, detect hidden links between suspects, and reveal patterns that might otherwise remain buried. They can also catch subtle details that might escape the human eye, making them an invaluable ally in uncovering the full picture of a case. By handling these tasks at superhuman speed, AI could also help accelerate the notoriously slow pace of legal proceedings, giving prosecutors more time to focus on strategy and courtroom preparation.

More advanced systems are already being tested in Europe and the US, capable of generating detailed case summaries and predicting which evidence is most likely to hold up in court. Some experimental tools can even evaluate witness credibility based on linguistic cues and inconsistencies in testimony. In this sense, AI becomes a strategic partner, guiding prosecutors toward stronger, more coherent arguments. 

AI for lawyers: Turning routine into opportunity

AI adoption may deliver its greatest gains in the work of lawyers, where transforming information into insight and strategy is at the core of the profession. AI can take over repetitive tasks such as reviewing contracts, drafting documents, or scanning case files, freeing lawyers to focus on the work that AI cannot replace: strategic thinking, creative problem-solving, and personalised client support.

AI can be incredibly useful for analysing publicly available cases, helping lawyers see how similar situations have been handled, identify potential legal opportunities, and craft stronger, more informed arguments. By recognising patterns across multiple cases, it can suggest creative questions for witnesses and suspects, highlight gaps in the evidence, and even propose potential defence strategies. 

AI also transforms client communication. Chatbots and virtual assistants can manage routine queries, schedule meetings, and provide concise updates, giving lawyers more time to understand clients’ needs and build stronger relationships. By handling the mundane, AI allows lawyers to spend their energy on reasoning, negotiation, and advocacy.

Balancing promise with responsibility

AI is transforming the way courts, prosecutors, and lawyers operate, but its adoption is far from straightforward. While it can make work significantly easier, the technology also carries risks that legal professionals cannot ignore. Historical bias in data can shape AI outputs, potentially reinforcing unfair patterns if humans fail to oversee its use. Similarly, sensitive client information must be protected at all costs, making data privacy a non-negotiable responsibility. 

Training and education are therefore crucial. It is essential to understand not only what AI can do but also its limits: how to interpret suggestions, check for hidden biases, and decide when human judgement must prevail. Without this understanding, AI risks being a tool that misleads rather than empowers.

The promise of AI lies in its ability to free humans from repetitive work, allowing professionals to focus on higher-value tasks. But its power is conditional: efficiency and insight mean little without the ethical compass of the human professionals guiding it.

Ultimately, the justice system is more than a process. It is about fairness, empathy, and moral reasoning. AI can assist, streamline, and illuminate, but the responsibility for decisions, for justice itself, remains squarely with humans. In the end, the true measure of AI’s success in law will be how it enhances human judgement, not how it replaces it.

So, is the world ready for AI to rule justice? The answer remains clear. While AI can transform how justice is delivered, the human mind, heart, and ethical responsibility must remain at the centre. AI may guide the way, but it cannot and should not hold the gavel.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!