Samsung’s AI smart glasses are coming to take on Meta Ray-Ban

Samsung has confirmed key details about its upcoming AI smart glasses, including a camera positioned at ‘eye level’ and smartphone connectivity, ahead of a planned 2026 launch.

The device is being developed in partnership with Qualcomm and Google, building on the same ecosystem that produced the Galaxy XR headset, and will be powered by Google’s Gemini AI.

Samsung executive Jay Kim indicated that the glasses will be able to understand ‘where you’re looking at’, allowing the AI to analyse objects or scenes in the user’s field of view and provide contextual information in real time.

Processing is expected to take place on a connected smartphone rather than within the glasses themselves, and Samsung has not confirmed whether a built-in display will be included, suggesting multiple versions may be in development.

The announcement puts Samsung on a direct collision course with Meta, whose Ray-Ban Meta Gen 2 glasses are already on the market, offering 3K video recording and up to eight hours of battery life. Meta has also launched the Oakley Meta HSTN glasses, aimed at sports and outdoor users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Concerns grow over Grok AI content on X platform

Social media platform X has launched an investigation into racist and offensive posts generated by its Grok AI chatbot in the UK. The review follows a Sky News analysis that flagged troubling responses produced publicly by the system.

Analysis by the broadcaster found Grok generating highly offensive replies, including profanities targeting certain religions. Some responses also repeated false claims blaming Liverpool supporters for the 1989 Hillsborough disaster.

Sky News reporter Rob Harris said X safety teams were urgently examining the chatbot’s behaviour after the posts spread online. The company and its AI developer xAI did not immediately respond to requests for comment.

Concerns around Grok come as governments and regulators increasingly scrutinise AI-generated content on social platforms. Authorities in several countries have already raised alarms about sexually explicit or harmful material created by chatbots.

Earlier this year, xAI introduced new restrictions to limit some image editing features in Grok. Users in certain jurisdictions were also blocked from generating images of people in revealing clothing where such content is illegal.

AI tools linked to rise in abuse disclosures

Support organisations in the UK report that some abuse survivors are turning to AI tools such as ChatGPT before contacting helplines, using AI to explore their experiences and seek guidance before approaching professional support services.

The National Association of People Abused in Childhood said recent callers have reported being referred to its helpline after conversations with ChatGPT. Staff say AI is being used as an informal step in processing trauma.

Law enforcement and support groups in the UK have also recorded a rise in disclosures involving ritualistic sexual abuse. Authorities say only 14 criminal cases since 1982 have formally recognised such practices.

Police and support organisations are responding by improving training and launching specialist working groups. Officials aim to strengthen the identification and investigation of complex cases of abuse.

Hackers can use AI to de-anonymise social media accounts

AI technology behind platforms like ChatGPT is making it significantly easier for hackers to identify anonymous social media users, a new study warns. Large language models (LLMs) could match anonymised accounts to real identities by analysing users’ posts across platforms.
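
The matching idea can be illustrated with a deliberately simple toy: here writing style is reduced to plain word counts and an anonymous account is linked to whichever known account its posts most resemble by cosine similarity. All names and texts below are invented, and the real attacks described in the study rely on far richer LLM-derived features rather than bag-of-words overlap.

```python
from collections import Counter
import math

def vec(texts):
    """Bag-of-words vector: a crude stand-in for a style profile."""
    return Counter(w for t in texts for w in t.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Known accounts with public post histories (entirely fictional).
known = {
    "alice": vec(["loves hiking and trail photos", "new trail photos today"]),
    "bob": vec(["market open thread", "rates and bonds commentary"]),
}

# An anonymous account's posts are compared against every known profile.
anon = vec(["posted more trail photos from the hike"])
best = max(known, key=lambda name: cosine(anon, known[name]))
print(best)  # the anonymous account is linked to "alice" in this toy
```

Even this crude version shows why the attack is cheap: it needs only public posts and a similarity score, no special access to the platform.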

Researchers Simon Lermen and Daniel Paleka warned that AI enables cheap, highly personalised privacy attacks, urging a rethink of what counts as private online. The study highlighted risks from government surveillance to hackers exploiting public data for scams.

Experts caution that AI-driven de-anonymisation is not flawless. Errors in linking accounts could wrongly implicate individuals, while public datasets beyond social media, such as hospital or statistical records, may be exposed to unintended analysis.

Users are urged to reconsider what information they share, and platforms are encouraged to limit bulk data access and detect automated scraping.

The study underscores growing concerns about AI surveillance. While the technology cannot guarantee complete de-anonymisation, its rapidly improving capabilities demand stronger safeguards to protect privacy online.

New AI method improves transparency in computer vision models

Researchers at MIT have developed a new technique designed to improve how computer vision models explain their predictions while maintaining strong accuracy. Transparency is crucial as AI enters fields like healthcare and autonomous driving, where decisions must be clear.

The method uses concept bottleneck models, which enable AI to base its predictions on human-understandable concepts. Traditional approaches rely on expert-defined concepts that can be incomplete or ill-suited, sometimes lowering model performance.

Researchers instead created a system that extracts concepts the AI learned during training. A sparse autoencoder selects key features, and a multimodal language model turns them into plain-language descriptions and labels.

The resulting module forces the AI to make predictions using only those extracted concepts.
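
The pipeline described above can be sketched in a few lines. Everything here is a hypothetical stand-in: the random `dictionary` plays the role of concept directions a sparse autoencoder would learn, the `concept_names` stand in for labels a multimodal language model would produce, and the linear `head` is the interpretable module that must predict from concept activations only. It is a minimal illustration of the bottleneck structure, not MIT's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: hidden activations, concepts, classes, concepts kept.
n_hidden, n_concepts, n_classes, k = 32, 6, 3, 2

# Plain-language labels a multimodal LM might assign to learned concepts
# (invented here for a bird-classification toy).
concept_names = ["wing shape", "beak colour", "plumage",
                 "habitat", "tail length", "eye ring"]

dictionary = rng.normal(size=(n_concepts, n_hidden))  # "learned" concept directions
head = rng.normal(size=(n_classes, n_concepts))       # interpretable linear head

def bottleneck_predict(h):
    """Predict a class from the top-k concept activations only."""
    acts = dictionary @ h                     # project activations onto concepts
    keep = np.argsort(-np.abs(acts))[:k]      # sparsity: keep the k strongest
    sparse = np.zeros_like(acts)
    sparse[keep] = acts[keep]
    logits = head @ sparse                    # prediction sees concepts ONLY
    explanation = [(concept_names[i], float(acts[i])) for i in keep]
    return int(np.argmax(logits)), explanation

pred, why = bottleneck_predict(rng.normal(size=n_hidden))
print(pred, why)  # class index plus the named concepts that drove it
```

Because the head never sees the raw activations, every prediction can be traced back to a short list of named concepts, which is what makes the approach auditable.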

Tests on bird classification and medical image datasets showed that the new method improved accuracy and provided clearer explanations. Findings suggest that using a model’s internal concepts can boost transparency and accountability in AI systems.

Lenovo introduces rollable laptop and AI agent

Lenovo is redefining how people interact with technology through rollable laptops, foldable devices and adaptive AI systems that anticipate user needs.

The company is shifting from manufacturing hardware to creating multi-platform systems that adapt seamlessly to workflows instead of relying solely on traditional devices.

Qira, Lenovo’s personal AI super-agent, transfers tasks across devices while maintaining context and history with user permission. It can suggest actions and predict needs, aiming to improve productivity and employee satisfaction, although security and privacy concerns remain significant.

The rollable laptop features a 14-inch screen that expands vertically to 16.7 inches, providing immersive experiences for gaming and content consumption while remaining portable.

Lenovo is also exploring voice-driven tools, including AI Workmate prototypes, allowing users to create presentations and digital content simply through speech.

By combining innovative screen designs with intelligent AI agents, Lenovo aims to create unified ecosystems that prioritise user experience and adaptability instead of focusing solely on device specifications.

The company believes these technologies will gradually become culturally accepted, similar to self-driving cars.

Australia introduces strict online child safety rules covering AI chatbots

Australia has begun enforcing new Age-Restricted Material Codes, requiring online platforms to introduce stronger protections to prevent children from accessing harmful digital content.

The rules apply across a wide range of services, including social media, app stores, gaming platforms, search engines, pornography websites, and AI chatbots.

Under the framework, companies must implement age-assurance systems before allowing access to content involving pornography, high-impact violence, self-harm material, or other age-restricted topics.

These measures also extend to AI companions and chatbots, which must prevent sexually explicit or self-harm-related conversations with minors.

The rules form part of Australia’s broader online safety framework overseen by the eSafety Commissioner, which will monitor compliance and enforce the codes.

Companies that fail to comply may face penalties of up to AU$49.5 million per breach.

The policy aims to shift responsibility toward technology companies by requiring them to build protections directly into their platforms.

Officials in Australia argue the measures mirror long-standing offline safeguards designed to prevent children from accessing adult environments or harmful material.

The EU faces growing AI copyright disputes

Courts across Europe are examining how copyright law applies to AI systems trained on large datasets, reviewing whether existing rules allow AI developers to use copyrighted books, music and journalism without permission.

One closely watched dispute before the EU court in Luxembourg involves a publisher challenging Google over summaries produced by its Gemini chatbot. The case could test how press publishers’ rights apply to AI-generated outputs.

Legal experts warn the ruling may not resolve wider questions about AI training data. Many disputes focus on the EU copyright directive and its text and data mining exception.

Additional lawsuits involving music rights group GEMA and OpenAI are expected to continue for years. Policymakers are also considering updates to copyright rules as AI technology expands.

Pentagon AI dispute raises concerns for startups

A dispute between Anthropic and the Pentagon has raised questions about whether startups will hesitate to pursue defence contracts. Negotiations over the use of Anthropic’s Claude AI technology collapsed, prompting the US administration to label the company a supply chain risk.

The situation escalated as OpenAI secured its own agreement with the Pentagon. The development sparked backlash online, with reports of a surge in ChatGPT uninstalls after the defence partnership announcement.

Technology analysts say the controversy highlights the unusual scrutiny facing high-profile AI firms. Companies such as OpenAI and Anthropic attract intense public attention because widely used AI products place their defence partnerships in the spotlight.

Startup founders are now debating the risks of government contracts, particularly with the Pentagon. Industry observers warn that defence authorities’ contract changes could make government collaboration more uncertain.

Cursor launches tool to automate agentic coding workflows

Cursor has launched a new tool called Automations, designed to help software engineers manage the growing complexity of overseeing multiple AI coding agents at once.

Rather than requiring a human to initiate each task, the system allows agents to launch automatically in response to events such as a new code addition, a Slack message, or a scheduled timer.
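
The trigger model can be sketched as a small event dispatcher. This is a hypothetical illustration of the pattern, not Cursor's actual API: an automation binds an event type (a code push, a Slack message, a timer) to an agent task, and every matching task launches without a human prompt.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Automations:
    """Hypothetical registry mapping event types to agent tasks."""
    rules: dict[str, list[Callable[[dict], str]]] = field(default_factory=dict)

    def on(self, event_type: str, task: Callable[[dict], str]) -> None:
        """Register an agent task to launch when an event of this type arrives."""
        self.rules.setdefault(event_type, []).append(task)

    def dispatch(self, event_type: str, payload: dict) -> list[str]:
        """Launch every registered task for this event; return what they started."""
        return [task(payload) for task in self.rules.get(event_type, [])]

auto = Automations()
auto.on("push", lambda e: f"bug review of {e['ref']}")
auto.on("push", lambda e: f"security audit of {e['ref']}")
auto.on("slack_message", lambda e: f"summarise thread {e['ts']}")

started = auto.dispatch("push", {"ref": "main"})
print(started)  # both push-triggered agents launch with no human in the loop
```

The human's role shifts from initiating each run to reviewing what the dispatched agents produce.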

The shift is significant because it breaks the ‘prompt-and-monitor’ model that currently defines most AI-assisted engineering.

As Jonas Nelle, Cursor’s engineering lead for asynchronous agents, put it, humans are no longer always the ones initiating; they are called in at the right moments rather than tracking dozens of processes simultaneously.

Early applications include automated bug reviews, security audits, PagerDuty incident response, and weekly codebase summaries delivered to Slack.

The launch comes as competition in the agentic coding space intensifies, with both OpenAI and Anthropic releasing major updates to their tools in recent weeks. Cursor’s annual recurring revenue has nonetheless doubled over the past three months to more than $2 billion.
