HTC enters the AI smart glasses race with Vive Eagle

HTC has entered the increasingly competitive world of AI-powered smart glasses with its newly unveiled Vive Eagle. Once a smartphone giant, the Taiwanese company is now betting on wearable tech to reassert itself against rivals like Meta, Google, Samsung, and Apple, all racing to define the next big computing platform.

The Vive Eagle is available only in Taiwan, priced at around $520. Lightweight at just 49 grams, the glasses combine style with function, offering Zeiss sun lenses and frames in multiple colours.

What sets them apart, however, is the built-in Vive AI voice assistant, which can translate text in 13 languages simply by following the wearer’s gaze. Users can also set reminders, take notes, and even get restaurant recommendations, features designed to compete with what rivals already offer.

Meta, the most visible player in this space, has already established global sales with its Ray-Ban smart glasses and is now working on advanced ‘super-sensing’ technology to identify people, places, and objects in real time. Apple, meanwhile, is quietly preparing its entry, with reports suggesting smart glasses powered by Apple Watch–grade chips and integrated Siri, which are expected to debut around 2027.

Google has showcased how its Gemini AI could merge seamlessly with smart wearables, demonstrating live translation, navigation, and object recognition during a TED 2025 preview of its Android XR platform. Samsung, too, is preparing its Project Haean glasses, designed for comfort, gesture control, and fitness tracking, powered by Qualcomm’s latest XR2 Plus Gen 2 chip.

For HTC, the challenge will be to break through this crowded field. While Vive Eagle’s translation and assistant features offer practical appeal, its limited release in Taiwan raises questions about whether HTC intends to scale globally or remain a niche player in its home market. In a sector where timing and reach are everything, the company’s next move will determine whether the Eagle soars or struggles to leave the nest.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The dark side of AI: Seven fears that won’t go away

AI has been hailed as the most transformative technology of our age, but with that power comes unease. From replacing jobs to spreading lies online, the risks attached to AI are no longer abstract; they are already reshaping lives. While governments and tech leaders promise safeguards, uncertainty fuels public anxiety.

Perhaps the most immediate concern is employment. Machines are proving cheaper and faster than humans in the software development and graphic design industries. Talk of a future “post-scarcity” economy, where robot labour frees people from work, remains speculative. Workers see only lost opportunities now, while policymakers struggle to offer coordinated solutions.

Environmental costs are another hidden consequence. Training large AI models demands enormous data centres that consume vast amounts of electricity and water. Critics argue that supposed future efficiencies cannot justify today’s pollution, which sometimes rivals the carbon footprint of a small nation.

Privacy fears are also escalating. AI-driven surveillance—from facial recognition in public spaces to workplace monitoring—raises questions about whether personal freedom will survive in an era of constant observation. Many fear that “smart” devices and cameras may soon leave nowhere to hide.

Then there is the spectre of weaponisation. AI is already integrated into warfare, with autonomous drones and robotic systems assisting soldiers. While fully self-governing lethal machines are not yet in use, military experts warn that it is only a matter of time before battlefields become dominated by algorithmic decision-makers.

Artists and writers, meanwhile, worry about intellectual property theft. AI systems trained on creative works without permission or payment have sparked lawsuits and protests, leaving cultural workers feeling exploited by tech giants eager for training data.

Misinformation represents another urgent risk. Deepfakes and AI-generated propaganda are flooding social media, eroding trust in institutions and amplifying extremist views. The danger lies not only in falsehoods themselves but in the echo chambers algorithms create, where users are pushed toward ever more radical beliefs.

And hovering above it all is the fear of runaway AI. Although science fiction often exaggerates this threat, researchers take seriously the possibility of systems evolving in ways we cannot predict or control. Calls for global safeguards and transparency have grown louder, yet solutions remain elusive.

In the end, fear alone cannot guide us. Addressing these risks requires not just caution but decisive governance and ethical frameworks. Only then can humanity hope to steer AI toward progress rather than peril.

Source: Forbes

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman admits OpenAI holds back stronger AI models

OpenAI recently unveiled GPT-5, a significant upgrade praised for its advances in accuracy, reasoning, writing, coding and multimodal capabilities. The model has also been designed to reduce hallucinations and excessive agreeableness.

Chief executive Sam Altman has admitted that OpenAI has even more powerful systems that cannot be released due to limited capacity.

Altman explained that the company must make difficult choices, as existing infrastructure cannot yet support the more advanced models. To address the issue, OpenAI plans to invest in new data centres, with spending potentially reaching trillions of dollars.

The shortage of computing power has already affected operations, including a cutback in image generation earlier in the year, following the viral Studio Ghibli-style trend.

Despite criticism of GPT-5 for offering shorter responses and lacking emotional depth, ChatGPT has grown significantly.

Altman said the platform is now the fifth most visited website worldwide and is on track to overtake Instagram and Facebook. However, he acknowledged that competing with Google will be far harder.

OpenAI intends to expand beyond ChatGPT with new standalone applications, potentially including an AI-driven social media service.

The company also backs Merge Labs, a brain-computer interface rival to Elon Musk’s Neuralink. It has partnered with former Apple designer Jony Ive to create a new AI device.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s promise comes with a heavy environmental price

Governments worldwide are racing to harness the economic potential of AI, but the technology’s environmental toll is growing and increasingly impossible to ignore. In the US, President Trump is calling for a ten-year freeze on AI regulation, while the UK is planning ‘AI growth zones’ filled with data centres. Yet these same centres consume enormous amounts of electricity and water, resources already under strain globally.

From 2027, AI systems are expected to use at least four billion cubic metres of water annually, five times Denmark’s yearly consumption. That raises a pressing dilemma: How to support innovation without deepening an already critical global water crisis, where one in four people lacks access to clean drinking water?

The paradox is stark. AI is driving breakthroughs in medicine, agriculture, and climate science, with systems predicting cancer-fighting protein structures and detecting deforestation from space.

AI could save the UK government tens of billions in the public sector by automating routine tasks. But powering these advances requires vast, always-on data centres, which in the US alone consumed enough electricity last year to supply seven million homes.

Regulators are beginning to respond. The EU now requires big data centres to disclose energy and water use.

The UK, however, has yet to weave AI infrastructure into its climate and planning strategies, a gap that could worsen grid congestion and even delay housing projects. Public concern is mounting: four of five Britons believe AI needs stricter oversight.

Experts say transparency is key. Mandatory reporting on emissions, water use, and energy consumption could give citizens and policymakers the data they need to act. Incentives for efficiency, renewable energy requirements, and creative solutions like using data centre heat to warm homes are already being tested abroad.

The technology’s potential remains enormous. Studies suggest AI could cut more than five billion tonnes of global emissions by 2035 through applications in energy, transport, and food. But unless sustainability becomes central to the UK’s AI strategy, the race to innovate risks becoming a sprint into environmental crisis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Royal Wolverhampton NHS Trust reshapes care with AI and robotics

At The Royal Wolverhampton NHS Trust, AI is beginning to transform how doctors and patients experience healthcare, cutting down on paperwork and making surgery more precise. Hospital leaders say the shift is already delivering tangible results, from faster clinical letters to reduced patient hospital visits.

One of the most impactful innovations is CLEARNotes, a system that records and summarises doctor-patient consultations. Instead of doctors spending days drafting clinic letters, the technology reduces turnaround time from as long as a week to just a day or two. Clinicians report that this tool saves time and improves productivity by as much as 25% in some clinics while ensuring that safety and governance standards remain intact.

Surgery is another area where technology is making its mark. The trust operates two Da Vinci Xi robots, now a regular feature in complex procedures, including urology, colorectal, cardiothoracic, and gynaecology cases. Compared to traditional keyhole surgery, robotic operations give surgeons better control and dexterity through a console linked to a 3D camera, while patients benefit from shorter stays and faster recoveries.

Digital tools also shape the patient’s journey before surgery begins. With My Pre-Op, patients complete their pre-operative questionnaires online from home, reducing unnecessary hospital visits and helping to ensure they are in the best condition for their operation. Hospital staff say this streamlines both efficiency and patient comfort.

The innovations recently drew praise from Science, Innovation and Technology Secretary Peter Kyle, who visited the hospital to see the systems in action. He described the trust’s embrace of AI and robotics as ‘inspiring,’ noting how safe experimentation and digital adoption already translate into improved care and efficiency. For Wolverhampton’s healthcare providers, the changes represent not just a technological upgrade but a glimpse into the future of how the NHS might deliver care across the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore sets jobs as top priority amid global uncertainty

Singapore’s Prime Minister Lawrence Wong said employment for citizens will remain the government’s top priority as the nation confronts global trade tensions and the rapid advance of AI.

Speaking at the annual National Day Rally to mark Singapore’s 60th year, Wong pointed to the risks created by the US–China rivalry, renewed tariff policies under President Donald Trump, and the pressure technology places on workers.

In his first major address since the May election, Wong emphasised the need to reinforce the trade-reliant economy, expand social safety nets and redevelop parts of the island.

He pledged to protect Singaporeans from external shocks by maintaining stability instead of pursuing risky shifts. ‘Ultimately, our economic strategy is about jobs, jobs and jobs. That’s our number one priority,’ he said.

The government has introduced new welfare measures, including the country’s first unemployment benefits and wider subsidies for food, utilities and education.

Wong also announced initiatives to help enterprises use AI more effectively, such as a job-matching platform and a government-backed traineeship programme for graduates.

Looking ahead, Wong said Singapore would draw up a new economic blueprint to secure its future in a world shaped by protectionism, climate challenges and changing energy needs.

After stronger-than-expected results in the first half of the year, the government recently raised its growth forecast for 2025 to between 1.5% and 2.5%.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GenAI app usage up 50% as firms struggle with oversight

Enterprise employees are increasingly building their own AI tools, sparking a surge in shadow AI that raises security concerns.

Netskope reports a 50% rise in generative AI platform use, with over half of current adoption estimated to be unsanctioned by IT.

Platforms like Azure OpenAI, Amazon Bedrock, and Vertex AI lead this trend, allowing users to connect enterprise data to custom AI agents.

The growth of shadow AI has prompted calls for better oversight, real-time user training, and updated data loss prevention strategies.

On-premises deployment is also increasing, with 34% of firms using local LLM interfaces such as Ollama and LM Studio. Security risks grow as AI agents retrieve data through direct API calls that bypass the browser, particularly to OpenAI and Anthropic endpoints.
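To illustrate the mechanism, here is a minimal sketch of the kind of browser-bypassing call described above, pointed at a local Ollama instance. The model name, prompt and data are illustrative assumptions, not details from the Netskope report.

```python
# Minimal sketch (illustrative): an "agent" script sending enterprise data
# straight to a local LLM endpoint, bypassing any browser-based monitoring
# or data loss prevention controls.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API

payload = {
    "model": "llama3",  # assumes this model has already been pulled locally
    "prompt": "Summarise this quarterly sales report: ...",  # placeholder data
    "stream": False,    # request a single JSON response rather than a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])  # this traffic never passes through a monitored browser
```

Swapping the URL for a hosted OpenAI or Anthropic endpoint (plus an API key) is a one-line change, which is why such direct API calls are flagged as a growing blind spot for IT teams.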

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI tool offering flexible travellers cheap flights

Google has rolled out Flight Deals, a new AI‑powered tool for flexible, budget‑conscious travellers within Google Flights. It allows users to type natural‑language descriptions of their ideal trip, such as favourite activities or timeframe, and receive bargain flight suggestions in return.

Powered by Gemini, the feature parses conversational inputs and taps real‑time flight data from multiple airlines and agencies.

The tool identifies low fares and even proposes destinations users might not have considered, ranking options by percentage savings or lowest price.

Currently in beta, Flight Deals is available in the US, Canada, and India without special opt‑in. It is also accessible via the Google Flights menu.

Traditional Google Flights remains available, with a new option to exclude basic economy fares in the US and Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers use AI to speed up quantum computing experiments

AI has been used to rapidly assemble arrays of atoms that could one day power quantum computers. A team led by physicist Jian-Wei Pan at the University of Science and Technology of China demonstrated how an AI model can calculate the best way to arrange neutral atoms, a long-standing challenge in the field.

The researchers showed that their system could rearrange up to 2,024 rubidium atoms into precise grid patterns in just 60 milliseconds. By comparison, a previous attempt last year arranged 800 atoms without AI but required a full second.

To showcase the model’s speed, the team even used it to create an animated image of Schrödinger’s cat by guiding atoms into patterns with laser light.

Neutral atom arrays are one of the most promising approaches to building quantum computers, as the trapped atoms can maintain their fragile quantum states for relatively long periods.

The AI model was trained on different atom configurations and patterns of laser light, allowing it to quickly determine the most efficient hologram needed to reposition atoms into complex 2D and 3D shapes.

Experts in the field have welcomed the breakthrough. Mark Saffman, a physicist at the University of Wisconsin–Madison, noted that producing holograms for larger arrays usually requires intensive calculations.

The ability of AI to handle this process so efficiently, he said, left many colleagues ‘really impressed.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 impresses in reasoning but stumbles in flawless coding

OpenAI’s newly released GPT-5 draws praise and criticism in equal measure, as developers explore its potential for transforming software engineering.

Launched on 7 August 2025, the model has impressed with its ability to reason through complex problems and assist in long-term project planning. Yet, engineers testing it in practice note that while it can propose elegant solutions, its generated code often contains subtle errors, demanding close human oversight.

Benchmark results showcase GPT-5’s strength. The model scored 74.9% on the SWE-bench Verified test, outperforming predecessors in bug detection and analysis. Integrated into tools such as GitHub Copilot, it has already boosted productivity for large-scale refactoring projects, with some testers praising its conversational guidance.

Despite these gains, developers report mixed outcomes: strong performance in brainstorming and planning, but inconsistency when producing flawless, runnable code.

The rollout also includes GPT-5 Mini, a faster version for everyday use in platforms like Visual Studio Code. Early users highlight its speed but point out that effective prompting remains essential, as the model’s re-architected interaction style differs from GPT-4’s.

Critics argue it still trails rivals such as Anthropic’s Claude 4 Sonnet in error-free generation, even as it shows marked improvements in scientific and analytical coding tasks.

Experts suggest GPT-5 will redefine developer roles rather than replace them, shifting focus toward oversight and validation. By acting as a partner in ideation and review, the model may reduce repetitive coding tasks while elevating strategic engineering work.

For now, OpenAI’s most advanced system sets a high bar for intelligent assistance but remains a tool that depends on skilled humans to achieve reliable outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!