Oakley Meta HSTN smart glasses unveiled

Meta and Oakley have revealed the Oakley Meta HSTN, a new AI-powered smart glasses model designed specifically for athletes and fitness enthusiasts. The glasses combine Meta’s advanced AI with Oakley’s signature sporty design, offering features tailored for high-performance settings.

Designed for workouts and outdoor use, the device is equipped with a 3K ultra-HD camera, open-ear speakers and IPX4 water resistance.

On-device Meta AI provides real-time coaching and hands-free information, while the glasses offer eight hours of active battery life and a compact charging case adds up to 48 more hours.

The glasses are set for pre-order from 11 July, with a limited-edition gold-accent version priced at $499. Standard versions will follow later in the summer, with availability expanding beyond North America, Europe and Australia to India and the UAE by year-end.

Sports stars like Kylian Mbappé and Patrick Mahomes are helping introduce the glasses, representing Meta’s move to integrate smart tech into athletic gear. The product marks a shift from lifestyle-focused eyewear to functional devices supporting sports performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple sued over alleged AI misrepresentation

Apple is facing a proposed class action lawsuit in a San Francisco federal court over claims it misled shareholders about its AI plans. The complaint accuses the company of exaggerating the readiness of AI upgrades for Siri, which reportedly harmed iPhone sales and stock value.

The case covers investors who lost money in the year ending 9 June, following Apple’s 2024 Worldwide Developers Conference announcements. Shareholders allege Apple presented the AI features as ready for the iPhone 16 despite having no working prototype or clear timeline.

Problems became clear in March when Apple admitted that some Siri upgrades would be postponed until 2026. The lawsuit names CEO Tim Cook, CFO Kevan Parekh, former CFO Luca Maestri, and Apple as defendants.

Apple has not yet responded to requests for comment. The case highlights growing investor concerns about AI promises made by major tech firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Watson CoPilot brings AI-driven support to small firms

IBM has introduced AI-powered software to help small businesses improve operations and customer engagement. Based on its Watson AI, the tools aim to streamline tasks, reduce costs and offer deeper insights into customer behaviour.

One of the key features is Watson CoPilot, an AI assistant that handles routine customer queries using natural language processing. The assistant allows employees to focus on complex tasks while improving response times and customer satisfaction.

IBM highlighted the potential of these tools to strengthen customer loyalty and drive growth in a competitive market. However, small firms may face challenges such as integration costs, data security concerns and the need for staff training.

The company provides support and resources to ease adoption and help businesses customise the technology to their needs. Using AI responsibly allows small businesses to gain a valuable edge in an increasingly digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China pushes quantum computing towards industrial use

A Chinese startup has used quantum computing to improve breast cancer screening accuracy, highlighting how the technology could transform medical diagnostics. Based in Hefei, Origin Quantum applied its superconducting quantum processor to analyse medical images faster and more precisely.

China is accelerating efforts to turn quantum research into industrial applications, with companies focusing on areas such as drug discovery, smart cities and finance. Government backing and national policy have driven rapid growth in the sector, with over 150 firms now active in quantum computing.

In addition to medical uses, quantum algorithms are being tested in autonomous parking, which has dramatically cut wait times. Banks and telecom firms have also begun adopting quantum solutions to improve operational efficiency in areas like staff scheduling.

The merging of quantum computing with AI is seen as the next significant step, with Origin Quantum recently fine-tuning a billion-parameter AI model on its quantum system. Experts expect the integration of these technologies to shift from labs to practical use in the next five years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple considers buying Perplexity AI

Apple is reportedly considering the acquisition of Perplexity AI as it attempts to catch up in the fast-moving race for dominance in generative technology.

According to Bloomberg, the discussions, which remain at an early stage, involve senior executives including Eddy Cue and mergers chief Adrian Perica.

Such a move would mark a significant shift for Apple, which typically avoids large-scale takeovers. However, with investor pressure mounting after an underwhelming developer conference, the tech giant may rethink its traditionally cautious acquisition strategy.

Perplexity has gained prominence for its fast, clear AI chatbot and recently secured funding at a $14 billion valuation.

Should Apple proceed, the acquisition would be the company’s largest ever, both financially and strategically, potentially transforming its position in AI and reducing its long-standing dependence on Google’s search infrastructure.

Apple’s slow development of Siri and reliance on a $20 billion revenue-sharing deal with Google have left it trailing rivals. With that partnership now under regulatory scrutiny in the US, Apple may view Perplexity as a vital step towards building a more autonomous search and AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rethinking AI in journalism with global cooperation

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant multistakeholder session spotlighted the ethical dilemmas of AI in journalism and digital content. The event was hosted by R&W Media and introduced the Haarlem Declaration, a global initiative to promote responsible AI practices in digital media.

Central to the discussion was the unveiling of an ‘ethical AI checklist’, designed to help organisations uphold human rights, transparency and environmental responsibility while navigating AI’s expanding role in content creation. Speakers emphasised a people-centred approach to AI, advocating for tools that support rather than replace human decision-making.

Ernst Noorman, the Dutch Ambassador for Cyber Affairs, called for AI policies rooted in international human rights law, highlighting the EU’s Digital Services Act and AI Act as potential models. Meanwhile, grassroots organisations from the Global South shared real-world challenges, including algorithmic bias, language exclusions, and environmental impacts.

Taysir Mathlouthi of Hamleh detailed efforts to build localised AI models in Arabic and Hebrew, while Nepal’s Yuva organisation, represented by Sanskriti Panday, explained how small NGOs balance ethical use of generative tools like ChatGPT with limited resources. The Global Forum for Media Development’s Laura Becana Ball introduced the Journalism Cloud Alliance, a collective aimed at making AI tools more accessible and affordable for newsrooms.

Despite enthusiasm, participants acknowledged hurdles such as checklist fatigue, lack of capacity, and the need for AI literacy training. Still, there was a shared sense of urgency and optimism, with the consensus that ethical frameworks must be embedded from the outset of AI development and not bolted on as an afterthought.

In closing, organisers invited civil society and media groups to endorse the Haarlem Declaration and co-create practical tools for ethical AI governance. While challenges remain, the forum set a clear agenda: ethical AI in media must be inclusive, accountable, and co-designed by those most affected by its implementation.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

Despite the hesitation around AI-assisted writing, LinkedIn has seen explosive growth in demand for AI-related jobs and skills. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid the tensions, Microsoft is evaluating alternative options, including developing its own AI tools and working with rivals such as Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!