AI glasses deliver real-time theatre subtitles

An innovative trial at Amsterdam’s Holland Festival saw Dutch theatre company Het Nationale Theater, in partnership with XRAI and Audinate, unveil smart glasses that project real-time subtitles in 223 languages via a Dante audio network and AI software.

Attendees of The Seasons experienced dynamic transcription and translation streamed directly to XREAL AR glasses. Voices from each actor’s microphone were processed by XRAI’s AI, with subtitles overlaid in matching colours to distinguish the speakers on stage.

Designed to enhance the theatre’s accessibility, the system serves non-Dutch speakers and those with hearing loss. Testing continues this summer, with full implementation expected in autumn.

The LiveText system discards the dated method of back-of-house captioning. Instead, subtitles appear in real time at actor-appropriate visual depth, automatically handling complex languages and writing systems.

Proponents believe the glasses mark a breakthrough for inclusion, with potential uses at international conferences, music festivals and other live events worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

McDonald’s faces backlash over AI hiring system security failures

A major security flaw in McDonald’s AI-driven recruitment platform has exposed the personal information of potentially 64 million job applicants.

The McHire platform, developed by Paradox.ai and powered by an AI chatbot named Olivia, suffered from basic authentication vulnerabilities and lacked critical security controls.

Security researchers Ian Carroll and Sam Curry discovered they could access the system using weak default credentials—simply the username and password ‘123456’.

The incident underscores serious cybersecurity lapses in automated hiring systems and raises urgent concerns about data protection in AI-powered HR tools. McHire is designed to streamline recruitment at McDonald’s franchise locations by using AI to screen candidates, collect contact details, and assess suitability.

The chatbot Olivia interacts with applicants using natural language processing, but users have often reported issues with miscommunication and unclear prompts. As a broader shift toward automation in hiring takes shape, McHire represents an attempt to scale recruitment efforts without expanding HR staff.

However, according to the researchers’ findings, the system’s backend infrastructure—housing millions of résumés, chat logs and assessments—was critically unprotected.

After prompt injection attacks failed, the researchers focused on login mechanisms and discovered a Paradox.ai staff portal linked from the McHire homepage.

Using simple password combinations and dictionary attacks, they gained access to the system with the password ‘123456’, bypassing standard security protocols. More worryingly, the account lacked two-factor authentication, giving them unrestricted access to administrative tools and candidate records.

From there, the researchers found an Insecure Direct Object Reference (IDOR) vulnerability that allowed traversal of the applicant database by manipulating ID numbers.

By increasing the numeric applicant ID above 64 million, they could view multiple records containing names, email addresses, phone numbers and chat logs. Although only seven records were examined during the test, five contained personally identifiable information, highlighting the scale of the exposure.
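The IDOR pattern described above can be sketched in a few lines. This is an illustrative example only, not McHire’s actual code: the data, handler names and ownership model are all hypothetical. It contrasts a vulnerable lookup, where knowing a sequential ID is enough to read any record, with a fixed version that adds an authorisation check.

```python
# Hypothetical in-memory store standing in for an applicant database.
APPLICANTS = {
    64000001: {"owner": "alice", "name": "Alice", "email": "alice@example.com"},
    64000002: {"owner": "bob", "name": "Bob", "email": "bob@example.com"},
}

def get_applicant_vulnerable(applicant_id, requester):
    # IDOR: possession of the ID alone grants access, so an attacker
    # can simply walk sequential IDs and harvest every record.
    return APPLICANTS.get(applicant_id)

def get_applicant_fixed(applicant_id, requester):
    # Fix: verify the requester is authorised to view this record
    # before returning it.
    record = APPLICANTS.get(applicant_id)
    if record is None or record["owner"] != requester:
        return None
    return record

# Enumeration succeeds against the vulnerable handler...
leaked = [get_applicant_vulnerable(i, "attacker")
          for i in range(64000001, 64000003)]
# ...but the authorisation check stops the same traversal.
blocked = [get_applicant_fixed(i, "attacker")
           for i in range(64000001, 64000003)]
```

The defence is object-level authorisation on every lookup; unguessable identifiers alone are not a substitute for that check.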

Paradox.ai insisted that only a fraction of records held sensitive data, but the researchers warned of phishing risks linked to impersonation of McDonald’s recruiters. These could be used for payroll-related scams or to harvest further private information under false pretences.

McDonald’s acknowledged the breach and expressed disappointment in its third-party provider’s handling of basic security measures.

Paradox.ai confirmed the vulnerabilities and announced a bug bounty programme to incentivise researchers to report flaws before they are exploited. The exposed account was a dormant test login created in 2019 that had never been deactivated, evidence of poor development hygiene.

Both companies have pledged to investigate the matter further and implement stronger safeguards, as scrutiny over AI accountability in hiring continues to grow.


Meta offers $200 million to top AI talent as superintelligence race heats up

Meta has reportedly offered over $200 million in compensation to Ruoming Pang, a former senior AI engineer at Apple, as it escalates its bid to dominate the AI arms race.

The offer, which includes long-term stock incentives, far exceeded Apple’s willingness to match and is seen as one of Silicon Valley’s most aggressive poaching efforts.

The move is part of Meta’s broader campaign to build a world-class team under its new Meta Superintelligence Lab (MSL), which is focused on developing artificial general intelligence (AGI).

The division has already attracted prominent names, including ex-GitHub CEO Nat Friedman, AI investor Daniel Gross, and Scale AI co-founder Alexandr Wang, who joined as Chief AI Officer through a $14.3 billion stake deal.

Most compensation offers at MSL reportedly rival CEO packages at global banks, but they are heavily performance-based and tied to long-term equity vesting.

Meta’s mix of base salary, signing bonuses, and high-value stock options is designed to attract and retain elite AI talent amid a fierce talent war with OpenAI, Google, and Anthropic.

OpenAI CEO Sam Altman recently claimed Meta has dangled bonuses up to $100 million to lure staff away, though he insists many stayed for cultural reasons.

Still, Meta has already hired more than 10 researchers from OpenAI and poached talent from Google DeepMind, including principal researcher Jack Rae.

The AI rivalry could come to a head as Altman and Zuckerberg meet at the Sun Valley conference this week.


OpenAI to release Chromium-based AI browser competing with Chrome

OpenAI is preparing to launch an AI-powered web browser that could challenge Google Chrome’s dominant market position. The browser is expected to debut in the coming weeks and aims to fundamentally change how users interact with the web.

The new browser will reportedly integrate AI capabilities directly into the browsing experience, allowing for more intelligent and task-driven user interactions. Instead of simply directing users to websites, the browser is designed to keep many interactions within a native ChatGPT-style interface.

If adopted by ChatGPT’s 500 million weekly users, the browser could seriously threaten Google’s ad-driven ecosystem. Chrome plays a critical role in Alphabet’s advertising business, which accounts for nearly three-quarters of the company’s income, by collecting user data and directing traffic to Google Search.

By building its own browser, OpenAI would gain more direct access to user behaviour data, improving its AI models and enabling new forms of web engagement. The move is part of OpenAI’s broader strategy to integrate its services into users’ personal and professional lives.

The browser will reportedly support AI ‘agents’ capable of performing tasks such as making reservations or filling out web forms automatically. These agents could operate directly within websites, making the browsing experience more seamless and productive.

While OpenAI declined to comment, sources suggest the browser is built on Google’s open-source Chromium codebase—the same foundation behind Chrome, Edge, and Opera. Building on Chromium allows OpenAI to maintain compatibility while customising the user experience and data control.

Competition in the AI-powered browser space is heating up. Startups like Perplexity and Brave have already launched intelligent browsers, and The Browser Company continues to develop features for AI-driven navigation and summarisation.

Despite Chrome’s 3-billion-strong user base and over two-thirds of the browser market share, OpenAI sees an opportunity to disrupt the space. Apple’s Safari holds second place with just 16% of the global share, leaving room for new challengers.

Last year, OpenAI hired two senior Google engineers from the original Chrome team, fuelling speculation that the company was eyeing the browser space. One executive even testified that OpenAI would consider buying Chrome if it were made available through an antitrust divestiture.

Instead, OpenAI opted to build its own browser rather than acquire one, allowing greater autonomy over features, data collection, and AI integration. A source told Reuters this approach ensures better alignment with OpenAI’s goal of embedding AI across user experiences.

In addition to hardware acquisitions and agent-based interfaces, the browser represents a crucial link in OpenAI’s strategy to deepen user engagement. The company recently acquired the AI hardware firm io, co-founded by Apple’s former design chief Jony Ive, for $6.5 billion.

The browser could become the gateway for OpenAI’s AI agents like ‘Operator,’ enhancing productivity by turning passive browsing into interactive assistance. Such integration could give OpenAI a competitive edge in the evolving consumer AI landscape.

Meanwhile, Google faces legal challenges over Chrome’s central role in its ad monopoly. A US judge ruled that Google maintains an unlawful hold over online search, prompting the Department of Justice to push for divestiture of key assets, including Chrome.

OpenAI’s entry could spark a broader shift in how consumers, businesses, and advertisers engage with the internet as the browser race intensifies. With built-in AI capabilities and task automation, browsing may become a different experience.


Kazakhstan rises as an AI superpower

Since the launch of its Digital Kazakhstan initiative in 2017, the country has shifted from resource-dependent roots to digital leadership.

It ranks 24th globally on the UN’s e‑government index and among the top 10 in online service delivery. Over 90% of public services, such as registrations, healthcare access, and legal documentation, are digitised, aided by mobile apps, biometric ID and QR authentication.

Central to this is a Tier III data-centre-based AI supercluster, launching in July 2025, and the Alem.AI centre, both designed to supply computing power for universities, startups and enterprises.

Kazakhstan is also investing heavily in talent and innovation. It aims to train up to a million AI-skilled professionals and supports over 1,600 startups at Astana Hub. Venture capital surpassed $250 million in 2024, bolstered by a new $1 billion Qazaqstan Venture Group fund.

Infrastructure upgrades, such as a 3,700 km fibre-optic corridor between China and the Caspian Sea, support a growing tech ecosystem.

Regulatory milestones include planned AI law reforms, data‑sovereignty zones like CryptoCity, and digital identity frameworks. These prepare Kazakhstan to become Central Asia’s digital and AI nexus.


AI that serves communities, not the other way round

At the WSIS+20 High-Level Event in Geneva, a vivid discussion unfolded around how countries in the Global South can build AI capacity from the ground up, rooted in local realities rather than externally imposed models. Organised by Diplo and partners, including Kenya’s Permanent Mission to the UN, Microsoft, and IT for Change, the session used the fictional agricultural nation of ‘Landia’ to spotlight the challenges and opportunities of community-centred AI development.

With weak infrastructure, unreliable electricity, and fragmented data ecosystems, Landia embodies the typical constraints many developing nations face as they navigate the AI revolution.

UN Tech Envoy Amandeep Singh Gill presented a forthcoming UN report proposing a five-tiered framework to guide countries from basic AI literacy to full development capacity. He stressed the need for tailored, coordinated international support—backed by a potential global AI fund—to avoid the fragmented aid pitfalls seen in climate and health sectors.


Microsoft’s Ashutosh Chadha echoed that AI readiness is not just a tech issue but fundamentally a policy challenge, highlighting the importance of data governance, education systems, and digital infrastructure as foundations for meaningful AI use.

Civil society voices, particularly from IT4Change’s Anita Gurumurthy and Nandini Chami, pushed for ‘regenerative AI’—AI that is indigenous, inclusive, and modular. They advocated for small-scale models that can run on local data and infrastructures, proposing creative use of community media archives and agroecological knowledge.

Speakers stressed that technology should adapt to community needs, not the reverse, and that AI must augment—not displace—traditional practices, especially in agriculture where livelihoods are at stake.


Ultimately, the session crystallised around a core principle: AI must be developed with—not for—local communities. Participants called for training unemployed youth to support rural farmers with accessible AI tools, urged governments to invest in basic infrastructure alongside AI capacity, and warned against replicating inequalities through automation.

The session concluded with optimism and a commitment to continue this global-local dialogue beyond Geneva, ensuring AI’s future in the Global South is not only technologically viable, but socially just.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UN leaders chart inclusive digital future at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UN leaders gathered for a pivotal dialogue on shaping an inclusive digital transformation, marking two decades since the World Summit on the Information Society (WSIS). Speakers across the UN system emphasised that technology must serve people, not vice versa.

They highlighted that bridging the digital divide is critical to ensuring that innovations like AI uplift all of humanity, not just those in advanced economies. Without equitable access, the benefits of digital transformation risk reinforcing existing inequalities and leaving millions behind.

The discussion showcased how digital technologies are already transforming disaster response and climate resilience. The World Meteorological Organization and the UN Office for Disaster Risk Reduction illustrated how AI powers early warning systems and real-time risk analysis, saving lives in vulnerable regions.

Meanwhile, the Food and Agriculture Organization of the UN underscored the need to align technology with basic human needs, reminding the audience that ‘AI is not food,’ and calling for thoughtful, efficient deployment of digital tools to address global hunger and development.

Workforce transformation and leadership in the AI era also featured prominently. Leaders from the International Labour Organization and UNITAR stressed that while AI may replace some roles, it will augment many more, making digital literacy, ethical foresight, and collaborative governance essential skills. Examples from within the UN system itself, such as the digitisation of the Joint Staff Pension Fund through facial recognition and blockchain, demonstrated how innovation can enhance services without sacrificing inclusivity or ethics.

As the session closed, speakers collectively reaffirmed the importance of human rights, international cooperation, and shared digital governance. They stressed that the future of global development hinges on treating digital infrastructure and knowledge as public goods.

With the WSIS framework and Global Digital Compact as guideposts, UN leaders called for sustained, unified efforts to ensure that digital transformation uplifts every community and contributes meaningfully to the Sustainable Development Goals.


EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.


Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly via invite during the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google’s and Microsoft’s browsers but also compete with OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership programme to address concerns and build collaborative relationships with news organisations such as Forbes and Dow Jones.


X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Elon Musk’s controversial leadership and the ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.
