FlySafair introduces AI interface for smarter bookings

South African airline FlySafair has introduced Lindi, an AI-powered interface to assist customers with booking and travel management. Accessible 24/7 via WhatsApp, Lindi can handle single-passenger flight bookings, seat or name changes, and provide travel details.

FlySafair is the first South African carrier to implement a free AI travel assistant capable of managing bookings, setting a new benchmark for customer service. The initiative reflects the airline’s commitment to affordable, efficient, and tech-driven travel experiences.

Chief marketing officer Kirby Gordon said the technology offers a scalable way to provide each passenger with a virtual assistant. The airline aims to expand Lindi’s capabilities to improve service quality and customer satisfaction further.

FlySafair hopes Lindi’s human-like interaction will redefine digital engagement in the aviation industry and demonstrate practical value as AI becomes more embedded in everyday life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI rejects Robinhood’s token offering

OpenAI has publicly disavowed Robinhood’s decision to sell so-called ‘OpenAI tokens’, warning that these blockchain-based contracts do not offer real equity in the company.

In a statement posted on X, OpenAI made clear that it had not approved, endorsed, or participated in the initiative and emphasised that any equity transfer requires its direct consent.

Robinhood recently announced plans to offer tokenised access to private firms like OpenAI and SpaceX for investors in the EU. The tokens do not represent actual shares but mimic price movements using blockchain contracts.

Despite OpenAI’s sharp rejection, Robinhood’s stock surged to record highs following the announcement.

A Robinhood spokesperson later claimed the tokens were linked to a special purpose vehicle (SPV) that owns OpenAI shares, though SPVs do not equate to direct ownership either.

The company said the move aims to give everyday investors indirect exposure to high-profile startups through digital contracts.

Robinhood CEO Vlad Tenev defended the strategy on X, saying the token sale was just the beginning of a broader effort to democratise access to private markets.

OpenAI, meanwhile, declined to comment further.


ChatGPT use among students raises concerns over critical thinking

A university lecturer in the United States says many students are increasingly relying on ChatGPT to write essays—even about the ethics of AI—raising concerns about critical thinking in higher education.

Dr Jocelyn Leitzinger from the University of Illinois noticed that nearly half of her 180 students used the tool inappropriately last semester. Some submissions even repeated generic names like ‘Sally’ in personal anecdotes, hinting at AI-generated content.

A recent preprint study by researchers at MIT appears to back those concerns. In a small experiment involving 54 adult learners, those who used ChatGPT produced essays with weaker content and less brain activity, as recorded by EEG headsets.

Researchers found that 80% of the AI-assisted group could not recall anything from their essay afterwards. In contrast, the ‘brain-only’ group—those who wrote without assistance—performed better in both comprehension and neural engagement.

Despite some media headlines suggesting that ChatGPT makes users lazy or less intelligent, the researchers stress the need for caution. They argue more rigorous studies are required to understand how AI affects learning and thinking.

Educators say the tool’s polished writing often lacks originality and depth. One student admitted using ChatGPT for ideas and lecture summaries but drew the line at letting it write his assignments.

Dr Leitzinger worries that relying too heavily on AI skips essential steps in learning. ‘Writing is thinking, thinking is writing,’ she said. ‘When we eliminate that process, what does that mean for thinking?’


Xbox projects cancelled amid Microsoft layoffs

Microsoft has confirmed plans to cut up to 9,000 jobs—roughly 4% of its global workforce—in its latest round of redundancies this year. The company cited the need to adapt to a rapidly evolving market, while pressing ahead with major investments in artificial intelligence.

Although Microsoft did not specify which divisions will be affected, reports suggest its Xbox gaming unit will face significant cuts. According to internal emails, the reboot of Perfect Dark and the game Everwild have been cancelled, and The Initiative, the studio behind Perfect Dark, will shut down.

Additional layoffs are impacting other gaming studios, including Turn 10 and ZeniMax Online Studios. ZeniMax’s long-time director Matt Firor has announced his departure. Meanwhile, Ireland’s Romero Games has also been affected after funding for its project was pulled by a publisher.

The upcoming job cuts will mark Microsoft’s fourth round of layoffs in 2025. Over 800 affected roles are based in Washington state, including in Redmond and Bellevue, key Microsoft hubs. The company is currently investing $80bn in AI infrastructure, including data centres and chips.

Microsoft’s AI push has seen it hire AI pioneer Mustafa Suleyman to lead its Microsoft AI division and deepen ties with OpenAI. However, tensions have reportedly grown in that relationship, and Bloomberg has noted that Microsoft struggles to sell its Copilot tool because many users prefer ChatGPT.

At the same time, AI talent wars are heating up. Meta has reportedly offered huge bonuses to poach researchers, while Amazon’s Andy Jassy said last month that AI would eventually replace certain roles at his company.


Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, and later revealed that the spokesperson himself was fake.

The band denies any connection to the individual, stating on Spotify that the account impersonating them on X is also false.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, although Spotify has remained silent.

While CEO Daniel Ek has said AI music isn’t banned from the platform, he expressed concerns about mimicking real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.


LFR tech helps catch dangerous offenders, but Liberty urges legal safeguards

Live facial recognition (LFR) technology used by the Metropolitan Police has led to more than 1,000 arrests, including dangerous offenders wanted for serious crimes, such as rape, robbery and child protection breaches.

Among those arrested was David Cheneler, 73, a registered sex offender spotted by LFR cameras in Camberwell, south London. He was found with a young girl and later jailed for two years for breaching a sexual harm prevention order.

Another arrest included Adenola Akindutire, linked to a machete robbery in Hayes that left a man with life-changing injuries. Stopped during an LFR operation in Stratford, he was carrying a false passport and admitted to several violent offences.

LFR also helped identify Darren Dubarry, 50, who was wanted for theft. He was stopped with stolen designer goods after passing an LFR-equipped van in east London.

The Met says the technology has helped arrest over 100 people linked to serious violence against women and girls, including domestic abuse, stalking, and strangulation.

Lindsey Chiswick, who leads the Met’s LFR work, said the system is helping deliver justice more efficiently, calling it a ‘powerful tool’ that is removing dangerous offenders from the streets of London.

While police say biometric data is not retained for those not flagged, rights groups remain concerned. Liberty says nearly 1.9 million faces were scanned between January 2022 and March 2024, and is calling for new laws to govern police use of facial recognition.

Charlie Whelton of Liberty said the tech risks infringing rights and must be regulated. ‘We shouldn’t leave police forces to come up with frameworks on their own,’ he warned, urging Parliament to legislate before further deployment.


AI errors are creating new jobs for human experts

A growing number of writers and developers are finding steady work correcting the flawed outputs of AI systems that businesses use.

From bland marketing copy to broken website code, over-reliance on AI tools like ChatGPT is causing costly setbacks that require human intervention.

In Arizona, writer Sarah Skidd was paid $100 an hour to rewrite poor-quality website text that had been produced entirely by AI.

Her experience is echoed by other professionals who now spend most of their time reworking AI content rather than writing from scratch.

UK digital agency owner Sophie Warner reports that clients increasingly use AI-generated code, which has sometimes crashed websites and left businesses vulnerable to security risks. The resulting fixes often take longer and cost more than hiring an expert.

Experts warn that businesses adopt AI too hastily, without proper infrastructure or an understanding of its limitations.

While AI offers benefits, poor implementation can lead to reputational damage, increased costs, and a growing dependence on professionals to clean up the mess.


Ari Aster warns of AI’s creeping normality ahead of Eddington release

Ari Aster, the director behind Hereditary and Midsommar, is sounding the alarm on AI. In a recent Letterboxd interview promoting his upcoming A24 film Eddington, Aster described his growing unease with AI.

He framed it as a quasi-religious force reshaping reality in ways that are already irreversible. ‘If you talk to these engineers… they talk about AI as a god,’ said Aster. ‘They’re very worshipful of this thing. Whatever space there was between our lived reality and this imaginal reality — that’s disappearing.’

Aster’s comments suggest concern not just about the technology, but about the mindset surrounding its development. Eddington, set during the COVID-19 pandemic, is a neo-Western dark comedy. It stars Joaquin Phoenix and Pedro Pascal as a sheriff and a mayor locked in a bitter digital feud.

The film reflects Aster’s fears about the dehumanising impact of modern technology. He drew from the ideas of media theorist Marshall McLuhan, referencing his phrase: ‘Man is the sex organ of the machine world.’ Aster asked, ‘Is this technology an extension of us, are we extensions of this technology, or are we here to usher it into being?’

The implication is clear: AI may not simply assist humanity—it might define it. Aster’s films often explore existential dread and loss of control. His perspective on AI taps into similar fears, but in real life. ‘The most uncanny thing about it is that it’s less uncanny than I want it to be,’ he said.

‘I see AI-generated videos, and they look like life. The longer we live in them, the more normal they become.’ The normalisation of artificial content strikes at the core of Aster’s unease. It also mirrors recent tensions in Hollywood over AI’s role in creative industries.

In 2023, WGA and SAG-AFTRA fought for protections against AI-generated scripts and likenesses. Their strike shut down the industry for months, but won language limiting AI use.

The battles highlighted the same issue Aster warns of—losing artistic agency to machines. ‘What happens when content becomes so seamless, it replaces real creativity?’ he seems to ask.

‘Something huge is happening right now, and we have no say in it,’ he said. ‘I can’t believe we’re actually going to live through this and see what happens. Holy cow.’ Eddington is scheduled for release in the United States on 18 July 2025.


Southern Water uses AI to cut sewer floods

AI used in the sewer system has helped prevent homes in West Sussex from flooding, Southern Water has confirmed. The system was able to detect a fatberg in East Lavington before it caused damage.

The AI monitors sewer flow patterns and distinguishes between regular use, rainfall and developing blockages. On 16 June, digital sensors flagged an anomaly—leading teams to clear the fatberg before wastewater could flood gardens or homes.
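Southern Water has not published the details of its detection method, but anomaly flagging of this kind is often a rolling-statistics check: a reading is suspicious when it drifts far from the recent baseline for that sensor. The Python sketch below is purely illustrative; the function name, window size, and threshold are all hypothetical, not the utility's actual system.

```python
from statistics import mean, stdev

def flag_anomalies(levels, window=12, z_thresh=3.0):
    """Flag indices where a sewer-level reading deviates sharply
    from the rolling baseline of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(levels)):
        baseline = levels[i - window:i]
        mu = mean(baseline)
        sigma = stdev(baseline)
        if sigma == 0:
            continue  # perfectly flat baseline: skip rather than divide by zero
        if abs(levels[i] - mu) / sigma > z_thresh:
            flagged.append(i)
    return flagged

# Steady flow with one sudden rise, as a developing blockage might cause.
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1,
            10.0, 10.2, 9.9, 10.1, 18.5]
print(flag_anomalies(readings))  # → [12]: only the final reading stands out
```

A production system would also need to separate rainfall-driven rises from blockage-driven ones, which is why the article notes the AI distinguishes between regular use, rainfall, and developing blockages.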

‘We’re spotting hundreds of potential blockages before it’s too late,’ said Daniel McElhinney, proactive operations control manager at Southern Water. AI has reduced internal flooding by 40% and external flooding by 15%, the utility said.

Around 32,000 sewer level monitors are in place, checking for unusual flow activity that could signal a blockage or leak. Blocked sewers remain the main cause of pollution incidents, according to the company.

‘Most customers don’t realise the average sewer is only the size of an orange,’ McElhinney added. Even a small amount of cooking fat, combined with unflushable items, can lead to fatbergs and serious disruption.


United Nations Office for Digital and Emerging Technologies

On 1 January 2025, the Office of the Secretary-General’s Envoy on Technology transitioned to a new UN Office for Digital and Emerging Technologies (ODET). This historic development flows from a decision by the UN General Assembly (UNGA) on 24 December 2024, following the adoption of the Global Digital Compact (GDC) at the Summit of the Future in September 2024.

The establishment of ODET reflects the growing importance of a coordinated, inclusive and multistakeholder approach to the governance of technologies anchored in the UN Charter, human rights, and the sustainable development agenda. With a strengthened mandate, ODET helps the UN address more effectively the opportunities and challenges posed by today’s rapidly evolving technological landscape. A key focus for the office is supporting the follow-up and implementation of the GDC, including its decisions on the governance of AI.

Digital activities


Digital policy issues

Global Digital Compact

Adopted by world leaders in September 2024 at the Summit of the Future in New York, the GDC is a comprehensive framework for global governance of digital technology and AI. Twenty years after the World Summit on the Information Society (WSIS), it charts a roadmap for global digital cooperation to harness the immense potential of digital technology and close digital divides.

Negotiated by 193 member states and informed by global consultations, the GDC commits governments to upholding international law and human rights online and to taking concrete steps to make the digital space safe and secure.

The GDC recognises the critical contributions of the private sector, technical communities, researchers, and civil society to digital cooperation. It calls on all stakeholders to engage in realising an open, safe, and secure digital future for all.

The GDC pledges a range of ambitious actions. To close all digital divides and deliver an inclusive digital economy, it calls for connecting all people, schools, and hospitals to the internet; making digital technologies more accessible and affordable to everyone, including in diverse languages and formats; increasing investment in digital public goods and digital public infrastructure; and supporting women, youth innovators, and SMEs.

To build an inclusive, open, safe, and secure digital space, the GDC calls for strengthening legal and policy frameworks to protect children online; ensuring that the internet remains open, global, stable, and secure; and promoting access to independent, fact-based, and timely information to counter mis- and disinformation.

To strengthen international data governance and govern AI for humanity, it supports the development of interoperable national data governance frameworks; the establishment of an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance; and the development of AI capacity-building partnerships, including consideration of a Global Fund on AI.

ODET is facilitating the GDC’s endorsement process and supporting the integration of its commitments into the updated WSIS framework. This approach aims to strengthen existing structures while avoiding duplication, with both processes aligned in their vision of an inclusive, safe, secure, and human-centred digital society. Implementation and the WSIS+20 review will continue through 2025, culminating in a high-level review in 2027.

Turning the GDC into action requires collective effort. Thousands of people and organisations contributed to its development, and all stakeholders are encouraged to engage in shaping a digital future for all.

AI governance 

To foster a globally inclusive approach to the governance of AI, the UN Secretary-General convened a multistakeholder High-level Advisory Body on AI for 12 months starting in October 2023. The 39 members, selected from over 2,000 nominations and serving in their personal capacity, brought diverse expertise across public policy, science, technology, human rights, and more.

The Body engaged and consulted widely with existing and emerging initiatives and international organisations to bridge perspectives across stakeholder groups and networks. Working at speed, it delivered an interim report in two months, consulted over 2,000 stakeholders in five months, and released its final report, Governing AI for Humanity, in September 2024.

The report outlines a blueprint for addressing AI-related risks and sharing its benefits globally. It urges the UN to lay the foundations of the first globally inclusive and distributed AI governance architecture; proposes seven recommendations to address existing governance gaps; and calls on all governments and stakeholders to work together to foster development and protect human rights. It also proposes light institutional mechanisms to complement existing efforts and enable global cooperation on AI governance that is agile, adaptive, and effective in keeping pace with the technology’s rapid evolution.

An Independent International Scientific Panel on AI and a Global Dialogue on AI Governance, outcomes of the GDC 

Following the adoption of the GDC, member states agreed to continue collaborating on the development of new mechanisms to support the governance of AI. Two key proposals included in the GDC are the establishment of an Independent International Scientific Panel on Artificial Intelligence and the launch of a Global Dialogue on AI Governance.

These mechanisms aim to address the critical gaps identified by the Secretary-General’s High-level Advisory Body on AI. At present, there is no single, impartial source of authoritative scientific knowledge on AI. As a result, policymakers face significant information asymmetries, both among themselves and in relation to leading AI developers. At the same time, international AI governance remains fragmented. Of the 193 UN member states, only seven currently participate in all of the seven most prominent global AI initiatives, while 118 countries, primarily in the Global South, participate in none and are left without a voice in shaping global AI norms.

The Independent International Scientific Panel on AI and the Global Dialogue on AI Governance represent an important step toward building a more inclusive and coherent global governance architecture for AI, one grounded in international law and human rights. ODET is engaged in supporting the intergovernmental process co-facilitated by Costa Rica and Spain, appointed by the President of the General Assembly. An elements paper and zero draft were released in April 2025, reflecting inputs from consultations with member states and stakeholders.

ODET is also preparing a report on Innovative Voluntary Financing Options for AI capacity building, drawing on recommendations from the High-level Advisory Body on AI on a global fund to complement existing UN mechanisms. The report will be submitted to the General Assembly at its 80th session.

Understanding the implications of AI 

In June 2024, a special report was developed in partnership with the ILO on the topic of AI and the world of work. The publication, Mind the Divide: Shaping a Global Perspective on the Future of Work, offers recommendations for harnessing the potential of AI while mitigating its impacts on employment. It emphasises the importance of workforce empowerment, AI capacity building, and sustained social dialogue.

Digital Public Infrastructure

In his policy brief on A Global Digital Compact – an Open, Free and Secure Digital Future for All, the UN Secretary-General called for the development of common frameworks and standards for DPI. Like roads and bridges, DPI comprises digital building blocks that enable governments to deliver inclusive and secure services at scale. While some countries are deploying DPI rapidly, others are in the early stages of their digital transformation. Regardless of the stage, robust safeguards are essential to ensure DPI is safe, trusted, and inclusive for all.

To advance this agenda, ODET – together with the Government of Egypt, UNDP, ITU, the World Bank, and Co-Develop – hosted the inaugural Global DPI Summit in October 2024, convening participants from over 100 countries to explore the future of digital public infrastructure and exchange knowledge, practices, and experiences across regions.

In parallel, ODET and UNDP jointly stewarded the development of the Universal DPI Safeguards Framework to help unlock the full potential of DPI while mitigating its risks. The framework was shaped through collaborative, multistakeholder working groups with diverse experts from government, civil society, academia, donor institutions, and the private sector. It was informed by consultations with 12 international organisations and countries, and drew additional input from 13 public consultations and over 100 public contributions.

The resulting Universal Safeguards Framework includes more than 250 recommendations addressing both process and practice. It provides practical guidance to help stakeholders ensure that DPI implementation is inclusive, rights-based, and aligned with the SDGs. In 2025, a second cohort of working group members, along with an advisory body, is refining and advancing the framework toward implementation.

Open source

In his Roadmap for Digital Cooperation, the UN Secretary-General recognised the critical role of open source solutions in advancing the SDGs. Open source acts as a powerful equaliser in the global digital landscape, promoting equitable access to innovation regardless of economic status. By reducing costs and fostering local innovation and skill development, open source technologies enable countries at all levels of development to build tailored, context-specific solutions. Given its convening power and its role as a platform for governments, the UN is uniquely positioned to promote the effective use of open source across the public sector.

To support this effort, ODET has collaborated with the UN Office of Information and Communications Technology (OICT) to host the OSPOs for Good Symposium—a global convening that brought together stakeholders from governments, civil society, and the open source community. With over 500 participants in 2024, the conference facilitated discussions on the governance, sustainability, and funding of open source technologies, responding to the growing urgency to accelerate digital cooperation in support of the SDGs. The 2024 edition also expanded its focus to explore how open source networks can foster international collaboration around digital public goods and digital public infrastructure, both within and across countries. The 2025 edition, revamped as UN Open Source Week, will take place from 16–20 June and feature a broader range of programming, including the UN Tech Over Hackathon, OSPOs for Good, a dedicated Digital Public Infrastructure Day, and a series of partner-organised side events.

Social media channels

LinkedIn: United Nations Office for Digital and Emerging Technologies

X: ODET_UN

Bluesky: @unodet.bsky.social

YouTube: @UNODET

Contact: odet@un.org