Delta Air Lines rolls out AI for personalised airfare

Delta Air Lines is shifting the landscape of airfare by leveraging AI to personalise ticket prices. Moving beyond fixed fares, Delta aims to tailor prices closely to each traveller.

Instead of relying on static prices, the system analyses customer habits, booking history, and even the time of day to predict how much an individual might be willing to pay. By the end of this year, Delta aims to set 20% of its ticket prices dynamically using AI.

That target represents a sevenfold increase on just twelve months earlier. Depending on a passenger’s circumstances and shopping behaviour, the approach could mean better deals or higher costs.

To see what this change means for consumers, it helps to understand how the system operates, why Delta is adopting it, and what it implies for household budgets. Traditional ticket pricing has long relied on ‘fare buckets’, which group customers according to when and how they book.

Delta’s new AI ticket pricing system fundamentally shifts away from these static rates. It analyses real-time information to estimate what a specific customer is likely to spend on a seat for any given flight.
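
To make the mechanics concrete, the toy sketch below shows how a handful of signals could be folded into a personalised fare. It is purely illustrative: the multipliers, the BookingContext fields and the estimate_personal_fare function are invented for this example and do not reflect Delta’s or Fetcherr’s actual model.

```python
# Hypothetical sketch of per-customer fare estimation; NOT Delta's or Fetcherr's
# real system, just an illustration of the general idea.
from dataclasses import dataclass

@dataclass
class BookingContext:
    base_fare: float           # the flight's unpersonalised reference fare
    days_until_departure: int
    past_bookings: int         # how often this customer has flown the route
    searched_late_night: bool  # example behavioural signal

def estimate_personal_fare(ctx: BookingContext) -> float:
    """Scale the base fare by crude proxies for willingness to pay."""
    multiplier = 1.0
    if ctx.days_until_departure < 7:
        multiplier += 0.20     # last-minute bookings tolerate higher prices
    if ctx.past_bookings > 5:
        multiplier += 0.10     # frequent flyers on the route are less price-sensitive
    if ctx.searched_late_night:
        multiplier -= 0.05     # off-peak searches might receive a small discount
    return round(ctx.base_fare * multiplier, 2)

print(estimate_personal_fare(BookingContext(300.0, 3, 8, False)))  # 390.0
```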

Glen Hauenstein, Delta’s President, describes this as a complete re-engineering of pricing. He characterises the AI as a ‘super analyst’ working around the clock to identify the optimal price for every traveller, every time.

The airline has partnered with Fetcherr, which provides the underlying technological infrastructure and also works with other global airlines. Airlines do not, of course, adopt advanced pricing systems in order to reduce revenue.

Delta reports that initial results from its AI-driven pricing indicate ‘amazingly favourable’ revenues. The airline believes AI will maximise profits by more accurately aligning fares with each passenger’s willingness to pay.

That willingness to pay is inferred from a vast array of data inputs, ranging from individual booking history to prevailing market trends. Delta’s core strategy is straightforward: to offer the price available for a specific flight, at a particular time, to you, the individual consumer.

Consumers who have previously observed frequent fluctuations in airfare should now anticipate even greater volatility. Delta’s new system could present a different price to one person compared to another for the same seat, with the calculation performed in real-time by the AI.

Passengers might receive special offers or early discounts if the AI identifies a need to fill seats quickly. However, discerning whether one is securing a ‘fair’ deal becomes significantly more challenging. The displayed price is now a function of what the AI believes an individual will pay, rather than a universal rate applicable to all.

The shift has prompted concerns among some privacy advocates. They worry that such personalised pricing could disadvantage customers who lack the resources or time to search extensively for the most favourable deals.

Consequently, those less able to shop around may be charged the highest prices. Delta has been approached for comment, and a spokesperson stated: ‘There is no fare product Delta has ever used, is testing, or plans to use that targets customers with individualised offers based on personal information or otherwise.

Various market forces have driven the dynamic pricing model used in the global industry for decades, with new tech streamlining this process. Delta always complies with regulations around pricing and disclosures.’

Delta’s openness regarding this significant policy change has attracted considerable national attention. Other airlines are already trialling their own AI fare systems, and industry experts widely anticipate that the rest of the sector will soon follow suit.

Nevertheless, privacy advocates and several lawmakers are vocalising strong objections. Critics contend that allowing AI to determine pricing behind the scenes is akin to airlines ‘hacking our brains’ to ascertain the maximum price a customer will accept, as described by Consumer Watchdog.

The legal ramifications of this approach are still unfolding. While price variation based on demand or timing is not novel, the use of AI for ultra-personalised pricing raises uncomfortable questions about potential discrimination and fairness, particularly given prior research suggesting that economically disadvantaged customers frequently receive less favourable deals.

Delta’s AI pricing system personalises every airfare, making each search and its price specific to the user. Universal ticket prices are fading as AI analyses booking habits and market conditions. The technology can quickly offer special deals when seats need filling.

Conversely, the price can increase if the system senses a greater willingness to pay. Shopping around is now essential. Using a VPN may help by masking location and IP address, making it harder for airlines to track searches and adjust prices based on geographic region.

Making quick decisions might result in savings, but procrastination could lead to a price increase. Privacy is paramount; the airline gains insights into a user’s habits with every search. A digital footprint directly influences fares. In essence, consumers now possess both increased power and greater responsibility.

Being astute, flexible, and constantly comparing before purchasing is vital. Delta’s transition to AI-driven ticket pricing marks a significant shift in how consumers purchase flights.

While offering potential for enhanced flexibility and efficiency, it simultaneously raises substantial questions concerning fairness, privacy, and transparency.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot captures veteran workers’ knowledge to support UK care teams

Peterborough City Council has turned the knowledge of veteran therapy practitioner Geraldine Jinks into an AI chatbot to support adult social care workers.

With 35 years of experience, Jinks was frequently approached by colleagues seeking advice, which created time pressures despite her willingness to help.

In response, the council developed a digital assistant called Hey Geraldine, built on the My AskAI platform, which mimics her direct and friendly communication style to provide instant support to staff.

Developed in 2023, the chatbot offers practical answers to everyday care-related questions, such as how to support patients with memory issues or discharge planning. Jinks collaborated with the tech team to train the AI, writing all the responses herself to ensure consistency and clarity.
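
As a rough illustration of the general approach, the sketch below matches a staff question to the closest pre-written answer by keyword overlap. It is a deliberately simplified stand-in: Hey Geraldine runs on the My AskAI platform, and the sample questions and answers here are invented, not Jinks’s real guidance.

```python
# Toy retrieval over pre-written answers; purely illustrative, not the My AskAI
# implementation, and the canned answers below are made up for this example.
canned_answers = {
    "memory issues": "Start with a memory aid assessment and involve the family early.",
    "discharge planning": "Confirm the home environment and equipment before agreeing a date.",
}

def answer(question: str) -> str:
    words = set(question.lower().split())
    # Pick the topic whose keywords overlap most with the question.
    best_topic = max(canned_answers, key=lambda topic: len(words & set(topic.split())))
    return canned_answers[best_topic]

print(answer("What should I check when doing discharge planning for Mr Smith?"))
```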

Thanks to its natural tone and humanlike advice, some colleagues even mistook the chatbot for the real Geraldine.

The council hopes Hey Geraldine will reduce hospital discharge delays and improve patient access to assistive technology. Councillor Shabina Qayyum, who also works as a GP, said the tool empowers staff to help patients regain independence instead of facing unnecessary delays.

The chatbot is seen as preserving valuable institutional knowledge while improving frontline efficiency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google seeks balance between user satisfaction and ecosystem health

At the Search Central Live Deep Dive 2025 event, Google’s Gary Illyes acknowledged that the company is still calibrating how to weigh user needs, especially around AI-powered features, against the health of the broader web publishing community.

The company gathers internal survey data and tracks the adoption of external AI tools to assess satisfaction and guide product decisions.

While Google aims to enrich user experience with AI Overviews, critics warn these features may shrink organic traffic for publishers, as users often consume information without visiting source sites.

Illyes reaffirmed that Google does not intend disruption but is navigating a trade-off between serving users efficiently and maintaining a healthy content ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI model, Aeneas, assists historians in interpreting Roman inscriptions

Thanks to AI, historians studying ancient Rome now have a powerful new tool.

A research team, including scholars from Google DeepMind and the University of Nottingham, developed a generative AI model called Aeneas that can help interpret damaged Latin inscriptions by estimating their location and date and suggesting likely missing text.

Each year, roughly 1,500 new Latin inscriptions are unearthed, ranging from imperial decrees to everyday graffiti. These inscriptions, written by ancient Romans across all social classes, offer rare, first-hand insights into daily life, language, and society.

Yet many of them are incomplete or difficult to contextualise. Traditionally, scholars have had to compare each inscription against hundreds of others manually — a process described as laborious and requiring exceptional expertise.

Aeneas, trained on over 170,000 Latin texts, can now predict when and where an inscription was written across the Roman Empire’s 62 provinces. In one test case, it analysed the famous Res Gestae Divi Augusti, narrowing down the date to the same two options long debated by historians.
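
For a flavour of the restoration task, the toy example below fills a gap in a damaged phrase by finding the word that most often appears in the same slot in a small reference corpus. Aeneas itself is a generative transformer; this frequency-based suggest_missing function and its miniature corpus are only an illustration of the problem, not the model’s method.

```python
# Toy gap-filling by analogy with a reference corpus; illustrative only.
from collections import Counter

corpus = [
    "imp caesar divi f augustus pontifex maximus",
    "imp caesar divi f augustus tribunicia potestate",
    "ti caesar divi augusti f augustus pontifex maximus",
]

def suggest_missing(damaged: str) -> str:
    """Fill the [...] gap with the word most often seen in that slot in the corpus."""
    left, right = damaged.split("[...]")
    left, right = left.split(), right.split()
    candidates = Counter()
    for text in corpus:
        words = text.split()
        for i in range(len(words)):
            # Count words whose surrounding context matches the damaged phrase.
            if words[max(0, i - len(left)):i] == left and words[i + 1:i + 1 + len(right)] == right:
                candidates[words[i]] += 1
    if not candidates:
        return damaged  # nothing in the corpus matches the surrounding context
    filled, _ = candidates.most_common(1)[0]
    return " ".join(left + [filled] + right)

print(suggest_missing("imp caesar divi f [...] pontifex maximus"))
```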

Aeneas significantly improved research outcomes when used alongside human expertise instead of replacing it, helping scholars piece together history more efficiently than ever.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Robot artist Ai-Da explores human self-perception

The world’s first ultra-realistic robot artist, Ai-Da, has been prompting profound questions about human-robot interactions, according to her creator.

Designed in Oxford by Aidan Meller, a modern and contemporary art specialist, and built in the UK by Engineered Arts, Ai-Da is a humanoid robot specifically engineered for artistic creation. She recently unveiled a portrait of King Charles III, adding to her notable portfolio.

Meller said that working with the robot has evoked ‘lots of questions about our relationship with ourselves.’ He highlighted how Ai-Da’s artwork ‘drills into some of our time’s biggest concerns and thoughts.’

Ai-Da uses cameras in her eyes to capture images, which are then processed by AI algorithms and converted into real-time coordinates for her robotic arm, enabling her to paint and draw.
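
A minimal sketch of that kind of pipeline, assuming OpenCV is available, is shown below: detect edges in a captured frame and convert edge pixels into a list of (x, y) targets that a plotting arm could follow. The thresholds, ordering heuristic and image_to_path function are assumptions for illustration, not Ai-Da’s actual system.

```python
# Illustrative image-to-stroke pipeline for a drawing robot; not Ai-Da's real code.
import numpy as np
import cv2

def image_to_path(frame: np.ndarray, max_points: int = 500) -> list[tuple[int, int]]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)          # binary edge map
    ys, xs = np.nonzero(edges)                 # pixel coordinates of edge points
    points = list(zip(xs.tolist(), ys.tolist()))
    points.sort()                              # crude left-to-right drawing order
    step = max(1, len(points) // max_points)   # subsample so the arm isn't overloaded
    return points[::step]

# Example with a synthetic frame; a real system would read from the eye cameras.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
cv2.circle(frame, (32, 32), 20, (255, 255, 255), 2)
print(len(image_to_path(frame)), "stroke targets")
```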

Mr Meller explained, ‘You can meet her, talk to her using her language model, and she can then paint and draw you from sight.’

He also observed that people’s preconceptions about robots are often outdated: ‘It’s not until you look a robot in the eye and they say your name that the reality of this new sci-fi world that we are now in takes hold.’

Ai-Da’s contributions to the art world continue to grow. She produced and showcased work at the AI for Good Global Summit 2024 in Geneva, Switzerland, an event held under the auspices of the UN. That same year, her triptych of Enigma code-breaker Alan Turing sold for over £1 million at auction.

Her focus this year shifted to King Charles III, chosen because, as Mr Meller noted, ‘With extraordinary strides that are taking place in technology and again, always questioning our relationship to the environment, we felt that King Charles was an excellent subject.’

Buckingham Palace authorised the display of Ai-Da’s portrait of the King, even though the robot has never met him. Ai-Da, connected to the internet, uses extensive data to inform her choice of subjects, with Mr Meller revealing, ‘Uncannily, and rather nerve-rackingly, we just ask her.’

The conversations generated inform the artwork. Ai-Da also painted a portrait of King Charles’s mother, Queen Elizabeth II, in 2023. Mr Meller shared that the most significant realisation from six years of working with Ai-Da was ‘not so much about how human she is but actually how robotic we are.’

He concluded, ‘We hope Ai-Da’s artwork can be a provocation for that discussion.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guess AI model sparks fashion world debate

A striking new ‘supermodel’ has appeared in the August print edition of Vogue, featuring in a Guess advert for their summer collection. Uniquely, the flawless blonde model is not real, as a small disclaimer reveals she was created using AI.

While Vogue clarifies the AI model’s inclusion was an advertising decision, not editorial, it marks a significant first for the magazine and has ignited widespread controversy.

The development raises serious questions for real models, who have long campaigned for greater diversity, and for consumers, particularly young people, who are already grappling with unrealistic beauty standards.

Seraphinne Vallora, the company behind the controversial Guess advert, was founded by Valentina Gonzalez and Andreea Petrescu. They told the BBC that Guess’s co-founder, Paul Marciano, approached them on Instagram to create an AI model for the brand’s summer campaign.

Valentina Gonzalez explained, ‘We created 10 draft models for him and he selected one brunette woman and one blonde that we developed further.’ Petrescu described AI image generation as a complex process, with their five employees taking up to a month to create a finished product, charging clients like Guess up to the low six figures.

However, plus-size model Felicity Hayward, with over a decade in the industry, criticised the use of AI models, stating it ‘feels lazy and cheap’ and worried it could ‘undermine years of work towards more diversity in the industry.’

Hayward believes the fashion industry, which saw strides in inclusivity in the 2010s, has regressed, leading to fewer bookings for diverse models. She warned, ‘The use of AI models is another kick in the teeth that will disproportionately affect plus-size models.’

Gonzalez and Petrescu insist they do not reinforce narrow beauty standards, with Petrescu claiming, ‘We don’t create unattainable looks – the AI model for Guess looks quite realistic.’ They contended, ‘Ultimately, all adverts are created to look perfect and usually have supermodels in, so what we do is no different.’

While admitting their company’s Instagram shows a lack of diversity, Gonzalez explained to the BBC that attempts to post AI images of women with different skin tones did not gain traction, stating, ‘people do not respond to them – we don’t get any traction or likes.’

They also noted that the technology is not yet advanced enough to create plus-size AI women. The pattern echoes a 2024 Dove campaign that highlighted AI bias by showing image generators consistently producing thin, white, blonde women when asked for ‘the most beautiful woman in the world.’

Vanessa Longley, CEO of eating disorder charity Beat, found the advert ‘worrying,’ telling the BBC, ‘If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder.’

The lack of transparent labelling for AI-generated content in the UK is also a concern, despite Guess having a small disclaimer. Sinead Bovell, a former model and now tech entrepreneur, told the BBC that not clearly labelling AI content is ‘exceptionally problematic’ because ‘AI is already influencing beauty standards.’

Sara Ziff, a former model and founder of Model Alliance, views Guess’s campaign as ‘less about innovation and more about desperation and need to cut costs’, advocating for ‘meaningful protections for workers’ in the industry.

Seraphinne Vallora, however, denies replacing models, with Petrescu explaining, ‘We’re offering companies another choice in how they market a product.’

Despite their website claiming cost-efficiency by ‘eliminating the need for expensive set-ups… hiring models,’ they involve real models and photographers in their AI creation process. Vogue’s decision to run the advert has drawn criticism on social media, with Bovell noting the magazine’s influential position, which means they are ‘in some way ruling it as acceptable.’

Looking ahead, Bovell predicts more AI-generated models but not their total dominance, foreseeing a future where individuals might create personal AI avatars to try on clothes and a potential ‘society opting out’ if AI models become too unattainable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta forms AI powerhouse by appointing Shengjia Zhao as chief scientist

Meta has appointed former OpenAI researcher Shengjia Zhao as Chief Scientist of its newly formed AI division, Meta Superintelligence Labs (MSL).

Zhao, known for his pivotal role in developing ChatGPT, GPT-4, and OpenAI’s first reasoning model, o1, will lead MSL’s research agenda under Alexandr Wang, the former CEO of Scale AI.

Mark Zuckerberg confirmed Zhao’s appointment, saying he had been leading scientific efforts from the start and co-founded the lab.

Meta has aggressively recruited top AI talent to build out MSL, including senior researchers from OpenAI, DeepMind, Apple, Anthropic, and its FAIR lab. Zhao’s presence helps balance the leadership team, as Wang lacks a formal research background.

Meta has reportedly offered massive compensation packages to lure experts, with Zuckerberg even contacting candidates personally and hosting them at his Lake Tahoe estate. MSL will focus on frontier AI, especially reasoning models, in which Meta currently trails competitors.

By 2026, MSL will gain access to Meta’s massive 1-gigawatt Prometheus cloud cluster in Ohio, designed to power large-scale AI training.

The investment and Meta’s parallel FAIR lab, led by Yann LeCun, signal the company’s multi-pronged strategy to catch up with OpenAI and Google in advanced AI research.

The collaboration dynamics between MSL, FAIR, and Meta’s generative AI unit remain unclear, but the company now boasts one of the strongest AI research teams in the industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules for AI to prevent inequality

According to Doreen Bogdan-Martin, head of the UN’s International Telecommunication Union, the world must urgently adopt a unified approach to AI regulation.

She warned that fragmented national strategies could deepen global inequalities and risk leaving billions excluded from the AI revolution.

Bogdan-Martin stressed that only a global framework can ensure AI benefits all of humanity instead of worsening digital divides.

With 85% of countries lacking national AI strategies and 2.6 billion people still offline, she argued that a coordinated effort is essential to bridge access gaps and prevent AI from becoming a tool that advances inequality rather than opportunity.

The ITU chief highlighted the growing divide between regulatory models — from the EU’s strict governance and China’s centralised control to the US’s new deregulatory push under Donald Trump.

She avoided direct criticism of the US strategy but called for dialogue between all regions instead of fragmented policymaking.

Despite the rapid advances of AI in sectors like healthcare, agriculture and education, Bogdan-Martin warned that progress must be inclusive. She also urged more substantial efforts to bring women into AI and tech leadership, pointing to the continued gender imbalance in the sector.

As the first woman to lead ITU, she said her role was not just about achievement but setting a precedent for future generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI forces rethink of cloud infrastructure

Cybersecurity experts warn that reliance on traditional firewalls and legacy VPNs may create more risk than protection. These outdated tools often lack timely updates, making them prime entry points for cyber attackers exploiting AI-powered techniques.

Many businesses depend on ageing infrastructure, unaware that unpatched VPNs and web servers expose them to significant cybersecurity threats. Experts urge companies to abandon these legacy systems and modernise their defences with more adaptive, zero-trust models.

Meanwhile, OpenAI’s reported plans for a productivity suite challenge Microsoft’s dominance, promising simpler interfaces powered by generative AI. The shift could reshape daily workflows by integrating document creation directly with AI tools.

Agentic AI, which performs autonomous tasks without human oversight, also redefines enterprise IT demands. Experts believe traditional cloud tools cannot support such complex systems, prompting calls to rethink cloud strategies for more tailored, resilient platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US push for AI dominance through openness

In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a sweeping AI Action Plan with 103 recommendations. At its core lies an intriguing paradox: the push for open-source AI, typically associated with collaboration and transparency, is now being positioned as a strategic weapon.

As Jovan Kurbalija points out, this plan marks a turning point where open-weight models are framed not just as tools of innovation, but as instruments of geopolitical influence, with the US aiming to seed the global AI ecosystem with American-built systems rooted in ‘national values.’

The plan champions Silicon Valley by curbing regulations, limiting federal scrutiny, and shielding tech giants from legal liability—potentially reinforcing monopolies. It also underlines a national security-first mentality, urging aggressive safeguards against foreign misuse of AI, cyber threats, and misinformation. Notably, it proposes DARPA-led initiatives to unravel the inner workings of large language models, acknowledging that even their creators often can’t fully explain how these systems function.

Internationally, the plan takes a competitive, rather than cooperative, stance. Allies are expected to align with US export controls and values, while multilateral forums like the UN and OECD are dismissed as bureaucratic and misaligned. That bifurcation risks alienating global partners—particularly the EU, which favours heavy AI regulation—while increasing pressure on countries like India and Japan to choose sides in the US–China tech rivalry.

Despite its combative framing, the strategy also nods to inclusion and workforce development, calling for tax-free employer-sponsored AI training, investment in apprenticeships, and growing military academic hubs. Still, as Kurbalija warns, the promise of AI openness may clash with the plan’s underlying nationalistic thrust—raising questions about whether it truly aims to democratise AI, or merely dominate it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!