Delta Air Lines rolls out AI for personalised airfare

Delta Air Lines is shifting the landscape of airfare by leveraging AI to personalise ticket prices. Moving beyond fixed fares, Delta aims to tailor prices closely to each traveller.

Instead of static prices, the system now analyses customer habits, booking history, and even the time of day to predict an individual’s willingness to pay. By the end of the current year, Delta aims to set 20% of its ticket prices dynamically using AI.

The goal represents a significant, sevenfold increase from just twelve months prior. Such a high-tech approach could result in more advantageous deals or elevated costs, depending on a passenger’s unique circumstances and shopping behaviour.

It is crucial to understand how this system operates, Delta’s motivations, and its implications for consumer finances. Traditional ticket pricing has long relied on ‘fare buckets,’ where customers are categorised based on their booking method and timing.

Delta’s new AI ticket pricing system fundamentally shifts away from these static rates. It analyses real-time information to calculate precisely what a specific customer will likely spend on a seat for any given flight.

Glen Hauenstein, Delta’s President, describes this as a complete re-engineering of pricing. He characterises AI as a ‘super analyst’ working continuously, 24/7, to identify the optimal price for every traveller, every time.

The airline has collaborated with Fetcherr, which provides the underlying technological infrastructure and supports other global airlines. Airlines, after all, do not adopt advanced, high-tech pricing systems to reduce revenue.

Delta reports that initial results from its AI-driven pricing indicate ‘amazingly favourable’ revenues. The airline believes AI will maximise profits by more accurately aligning fares with each passenger’s willingness to pay.

However, this is determined by a vast array of data inputs, ranging from individual booking history to prevailing market trends. Delta’s core strategy is straightforward: to offer the price for a specific flight, at a particular time, to you, the individual consumer.

Consumers who have previously observed frequent fluctuations in airfare should now anticipate even greater volatility. Delta’s new system could present a different price to one person compared to another for the same seat, with the calculation performed in real-time by the AI.
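The mechanics can be illustrated with a toy model. The sketch below is purely illustrative and assumes nothing about Delta’s or Fetcherr’s actual system: the features, thresholds, and multipliers are all invented, but they show how demand signals and a traveller’s history could nudge a base fare up or down in real time.

```python
# Toy sketch of personalised fare adjustment (illustrative only; not
# Delta's or Fetcherr's model). All feature names are invented.

from dataclasses import dataclass

@dataclass
class SearchContext:
    base_fare: float         # the flight's unpersonalised fare
    seats_left_ratio: float  # 0.0 (sold out) .. 1.0 (empty)
    bookings_last_year: int  # traveller's booking history
    days_to_departure: int

def personalised_fare(ctx: SearchContext) -> float:
    """Nudge the base fare up or down using simple demand signals."""
    multiplier = 1.0
    # Scarce seats push the price up.
    if ctx.seats_left_ratio < 0.2:
        multiplier += 0.15
    # A near-empty flight close to departure triggers a discount.
    elif ctx.seats_left_ratio > 0.8 and ctx.days_to_departure < 7:
        multiplier -= 0.10
    # Frequent bookers are modelled as less price-sensitive.
    if ctx.bookings_last_year > 5:
        multiplier += 0.05
    return round(ctx.base_fare * multiplier, 2)

# A scarce flight shown to a frequent booker costs more...
print(personalised_fare(SearchContext(300.0, 0.1, 8, 30)))  # → 360.0
# ...while an empty flight near departure is discounted.
print(personalised_fare(SearchContext(300.0, 0.9, 0, 3)))   # → 270.0
```

A production system would replace the hand-set rules with a learned model over far more inputs, but the asymmetry is the same: two people searching for the same seat can see different numbers.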

Passengers might receive special offers or early discounts if the AI identifies a need to fill seats quickly. However, discerning whether one is securing a ‘fair’ deal becomes significantly more challenging. The displayed price is now a function of what the AI believes an individual will pay, rather than a universal rate applicable to all.

The shift has prompted concerns among some privacy advocates. They worry that such personalised pricing could disadvantage customers who lack the resources or time to search extensively for the most favourable deals.

Consequently, those less able to shop around may be charged the highest prices. Delta has been approached for comment, and a spokesperson stated: ‘There is no fare product Delta has ever used, is testing, or plans to use that targets customers with individualised offers based on personal information or otherwise.

Various market forces have driven the dynamic pricing model used in the global industry for decades, with new tech streamlining this process. Delta always complies with regulations around pricing and disclosures.’

Delta’s openness regarding this significant policy change has attracted considerable national attention. Other airlines are already trialling their AI fare systems, and industry experts widely anticipate that the rest of the sector will soon follow suit.

Nevertheless, privacy advocates and several lawmakers are vocalising strong objections. Critics contend that allowing AI to determine pricing behind the scenes is akin to airlines ‘hacking our brains’ to ascertain the maximum price a customer will accept, as described by Consumer Watchdog.

The legal ramifications of this approach are still unfolding. While price variation based on demand or timing is not novel, the use of AI for ultra-personalised pricing raises uncomfortable questions about potential discrimination and fairness, particularly given prior research suggesting that economically disadvantaged customers frequently receive less favourable deals.

Delta’s AI pricing system personalises every airfare, making each search and price specific to the user. Universal ticket prices are fading as AI analyses booking habits and market conditions. The technology can quickly offer special deals or early discounts to fill empty seats.

Conversely, the price can increase if the system senses a greater willingness to pay. Shopping around is now an absolute necessity. Using a VPN can help by masking location and IP address, which limits airlines’ ability to adjust prices based on geographic region.

Making quick decisions might result in savings, but procrastination could lead to a price increase. Privacy is paramount; the airline gains insights into a user’s habits with every search. A digital footprint directly influences fares. In essence, consumers now possess both increased power and greater responsibility.

Being astute, flexible, and constantly comparing before purchasing is vital. Delta’s transition to AI-driven ticket pricing significantly shifts how consumers purchase flight tickets.

While offering potential for enhanced flexibility and efficiency, it simultaneously raises substantial questions concerning fairness, privacy, and transparency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot captures veteran workers’ knowledge to support UK care teams

Peterborough City Council has turned the knowledge of veteran therapy practitioner Geraldine Jinks into an AI chatbot to support adult social care workers.

With 35 years of experience, Jinks was frequently approached by colleagues seeking advice, creating time pressures despite her willingness to help.

In response, the council developed a digital assistant called Hey Geraldine, built on the My AskAI platform, which mimics her direct and friendly communication style to provide instant support to staff.

Developed in 2023, the chatbot offers practical answers to everyday care-related questions, such as how to support patients with memory issues or discharge planning. Jinks collaborated with the tech team to train the AI, writing all the responses herself to ensure consistency and clarity.
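A heavily simplified sketch of this pattern, retrieving hand-written answers by keyword overlap, might look as follows. It assumes nothing about the My AskAI platform’s real pipeline, and the questions and answers below are invented examples.

```python
# Minimal retrieval-style assistant over curated, hand-written answers,
# in the spirit of Hey Geraldine (a sketch; the real system differs).

CURATED_ANSWERS = {  # invented example Q&A pairs
    "how do I support a patient with memory issues":
        "Start with a memory aid assessment and involve the family early.",
    "what is the discharge planning process":
        "Confirm the care package before the ward sets a discharge date.",
}

def answer(question: str) -> str:
    """Return the curated answer whose question shares the most words."""
    words = set(question.lower().split())
    best, overlap = None, 0
    for q, a in CURATED_ANSWERS.items():
        score = len(words & set(q.lower().split()))
        if score > overlap:
            best, overlap = a, score
    return best or "Sorry, I don't have an answer for that yet."

print(answer("How do I support a patient with memory issues?"))
```

Because every answer is written by the expert herself, the assistant can only ever say what she would have said, which is exactly why consistency and tone were preserved.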

Thanks to its natural tone and humanlike advice, some colleagues even mistook the chatbot for the real Geraldine.

The council hopes Hey Geraldine will reduce hospital discharge delays and improve patient access to assistive technology. Councillor Shabina Qayyum, who also works as a GP, said the tool empowers staff to help patients regain independence instead of facing unnecessary delays.

The chatbot is seen as preserving valuable institutional knowledge while improving frontline efficiency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google seeks balance between user satisfaction and ecosystem health

At the Search Central Live Deep Dive 2025 event, Google’s Gary Illyes acknowledged that the company is still calibrating how to weigh user needs, especially around AI-powered features, against the health of the broader web publishing community.

The company gathers internal survey data and tracks the adoption of external AI tools to assess satisfaction and guide product decisions.

While Google aims to enrich user experience with AI Overviews, critics warn these features may shrink organic traffic for publishers, as users often consume information without visiting source sites.

Illyes reaffirmed that Google does not intend disruption but is navigating a trade-off between serving users efficiently and maintaining a healthy content ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Allianz breach affects most US customers

Allianz Life has confirmed a major cyber breach that exposed sensitive data from most of its 1.4 million customers in North America.

The attack was traced back to 16 July, when a threat actor accessed a third-party cloud system using social engineering tactics.

The cybersecurity breach affected a customer relationship management platform but did not compromise the company’s core network or policy systems.

Allianz Life acted swiftly by notifying the FBI and other regulators, including the attorney general’s office in Maine.

Those affected are being offered two years of credit monitoring and identity theft protection. The company has begun contacting affected individuals but declined to reveal the full number involved, citing an ongoing investigation.

No other Allianz subsidiaries were affected by the breach. Allianz Life employs around 2,000 staff in the US and remains a key player within the global insurer’s North American operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK enforces age checks to block harmful online content for children

The United Kingdom has introduced new age verification laws to prevent children from accessing harmful online content, marking a significant shift in digital child protection.

The measures, enforced by media regulator Ofcom, require websites and apps to implement strict age checks such as facial recognition and credit card verification.

Around 6,000 pornography websites have already agreed to the new regulations, which stem from the 2023 Online Safety Act. The rules also target content related to suicide, self-harm, eating disorders and online violence, not just pornography.

Companies failing to comply risk fines of up to £18 million or 10% of global revenue, and senior executives could face criminal charges if they ignore Ofcom’s directives.

Technology Secretary Peter Kyle described the move as a turning point, saying children will now experience a ‘different internet for the first time’.

Ofcom data shows that around 500,000 children aged eight to fourteen encountered online pornography in just one month, highlighting the urgency of the reforms. Campaigners, including the NSPCC, called the new rules a ‘milestone’, though they warned loopholes could remain.

The UK government is also exploring further restrictions, including a potential daily two-hour time limit on social media use for under-16s. Kyle has promised more announcements soon, as Britain moves to hold tech platforms accountable instead of leaving children exposed to harmful content online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI forces rethink of cloud infrastructure

Cybersecurity experts warn that reliance on traditional firewalls and legacy VPNs may create more risk than protection. These outdated tools often lack timely updates, making them prime entry points for cyber attackers exploiting AI-powered techniques.

Many businesses depend on ageing infrastructure, unaware that unpatched VPNs and web servers expose them to significant cybersecurity threats. Experts urge companies to abandon these legacy systems and modernise their defences with more adaptive, zero-trust models.

Meanwhile, OpenAI’s reported plans for a productivity suite challenge Microsoft’s dominance, promising simpler interfaces powered by generative AI. The shift could reshape daily workflows by integrating document creation directly with AI tools.

Agentic AI, which performs autonomous tasks without human oversight, also redefines enterprise IT demands. Experts believe traditional cloud tools cannot support such complex systems, prompting calls to rethink cloud strategies for more tailored, resilient platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US push for AI dominance through openness

In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a sweeping AI Action Plan with 103 recommendations. At its core lies an intriguing paradox: the push for open-source AI, typically associated with collaboration and transparency, is now being positioned as a strategic weapon.

As Jovan Kurbalija points out, this plan marks a turning point where open-weight models are framed not just as tools of innovation, but as instruments of geopolitical influence, with the US aiming to seed the global AI ecosystem with American-built systems rooted in ‘national values.’

The plan champions Silicon Valley by curbing regulations, limiting federal scrutiny, and shielding tech giants from legal liability—potentially reinforcing monopolies. It also underlines a national security-first mentality, urging aggressive safeguards against foreign misuse of AI, cyber threats, and misinformation. Notably, it proposes DARPA-led initiatives to unravel the inner workings of large language models, acknowledging that even their creators often can’t fully explain how these systems function.

Internationally, the plan takes a competitive, rather than cooperative, stance. Allies are expected to align with US export controls and values, while multilateral forums like the UN and OECD are dismissed as bureaucratic and misaligned. That bifurcation risks alienating global partners—particularly the EU, which favours heavy AI regulation—while increasing pressure on countries like India and Japan to choose sides in the US–China tech rivalry.

Despite its combative framing, the strategy also nods to inclusion and workforce development, calling for tax-free employer-sponsored AI training, investment in apprenticeships, and growing military academic hubs. Still, as Kurbalija warns, the promise of AI openness may clash with the plan’s underlying nationalistic thrust—raising questions about whether it truly aims to democratise AI, or merely dominate it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings Gemini AI shortcut to Android home screens

Google has launched a new AI Mode shortcut in Android Search, offering direct home-screen access to its Gemini-powered tools. The upgrade brings conversational AI to everyday mobile searches, enabling users to ask complex questions and receive context-rich responses without leaving the home screen.

AI Mode, introduced in Google Labs and now available on a wide range of Android devices, marks a leap in integrating AI across Android’s ecosystem. The feature’s rise from a limited beta to mass adoption follows enhancements powered by Gemini 2.5 Pro and Deep Search, now with 100 million monthly users.

Key functions include multimodal inputs, advanced planning tools, and even the ability for AI to call businesses to verify local information. These capabilities are already live for paid subscribers, while core features remain free, drawing comparisons with rivals such as ChatGPT and Bing AI.

Privacy concerns surfaced as real-time interactions expand, but Google claims strong data protection controls are in place. As AI-powered results blend into traditional search, SEO strategies and user trust will be tested, signalling a new era in mobile discovery and digital engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women-only dating app Tea suffers catastrophic data leak

Tea, a women-only dating app, has suffered a massive data breach after its backend was found completely unsecured. Over 72,000 private images and more than 13,000 government-issued IDs were leaked online.

Some documents were dated as recently as 2025, contradicting the company’s claim that only ‘old data’ was affected. The data, totalling 59.3 GB, included verification selfies, DMs, and public posts. It spread rapidly through 4chan and decentralised platforms like BitTorrent.

Critics have blamed Tea’s use of ‘vibe coding’, AI-generated code with no proper review, which reportedly left its Firebase database open with no authentication.

Experts warn that relying on AI tools to build apps without security checks is becoming increasingly risky. Research shows nearly half of AI-generated code contains vulnerabilities, yet many startups still use it for core features. Tea users are now urged to monitor their identity and financial data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI startup enables context across thousands of hours of video

Samsung Next has invested in Memories.ai, a startup specialising in long-duration video analysis capable of processing up to 10 million hours of footage.

The tool uses AI to transform massive video archives into searchable, structured datasets, even across multiple videos spanning hours or days.

The solution employs a layered pipeline: it filters noise, compresses critical segments, indexes content for natural-language queries, segments footage into meaningful units, and aggregates those insights into digestible reports. This structure enables users to search and analyse complex visual datasets seamlessly.
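Under stated assumptions (frames reduced to text labels, and exact-match lookup standing in for natural-language search), the layered pipeline could be sketched like this. The stage names follow the article; the implementations are deliberately minimal stand-ins, not Memories.ai’s actual methods.

```python
# Sketch of the layered video-analysis pipeline described above.
# Frames are modelled as text labels; real stages operate on video.

def filter_noise(frames):
    """Drop frames flagged as uninformative (here: empty labels)."""
    return [f for f in frames if f]

def compress(frames):
    """Collapse consecutive duplicate frames into a single segment."""
    out = []
    for f in frames:
        if not out or out[-1] != f:
            out.append(f)
    return out

def index(frames):
    """Map each distinct label to the positions where it appears."""
    idx = {}
    for i, f in enumerate(frames):
        idx.setdefault(f, []).append(i)
    return idx

def query(idx, term):
    """Stand-in for natural-language search: exact label lookup."""
    return idx.get(term, [])

frames = ["", "door opens", "door opens", "person enters", "", "door closes"]
idx = index(compress(filter_noise(frames)))
print(query(idx, "person enters"))  # → [1]
```

The point of the layering is that each stage shrinks the data before the next one touches it, which is what makes searching across thousands of hours of footage tractable.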

Memories.ai’s co-founders, Dr Shawn Shen and Enmin (Ben) Zhou, bring backgrounds from Meta’s Reality Labs and machine learning engineering.

The company raised $8 million in seed funding, surpassing its $4 million goal, in a round led by Susa Ventures with participation from Samsung Next, Fusion Fund, Crane Ventures, Seedcamp, and Creator Ventures.

Samsung is banking on Memories.ai’s edge computing strengths, particularly to enable privacy-conscious applications such as home security analytics without cloud dependency. The startup’s target customers include security firms and marketers needing scalable tools to sift through extensive video content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!