Delta’s personalised flight costs under scrutiny

Delta Air Lines’ recent revelation about using AI to price some airfares is drawing significant criticism. The airline aims to increase AI-influenced pricing to 20 per cent of its domestic flights by late 2025.

While Delta’s president, Glen Hauenstein, noted positive results from their Fetcherr-supplied AI tool, industry observers and senators are voicing concerns. Critics worry that AI-driven pricing, similar to rideshare surge models, could lead to increased fares for travellers and raise serious data privacy issues.

Senators Ruben Gallego, Mark Warner, and Richard Blumenthal have highlighted fears that ‘surveillance pricing’ could draw on extensive personal data to estimate a passenger’s willingness to pay.

Despite Delta’s spokesperson denying individualised pricing based on personal information, AI experts suggest factors like device type and browsing behaviour are likely influencing prices, making them ‘deeply personalised’.

Different travellers could be affected unevenly. Bargain hunters with flexible dates might benefit, but business travellers and last-minute bookers may face higher costs. Other airlines like Virgin Atlantic also use Fetcherr’s technology, indicating a wider industry trend.

Pricing experts like Philip Carls warn that passengers won’t know if they’re getting a fair deal, and proving discrimination, even if unintended by AI, could be almost impossible.

American Airlines’ CEO, Robert Isom, has publicly criticised Delta’s move, stating American won’t copy the practice, though past incidents show airlines can adjust fares based on booking data even without AI.

With dynamic pricing technology already permitted, experts anticipate lawmakers will soon scrutinise AI’s role more closely, potentially leading to new transparency mandates.

For now, travellers can try strategies like using incognito mode, clearing cookies, or employing a VPN to obscure their digital footprint and potentially avoid higher AI-driven fares.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gulf states reframe AI as the ‘new oil’ in post‑petroleum push

Gulf states are actively redefining national strategy by embracing AI as a cornerstone of post-oil modernisation. Saudi Arabia, through its AI platform Humain, a subsidiary of the Public Investment Fund, has committed state resources to build core infrastructure and develop Arabic multimodal models. Concurrently, the UAE is funding its $100 billion MGX initiative and supporting projects like G42 and the Falcon open-source model from Abu Dhabi’s Technology Innovation Institute.

Economic rationale underpins this ambition. Observers suggest that broad AI adoption across GCC sectors, including energy, healthcare, aviation, and government services, could add as much as $150 billion to regional GDP. Yet, concerns persist around workforce limitations, regulatory maturation, and geopolitical complications tied to supply chain dependencies.

Interest in AI has also reached geopolitical levels. Gulf leaders have struck partnerships with US firms to secure advanced AI chips and infrastructure, as seen during high-profile agreements with Nvidia, AMD, and Amazon. Critics caution that hosting major data centres in geopolitically volatile zones introduces physical and strategic risks, especially in contexts of rising regional tension.

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.
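The stated caps amount to a simple maximum: for the most serious violations, the ceiling is €35 million or 7% of worldwide annual turnover, whichever is higher. A minimal sketch (the turnover figures below are hypothetical):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Ceiling on an EU AI Act fine for the most serious violations:
    the greater of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 2 billion turnover: 7% is EUR 140 million, above the flat cap.
print(max_fine_eur(2_000_000_000))   # 140000000.0
# A firm with EUR 100 million turnover falls back to the EUR 35 million ceiling.
print(max_fine_eur(100_000_000))     # 35000000.0
```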

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack adequate age verification and moderation, leaving children exposed. The eSafety Commissioner noted that many children spend hours daily with AI companions, sometimes discussing topics like mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Zuckerberg says future AI glasses will give wearers a cognitive edge

Mark Zuckerberg framed smart glasses as the future of human–AI interaction during Meta’s Q2 2025 earnings call, saying anyone without such a device may be at a cognitive disadvantage compared to those using them.

He described the eyewear as the ideal way for AI to observe users visually and aurally, and to communicate information seamlessly during daily life.

Company leaders view smart eyewear such as Ray‑Ban Meta and Oakley Meta as early steps toward this vision, noting sales have more than tripled year-over-year.

Reality Labs, Meta’s AR/AI hardware unit, has accumulated nearly $70 billion in losses but continues investing in the form factor. Zuckerberg likened AI glasses to contact lenses for cognition: essential rather than optional.

While Meta remains committed to wearable AI, critics flag privacy and social risks around persistent camera-equipped glasses.

The strategy reflects a bet that wearable tech will reshape daily computing and usher in what Zuckerberg calls ‘personal superintelligence’.

OpenAI and Nscale to build an AI super hub in Norway

OpenAI has revealed its first European data centre project in partnership with British startup Nscale, selecting Norway as the location for what is being called ‘Stargate Norway’.

The initiative mirrors the company’s ambitious $500 billion US ‘Stargate’ infrastructure plan and reflects Europe’s growing demand for large-scale AI computing capacity.

Nscale will lead the development of a $1 billion AI gigafactory in Norway, with engineering firm Aker matching the investment. These advanced data centres are designed to meet the heavy processing requirements of cutting-edge AI models.

OpenAI expects the facility to deliver 230MW of computing power by the end of 2026, making it a significant strategic foothold for the company on the continent.

Sam Altman, CEO of OpenAI, stated that Europe needs significantly more computing to unlock AI’s full potential for researchers, startups, and developers. He said Stargate Norway will serve as a cornerstone for driving innovation and economic growth in the region.

Nscale confirmed that Norway’s AI ecosystem will receive priority access to the facility, while remaining capacity will be offered to users across the UK, Nordics and Northern Europe.

The data centre will support 100,000 of NVIDIA’s most advanced GPUs, with long-term plans to scale as demand grows.
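Taken together, the 230MW target and the 100,000-GPU figure imply a rough power budget per accelerator. A back-of-envelope check, assuming (simplistically) that all capacity feeds the GPUs with no cooling or networking overhead:

```python
total_power_w = 230e6    # 230 MW of planned computing power
gpu_count = 100_000      # NVIDIA GPUs the site is designed to support

# Implied power envelope per accelerator, ignoring all overheads
watts_per_gpu = total_power_w / gpu_count
print(watts_per_gpu)     # 2300.0 W, a plausible envelope for a modern
                         # datacentre GPU plus its supporting hardware
```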

The move follows broader European efforts to strengthen AI infrastructure, with the UK and France pushing for major regulatory and funding reforms.

NHS trial shows AI app halves treatment delays

An AI-powered physiotherapy app has cut NHS back pain treatment waiting lists in Cambridgeshire and Peterborough by 55%.

The trial, run by Cambridgeshire Community Services NHS Trust, freed up 2,500 clinician hours for more complex cases while offering digital care to routine patients.

The app assesses musculoskeletal (MSK) pain through questions and provides personalised video-guided exercises. It became the first AI physiotherapy tool regulated by the Care Quality Commission and is credited with cutting average MSK wait times from 18 to under 10 weeks.

Patients like Annys Bossom, who initially doubted its effectiveness, found the tool more engaging and valuable than traditional paper instructions.

Data showed that 98% of participants were treated and discharged digitally, while only 2% needed a face-to-face referral.

With growing demand and staff shortages in NHS MSK services, physiotherapists and developers say the technology offers scalable support.

Experts emphasise the need for human oversight and public trust as AI continues to play a larger role in UK healthcare.

OpenAI annual revenue doubles to $12 billion

OpenAI has doubled its revenue in the first seven months of 2025, reaching an annualised run rate of about $12 billion.

Surging demand for both consumer ChatGPT products and enterprise-level AI services is the main driver for this rapid growth.

Weekly active users of ChatGPT have soared to approximately 700 million, reflecting the platform’s expanding global reach and wide penetration. 

At the same time, costs have risen sharply, with cash burn projected around $8 billion in 2025, up from previous estimates.

OpenAI is preparing to release its next-generation AI model GPT‑5 in early August, underscoring its focus on innovation to maintain leadership in the AI market.

Despite growing competition from rival firms like DeepSeek, OpenAI remains confident that its technological edge and expanding product portfolio will sustain momentum.

Financial projections suggest potential revenue of $11 billion this year, with continued expansion into enterprise services.

AI cloaking helps hackers dodge browser defences

Cybercriminals increasingly use AI-powered cloaking tools to bypass browser security systems and trick users into visiting scam websites.

These tools conceal malicious content from automated scanners and show it only to human visitors, making phishing attacks and malware delivery far harder to detect.

Platforms such as Hoax Tech and JS Click Cloaker are being used to filter web traffic and serve fake pages to victims while hiding them from security systems.

The AI behind these services analyses a visitor’s browser, location, and behaviour before deciding which version of a site to display.

Known as white page and black page cloaking, the technique shows harmless content to detection tools and harmful pages to real users. As a result, fraudulent sites survive longer, extending the reach and lifespan of cyberattacks.
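The decision logic behind white/black page cloaking can be sketched in a few lines. This is an illustrative reconstruction, not code from any real cloaking service; the signals checked (user-agent strings, behavioural cues) are typical examples only:

```python
# Illustrative sketch of "white page / black page" cloaking logic.
KNOWN_SCANNER_AGENTS = ("googlebot", "virustotal", "headlesschrome", "phantomjs")

def looks_like_scanner(user_agent: str, has_mouse_activity: bool) -> bool:
    """Crude visitor fingerprinting: flag known scanner user agents
    and sessions showing no human-like interaction."""
    ua = user_agent.lower()
    return any(bot in ua for bot in KNOWN_SCANNER_AGENTS) or not has_mouse_activity

def select_page(user_agent: str, has_mouse_activity: bool) -> str:
    # Suspected scanners receive the harmless "white page";
    # likely human visitors receive the malicious "black page".
    if looks_like_scanner(user_agent, has_mouse_activity):
        return "white_page.html"
    return "black_page.html"
```

Commercial services layer IP reputation, geolocation, and ML-based behaviour scoring on top of such checks, which is why defenders increasingly need scanners that convincingly emulate human visitors.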

Experts warn that cloaking is no longer a fringe method but a core part of cybercrime, now available as a commercial service. As these tactics grow more sophisticated, the pressure increases on browser developers to improve detection and protect users more effectively.

TikTok adopts crowd‑sourced verification tool to combat misinformation

TikTok has rolled out Footnotes in the United States, its crowd‑sourced debunking initiative to supplement existing misinformation controls.

Vetted contributors will write and rate explanatory notes beneath videos flagged as misleading or ambiguous. If a note earns broad support, it becomes visible to all US users.

The system uses a ‘bridging-based’ ranking framework to encourage agreement between users with differing viewpoints, making the process more robust and reducing partisan bias. Initially launched as a pilot, the programme has already enlisted nearly 80,000 eligible US contributors.
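The bridging idea can be made concrete with a toy rule: a footnote is published only if it is rated helpful across viewpoint clusters, not merely by one large like-minded group. TikTok has not published its actual algorithm, so the clustering and threshold below are illustrative assumptions:

```python
def is_bridging(ratings_by_cluster: dict[str, list[bool]], threshold: float = 0.6) -> bool:
    """Toy bridging rule: publish a note only if every viewpoint cluster
    independently rates it helpful at or above `threshold`."""
    shares = [sum(r) / len(r) for r in ratings_by_cluster.values() if r]
    return bool(shares) and all(share >= threshold for share in shares)

# Helpful across both clusters -> shown to all users
print(is_bridging({"cluster_a": [True, True, False], "cluster_b": [True, True, True]}))   # True
# Popular with only one cluster -> held back
print(is_bridging({"cluster_a": [True, True, True], "cluster_b": [False, False, True]}))  # False
```

A simple majority vote would surface the second note; requiring cross-cluster support is what makes the ranking ‘bridging-based’ rather than purely popularity-based.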

Footnotes complements TikTok’s integrity setup, including automated detection, human moderation, and partnerships with fact‑checking groups like AFP. Platform leaders note that effectiveness improves as contributors engage more across various topics.

Past research shows comparable crowd‑sourced systems often struggle to publish most submissions, with fewer than 10% of Notes appearing publicly on other platforms. Concerns remain over the system’s scalability and potential misuse.
