Delta’s personalised flight costs under scrutiny

Delta Air Lines’ recent revelation about using AI to price some airfares is drawing significant criticism. The airline aims to increase AI-influenced pricing to 20 per cent of its domestic flights by late 2025.

While Delta’s president, Glen Hauenstein, noted positive results from their Fetcherr-supplied AI tool, industry observers and senators are voicing concerns. Critics worry that AI-driven pricing, similar to rideshare surge models, could lead to increased fares for travellers and raise serious data privacy issues.

Senators including Ruben Gallego, Mark Warner, and Richard Blumenthal have highlighted fears that ‘surveillance pricing’ could draw on extensive personal data to estimate a passenger’s willingness to pay.

Although a Delta spokesperson denies that prices are individualised based on personal information, AI experts suggest factors like device type and browsing behaviour are likely influencing prices, making them ‘deeply personalised’.

Different travellers could be affected unevenly. Bargain hunters with flexible dates might benefit, but business travellers and last-minute bookers may face higher costs. Other airlines like Virgin Atlantic also use Fetcherr’s technology, indicating a wider industry trend.
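
To make the concern concrete, the sketch below shows how booking signals could, in principle, nudge a fare. It is entirely hypothetical: the base fare, signal names and weights are invented for illustration, Fetcherr’s actual model is proprietary, and Delta denies individualised pricing on personal data.

```python
# Entirely hypothetical sketch of signal-based fare adjustment; Fetcherr's
# model is proprietary and Delta denies individualised pricing on personal
# data. The base fare, signals and weights below are invented purely to
# illustrate how booking context *could* nudge a price, as critics describe.

BASE_FARE = 240.0  # illustrative base fare in USD

def adjusted_fare(base, device="desktop", days_to_departure=30, flexible_dates=True):
    multiplier = 1.0
    if device in ("iphone", "ipad"):       # premium device as a crude willingness-to-pay proxy
        multiplier += 0.05
    if days_to_departure <= 3:             # last-minute bookers pay more
        multiplier += 0.25
    elif days_to_departure >= 45 and flexible_dates:
        multiplier -= 0.10                 # flexible bargain hunters may benefit
    return round(base * multiplier, 2)

print(adjusted_fare(BASE_FARE, device="iphone", days_to_departure=2, flexible_dates=False))   # 312.0
print(adjusted_fare(BASE_FARE, device="desktop", days_to_departure=60, flexible_dates=True))  # 216.0
```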

Pricing experts like Philip Carls warn that passengers won’t know if they’re getting a fair deal, and proving discrimination, even if unintended by AI, could be almost impossible.

American Airlines’ CEO, Robert Isom, has publicly criticised Delta’s move, stating American won’t copy the practice, though past incidents show airlines can adjust fares based on booking data even without AI.

With dynamic pricing technology already permitted, experts anticipate lawmakers will soon scrutinise AI’s role more closely, potentially leading to new transparency mandates.

For now, travellers can try strategies like using incognito mode, clearing cookies, or employing a VPN to obscure their digital footprint and potentially avoid higher AI-driven fares.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NHS trial shows AI app halves treatment delays

An AI-powered physiotherapy app has cut NHS back pain treatment waiting lists in Cambridgeshire and Peterborough by 55%.

The trial, run by Cambridgeshire Community Services NHS Trust, diverted 2,500 clinician hours to more complex cases while offering digital care to routine patients.

The app assesses musculoskeletal (MSK) pain through questions and provides personalised video-guided exercises. It became the first AI physiotherapy tool regulated by the Care Quality Commission and is credited with cutting average MSK wait times from 18 to under 10 weeks.

Patients like Annys Bossom, who initially doubted its effectiveness, found the tool more engaging and valuable than traditional paper instructions.

Data showed that 98% of participants were treated and discharged digitally, while only 2% needed a face-to-face referral.

With growing demand and staff shortages in NHS MSK services, physiotherapists and developers say the technology offers scalable support.

Experts emphasise the need for human oversight and public trust as AI continues to play a larger role in UK healthcare.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cloaking helps hackers dodge browser defences

Cybercriminals increasingly use AI-powered cloaking tools to bypass browser security systems and trick users into visiting scam websites.

These tools conceal malicious content from automated scanners, showing it only to human visitors, making it harder to detect phishing attacks and malware delivery.

Platforms such as Hoax Tech and JS Click Cloaker are being used to filter web traffic and serve fake pages to victims while hiding them from security systems.

The AI behind these services analyses a visitor’s browser, location, and behaviour before deciding which version of a site to display.

Known as white page and black page cloaking, the technique shows harmless content to detection tools and harmful pages to real users. As a result, fraudulent sites stay online longer, boosting the effectiveness and lifespan of cyberattacks.

Experts warn that cloaking is no longer a fringe method but a core part of cybercrime, now available as a commercial service. As these tactics grow more sophisticated, the pressure increases on browser developers to improve detection and protect users more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok adopts crowd‑sourced verification tool to combat misinformation

TikTok has rolled out Footnotes in the United States, its crowd‑sourced debunking initiative to supplement existing misinformation controls.

Vetted contributors will write and rate explanatory notes beneath videos flagged as misleading or ambiguous. If a note earns broad support, it becomes visible to all US users.

The system uses a ‘bridging‑based’ ranking framework to encourage agreement between users with differing viewpoints, making the process more robust and reducing partisan bias. Initially launched as a pilot, the feature has already enlisted nearly 80,000 eligible US users.
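
TikTok has not published the Footnotes algorithm, but the bridging idea can be illustrated with a small sketch: a note surfaces only when raters from both ends of a (hypothetical) viewpoint scale mostly find it helpful. The viewpoint scores, thresholds and function names below are assumptions, loosely modelled on publicly described crowd-note systems rather than TikTok’s own method.

```python
# Toy sketch of 'bridging-based' note ranking; TikTok's actual Footnotes
# algorithm is not public. Assumption: each contributor has a viewpoint
# score in [-1, 1] and rates a note helpful (1) or not (0). The note is
# shown only if raters on *both* sides of the scale mostly find it helpful.

from statistics import mean

def footnote_visible(ratings, min_support=0.75, min_bridge=0.5):
    """ratings: list of (viewpoint, helpful) pairs; thresholds are hypothetical."""
    left = [helpful for viewpoint, helpful in ratings if viewpoint < 0]
    right = [helpful for viewpoint, helpful in ratings if viewpoint >= 0]
    if not left or not right:
        return False  # require raters from both ends of the spectrum
    overall = mean(helpful for _, helpful in ratings)
    bridge = min(mean(left), mean(right))   # approval on the weaker side
    return overall >= min_support and bridge >= min_bridge

ratings = [(-0.8, 1), (-0.3, 1), (0.4, 1), (0.9, 0), (0.6, 1)]
print(footnote_visible(ratings))  # True: both sides largely agree the note helps
```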

Footnotes complements TikTok’s integrity setup, including automated detection, human moderation, and partnerships with fact‑checking groups like AFP. Platform leaders note that effectiveness improves as contributors engage more across various topics.

Past research shows comparable crowd‑sourced systems often struggle to publish most submissions, with fewer than 10% of Notes appearing publicly on other platforms. Concerns remain over the system’s scalability and potential misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.
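
As a purely illustrative sketch (Google has not published its features, model or thresholds), an age-estimation pipeline of this kind can be thought of as a classifier over behavioural signals whose output, past a cut-off, switches on the protections described above. Everything below, from the feature names to the weights and threshold, is hypothetical.

```python
# Purely illustrative sketch; Google has not published its age-estimation
# features, model or thresholds. A toy logistic scorer over hypothetical
# behavioural signals triggers the teen protections the article describes
# (personalised ads off, sensitive ad categories blocked, bedtime prompts on).

import math

# Hypothetical feature weights, invented for illustration only.
WEIGHTS = {
    "teen_topic_search_share": 2.5,    # share of searches on school/teen topics
    "short_form_video_share": 1.2,     # share of watch time on short-form video
    "late_night_activity_share": 0.8,  # share of activity after 22:00
    "account_age_years": -0.3,         # older accounts lower the score
}
BIAS = -1.5
THRESHOLD = 0.65  # hypothetical cut-off for 'likely under 18'

def likely_under_18(signals: dict) -> bool:
    z = BIAS + sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z)) >= THRESHOLD

def apply_teen_protections(account: dict) -> None:
    account["personalised_ads"] = False          # across AdSense, AdMob, Ad Manager
    account["sensitive_ad_categories"] = "blocked"
    account["bedtime_reminders"] = True          # YouTube well-being prompts

account = {"id": "example-user"}
signals = {"teen_topic_search_share": 0.6, "short_form_video_share": 0.7,
           "late_night_activity_share": 0.4, "account_age_years": 1.0}

if likely_under_18(signals):
    apply_teen_protections(account)  # a flagged user can later verify age via ID or selfie
print(account)
```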

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple’s $20B Google deal under threat as AI lags behind rivals

Apple is set to release Q3 earnings on Thursday amid scrutiny over its dependence on the Google search deal and its ongoing struggle to make progress in AI.

Typically, Apple’s fiscal Q3 garners less investor attention, with anticipation focused instead on the upcoming iPhone launch in Q4. However, this quarter is proving to be anything but ordinary.

Analysts and shareholders alike are increasingly concerned about two looming threats: a potential $20 billion hit to Apple’s Services revenue tied to the US Department of Justice’s (DOJ) antitrust case against Google, and ongoing delays in Apple’s AI efforts.

Ahead of the earnings report, Apple shares were mostly unchanged, reflecting investor caution rather than enthusiasm. Apple’s most pressing challenge stems from its lucrative partnership with Google.

In 2022, Google paid Apple approximately $20 billion to remain the default search engine in the Safari browser and across Siri.

The exclusivity deal forms a significant portion of Apple’s Services segment, which generated $78.1 billion in revenue that year; Google’s payment alone accounted for more than 25% of that figure.

However, a ruling expected next month from Judge Amit Mehta in the US District Court for the District of Columbia could threaten the entire arrangement. Mehta previously ruled that Google had illegally monopolised the search market.

The forthcoming ‘remedies’ ruling could force Google to end exclusive search deals, divest its Chrome browser, and provide data access to rivals. Should the DOJ’s proposed remedies stand and Google fail to overturn the ruling, Apple could lose a critical source of Services revenue.

According to Morgan Stanley’s Erik Woodring, Apple could see a 12% decline in its full-year 2027 earnings per share (EPS) if it pivots to less lucrative partnerships with alternative search engines.

The user experience may also deteriorate if customers can no longer set Google as their default option. A more radical scenario, Apple launching its own search engine, could dent its 2024 EPS by as much as 20%, though analysts believe this outcome is the least likely.

Alongside regulatory threats, Apple is also facing growing doubts about its ability to compete in AI. Apple has not yet set a clear timeline for releasing an upgraded version of Siri, while rivals accelerate AI hiring and unveil new capabilities.

Bank of America analyst Wamsi Mohan noted this week that persistent delays undermine confidence in Apple’s ability to deliver innovation at pace. ‘Apple’s ability to drive future growth depends on delivering new capabilities and products on time,’ he wrote to investors.

‘If deadlines keep slipping, that potentially delays revenue opportunities and gives competitors a larger window to attract customers.’

While Apple has teased upcoming AI features for future software updates, the lack of a commercial rollout or product roadmap has made investors uneasy, particularly as rivals like Microsoft, Google, and OpenAI continue to set the AI agenda.

Although Apple’s stock remained stable before Thursday’s earnings release, any indication of slowing services growth or missed AI milestones could shake investor confidence.

Analysts will be watching closely for commentary from CEO Tim Cook on how Apple plans to navigate regulatory risks and revive momentum in emerging technologies.

The crossroads Apple now faces is also pivotal for the tech sector more broadly. Regulators are intensifying scrutiny of platform dominance, and AI innovation is fast becoming the new battleground for long-term growth.

As Apple attempts to defend its business model and rekindle its innovation edge, Thursday’s earnings update could serve as a bellwether for its direction in the post-iPhone era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight per cent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Now, firms are switching to cloud-first strategies that target vulnerabilities at endpoints and prevent misconfigurations instead of relying on legacy solutions.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.

The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital that alignment research keep pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!