Apple’s $20B Google deal under threat as AI lags behind rivals

Apple is set to release Q3 earnings on Thursday amid scrutiny of its dependence on the Google search deal and its ongoing struggles with AI.

Typically, Apple’s fiscal Q3 garners less investor attention, with anticipation focused instead on the upcoming iPhone launch in Q4. However, this quarter is proving to be anything but ordinary.

Analysts and shareholders alike are increasingly concerned about two looming threats: a potential $20 billion hit to Apple’s Services revenue tied to the US Department of Justice’s (DOJ) antitrust case against Google, and ongoing delays in Apple’s AI efforts.

Ahead of the earnings report, Apple shares were mostly unchanged, reflecting investor caution rather than enthusiasm. Apple’s most pressing challenge stems from its lucrative partnership with Google.

In 2022, Google paid Apple approximately $20 billion to remain the default search engine in the Safari browser and across Siri.

The exclusivity deal accounts for a significant portion of Apple’s Services segment, which generated $78.1 billion in revenue that year; Google’s payment alone represented more than 25% of that figure.
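As a rough check of that proportion, using only the figures already cited above, the reported $20 billion payment works out to just over a quarter of the $78.1 billion in Services revenue:

```python
# Back-of-the-envelope check of Google's share of Apple's Services revenue,
# using the figures cited in this article (both in billions of US dollars).
google_payment = 20.0      # approximate payment for default-search placement
services_revenue = 78.1    # Apple Services revenue for the same year

share = google_payment / services_revenue
print(f"Google's payment is roughly {share:.1%} of Services revenue")  # ~25.6%
```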

However, a ruling expected next month from Judge Amit Mehta in the US District Court for the District of Columbia could threaten the entire arrangement. Mehta previously ruled that Google had illegally maintained a monopoly in the search market.

The forthcoming ‘remedies’ ruling could force Google to end exclusive search deals, divest its Chrome browser, and provide data access to rivals. Should the DOJ’s proposed remedies stand and Google fails to overturn the ruling, Apple could lose a critical source of Services revenue.

According to Morgan Stanley’s Erik Woodring, Apple could see a 12% decline in its full-year 2027 earnings per share (EPS) if it pivots to less lucrative partnerships with alternative search engines.

The user experience may also deteriorate if customers can no longer set Google as their default option. A more radical scenario, in which Apple launches its own search engine, could dent its 2024 EPS by as much as 20%, though analysts consider this the least likely outcome.

Alongside regulatory threats, Apple is also facing growing doubts about its ability to compete in AI. Apple has not yet set a clear timeline for releasing an upgraded version of Siri, while rivals accelerate AI hiring and unveil new capabilities.

Bank of America analyst Wamsi Mohan noted this week that persistent delays undermine confidence in Apple’s ability to deliver innovation at pace. ‘Apple’s ability to drive future growth depends on delivering new capabilities and products on time,’ he wrote in a note to investors.

‘If deadlines keep slipping, that potentially delays revenue opportunities and gives competitors a larger window to attract customers.’

While Apple has teased upcoming AI features for future software updates, the lack of a commercial rollout or product roadmap has made investors uneasy, particularly as rivals like Microsoft, Google, and OpenAI continue to set the AI agenda.

Although Apple’s stock remained stable before Thursday’s earnings release, any indication of slowing services growth or missed AI milestones could shake investor confidence.

Analysts will be watching closely for commentary from CEO Tim Cook on how Apple plans to navigate regulatory risks and revive momentum in emerging technologies.

The company’s current crossroads is pivotal for the tech sector more broadly. Regulators are intensifying scrutiny on platform dominance, and AI innovation is fast becoming the new battleground for long-term growth.

As Apple attempts to defend its business model and rekindle its innovation edge, Thursday’s earnings update could serve as a bellwether for its direction in the post-iPhone era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight per cent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Now, firms are switching to cloud-first strategies that target vulnerabilities at endpoints and prevent misconfigurations instead of relying on legacy solutions.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.

The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital for alignment research to keep pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside these measures, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step-by-step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode allows users to approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Instead of spoon-feeding answers, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.
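For illustration, the guided behaviour described above resembles the kind of Socratic, tutor-style prompting developers can build themselves. The sketch below is a hypothetical approximation using the public OpenAI Python client; it is not OpenAI’s Study Mode implementation, and the model name and prompt wording are assumptions.

```python
# Hypothetical sketch of a tutor-style prompt that asks guiding questions
# instead of giving full solutions. This is NOT OpenAI's Study Mode code;
# the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_TUTOR = (
    "You are a patient tutor. Do not give the full solution immediately. "
    "Ask one guiding question at a time, such as 'What formula might apply "
    "here?' or 'How could you simplify this expression first?', and wait "
    "for the student's attempt before moving to the next step."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice for this example
    messages=[
        {"role": "system", "content": SOCRATIC_TUTOR},
        {"role": "user", "content": "Solve 2x + 6 = 14 for x."},
    ],
)
print(response.choices[0].message.content)
```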

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning; in the humanities, it may help analyse a passage or structure an essay. Prompting users to think aloud mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align the feature with best practices in memory and comprehension. These include encouraging users to reflect on what they have learned and why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

While the feature is part of ChatGPT, it can be toggled on or off. Users can activate Study Mode when tackling a tricky topic or exploring new material. They can then switch to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners.

Study Mode’s interactive style may help level the playing field for students without regular academic support. It also gives parents and tutors a new way to guide learners without doing the work for them.

OpenAI’s earlier education efforts included teacher guides and classroom use cases. Study Mode, however, marks a more direct push to reshape how students use AI in learning.

It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: Can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, if used wisely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI sparks fears over future of dubbing

Voice actors across Europe are pushing back against the growing use of AI in dubbing, fearing it could replace human talent in film and television. Many believe dubbing is a creative profession beyond simple voice replication, requiring emotional nuance and cultural sensitivity.

In Germany, France, Italy and the UK, nearly half of viewers prefer dubbed content over subtitles, according to research by GWI. Yet studios are increasingly testing AI tools that replicate actors’ voices or generate synthetic speech, sparking concern across the dubbing industry.

French voice actor Boris Rehlinger, known for dubbing Hollywood stars, says he feels threatened even though AI has not replaced him. He is part of TouchePasMaVF, an initiative defending the value of human dubbing and calling for protection against AI replication.

Voice artists argue that digital voice cloning ignores the craftsmanship behind their performances. As legal frameworks around voice ownership lag behind the technology, many in the industry demand urgent safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey finds developers value AI for ideas, not final answers

As AI becomes more integrated into developer workflows, a new report shows that trust in AI-generated results is eroding. According to Stack Overflow’s 2025 Developer Survey, the use of AI has increased to 84%, up from 76% last year. However, trust in its output has dropped, especially among experienced professionals.

The survey found that 46% of developers now lack trust in AI-generated answers.

That figure marks a sharp increase from 31% in 2024, suggesting growing scepticism despite higher adoption. By contrast, only 3.1% of developers report high trust in AI responses.

Interestingly, trust varies with experience. Beginners were twice as likely to express high confidence in AI, with 6.1% reporting strong trust, compared with just 2.5% among seasoned developers. The results indicate a divide in how AI is perceived across the developer landscape.

Despite doubts, developers continue to use AI tools across various tasks. The vast majority – 78.5% – use AI on an infrequent basis, such as once a month. The pattern holds across experience levels, suggesting cautious and situational usage.

While trust is lacking, developers still see AI as a helpful starting point. Three in five respondents reported favourable views of AI tools overall, one in five viewed them negatively, and the remaining fifth were neutral.

However, that usefulness has limits. Developers were quick to seek human input when unsure about AI responses. Seventy-five percent said they would ask someone when they didn’t trust an AI-generated answer. Fifty-eight percent preferred human advice when they didn’t fully understand a solution.

Ethics and security were also areas where developers preferred human judgement. Again, 58% reported turning to colleagues or mentors to evaluate such risks. Such cases show a continued reliance on human expertise in high-stakes decisions.

Stack Overflow CEO Prashanth Chandrasekar acknowledged the limitations of AI in the development process. ‘AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance,’ he said. He added that AI works best when paired with a ‘trusted human intelligence layer’.

The data also revealed that developers may not trust AI entirely but use it to support learning.

Forty-four percent of respondents admitted using AI tools to learn how to code, up from 37% last year.

A further 36% use it for work-related growth or career advancement.

The results highlight the role of AI as an educational companion rather than a coding authority.

It can help users understand concepts or generate basic examples, but most still want human review.

That distinction matters as teams consider how to integrate AI into production workflows.

Some developers are concerned that overreliance on AI could reduce the depth of their problem-solving skills. Others worry about hallucinations — AI-generated content that appears accurate but is misleading or incorrect. Such risks have led to a cautious, layered approach to using AI tools in real-life projects.

Stack Overflow’s findings align with broader industry trends in AI adoption and trust. Tech firms are exploring ways to integrate AI safely, with many prioritising transparency and human oversight. Chandrasekar believes developers are uniquely positioned to help shape the future of AI.

‘By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value,’ he said. ‘They’ll help build the AI technologies and products of tomorrow.’

As AI continues to expand into software development, one thing is clear: trust matters. Developers are open to using AI – but only when it supports, rather than replaces, human judgement. The challenge now is building systems that earn and maintain that trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by “forcing” users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM said its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the special antitrust unit of the Guardia di Finanza, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. This would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act in force and the AI Act taking effect, regulators are becoming more proactive in addressing potential risks associated with integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an important precedent for how dominant tech firms can release AI products within widely used communication tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UAE partnership boosts NeOnc’s clinical trial programme

Biotech firm NeOnc Technologies has gained rapid attention after going public in March 2025 and joining the Russell Microcap Index just months later. The company focuses on intranasal drug delivery for brain cancer, allowing patients to administer treatment at home and bypass the blood-brain barrier.

NeOnc’s lead treatment is in Phase 2A trials for glioblastoma patients and is already showing extended survival times with minimal side effects. Backed by a partnership with USC’s Keck Medical School, the company is also expanding clinical trials to the Middle East and North Africa under US FDA standards.

A $50 million investment deal with a UAE-based firm is helping fund this expansion, including trials run by Cleveland Clinic through a regional partnership. The trials are expected to be fully enrolled by September, with positive preliminary data already being reported.

AI and quantum computing are central to NeOnc’s strategy, particularly in reducing risk and cost in trial design and drug development. As a pre-revenue biotech, the company is betting that innovation and global collaboration will carry it to the next stage of growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!