Apple’s $20B Google deal under threat as AI lags behind rivals

Apple is set to release Q3 earnings on Thursday amid scrutiny over its Google search deal dependencies and ongoing struggles with AI progress.

Typically, Apple’s fiscal Q3 garners less investor attention, with anticipation focused instead on the upcoming iPhone launch in Q4. However, this quarter is proving to be anything but ordinary.

Analysts and shareholders alike are increasingly concerned about two looming threats: a potential $20 billion hit to Apple’s Services revenue tied to the US Department of Justice’s (DOJ) antitrust case against Google, and ongoing delays in Apple’s AI efforts.

Ahead of the earnings report, Apple shares were mostly unchanged, reflecting investor caution rather than enthusiasm. Apple’s most pressing challenge stems from its lucrative partnership with Google.

In 2022, Google paid Apple approximately $20 billion to remain the default search engine in the Safari browser and across Siri.

The exclusivity deal has formed a significant portion of Apple’s Services segment, which generated $78.1 billion in revenue that year, meaning Google’s payment alone accounted for more than 25% of that figure.

However, a ruling expected next month from Judge Amit Mehta in the US District Court for the District of Columbia could threaten the entire arrangement. Mehta previously found that Google had illegally monopolised the search market.

The forthcoming ‘remedies’ ruling could force Google to end exclusive search deals, divest its Chrome browser, and provide data access to rivals. Should the DOJ’s proposed remedies stand and Google fails to overturn the ruling, Apple could lose a critical source of Services revenue.

According to Morgan Stanley’s Erik Woodring, Apple could see a 12% decline in its full-year 2027 earnings per share (EPS) if it pivots to less lucrative partnerships with alternative search engines.

The user experience may also deteriorate if customers can no longer set Google as their default option. A more radical scenario, in which Apple launches its own search engine, could dent EPS by as much as 20%, though analysts believe this outcome is the least likely.

Alongside regulatory threats, Apple is also facing growing doubts about its ability to compete in AI. Apple has not yet set a clear timeline for releasing an upgraded version of Siri, while rivals accelerate AI hiring and unveil new capabilities.

Bank of America analyst Wamsi Mohan noted this week that persistent delays undermine confidence in Apple’s ability to deliver innovation at pace. ‘Apple’s ability to drive future growth depends on delivering new capabilities and products on time,’ he wrote to investors.

‘If deadlines keep slipping, that potentially delays revenue opportunities and gives competitors a larger window to attract customers.’

While Apple has teased upcoming AI features for future software updates, the lack of a commercial rollout or product roadmap has made investors uneasy, particularly as rivals like Microsoft, Google, and OpenAI continue to set the AI agenda.

Although Apple’s stock remained stable before Thursday’s earnings release, any indication of slowing services growth or missed AI milestones could shake investor confidence.

Analysts will be watching closely for commentary from CEO Tim Cook on how Apple plans to navigate regulatory risks and revive momentum in emerging technologies.

The company’s current crossroads is pivotal for the tech sector more broadly. Regulators are intensifying scrutiny on platform dominance, and AI innovation is fast becoming the new battleground for long-term growth.

As Apple attempts to defend its business model and rekindle its innovation edge, Thursday’s earnings update could serve as a bellwether for its direction in the post-iPhone era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Now, firms are switching to cloud-first strategies that target vulnerabilities at endpoints and prevent misconfigurations instead of relying on legacy solutions.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI won’t replace coaches, but it will replace coaching without outcomes

Many coaches believe AI could never replace the human touch. They pride themselves on emotional intelligence — their empathy, intuition, and ability to read between the lines. They consider these traits irreplaceable. But that belief could be costing them their business.

The reason AI poses a real threat to coaching isn’t because machines are becoming more human. It’s because they’re becoming more effective. And clients aren’t hiring coaches for human connection — they’re hiring them for outcomes.

People seek coaches to overcome challenges, make decisions, or experience a transformation. They want results — and they want them as quickly and painlessly as possible. If AI can deliver those results faster and more conveniently, many clients will choose it without hesitation.

So what should coaches do? They shouldn’t ignore AI, fear it, or dismiss it as a passing fad. Instead, they should learn how to integrate it. Live, one-to-one sessions still matter. They provide the deepest insights and most lasting impact. But coaching must now extend beyond the session.

Coaching must be supported by systems that make success inevitable — and AI is the key to building those systems. Here lies a fundamental disconnect: coaches often believe their value lies in personal connections.

Clients, on the other hand, value results. The gap is where AI is stepping in — and where forward-thinking coaches are stepping up. Currently, most coaches are trapped in a model that trades time for money. More sessions, they assume, equals more transformation.

However, this model doesn’t scale. Many are burning out trying to serve everyone personally. Meanwhile, the most strategic among them are turning their coaching into scalable assets: digital products, automated workflows, and AI-trained tools that do their job around the clock.

They’re not being replaced by AI. They’re being amplified by it. These coaches are packaging their methods into online courses that clients can revisit between sessions. They’re building tools that track client progress automatically, offering midnight reassurance when doubts creep in.

They’re even training AI on their own frameworks, allowing clients to access support informed by the coach’s actual thinking — not generic chatbot responses. This business model isn’t science fiction. It’s already happening.

AI can be trained on your transcripts, methodologies, and session notes. It can conduct initial assessments and reinforce your teachings between meetings. Your clients receive consistent, on-demand support — and you free up time for the deep, human work only you can do.

Coaches who embrace this now will dominate their niches tomorrow. Even the content generated from coaching sessions is underutilised. Every call contains valuable insights — breakthroughs, reframes, moments of clarity.

Those insights shouldn’t stay confined to a single client. Strip away personal details, extract the universal truths, and turn those insights into content that attracts your next ideal client. AI can also help you uncover patterns across your coaching history.

Feed your notes into analysis tools, and you might find that 80% of your executive clients hit the same obstacle in month three. Or that a particular intervention consistently delivers rapid breakthroughs.
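The kind of pattern-spotting described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical session notes tagged with a client, the month of the engagement, and an obstacle label — not any particular coaching tool:

```python
from collections import Counter

# Hypothetical session notes: (client, month_into_engagement, obstacle_tag)
sessions = [
    ("client_a", 3, "delegation"),
    ("client_b", 3, "delegation"),
    ("client_c", 3, "imposter_syndrome"),
    ("client_a", 1, "goal_clarity"),
    ("client_b", 5, "delegation"),
]

# Tally which obstacles cluster in which month of the engagement
by_month = Counter((month, tag) for _, month, tag in sessions)
(month, tag), hits = by_month.most_common(1)[0]
print(f"Most common pattern: '{tag}' in month {month} ({hits} clients)")
```

In practice an AI tool would extract the obstacle tags from free-text notes first; the counting step itself stays this simple.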

These insights help you refine your practice and anticipate challenges before they arise — making your coaching more effective and less reactive. Then there’s the admin. Scheduling, invoicing, progress tracking — all of it can be automated.

Tools like Zapier or Make can automate such repetitive tasks, giving you back hours each week. That’s time better spent on transformation, not operations. Your clients don’t want tradition. They want transformation.

The coaches who succeed in this new era will be those who understand that human insight and AI systems are not in competition. They’re complementary. Choose one area where AI could support your work — a progress tracker, a digital guide, or a content workflow. Start there.

The future of coaching doesn’t belong to the ones who resist AI. It belongs to those who combine wisdom with scalability. Your enhanced coaching model is waiting to be built — and your future clients are waiting to experience it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.

The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital that alignment research keeps pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google Gemini aids crypto traders with research and strategy

Google Gemini Flash 2.5 is emerging as a helpful AI assistant for crypto traders seeking smarter, data-driven decisions. It simplifies complex project details, compares tokens, and analyses social media sentiment to provide deeper market insights.

While Gemini offers useful summaries and strategy suggestions, it does not predict prices or access live blockchain data, so traders must still verify its output with current sources.

The AI tool also helps in understanding technical analysis patterns. It assists in spotting correlations between assets like Bitcoin and traditional markets, and supports managing portfolio risks through diversification advice.

Gemini can review past trades to highlight lessons and improve timing, making it a valuable companion for both new and experienced traders.

Despite its capabilities, Gemini’s limitations mean it should be used alongside live charting, onchain analytics, and news platforms. Traders should combine AI insights with their own judgement and real-time data to navigate crypto’s fast-moving market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan invests NT$50 million to train AI-ready professionals

Taiwan’s Ministry of Economic Affairs has announced the launch of the first phase of its 2025 AI talent training programme, set to begin in August.

The initiative aims to develop 152 skilled professionals capable of supporting businesses in adopting AI technologies across a vast range of sectors.

Chiu Chiu-hui, Director-General of the Industrial Development Administration, said the programme has attracted over 60 domestic and international companies that will contribute instructors and offer internship placements.

Notable participating firms include Microsoft Taiwan, ASE Group, and Acer. Students will be selected from leading universities, such as National Taipei University, National Taipei University of Technology, National Formosa University, and National Cheng Kung University.

Structured as a one-year curriculum, the training is divided into three four-month phases. The initial stage will focus on theoretical foundations and current industry trends.

This will be followed by four months of practical application and, finally, four months of on-site corporate internships. Graduates of the programme are required to commit to working for one of the participating companies for a minimum of two years upon completion.

Participants will receive financial support throughout their training. A monthly stipend of NT$20,000 (approximately US$673) will be provided during the academic and practical stages, increasing to NT$30,000 during the internship period.

The government has earmarked NT$50 million for the first phase of the programme, and additional co-investment from private companies is being actively encouraged.

According to Chiu, some Taiwanese firms are struggling to find qualified talent to support their AI ambitions. In response, the ministry trained approximately 70,000 AI professionals last year and has set a lower target of more than 50,000 for 2025.

However, the long-term vision remains ambitious — to develop a total of 200,000 AI specialists within the next four years.

Registration for the second phase of the initiative is now open and will close in September. Training will expand to include universities and research institutions across Taiwan, with the next round of classes scheduled to start in October.

Industry leaders have praised the initiative as a timely response to the rapidly evolving technological landscape.

Lee Shu-hsia, Vice President of Human Resources at ASE Group, noted that AI is no longer confined to manufacturing but is increasingly being integrated into various functions such as procurement, human resources, and management.

The cross-departmental adoption is creating demand for AI-literate professionals who can bridge technical knowledge with operational needs.

Danny Chen, General Manager of Microsoft Taiwan’s public business group, added that the digital transformation underway in many companies has led to a significant increase in demand for AI-related talent.

Chen expressed optimism that the training programme will help companies not only recruit but also retain skilled personnel. The Ministry of Economic Affairs has expressed its expectation for participation to grow in the coming years and plans to expand both the scope and scale of the training.

In addition to co-investment, the ministry is exploring partnerships with international institutions to further enhance the programme’s global relevance and ensure alignment with emerging industry standards.

While the government’s long-term goal is to future-proof Taiwan’s workforce, the immediate focus is on plugging the talent gap that threatens to slow industrial innovation.

By linking academic institutions with real-world corporate challenges, the programme aims to produce graduates who are not only technically proficient but also industry-ready from day one.

Observers say the initiative represents a proactive strategy in preparing Taiwan’s economy for the next wave of AI-driven transformation. With AI applications becoming increasingly prevalent in sectors ranging from logistics to administration, building a robust talent pipeline is now viewed as a national priority.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quantum-resistant crypto wallets now possible without address changes

Sui Research revealed a way for blockchain wallets to upgrade for quantum safety without a hard fork or address changes. The approach, based on EdDSA cryptography, allows compatible networks like Solana, Sui and Near to transition securely with minimal disruption.

Cryptographer Kostas Chalkias described the breakthrough as the first backward-compatible path to quantum safety for wallets. The upgrade uses zero-knowledge proofs to verify private key control without exposing data, keeping original public keys and supporting dormant accounts.
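The core idea of proving control of a private key without exposing it can be illustrated with a toy Schnorr-style identification protocol. This is a simplified classical sketch, not Sui’s actual scheme — and discrete-log protocols like this one are precisely what quantum computers threaten, which is why real post-quantum proofs use different mathematics:

```python
import secrets

# Toy Schnorr-style proof of key possession over a prime group.
p = 2**127 - 1          # a Mersenne prime (toy modulus)
g = 3                   # group element (toy choice)

x = secrets.randbelow(p - 1)   # private key, never revealed
y = pow(g, x, p)               # public key

# Prover commits, verifier challenges, prover responds
r = secrets.randbelow(p - 1)
t = pow(g, r, p)                       # commitment
c = secrets.randbelow(p - 1)           # verifier's random challenge
s = (r + c * x) % (p - 1)              # response (reveals nothing about x alone)

# Verifier checks g^s == t * y^c without ever seeing x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("key control verified without revealing the private key")
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c mod p, so only someone who knows x can answer an unpredictable challenge correctly.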

While praised as one of the most important cryptographic advancements in recent years, the upgrade method does not apply to Bitcoin or Ethereum. These networks use different signature methods, which may need bigger changes to stay secure as quantum tech evolves.

Although quantum computers are not yet capable of breaking blockchain encryption, researchers and developers are racing to prepare. The risk of millions of wallets becoming vulnerable has triggered serious debate in the crypto community.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step-by-step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode allows users to approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Instead of spoon-feeding, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning. In the humanities, it may help analyse a passage or structure an essay. Prompting users to think aloud mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align with best practices in memory and comprehension. These practices include encouraging users to reflect on what they are learning and why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

While the feature is part of ChatGPT, it can be toggled on or off. Users can activate Study Mode when tackling a tricky topic or exploring new material. They can then switch to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners.

Study Mode’s interactive style may help level the playing field for students without regular academic support. It also gives parents and tutors a new way to guide learners without doing the work for them.

OpenAI’s earlier education efforts included teacher guides and classroom use cases. However, Study Mode marks a more direct push to reshape how students use AI in learning.

It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: Can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, if used wisely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scientists use quantum AI to solve chip design challenge

Scientists in Australia have used quantum machine learning to model semiconductor properties more accurately, potentially transforming how microchips are designed and manufactured.

The hybrid technique combines AI with quantum computing to solve a long-standing challenge in chip production: predicting electrical resistance where metal meets semiconductor.

The Australian researchers developed a new algorithm, the Quantum Kernel-Aligned Regressor (QKAR), which uses quantum methods to detect complex patterns in small, noisy datasets, a common issue in semiconductor research.

By improving how engineers predict Ohmic contact resistance, the approach could lead to faster, more energy-efficient chips. It is also designed for real-world compatibility, meaning it could run on near-term quantum machines as the hardware matures.

The findings highlight the growing role of quantum AI in hardware design and suggest the method could be adopted in commercial chip production in the near future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!