Apple boosts AI investment with new hires and acquisitions

Apple is ramping up its AI efforts, with CEO Tim Cook confirming that the company is significantly increasing its investments in the technology. During the Q3 2025 earnings call, Cook said AI would be embedded across Apple’s devices, platforms and internal operations.

The firm has reallocated staff to focus on AI and continues to acquire smaller companies to accelerate progress, completing seven acquisitions this year alone. Capital expenditure has also risen, partly due to the growing focus on AI.

Despite criticism that Apple has lagged behind in the AI race, the company insists it will not rush features to market. More than 20 Apple Intelligence tools have already been released, with additional features like live translation and an AI fitness assistant expected by year-end.

The updated version of Siri, which promises greater personalisation, has been pushed to 2026. Cook dismissed suggestions that AI-powered hardware, like glasses, would replace the iPhone, instead positioning future devices as complementary.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK universities urged to act fast on AI teaching

UK universities risk losing their competitive edge unless they adopt a clear, forward-looking approach to AI in teaching. Falling enrolments, limited funding, and outdated digital systems have exposed a lack of AI literacy across many institutions.

As AI skills become essential for today’s workforce, employers increasingly expect graduates to be confident users rather than passive observers.

Many universities continue relying on legacy technology rather than exploring the full potential of modern learning platforms. AI tools can enhance teaching by adapting to individual student needs and helping educators identify learning gaps.

However, few staff have received adequate training, and many universities lack the resources or structure to embed AI into day-to-day teaching effectively.

To close the growing gap between education and the workplace, universities must explore flexible short courses and microcredentials that develop workplace-ready skills.

Introducing ethical standards and data transparency from the start will ensure AI is used responsibly without weakening academic integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon reports $18.2B profit boost as AI strategy takes off

Amazon has reported a 35% increase in quarterly profit, driven by rapid growth in its AI-powered services and cloud computing arm, Amazon Web Services (AWS).

The tech and e-commerce giant posted net income of $18.2 billion for Q2 2025, up from $13.5 billion a year earlier, while net sales rose 13% to $167.7 billion and exceeded analyst expectations.
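As a quick consistency check using only the figures quoted above: (18.2 − 13.5) / 13.5 ≈ 0.348, which matches the roughly 35% year-on-year profit growth reported.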

CEO Andy Jassy attributed the strong performance to the company’s growing reliance on AI. ‘Our conviction that AI will change every customer experience is starting to play out,’ Jassy said, referencing Amazon’s AI-powered Alexa+ upgrades and new generative AI shopping tools.

AWS remained the company’s growth engine, with revenue climbing 17.5% to $30.9 billion and operating profit rising to $10.2 billion. The surge reflects the increasing demand for cloud infrastructure to support AI deployment across industries.

Despite the solid earnings, Amazon’s share price dipped more than 3% in after-hours trading. Analysts pointed to concerns over the company’s heavy capital spending, particularly its aggressive $100 billion AI investment strategy.

Free cash flow over the past year fell to $18.2 billion, down from $53 billion a year earlier. In Q2 alone, Amazon spent $32.2 billion on infrastructure, nearly double the previous year’s figure, much of it aimed at expanding its data centre and logistics capabilities to support AI workloads.

For the current quarter, Amazon projected revenue of $174.0 to $179.5 billion and operating income between $15.5 and $20.5 billion, slightly below investor hopes but still reflecting double-digit year-on-year growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gulf states reframe AI as the ‘new oil’ in post‑petroleum push

Gulf states are actively redefining national strategy by embracing AI as a cornerstone of post-oil modernisation. Saudi Arabia, through its AI platform Humain, a subsidiary of the Public Investment Fund, has committed state resources to build core infrastructure and develop Arabic multimodal models. Concurrently, the UAE is funding its $100 billion MGX initiative and supporting projects like G42 and the Falcon open-source model from Abu Dhabi’s Technology Innovation Institute.

Economic rationale underpins this ambition. Observers suggest that broad AI adoption across GCC sectors, including energy, healthcare, aviation, and government services, could add as much as $150 billion to regional GDP. Yet, concerns persist around workforce limitations, regulatory maturation, and geopolitical complications tied to supply chain dependencies.

Interest in AI has also reached the geopolitical level. Gulf leaders have struck partnerships with US firms to secure advanced AI chips and infrastructure, as seen in high-profile agreements with Nvidia, AMD, and Amazon. Critics caution that hosting major data centres in geopolitically volatile zones introduces physical and strategic risks, especially amid rising regional tension.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DW Weekly #223 – AI race heats up: The US AI Action Plan, China’s push for a global AI cooperation organisation, and the EU’s regulatory response

25 July – 1 August 2025


Dear readers,

Over the past week, the White House has launched a sweeping AI initiative through its new publication Winning the Race: America’s AI Action Plan, an ambitious strategy to dominate global AI leadership by promoting open-source technology and streamlining regulatory frameworks. America’s ‘open-source gambit’, analysed in detail by Dr Jovan Kurbalija in Diplo’s blog, signals a significant shift in digital policy, intending to democratise AI innovation to outpace competitors, particularly China.

Supporting this bold direction, major tech giants have endorsed President Trump’s AI deregulation plans, despite widespread public concerns regarding potential societal impacts. Trump’s policies notably include an explicit push for ‘anti-woke’ AI frameworks within US government contracts, raising contentious debates about the ideological neutrality and ethical implications of AI systems in governance.

In parallel, China has responded with its own global AI governance plan, proposing the establishment of an international AI cooperation organisation to enhance worldwide coordination and standard-setting. The result is an escalating AI governance competition between the two technological superpowers, each advocating a distinctly different vision for the future of global AI development.

On the multilateral stage, the UN’s Economic and Social Council (ECOSOC) adopted, through the Commission on Science and Technology for Development (CSTD), a resolution on the ‘Assessment of the progress made in the implementation of and follow-up to the outcomes of the World Summit on the Information Society’, reaffirming commitments to implement the WSIS outcomes.

Corporate strategies have also reflected these geopolitical undercurrents. Samsung Electronics has announced a landmark $16.5 billion chip manufacturing deal with Tesla, generating optimism about Samsung’s capability to revive its semiconductor foundry business. Yet execution risks remain substantial, prompting Samsung Chairman Jay Y. Lee to travel to Washington to solidify bilateral trade relations and secure the company’s position amid potential trade tensions.

Similarly, Nvidia has placed a strategic order for 300,000 chipsets from Taiwanese giant TSMC, driven by robust Chinese demand and shifting US trade policies.

Meanwhile, the EU has intensified regulatory scrutiny, accusing e-commerce platform Temu of failing mandatory Digital Services Act (DSA) checks, citing serious risks related to counterfeit and unsafe goods.

In the USA, similar scrutiny arose as Senator Maggie Hassan urged Elon Musk to take decisive action against Southeast Asian criminal groups using Starlink services to defraud American citizens.

Finally, the EU’s landmark AI Act commenced its implementation phase this week, despite considerable pushback from tech firms concerned about regulatory compliance burdens.

Diplo Blog – The open-source gambit: How America plans to outpace AI rivals by democratising tech

On 23 July, the US unveiled an AI Action Plan featuring 103 recommendations focused on winning the AI race against China. Key themes include promoting open-source AI to establish global standards, reducing regulations to support tech firms, and emphasising national security. The plan addresses labour displacement, AI biases, and cybersecurity threats, advocating for reskilling workers and maintaining tech leadership through private sector flexibility. Additionally, it aims to align US allies within an AI framework while expressing scepticism toward multilateral regulations. Overall, the plan positions open-source AI as a strategic asset amid geopolitical competition. Read the full blog!

For the main updates, reflections and events, consult the RADAR, the READING CORNER and the UPCOMING EVENTS sections below.

Join us as we connect the dots, from daily updates to main weekly developments, to bring you a clear, engaging monthly snapshot of worldwide digital trends.

DW Team


RADAR

Highlights from the week of 25 July – 1 August 2025

Worries rise as many free VPNs exploit users or carry hidden malware.

From December, YouTube must block accounts for Australians under 16 or face massive fines.

Belarusian and Ukrainian hackers claim responsibility for strategic cyber sabotage of Aeroflot.

A NATO policy brief warns that civilian ports across Europe face increasing cyber threats from state-linked actors and calls for updated maritime strategies to strengthen cybersecurity and civil–military coordination.

AGCM says Meta may have harmed competition by embedding AI features into WhatsApp.

The EU AI Code could add €1.4 trillion to Europe’s economy, Google says.

Tether and Circle dominate the fiat-backed stablecoin market, now valued at over $227 billion combined.

Brussels updates Microsoft terms to curb risky data transfers.

AI use in schools is weakening the connection between students and teachers by permitting students to bypass genuine effort through shortcuts.

Use of AI surveillance, including monitoring software, intensifies burnout, feelings of being micromanaged, and disengagement.

A majority of Fortune 500 companies now mention AI in their annual reports as a risk factor instead of citing its benefits.

Crypto platforms lost more than $3.1 billion in the first half of 2025, with AI-powered hacks and phishing scams leading the surge.

AI jobs now span marketing, finance, and HR—not just tech.

Google and Microsoft lead investment in advanced AI and quantum infrastructure.


READING CORNER

On 23 July, the US unveiled an AI Action Plan featuring 103 recommendations focused on winning the AI race against China. Key themes include promoting open-source AI to establish global standards, reducing regulations to support tech firms, and emphasising national security.

Tracking technologies shape our online experience in often invisible ways, yet profoundly impactful, raising important questions about transparency, control, and accountability in the digital age.

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack sufficient age verification and moderation features, making them risky spaces for children. The eSafety Commissioner noted that many children are spending hours daily with AI companions, sometimes discussing topics like mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NHS trial shows AI app halves treatment delays

An AI-powered physiotherapy app has cut NHS back pain treatment waiting lists in Cambridgeshire and Peterborough by 55%.

The trial, run by Cambridgeshire Community Services NHS Trust, diverted 2,500 clinician hours to more complex cases while offering digital care to routine patients.

The app assesses musculoskeletal (MSK) pain through questions and provides personalised video-guided exercises. It became the first AI physiotherapy tool regulated by the Care Quality Commission and is credited with cutting average MSK wait times from 18 to under 10 weeks.
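To make that workflow concrete, here is a minimal, purely hypothetical sketch of questionnaire-based triage of the kind described above. The fields, thresholds, and referral rule are invented for illustration and do not describe the regulated app itself.

```python
# Hypothetical questionnaire-based MSK triage, loosely inspired by the workflow
# described above. The fields, thresholds, and referral rule are invented for
# illustration; they do not describe the regulated app itself.
from dataclasses import dataclass

@dataclass
class Answers:
    pain_score: int          # 0-10 self-reported pain
    red_flag_symptoms: bool  # e.g. numbness, unexplained weight loss
    weeks_since_onset: int

def triage(a: Answers) -> str:
    """Route a patient to a digital exercise pathway or a face-to-face referral."""
    if a.red_flag_symptoms or a.pain_score >= 9:
        return "refer for face-to-face assessment"
    if a.weeks_since_onset < 6:
        return "digital pathway: guided exercise videos, review in 2 weeks"
    return "digital pathway: progressive exercise plan, monthly check-in"

print(triage(Answers(pain_score=5, red_flag_symptoms=False, weeks_since_onset=3)))
```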

Patients like Annys Bossom, who initially doubted its effectiveness, found the tool more engaging and valuable than traditional paper instructions.

Data showed that 98% of participants were treated and discharged digitally, while only 2% needed a face-to-face referral.

With growing demand and staff shortages in NHS MSK services, physiotherapists and developers say the technology offers scalable support.

Experts emphasise the need for human oversight and public trust as AI continues to play a larger role in UK healthcare.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI annual revenue doubles to $12 billion

OpenAI has doubled its revenue in the first seven months of 2025, reaching an annualised run rate of about $12 billion.
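For context, an annualised run rate extrapolates the most recent period’s revenue over a full year, so a run rate of about $12 billion corresponds to roughly $1 billion in monthly revenue at the current pace ($12 billion ÷ 12).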

Surging demand for both consumer ChatGPT products and enterprise-level AI services is the main driver for this rapid growth.

Weekly active users of ChatGPT have soared to approximately 700 million, reflecting the platform’s expanding global reach and wide penetration. 

At the same time, costs have risen sharply, with cash burn projected around $8 billion in 2025, up from previous estimates.

OpenAI is preparing to release its next-generation AI model GPT‑5 in early August, underscoring its focus on innovation to maintain leadership in the AI market.

Despite growing competition from rival firms like DeepSeek, OpenAI remains confident that its technological edge and expanding product portfolio will sustain momentum.

Financial projections suggest potential revenue of $11 billion this year, with continued expansion into enterprise services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cloaking helps hackers dodge browser defences

Cybercriminals increasingly use AI-powered cloaking tools to bypass browser security systems and trick users into visiting scam websites.

These tools conceal malicious content from automated scanners, showing it only to human visitors, making it harder to detect phishing attacks and malware delivery.

Platforms such as Hoax Tech and JS Click Cloaker are being used to filter web traffic and serve fake pages to victims while hiding them from security systems.

The AI behind these services analyses a visitor’s browser, location, and behaviour before deciding which version of a site to display.

Known as ‘white page’ and ‘black page’ cloaking, the technique shows harmless content to detection tools and harmful pages to real users, allowing fraudulent sites to survive longer and boosting the effectiveness of cyberattacks.
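To illustrate the defender’s side of this cat-and-mouse game, here is a minimal sketch of how a scanner might probe for cloaking by fetching the same URL with two different client fingerprints and comparing the responses. The URL, headers, and similarity threshold are illustrative assumptions, not a description of the services named above.

```python
import requests

# Hypothetical probe: fetch the same URL twice, once with a scanner-style client
# and once with a realistic browser fingerprint, then compare the responses.
# A large difference can indicate cloaking (a 'white page' served to scanners,
# a 'black page' served to humans). URL, headers, and threshold are placeholders.

SCANNER_HEADERS = {"User-Agent": "SecurityScanner/1.0"}
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/126.0 Safari/537.36"
    ),
    "Accept-Language": "en-US,en;q=0.9",
}

def fetch(url: str, headers: dict) -> str:
    return requests.get(url, headers=headers, timeout=10).text

def looks_cloaked(url: str, threshold: float = 0.5) -> bool:
    """Flag a URL whose 'scanner' and 'browser' responses differ sharply."""
    scanner_page = fetch(url, SCANNER_HEADERS)
    browser_page = fetch(url, BROWSER_HEADERS)
    # Crude similarity measure: shared-token (Jaccard) ratio of the two pages.
    a, b = set(scanner_page.split()), set(browser_page.split())
    similarity = len(a & b) / max(len(a | b), 1)
    return similarity < threshold

if __name__ == "__main__":
    print(looks_cloaked("https://example.com"))  # placeholder URL
```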

Experts warn that cloaking is no longer a fringe method but a core part of cybercrime, now available as a commercial service. As these tactics grow more sophisticated, the pressure increases on browser developers to improve detection and protect users more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
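For readers curious what ‘estimating age from behavioural signals’ can look like in practice, here is a purely illustrative sketch that frames it as a supervised classification problem. The features, toy data, and model choice are assumptions made for illustration; they do not describe Google’s actual system.

```python
# Purely illustrative: age estimation framed as supervised classification over
# behavioural signals. This is NOT Google's system; the features, toy data, and
# model choice are assumptions made for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors: [share of teen-oriented video watch time,
#                       searches per day, average session length in minutes]
X = np.array([
    [0.8, 5, 45],   # behaviour more typical of a younger user (invented)
    [0.1, 20, 15],  # behaviour more typical of an adult user (invented)
    [0.7, 8, 60],
    [0.2, 25, 10],
])
y = np.array([1, 0, 1, 0])  # 1 = likely under 18, 0 = likely adult

model = LogisticRegression().fit(X, y)

# Probability that a new user is under 18; above some policy threshold the
# protections described above would apply, with verification offered on appeal.
new_user = np.array([[0.6, 10, 50]])
print(model.predict_proba(new_user)[0, 1])
```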

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!