AI is the next iPhone moment, says Apple CEO Tim Cook

Any remaining doubts about Apple’s commitment to AI have been addressed directly by its CEO, Tim Cook.

At an all-hands meeting on Apple’s Cupertino campus, Cook told employees that the AI revolution is as big as the internet, smartphones, cloud computing, and apps.

According to Bloomberg’s Power On newsletter, Cook clarified that Apple sees AI as an imperative. ‘Apple must do this,’ he said, describing the opportunity as ‘ours to grab’.

Although Apple unveiled its AI suite, Apple Intelligence, only in June, well after its competitors, Cook remains optimistic about the company’s ability to take the lead.

‘We’ve rarely been first,’ he told staff. ‘There was a PC before the Mac; a smartphone before the iPhone; many tablets before the iPad; an MP3 player before the iPod.’

Cook stressed that Apple had redefined these categories and suggested a similar future for AI, declaring, ‘This is how I feel about AI.’

Cook also outlined concrete steps the company is taking. Around 40% of the 12,000 hires made last year were allocated to research and development, with much of the focus on AI.

Bloomberg also reports that Apple is developing a new cloud-computing chip, code-named Baltra, designed to support AI features. In a recent interview with CNBC, Cook stated that Apple is open to acquisitions that could accelerate its progress in AI.

Apple is not alone in its intense focus on AI; rival firms are also raising expectations and increasing pressure on staff. Sergey Brin, the Google co-founder who has returned to the company, told employees that 60-hour in-office work weeks may be necessary to win the AI race.

Reports of burnout and extreme workloads are becoming more frequent across leading AI firms. Former OpenAI engineer Calvin French-Owen recently described the company’s high-pressure and secretive culture.

French-Owen noted that the environment had become so intense that leadership offered the entire staff a week off to recover, according to Wired.

AI has become the next major battleground in big tech, with companies ramping up investment and reshaping internal structures to secure dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eswatini advances digital vision with new laws, 5G and skills training

Eswatini is moving forward with a national digital transformation plan focused on infrastructure, legislation and skills development.

The country’s Minister of ICT, Savannah Maziya, outlined key milestones during the 2025 Eswatini Economic Update, co-hosted with the World Bank.

In her remarks, Maziya said that digital technology plays a central role in job creation, governance and economic development. She introduced several regulatory frameworks, including a Cybersecurity Bill, a Critical Infrastructure Bill and an E-Commerce Strategy.

Additional legislation is planned for emerging technologies such as AI, robotics and satellite systems.

Infrastructure improvements include the nationwide expansion of fibre optic networks and a rise in international connectivity capacity from 47 Gbps to 72 Gbps.

Mbabane, the capital, is being developed as a Smart City with 5G coverage, AI-enabled surveillance and public Wi-Fi access.

The Ministry of ICT has launched more than 11 digital public services and plans to add 90 more in the next three years.

A nationwide coding initiative will offer digital skills training to over 300,000 citizens, supporting wider efforts to increase access and participation in the digital economy.

Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Elsewhere, Google confirmed that it will sign the General Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and on the handling of trade secrets.

Apple’s quiet race to replace Google Search with its own AI

Apple occasionally seems out of step with public sentiment, particularly when it comes to AI. A revealing example, highlighted by Bloomberg’s Mark Gurman in his Power On newsletter, involves Apple’s initial reluctance to build a ChatGPT-style chatbot for the iPhone.

Engineers within Apple’s AI division reportedly concluded that creating such a chatbot was unnecessary. Executives in both software and marketing agreed, suggesting there was only limited consumer interest in these tools.

However, chatbots have already demonstrated strong capabilities in answering user queries, something Siri still struggles with. While Siri can manage simple phone tasks, such as setting timers and alarms, it falls short of the depth and accuracy of modern generative AI models.

Currently, Siri can redirect questions to ChatGPT, but only with user consent on a case-by-case basis. The responses, however, are brief and lack the detail found in the standalone ChatGPT app.

For richer answers, users are better off installing ChatGPT or Google’s Gemini directly. Siri’s limited integration does not extend to older models, such as the iPhone 15 or 15 Plus, which lack Apple Intelligence.

Users of these devices are strongly encouraged to install the AI apps manually for a more capable assistant experience.

AI is also transforming search. Apple Services chief Eddy Cue has acknowledged that AI-driven search is the future.

Nonetheless, Apple remains financially bound to Google, which pays approximately $20 billion annually to be the default search engine on Apple devices. The US Department of Justice may soon intervene, potentially disrupting a partnership crucial to the growth of Apple’s Services division.

In a bid to modernise its search experience, Apple is developing its own answer engine through an internal team known as AKI (Answers, Knowledge and Information).

The goal is to build a web-crawling system capable of delivering accurate responses to general knowledge queries, similar to what ChatGPT offers. Apple is considering deploying this answer engine not only within Siri but also across Spotlight and Safari.

A standalone app may also be developed to complement these efforts. Apple has also shown interest in external AI tools, such as Perplexity. The Perplexity iOS app, which boasts a near-perfect rating from almost 230,000 reviews, promises clear, up-to-date answers, a long-standing demand from users frustrated with Siri’s limitations.

The success of Apple’s in-house AI search project will be closely watched. Many iPhone users are hopeful that the next wave of AI tools will finally deliver the intelligence and responsiveness long expected from Apple’s digital assistant.

AI breaches push data leak costs to new heights despite global decline

IBM’s 2025 Cost of a Data Breach Report revealed a sharp gap between rapid AI adoption and the oversight needed to secure it.

Although the global average data breach cost fell slightly to $4.44 million, security incidents involving AI systems remain more severe and disruptive.

Around 13% of organisations reported breaches involving AI models or applications, while 8% were unsure whether they had been compromised.

Alarmingly, nearly all AI-related breaches occurred in systems without access controls, leading to data leaks in 60% of cases and operational disruption in almost one-third of them. Shadow AI (unsanctioned or unmanaged AI systems) played a central role, with one in five breaches traced back to it.

Organisations without AI governance policies or detection systems faced significantly higher costs, especially when personally identifiable information or intellectual property was exposed.

Attackers increasingly used AI tools such as deepfakes and phishing, with 16% of studied breaches involving AI-assisted threats.

Healthcare remained the costliest sector, with an average breach cost of $7.42 million and the longest recovery timeline, at 279 days.

Despite the risks, fewer organisations plan to invest in post-breach security. Only 49% intend to strengthen defences, down from 63% last year.

Even fewer will prioritise AI-driven security tools. With many organisations also passing costs on to consumers, recovery now often includes long-term financial and reputational fallout, not just restoring systems.

Nvidia refutes chip backdoor allegations as China launches probe

Nvidia has firmly denied claims that its AI chips contain backdoors allowing remote control or tracking, following questioning by China’s top cybersecurity agency.

The investigation, which focuses on the H20 chip, designed specifically for the Chinese market, comes as Beijing intensifies scrutiny of foreign technology used in sensitive systems.

The H20 was initially blocked from export in April under US restrictions, but is now expected to return to Chinese shelves.

China’s Cyberspace Administration (CAC) summoned Nvidia officials to explain whether the chip enables unauthorised access or surveillance. The agency cited demands from US lawmakers for mandatory tracking features in advanced AI hardware as grounds for its concern.

In a statement, Nvidia insisted it does not include remote access capabilities in its products, reaffirming its commitment to cybersecurity.

Meanwhile, China’s state-backed People’s Daily questioned the company’s trustworthiness, stating that ‘network security is as vital as national territory’ and warning against reliance on what it described as ‘sick chips’.

The situation highlights Nvidia’s delicate position as it attempts to maintain dominance in China’s AI chip market while complying with mounting US export rules.

Tensions have escalated since similar actions were taken against other US firms, including a 2022 ban on Micron’s chips and recent antitrust scrutiny over Nvidia’s Mellanox acquisition.

FBI warns public to avoid scanning QR codes on unsolicited packages

The FBI has issued a public warning about a rising scam involving QR codes placed on packages delivered to people who never ordered them.

According to the agency, these codes can lead recipients to malicious websites or prompt them to install harmful software, potentially exposing sensitive personal and financial data.

The scheme is a variation of the so-called brushing scam, in which online sellers send unordered items and use recipients’ names to post fake product reviews. In the new version, QR codes are added to the packaging, increasing the risk of fraud by directing users to deceptive websites.

While the scheme is not as widespread as other types of fraud, the FBI urges caution. The agency recommends avoiding QR codes from unknown sources, especially those attached to unrequested deliveries.

It also advises consumers to pay close attention to the web address that appears before tapping on any QR code link.
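That inspection step can be partly automated. The following is a minimal, illustrative sketch, not FBI guidance; the heuristics and function name are our own, showing the kind of red flags a reader might check for in a URL decoded from an untrusted QR code:

```python
from urllib.parse import urlparse

# Link-shortening domains hide the real destination of a URL.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}

def qr_url_red_flags(url: str) -> list[str]:
    """Return reasons a decoded QR URL looks suspicious.

    Heuristics only: an empty list does NOT mean the link is safe.
    """
    flags = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()

    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode domain (possible look-alike)")
    if host in SHORTENERS:
        flags.append("URL shortener hides the real destination")
    if "@" in parsed.netloc:
        flags.append("userinfo in URL (common obfuscation trick)")
    return flags
```

Such checks cover only the most obvious tricks; a plausible-looking domain can still be malicious, which is why the FBI’s advice remains to avoid unsolicited QR codes altogether.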

Authorities have noted broader misuse of QR codes, including cases where criminals place fake codes over legitimate ones in public spaces.

In one recent incident, scammers used QR stickers on parking meters in New York to redirect people to third-party payment pages requesting card details.

Cybersecurity sector sees busy July for mergers

July witnessed a significant surge in cybersecurity mergers and acquisitions (M&A), spearheaded by Palo Alto Networks’ announcement of its definitive agreement to acquire identity security firm CyberArk for an estimated $25 billion.

The transaction, set to be the second-largest cybersecurity acquisition on record, signals Palo Alto’s strategic entry into identity security.

Beyond this significant deal, Palo Alto Networks also completed its purchase of AI security specialist Protect AI. The month saw widespread activity across the sector, including LevelBlue’s acquisition of Trustwave to create the industry’s largest pure-play managed security services provider.

Zurich Insurance Group, Signicat, Limerston Capital, Darktrace, Orange Cyberdefense, SecurityBridge, Commvault, and Axonius all announced or finalised strategic cybersecurity acquisitions.

The deals highlight a strong market focus on AI security, identity management, and expanding service capabilities across various regions.

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users find helpful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box allowing their chat to be discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback will be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

UK universities urged to act fast on AI teaching

UK universities risk losing their competitive edge unless they adopt a clear, forward-looking approach to AI in teaching. Falling enrolments, limited funding, and outdated digital systems have exposed a lack of AI literacy across many institutions.

As AI skills become essential for today’s workforce, employers increasingly expect graduates to be confident users rather than passive observers.

Many universities continue relying on legacy technology rather than exploring the full potential of modern learning platforms. AI tools can enhance teaching by adapting to individual student needs and helping educators identify learning gaps.

However, few staff have received adequate training, and many universities lack the resources or structure to embed AI into day-to-day teaching effectively.

To close the growing gap between education and the workplace, universities must explore flexible short courses and microcredentials that develop workplace-ready skills.

Introducing ethical standards and data transparency from the start will ensure AI is used responsibly without weakening academic integrity.
