Rod Stewart honours Ozzy Osbourne with AI fantasy

At a recent Atlanta concert, Rod Stewart honoured the late Ozzy Osbourne in a strikingly unconventional way: with an AI-generated video of Ozzy taking selfies in heaven with other late music icons. The tribute played on a giant screen behind Stewart as he performed ‘Forever Young’, depicting a cartoonish Ozzy grinning alongside legends like Kurt Cobain, Prince, Michael Jackson, and Bob Marley, all united by a floating selfie stick among the clouds.

The video, filmed by a concertgoer and shared on TikTok, featured Ozzy smiling and posing with other departed stars like Tina Turner and Freddie Mercury, turning heaven into an eternal celebrity photo op. Instead of a traditional photo montage, Stewart’s tribute conjured a digital afterlife where jam sessions and selfies with rock’s finest never end, implying perhaps that Ozzy has already joined them.

That marks a notable shift from Stewart’s earlier tributes to Osbourne, which relied on simple archival photographs. The AI animation, however strange, seems to reflect a deeper attempt to celebrate Ozzy’s spirit in a uniquely modern way, courtesy, presumably, of a tech-savvy relative.

Following Ozzy’s death on 22 July, Stewart shared a heartfelt farewell on Instagram: ‘Bye, Ozzy. Sleep well, my friend. I’ll see you up there, later rather than sooner.’ Judging by this tribute, he’s already imagining what that reunion might look like.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Prisons trial AI to forecast conflict and self‑harm risk

UK Justice Secretary Shabana Mahmood has rolled out an AI-driven violence prediction tool across prisons and probation services. One system evaluates inmates’ profiles, factoring in age, past behaviour, and gang ties, to flag those likely to become violent. Flagged prisoners can then be matched to tighter supervision or relocated, with the aim of reducing attacks on staff and fellow inmates.

Another tool scans content from seized mobile phones. AI algorithms sift through more than 33,000 devices and 8.6 million messages, detecting coded language tied to contraband, violence, or escape plans. When suspicious content is flagged, staff receive alerts so they can take preventive action.

Rising prison violence and self-harm underscore the urgency of such interventions. Assaults on staff recently topped 10,500 a year, the highest on record, while self-harm incidents neared 78,000. Overcrowding and drug infiltration have intensified operational challenges.

Analysts compare the approach to the ‘pre‑crime’ models of science fiction, raising concerns about civil liberties. Without robust governance, predictive tools may replicate biases or punish potential rather than actual behaviour. Transparency, independent audits, and appeals processes are essential to uphold inmates’ rights.

Microsoft study flags 40 jobs highly vulnerable to AI automation

Microsoft Research released a comprehensive AI impact assessment, ranking 80 occupations by exposure to generative AI tools such as Copilot and ChatGPT. Roles heavily involved in language, writing, client communication, and routine digital tasks showed the highest AI overlap. Notable examples include translators, historians, customer service agents, political scientists, and data scientists.

By contrast, jobs requiring hands-on work, empathy, or real-time physical and emotional engagement, such as nurses, phlebotomists, construction trades, embalmers, and housekeeping staff, were classified as low risk under current AI capabilities. Experts suggest these positions remain resilient because they depend on physical presence, human interaction, and complex real-time decision making.

Although certain professions scored high for AI exposure, Microsoft and independent analysts emphasise that most jobs won’t disappear entirely. Instead, generative AI tools are expected to augment workflows, creating hybrid roles where human judgement and oversight remain critical, especially in sectors such as financial services, healthcare, and creative industries.

Cybersecurity sector sees busy July for mergers

July witnessed a significant surge in cybersecurity mergers and acquisitions (M&A), spearheaded by Palo Alto Networks’ announcement of its definitive agreement to acquire identity security firm CyberArk for an estimated $25 billion.

The transaction, set to be the second-largest cybersecurity acquisition on record, signals Palo Alto’s strategic entry into identity security.

Beyond that headline deal, Palo Alto Networks also completed its purchase of AI security specialist Protect AI. The month saw widespread activity across the sector, including LevelBlue’s acquisition of Trustwave to create the industry’s largest pure-play managed security services provider.

Zurich Insurance Group, Signicat, Limerston Capital, Darktrace, Orange Cyberdefense, SecurityBridge, Commvault, and Axonius all announced or finalised strategic cybersecurity acquisitions.

The deals highlight a strong market focus on AI security, identity management, and expanding service capabilities across various regions.

Amazon plans to bring ads to Alexa+ chats

Amazon is exploring ways to insert ads into conversations with its AI assistant Alexa+, according to CEO Andy Jassy. Speaking during the company’s latest earnings call, he described the feature as a potential tool for product discovery and future revenue.

Alexa+ is Amazon’s upgraded digital assistant designed to support more natural, multi-step conversations using generative AI. It is already available to millions of users through Prime subscriptions or as a standalone service.

Jassy said longer interactions open the door for embedded advertising, although the approach has not yet been fully developed. Industry observers see this as part of a wider trend, with companies like Google and OpenAI also weighing ad-based business models.

Alexa+ has received mixed reviews so far, with delays in feature delivery and technical problems such as hallucinations raising concerns. Privacy advocates have warned that ad targeting within personal conversations could unsettle users, given the sensitivity of the data involved.

Apple boosts AI investment with new hires and acquisitions

Apple is ramping up its AI efforts, with CEO Tim Cook confirming that the company is significantly increasing its investments in the technology. During the Q3 2025 earnings call, Cook said AI would be embedded across Apple’s devices, platforms and internal operations.

The firm has reallocated staff to focus on AI and continues to acquire smaller companies to accelerate progress, completing seven acquisitions this year alone. Capital expenditure has also risen, partly due to the growing focus on AI.

Despite criticism that Apple has lagged behind in the AI race, the company insists it will not rush features to market. More than 20 Apple Intelligence tools have already been released, with additional features like live translation and an AI fitness assistant expected by year-end.

The updated version of Siri, which promises greater personalisation, has been pushed to 2026. Cook dismissed suggestions that AI-powered hardware, like glasses, would replace the iPhone, instead positioning future devices as complementary.

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users find helpful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box making their chat discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback would be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

UK universities urged to act fast on AI teaching

UK universities risk losing their competitive edge unless they adopt a clear, forward-looking approach to AI in teaching. Falling enrolments, limited funding, and outdated digital systems have exposed a lack of AI literacy across many institutions.

As AI skills become essential for today’s workforce, employers increasingly expect graduates to be confident users rather than passive observers.

Many universities continue relying on legacy technology rather than exploring the full potential of modern learning platforms. AI tools can enhance teaching by adapting to individual student needs and helping educators identify learning gaps.

However, few staff have received adequate training, and many universities lack the resources or structure to embed AI into day-to-day teaching effectively.

To close the growing gap between education and the workplace, universities must explore flexible short courses and microcredentials that develop workplace-ready skills.

Introducing ethical standards and data transparency from the start will ensure AI is used responsibly without weakening academic integrity.

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. Governments have issued AI action plans, the Biden administration’s among them, but experts say these lack the strength to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Amazon reports $18.2B profit boost as AI strategy takes off

Amazon has reported a 35% increase in quarterly profit, driven by rapid growth in its AI-powered services and cloud computing arm, Amazon Web Services (AWS).

The tech and e-commerce giant posted net income of $18.2 billion for Q2 2025, up from $13.5 billion a year earlier, while net sales rose 13% to $167.7 billion and exceeded analyst expectations.

CEO Andy Jassy attributed the strong performance to the company’s growing reliance on AI. ‘Our conviction that AI will change every customer experience is starting to play out,’ Jassy said, referencing Amazon’s AI-powered Alexa+ upgrades and new generative AI shopping tools.

AWS remained the company’s growth engine, with revenue climbing 17.5% to $30.9 billion and operating profit rising to $10.2 billion. The surge reflects the increasing demand for cloud infrastructure to support AI deployment across industries.

Despite the solid earnings, Amazon’s share price dipped more than 3% in after-hours trading. Analysts pointed to concerns over the company’s heavy capital spending, particularly its aggressive $100 billion AI investment strategy.

Free cash flow over the past year fell to $18.2 billion, down from $53 billion a year earlier. In Q2 alone, Amazon spent $32.2 billion on infrastructure, nearly double the previous year’s figure, much of it aimed at expanding its data centre and logistics capabilities to support AI workloads.

For the current quarter, Amazon projected revenue of $174.0 to $179.5 billion and operating income between $15.5 and $20.5 billion, slightly below investor hopes but still reflecting double-digit year-on-year growth.
