Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce its electricity usage during peak demand, freeing up capacity for other grid customers when needed.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman shares first glimpse of GPT-5 via Pantheon screenshot

OpenAI CEO Sam Altman shared a screenshot on X showing GPT-5 in action. The post casually endorsed the animated sci-fi series Pantheon, a cult tech favourite exploring general AI.

When asked if GPT-5 also recommends the show, Altman replied with a screenshot: ‘turns out yes’. It marked one of the earliest public glimpses of the new model, hinting at expanded capabilities.

GPT-5 is expected to outperform its predecessors, with a larger context window, multimodal abilities, and more agentic task handling. The screenshot also shows that some quirks remain, such as its fondness for the em dash.

The model identified Pantheon as having a 100% critic rating on Rotten Tomatoes and described it as ‘cerebral, emotional, and philosophically intense’. Business Insider verified the score and tone of the reviews.

OpenAI faces mounting pressure to keep pace with rivals like Google DeepMind, Meta, xAI, and Anthropic. Public teasers such as this one suggest GPT-5 will soon make a broader debut.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s transformation of work habits, mindset and lifestyle

At Mindvalley’s AI Summit, former Google Chief Decision Scientist Cassie Kozyrkov described AI not as a substitute for human thought but as a magnifier of what the human mind can produce. Rather than replacing us, AI lets us offload mundane tasks and focus on deeper cognitive and creative work.

Work structures are being transformed, not just in factories, but behind computer screens. AI now handles administrative ‘work about work,’ multitasking, scheduling, and research summarisation, lowering friction in knowledge work and enabling people to supervise agents rather than execute tasks manually.

Personal life is being reshaped, too. AI tools for finance or health, such as budgeting apps or personalised diagnostics, move decisions into data-augmented systems with faster insight and fewer human biases.

Meanwhile, creativity is co-authored via AI-generated design, music or writing, requiring humans to filter, refine and ideate beyond the algorithm.

Recognising cognitive change, AI thought leaders envision a new era where ‘blended work’ prevails: humans manage AI agents, call the shots, and wield ethical oversight, while the AI executes pipelines of repetitive or semi-intelligent tasks.

Scholars warn that this model demands new fairness, transparency, and collaboration skills.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Science removes concern from Microsoft quantum paper

The journal Science will replace an editorial expression of concern (EEoC) on a 2020 Microsoft quantum computing paper with a correction. The update notes incomplete explanations of device tuning and partial data disclosure, but no misconduct.

Co-author Charles Marcus welcomed the decision but lamented the four-year dispute.

Sergey Frolov, who raised concerns about data selection, disagrees with the correction and believes the paper should be retracted. The debate centres on Microsoft’s claims about topological superconductors using Majorana particles, a critical step for quantum computing.

Several Microsoft-backed papers on Majoranas have faced scrutiny, including retractions. Critics accuse Microsoft of cherry-picking data, while supporters stress the research’s complexity and pioneering nature.

The controversy reveals challenges in peer review and verifying claims in a competitive field.

Microsoft defends the integrity of its research and values open scientific debate. Critics warn that selective reporting risks misleading the community. The dispute highlights the difficulty of confirming breakthrough quantum computing claims in an emerging industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Online Safety Act under fire amid free speech and privacy concerns

The UK’s Online Safety Act, aimed at protecting children and eliminating illegal content online, is stirring a strong debate due to its stringent requirements on social media platforms and websites hosting adult content.

Critics argue that the act’s broad application could unintentionally suppress free speech, as highlighted by social media platform X.

X claims the act results in the censorship of lawful content, reflecting concerns shared by politicians, free-speech campaigners, and content creators.

Moreover, public unease is evident, with over 468,000 individuals signing a petition for the act’s repeal, citing privacy concerns over mandatory age checks requiring personal data on adult content sites.

Despite mounting criticism, the UK government is resolute in its commitment to the legislation. Technology Secretary Peter Kyle equates opposition to siding with online predators, emphasising child protection.

The government asserts that the act also mandates platforms to uphold freedom of expression alongside child safety obligations.

X criticises both the broad scope and the tight compliance timelines of the act, warning that they create pressure towards over-censorship, and calls for significant statutory revisions to protect personal freedoms while safeguarding children.

The government rebuffs claims that the Online Safety Act compromises free speech, with assurances that the law equally protects freedom of expression.

Meanwhile, Ofcom, the UK’s communications regulator, has opened investigations into the compliance of several companies operating pornography sites, signalling rigorous enforcement.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Prisons trial AI to forecast conflict and self‑harm risk

UK Justice Secretary Shabana Mahmood has rolled out an AI-driven violence prediction tool across prisons and probation services. One system evaluates inmates’ profiles, factoring in age, past behaviour, and gang ties, to flag those likely to become violent. Flagged prisoners can then be placed under tighter supervision or relocated, with the aim of reducing attacks on staff and fellow inmates.

Another feature actively scans content from seized mobile phones. AI algorithms sift through over 33,000 devices and 8.6 million messages, detecting coded language tied to contraband, violence, or escape plans. When suspicious content is flagged, staff receive alerts for preventive action.

Rising prison violence and self-harm underscore the urgency of such interventions. Assaults on staff recently reached over 10,500 a year, the highest on record, while self-harm incidents reached nearly 78,000. Overcrowding and drug infiltration have intensified operational challenges.

Analysts compare the approach to ‘pre-crime’ models familiar from science fiction, raising concerns about civil liberties. Without robust governance, predictive tools may replicate biases or punish potential rather than actual behaviour. Transparency, independent audits, and appeals processes are essential to uphold inmates’ rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity sector sees busy July for mergers

July witnessed a significant surge in cybersecurity mergers and acquisitions (M&A), spearheaded by Palo Alto Networks’ announcement of its definitive agreement to acquire identity security firm CyberArk for an estimated $25 billion.

The transaction, set to be the second-largest cybersecurity acquisition on record, signals Palo Alto’s strategic entry into identity security.

Beyond this significant deal, Palo Alto Networks also completed its purchase of AI security specialist Protect AI. The month saw widespread activity across the sector, including LevelBlue’s acquisition of Trustwave to create the industry’s largest pure-play managed security services provider.

Zurich Insurance Group, Signicat, Limerston Capital, Darktrace, Orange Cyberdefense, SecurityBridge, Commvault, and Axonius all announced or finalised strategic cybersecurity acquisitions.

The deals highlight a strong market focus on AI security, identity management, and expanding service capabilities across various regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon plans to bring ads to Alexa+ chats

Amazon is exploring ways to insert ads into conversations with its AI assistant Alexa+, according to CEO Andy Jassy. Speaking during the company’s latest earnings call, he described the feature as a potential tool for product discovery and future revenue.

Alexa+ is Amazon’s upgraded digital assistant designed to support more natural, multi-step conversations using generative AI. It is already available to millions of users through Prime subscriptions or as a standalone service.

Jassy said longer interactions open the door for embedded advertising, although the approach has not yet been fully developed. Industry observers see this as part of a wider trend, with companies like Google and OpenAI also weighing ad-based business models.

Alexa+ has received mixed reviews so far, with delays in feature delivery and technical challenges like hallucinations raising concerns. Privacy advocates have warned that ad targeting within personal conversations may worry users, given the data involved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments, including the US under the Biden administration, have issued AI action plans, experts say these lack the strength to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!