Odyssey presents immersive AI-powered streaming

Odyssey, a startup founded by self-driving veterans Oliver Cameron and Jeff Hawke, has unveiled an AI model that allows users to interact with streaming video in real time.

The technology generates video frames every 40 milliseconds, enabling users to move through scenes like a 3D video game instead of passively watching. A demo is currently available online, though it is still in its early stages.

The system relies on a new kind of ‘world model’ that predicts future visual states based on previous actions and environments. Odyssey claims its model can maintain spatial consistency, learn motion from video, and sustain coherent video output for five minutes or more.

Unlike models trained solely on internet data, Odyssey captures real-world environments using a custom 360-degree, backpack-mounted camera to build higher-fidelity simulations.

Tech giants and AI startups are exploring world models to power next-generation simulations and interactive media. Yet creative professionals remain wary. A 2024 study commissioned by the Animation Guild predicted significant job disruptions across film and animation.

Game studios like Activision Blizzard have been scrutinised for using AI while cutting staff.

Odyssey, however, insists its goal is collaboration instead of replacement. The company is also developing software to let creators edit scenes using tools like Unreal Engine and Blender.

Backed by $27 million in funding and supported by Pixar co-founder Ed Catmull, Odyssey aims to transform video content across entertainment, education, and advertising through on-demand interactivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

M&S CEO targeted by hackers in abusive ransom email

Marks & Spencer has been directly targeted by a ransomware group calling itself DragonForce, which sent a vulgar and abusive ransom email to CEO Stuart Machin using a compromised employee email address.

The message, laced with offensive language and racist terms, demanded that Machin engage via a darknet portal to negotiate payment. It also claimed that the hackers had encrypted the company’s servers and stolen customer data, a claim M&S eventually acknowledged weeks later.

The email, dated 23 April, appears to have been sent from the account of an Indian IT worker employed by Tata Consultancy Services (TCS), a long-standing M&S tech partner.

TCS has denied involvement and stated that its systems were not the source of the breach. M&S has remained silent publicly, neither confirming the full scope of the attack nor disclosing whether a ransom was paid.

The cyber attack has caused major disruption, costing M&S an estimated £300 million and halting online orders for over six weeks.

DragonForce has also claimed responsibility for a simultaneous attack on the Co-op, which left some shelves empty for days. While nothing has yet appeared on DragonForce’s leak site, the group claims it will publish stolen information soon.

Investigators believe DragonForce operates as a ransomware-as-a-service collective, offering tools and platforms to cybercriminals in exchange for a 20% share of any ransom.

Some experts suspect the real perpetrators may be young hackers from the West, linked to a loosely organised online community called Scattered Spider. The UK’s National Crime Agency has confirmed it is focusing on the group as part of its inquiry into the recent retail hacks.

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco Superior Court, accuses Anthropic of breach of contract, unjust enrichment, and interference with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
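A robots.txt file is a convention rather than an enforcement mechanism: it simply declares which paths automated crawlers should avoid, and a compliant scraper checks it before each request. A minimal sketch using Python’s standard `urllib.robotparser` shows how that check works (the paths and URLs here are hypothetical, not Reddit’s actual rules):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt disallowing automated access to user content.
rules = [
    "User-agent: *",
    "Disallow: /comments/",
]

parser = RobotFileParser()
parser.parse(rules)

# A compliant crawler calls can_fetch() before requesting a URL;
# nothing technically prevents a non-compliant one from ignoring it.
print(parser.can_fetch("*", "https://example.com/comments/abc123"))  # False
print(parser.can_fetch("*", "https://example.com/about"))            # True
```

The asymmetry is the point of the complaint: because the file is advisory, ignoring it is trivial technically, which is why Reddit frames the conduct as a breach of its user agreement rather than a circumvention of a hard barrier.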

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 67% higher at $118.21—indicating investor support for the company’s aggressive stance on data protection.

Google email will reply by using your voice

Google is building a next-generation email system that uses generative AI to reply to mundane messages in your own tone, according to DeepMind CEO Demis Hassabis.

Speaking at SXSW London, Hassabis said the system would handle everyday emails instead of requiring users to write repetitive responses themselves.

Hassabis called email ‘the thing I really want to get rid of,’ and joked he’d pay thousands each month for that luxury. He emphasised that while AI could help cure diseases or combat climate change, it should also solve smaller daily annoyances first—like managing inbox overload.

The upcoming feature aims to identify routine emails and draft replies that reflect the user’s writing style, potentially making decisions on simpler matters.

While details are still limited, the project remains under development and could debut as part of Google’s premium AI subscription model before reaching free-tier users.

Gmail already includes generative tools that adjust message tone, but the new system goes further—automating replies instead of just suggesting edits.

Hassabis also envisioned a universal AI assistant that protects users’ attention and supports digital well-being, offering personalised recommendations and taking care of routine digital tasks.

Salt Typhoon and Silk Typhoon reveal weaknesses

Recent revelations about Salt Typhoon and Silk Typhoon have exposed severe weaknesses in how organisations secure their networks.

These state-affiliated hacking groups have demonstrated that modern cyber threats come from well-resourced and coordinated actors instead of isolated individuals.

Salt Typhoon, responsible for one of the largest cyber intrusions into US infrastructure, exploited cloud network vulnerabilities targeting telecom giants like AT&T and Verizon, forcing companies to reassess their reliance on traditional private circuits.

Many firms continue to believe private circuits offer better protection simply because they are off the public internet. Some even add MACsec encryption for extra defence. However, MACsec’s ‘hop-by-hop’ design introduces new risks—data is repeatedly decrypted and re-encrypted at each routing point.

Every one of these hops becomes a possible target for attackers, who can intercept, manipulate, or exfiltrate data without detection, especially when third-party infrastructure is involved.

Beyond its security limitations, MACsec presents high operational complexity and cost, making it unsuitable for today’s cloud-first environments. In contrast, solutions like Internet Protocol Security (IPSec) offer simpler, end-to-end encryption.
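The difference between the two designs can be sketched with a toy model in Python (the XOR transform below is a stand-in for a real cipher, and all keys and data are hypothetical, chosen only to keep the sketch short): under hop-by-hop protection every intermediate node briefly holds the plaintext, while end-to-end encryption keeps the payload opaque until it reaches the final endpoint.

```python
def toy_encrypt(data: bytes, key: int) -> bytes:
    # Stand-in for a real cipher; XOR is used only for brevity.
    return bytes(b ^ key for b in data)

toy_decrypt = toy_encrypt  # XOR is its own inverse

message = b"customer records"

# Hop-by-hop (MACsec-style): each hop decrypts, then re-encrypts for the
# next link, so the plaintext is exposed at every intermediate node.
hop_keys = [0x21, 0x42, 0x63]
exposed_at = []
payload = toy_encrypt(message, hop_keys[0])
for i, key in enumerate(hop_keys):
    plaintext = toy_decrypt(payload, key)   # hop i handles the plaintext
    exposed_at.append(i)
    next_key = hop_keys[i + 1] if i + 1 < len(hop_keys) else None
    payload = toy_encrypt(plaintext, next_key) if next_key else plaintext

print(exposed_at)  # every hop saw the plaintext

# End-to-end (IPSec-style): intermediate hops only forward ciphertext.
e2e_key = 0x7A
ciphertext = toy_encrypt(message, e2e_key)
forwarded = ciphertext                      # hops relay it, never decrypting
received = toy_decrypt(forwarded, e2e_key)
assert received == message
```

The attack surface follows directly from the model: in the hop-by-hop case, compromising any one of the intermediate nodes yields plaintext, whereas in the end-to-end case only the two endpoints ever hold it.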

Although not perfect in cloud settings, IPSec can be enhanced through parallel connections or expert guidance. The Cybersecurity and Infrastructure Security Agency (CISA) urges organisations to prioritise complete encryption of all data in transit, regardless of the underlying network.

Silk Typhoon has further amplified concerns by exploiting privileged credentials and cloud APIs to infiltrate both on-premise and cloud systems. These actors use covert networks to maintain long-term access while remaining hidden.

As threats evolve, companies must adopt Zero Trust principles, strengthen identity controls, and closely monitor their cloud environments instead of relying on outdated security models.

Collaborating with cloud security experts can help shut down exposure risks and protect sensitive data from sophisticated and persistent threats.

HMRC targeted in £47 million UK fraud

A phishing scheme run by organised crime groups cost the UK government £47 million, according to officials from His Majesty’s Revenue and Customs.

Criminals posed as taxpayers to claim payments using fake or hijacked credentials. Rather than a cyberattack, the operation relied on impersonation and did not involve the theft of taxpayer data.

Angela MacDonald, HMRC’s deputy chief executive, confirmed to Parliament’s Treasury Committee that the fraud took place in 2024. The stolen funds were taken through three separate payments, though HMRC managed to block an additional £1.9 million attempt.

Officials began a cross-border criminal investigation soon after discovering the scam, which has led to arrests.

Around 100,000 PAYE accounts — typically used by employers for employee tax and national insurance payments — were either created fraudulently or accessed illegally.

Banks were also targeted through the use of HMRC-linked identity information. Customers first flagged the issue when they noticed unusual activity.

HMRC has shut down the fake accounts and removed false data as part of its response. John-Paul Marks, HMRC’s chief executive, assured the committee that the incident is now under control and contained. ‘That is a lot of money and unacceptable,’ MacDonald told MPs.

Cyber attack hits Lee Enterprises staff data

Thousands of current and former employees at Lee Enterprises have had their data exposed following a cyberattack earlier this year.

Hackers accessed the company’s systems in early February, compromising sensitive information such as names and Social Security numbers before the breach was contained the same day.

Although the media firm, which operates over 70 newspapers across 26 US states, swiftly secured its networks, a three-month investigation involving external cybersecurity experts revealed that attackers accessed databases containing employee details.

The breach potentially affects around 40,000 individuals — far more than the company’s 4,500 current staff — indicating that past employees were also impacted.

The stolen data could be used for identity theft, fraud or phishing attempts. Criminals may even impersonate affected employees to infiltrate deeper into company systems and extract more valuable information.

Lee Enterprises has notified those impacted and filed relevant disclosures with authorities, including the Maine Attorney General’s Office.

Headquartered in Iowa, Lee Enterprises draws over 200 million monthly online page views and generated over $611 million in revenue in 2024. The incident underscores the ongoing vulnerability of media organisations to cyber threats, especially when personal employee data is involved.

Cognitive offloading and the future of the mind in the AI age

AI reshapes work and learning

The rapid advancement of AI is giving rise to a range of new phenomena in contemporary societies.

The integration of AI-driven tools into a broad spectrum of professional tasks has proven beneficial in many respects, particularly in terms of alleviating the cognitive and physical burdens traditionally placed on human labour.

By automating routine processes and enhancing decision-making capabilities, AI has the potential to significantly improve efficiency and productivity across various sectors.

In response to these accelerating technological changes, a growing number of nations are prioritising the integration of AI technologies into their education systems to ensure students are prepared for future societal and workforce transformations.

China advances AI education for youth

China has released two landmark policy documents aimed at integrating AI education systematically into the national curriculum for primary and secondary schools.

The initiative not only reflects the country’s long-term strategic vision for educational transformation but also seeks to position China at the forefront of global AI literacy and talent development.

The two guidelines, formally titled the Guidelines for AI General Education in Primary and Secondary Schools and the Guidelines for the Use of Generative AI in Primary and Secondary Schools, represent a scientific and systemic approach to cultivating AI competencies among school-aged children.

Their release marks a milestone in the development of a tiered, progressive AI education system, with carefully delineated age-appropriate objectives and ethical safeguards for both students and educators.

The USA expands AI learning in schools

In April, the US government outlined a structured national policy to integrate AI literacy into every stage of the education system.

By creating a dedicated federal task force, the administration intends to coordinate efforts across departments to promote early and equitable access to AI education.

Instead of isolating AI instruction within specialised fields, the initiative seeks to embed AI concepts across all learning pathways—from primary education to lifelong learning.

The plan includes the creation of a nationwide AI challenge to inspire innovation among students and educators, showcasing how AI can address real-world problems.

The policy also prioritises training teachers to understand and use AI tools, instead of relying solely on traditional teaching methods. It supports professional development so educators can incorporate AI into their lessons and reduce administrative burdens.

The strategy encourages public-private partnerships, using industry expertise and existing federal resources to make AI teaching materials widely accessible.

European Commission supports safe AI use

As AI becomes more common in classrooms around the globe, educators must understand not only how to use it effectively but also how to apply it ethically.

Rather than introducing AI tools without guidance or reflection, the European Commission has provided ethical guidelines to help teachers use AI and data responsibly in education.

Published in 2022 and developed with input from educators and AI experts, the EU guidelines are intended primarily for primary and secondary teachers who have little or no prior experience with AI.

Instead of focusing on technical complexity, the guidelines aim to raise awareness about how AI can support teaching and learning, highlight the risks involved, and promote ethical decision-making.

The guidelines explain how AI can be used in schools, encourage safe and informed use by both teachers and students, and help educators consider the ethical foundations of any digital tools they adopt.

Rather than relying on unexamined technology, they support thoughtful implementation by offering practical questions and advice for adapting AI to various educational goals.

AI tools may undermine human thinking

However, technological augmentation is not without drawbacks. Concerns have been raised regarding the potential for job displacement, increased dependency on digital systems, and the gradual erosion of certain human skills.

As such, while AI offers promising opportunities for enhancing the modern workplace, it simultaneously introduces complex challenges that must be critically examined and responsibly addressed.

One significant challenge that must be addressed in the context of increasing reliance on AI is the phenomenon known as cognitive offloading. But what exactly does this term entail?

What happens when we offload thinking?

Cognitive offloading refers to the practice of using physical actions or external tools to modify the information processing demands of a task, with the aim of reducing the cognitive load on an individual.

In essence, it involves transferring certain mental functions—such as memory, calculation, or decision-making—to outside resources like digital devices, written notes, or structured frameworks.

While this strategy can enhance efficiency and performance, it also raises concerns about long-term cognitive development, dependency on technological aids, and the potential degradation of innate mental capacities.

How AI may be weakening critical thinking

A study led by Dr Michael Gerlich, Head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, published in the journal Societies, raises serious concerns about the cognitive consequences of AI augmentation in various aspects of life.

The study suggests that frequent use of AI tools may be weakening individuals’ capacity for critical thinking, a skill considered fundamental to independent reasoning, problem-solving, and informed decision-making.

More specifically, Dr Gerlich adopted a mixed-methods approach, combining quantitative survey data from 666 participants with qualitative interviews involving 50 individuals.

Participants were drawn from diverse age groups and educational backgrounds and were assessed on their frequency of AI tool use, their tendency to offload cognitive tasks, and their critical thinking performance.

The study employed both self-reported and performance-based measures of critical thinking, alongside statistical analyses and machine learning models, such as random forest regression, to identify key factors influencing cognitive performance.

Younger users, who rely more on AI, think less critically

The findings revealed a strong negative correlation between frequent AI use and critical thinking abilities. Individuals who reported heavy reliance on AI tools—whether for quick answers, summarised explanations, or algorithmic recommendations—scored lower on assessments of critical thinking.

The effect was particularly pronounced among younger users aged 17 to 25, who reported the highest levels of cognitive offloading and showed the weakest performance in critical thinking tasks.

In contrast, older participants (aged 46 and above) demonstrated stronger critical thinking skills and were less inclined to delegate mental effort to AI.

Higher education strengthens critical thinking

The data also indicated that educational attainment served as a protective factor: those with higher education levels consistently exhibited more robust critical thinking abilities, regardless of their AI usage levels.

These findings suggest that formal education may equip individuals with better tools for critically engaging with digital information rather than uncritically accepting AI-generated responses.

While the study does not establish direct causation, the strength of the correlations and the consistency across quantitative and qualitative data suggest that AI usage may be contributing to a gradual decline in cognitive independence.

However, in his study, Gerlich also notes the possibility of reverse causality—individuals with weaker critical thinking skills may be more inclined to rely on AI tools in the first place.

Offloading also reduces information retention

While cognitive offloading can enhance immediate task performance, it often comes at the cost of reduced long-term memory retention, as other studies show.

This trade-off has been most prominently illustrated in experimental tasks such as the Pattern Copy Task, where participants tasked with reproducing a pattern typically choose to refer repeatedly to the original rather than commit it to memory.

Even when such behaviours introduce additional time or effort (e.g., physically moving between stations), the majority of participants opt to offload, suggesting a strong preference for minimising cognitive strain.

These findings underscore the human tendency to prioritise efficiency over internalisation, especially under conditions of high cognitive demand.

The tendency to offload raises crucial questions about the cognitive and educational consequences of extended reliance on external aids. On the one hand, offloading can free up mental resources, allowing individuals to focus on higher-order problem-solving or multitasking.

On the other hand, it may foster a kind of cognitive dependency, weakening internal memory traces and diminishing opportunities for deep engagement with information.

On one view, cognitive offloading is not a failure of memory or attention but a reconfiguration of cognitive architecture, a process that may be adaptive rather than detrimental.

However, this perspective remains controversial, especially in light of findings that frequent offloading can impair retention, transfer of learning, and critical thinking, as Gerlich’s study argues.

If students, for example, continually rely on digital devices to recall facts or solve problems, they may fail to develop the robust mental models necessary for flexible reasoning and conceptual understanding.

The mind may extend beyond the brain

The tension has also sparked debate among cognitive scientists and philosophers, particularly in light of the extended mind hypothesis.

Contrary to the traditional view that cognition is confined to the brain, the extended mind theory argues that cognitive processes often rely on, and are distributed across, tools, environments, and social structures.

As digital technologies become increasingly embedded in daily life, this hypothesis raises profound questions about human identity, cognition, and agency.

At the core of the extended mind thesis lies a deceptively simple question: Where does the mind stop, and the rest of the world begin?

Drawing an analogy to prosthetics (external objects that functionally become part of the body), the philosophers Andy Clark and David Chalmers, who first proposed the thesis, argue that cognitive tools such as notebooks, smartphones, and sketchpads can become integrated components of our mental system.

These tools do not merely support cognition; they constitute it when used in a seamless, functionally integrated manner. This conceptual shift has redefined thinking not as a brain-bound process but as a dynamic interaction between mind, body, and world.

Balancing AI and human intelligence

In conclusion, cognitive offloading represents a powerful mechanism of modern cognition, one that allows individuals to adapt to complex environments by distributing mental load.

However, its long-term effects on memory, learning, and problem-solving remain a subject of active investigation. Rather than treating offloading as inherently beneficial or harmful, future research and practice should seek to balance its use, leveraging its strengths while mitigating its costs.

Ultimately, we, as educators, policymakers, and technologists, must shape the future of learning and work while confronting a central tension: how to harness the benefits of AI without compromising the very faculties that define human intelligence, namely critical thought, memory, and independent judgment.

Eminem sues Meta over copyright violations

Eminem has filed a major lawsuit against Meta, accusing the tech giant of knowingly enabling widespread copyright infringement across its platforms. The rapper’s publishing company, Eight Mile Style, is seeking £80.6 million in damages, claiming 243 of his songs were used without authorisation.

The lawsuit argues that Meta, which owns Facebook, Instagram and WhatsApp, allowed tools such as Original Audio and Reels to encourage unauthorised reproduction and use of Eminem’s music.

The filing claims it occurred without proper licensing or attribution, significantly diminishing the value of his copyrights.

Eminem’s legal team contends that Meta profited from the infringement instead of ensuring his works were protected. If a settlement cannot be reached, the artist is demanding the maximum statutory damages — $150,000 per song — which would amount to over $109 million.

Meta has faced similar lawsuits before, including a high-profile case in 2022 brought by Epidemic Sound, which alleged the unauthorised use of thousands of its tracks. The latest claim adds to growing pressure on social media platforms to address copyright violations more effectively.
