Google launches Project Genie allowing users to create interactive AI-generated worlds

Google has launched Project Genie, an experimental prototype that allows users to create and explore interactive AI-generated worlds. The web application, powered by Genie 3, Nano Banana Pro, and Gemini, is rolling out to Google AI Ultra subscribers in the US aged 18 and over.

Genie 3 is a world model that simulates environmental dynamics and predicts how actions affect them in real time. Unlike static 3D snapshots, the technology generates paths on the fly as users move and interact, simulating physics for dynamic environments.

Project Genie centres on three core capabilities: world sketching, exploration, and remixing. Users can prompt with text and images to create environments, define character perspectives, and preview worlds before entering.

The experimental prototype has known limitations: generations are capped at 60 seconds, worlds can deviate from prompts or real-world physics, and characters can occasionally be difficult to control.

Google emphasises responsible development as part of its mission to build AI that benefits humanity, with ongoing improvements planned based on user feedback.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Large language models mirror human brain responses to unexpected twists

Researchers at the University of Chicago are using AI to uncover insights into how the human brain processes surprise. The project, directed by Associate Professor Monica Rosenberg, compares human and AI responses to narrative moments to explore cognitive processes.

The study involved participants listening to stories whilst researchers recorded their responses through brain scans. Researchers then fed identical stories to the language model Llama, prompting it to predict subsequent text after each segment.

When AI predictions diverged from actual story content, that gap served as a measure of surprise, mirroring the discrepancy human readers experience when expectations fail.
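The mechanics of such a measure are simple to sketch. Below is a minimal, hypothetical illustration, not the study's actual pipeline (which used Llama's real token probabilities over full stories): surprise is taken as the negative log-probability the model assigned to what actually came next, so unlikely continuations score high.

```python
import math

def surprise_bits(predicted_probs, actual_word):
    """Surprisal of the actual continuation, in bits: -log2 P(actual).
    The lower the model's probability for what really happened,
    the larger the surprise."""
    p = predicted_probs.get(actual_word, 1e-9)  # floor for words never predicted
    return -math.log2(p)

# Hypothetical next-word distribution after "The butler opened the ..."
preds = {"door": 0.70, "window": 0.20, "coffin": 0.01}

print(round(surprise_bits(preds, "door"), 2))    # expected continuation: 0.51 bits
print(round(surprise_bits(preds, "coffin"), 2))  # narrative twist: 6.64 bits
```

In the study's terms, a spike in this quantity at a plot twist would be compared against participants' reported surprise and brain-scan activity at the same point in the story.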

Results showed a striking alignment between AI prediction errors and both participants’ reported feelings and brain-scan activity patterns. The correlation emerged when texts were analysed in 10- to 20-word chunks, suggesting humans and AI encode surprise at the broader level where ideas unfold.

Fourth-year data science student Bella Summe, involved in the Cognition, Attention and Brain Lab research, noted the creative challenge of working in an emerging field.

Few studies have explored whether LLM prediction errors can serve as measures of human surprise, so the project demanded constant problem-solving and adaptation of the experimental design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut, enabling users to exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent: deepfake images circulate indefinitely across platforms and are effectively impossible to remove, undermining safety, dignity and long-term opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Prism launches as OpenAI’s new workspace for scientific papers

OpenAI has launched Prism, a cloud-based LaTeX workspace designed to streamline the drafting, collaboration, and publication of academic papers. The tool integrates writing, citation management, real-time collaboration, and AI assistance into a single environment to reduce workflow friction.

Built specifically for scientific use, Prism embeds GPT-5.2 directly inside documents rather than as a separate chatbot. Researchers can rewrite sections, verify equations, test arguments, and clarify explanations without leaving the editing interface, positioning AI as a background collaborator.

Users can start new LaTeX projects or upload existing files through prism.openai.com using a ChatGPT account. Co-authors can join instantly, enabling simultaneous editing while maintaining structured formatting for equations, references, and manuscript layout.

OpenAI says Prism supports academic search, converts handwritten formulas into clean LaTeX, and allows voice-driven edits for faster reviews. Completed papers export as publication-ready PDFs alongside full source files.
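For context, the kind of source file such a workspace manages is plain LaTeX. A minimal, generic paper skeleton (illustrative only, not a Prism-specific format) looks like:

```latex
\documentclass{article}
\usepackage{amsmath}   % equation support
\usepackage{graphicx}  % figures

\title{A Minimal Paper Skeleton}
\author{First Author \and Second Author}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph summary of the work.
\end{abstract}

\section{Introduction}
Body text, with a numbered equation:
\begin{equation}
  E = mc^2
\end{equation}

\bibliographystyle{plain}
\bibliography{references}
\end{document}
```

Because the format is plain text, co-authors can edit the same structured source simultaneously while the compiled PDF keeps equations, references and layout consistent.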

Initially available for free to personal ChatGPT users, the workspace will later expand to Business, Enterprise, and Education plans. The company frames the tool as a practical productivity layer rather than a research disruption platform.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

French public office hit with €5 million CNIL fine after massive data leak

The data protection authority of France has imposed a €5 million penalty on France Travail after a massive data breach exposed sensitive personal information collected over two decades.

The leak included social security numbers, email addresses, phone numbers and home addresses of an estimated 36.8 million people who had used the public employment service. CNIL said adequate security measures would have made access far more difficult for the attackers.

The investigation found that cybercriminals exploited employees through social engineering instead of breaking in through technical vulnerabilities.

CNIL highlighted France Travail’s failure to meet the data security requirements of the General Data Protection Regulation. The watchdog also noted that the size of the fine reflects the fact that France Travail operates with public funding.

France Travail has taken corrective steps since the breach, yet CNIL has ordered additional security improvements.

The authority set a deadline for these measures and warned that non-compliance would trigger a daily €5,000 penalty until France Travail meets GDPR obligations. The case underlines growing pressure on public institutions to reinforce cybersecurity amid rising threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings AI agent to Chrome in the US

Google is rolling out an AI-powered browsing agent inside Chrome, allowing users to automate routine online tasks. The feature is being introduced in the US for AI Pro and AI Ultra subscribers.

The Gemini agent can interact directly with websites, including opening pages, clicking buttons and completing complex online forms. Testers reported successful use for tasks such as tax paperwork and licence renewals.

Google said Gemini AI integrates with password management tools while requiring user confirmation for payments and final transactions. Security safeguards and fraud detection systems have been built into Chrome for US users.

The update reflects Alphabet’s strategy to reposition Chrome in the US as an intelligent operating agent. Google aims to move beyond search toward AI-driven personal task management.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Historic digital assets regulation bill approved by US Senate committee for the first time

The US Senate Agriculture Committee has voted along party lines to advance legislation on the cryptocurrency market structure, marking the first time such a bill has cleared a Senate committee.

The Digital Commodity Intermediaries Act passed with 12 Republicans voting in favour and 11 Democrats opposing, representing a significant development for digital asset regulation in the United States.

The legislation would grant the Commodity Futures Trading Commission new regulatory authority over digital commodities and establish consumer protections, including safeguards against conflicts of interest.

Chairman John Boozman proceeded with the bill after losing bipartisan support when Senator Cory Booker withdrew backing for the version presented. The Senate Banking Committee must approve the measure before the two versions can be combined and advanced to the Senate floor.

Democrats raised concerns about the legislation, particularly regarding President Donald Trump’s cryptocurrency ventures. Senator Booker stated the bill departed from bipartisan principles established in November, noting Republicans ‘walked away’ from previous agreements.

Democrats offered amendments to ban public officials from engaging in the crypto industry and to address foreign-adversary involvement in digital commodities, but all were rejected as outside the committee’s jurisdiction.

Senator Gillibrand expressed optimism about the bill’s advancement, whilst Boozman called the vote ‘a critical step towards creating clear rules’. The Senate Banking Committee’s consideration was postponed following opposition from the crypto industry, with no new hearing date set.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

ChatGPT becomes everyday assistant for Australian households

Australians are increasingly using generative AI for everyday tasks, with meal preparation and recipe planning emerging as the most common applications. A recent survey found that many households rely on ChatGPT for cooking inspiration, practical advice, and saving time.

The OpenAI-commissioned research shows strong uptake of AI tools for home renovations, DIY projects, and household budgeting. Many users rely on the technology to summarise news, plan meals, and solve routine problems, reinforcing its role as a personal assistant.

Work-related tasks remain another major area of use, particularly for drafting emails, clarifying information, and summarising meetings. Large numbers of respondents reported saving several hours each week, underscoring how generative AI is reshaping productivity and daily routines across Australia.

Generative AI adoption is highest among younger Australians, with usage strongest among those aged between 18 and 34. The trend reflects shifting digital habits and growing comfort with AI-driven tools across daily life and work decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Fake AI assistant steals OpenAI credentials from thousands of Chrome users

A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.

The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft occurred when users deleted chats or logged out, which triggered transmission of the stolen keys through a hardcoded Telegram bot.

At least 459 unique API keys were exfiltrated to a Telegram channel months before they were discovered in January 2025.

Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.

Security analysts identified 16 related extensions sharing identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.

LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.

The malicious extensions exploit vulnerabilities in web-based authentication processes, creating, as researchers describe, a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot