Google DeepMind launches Genie 3 to create interactive 3D worlds from text

Google DeepMind has introduced Genie 3, an AI world model capable of generating explorable 3D environments in real time from a simple text prompt.

Unlike earlier versions, it supports several minutes of continuous interaction, basic visual memory, and real-time changes such as altering weather or adding characters.

The system allows users to navigate these spaces at 24 frames per second in 720p resolution, retaining object placement for about a minute.

Users can trigger events within the virtual world by typing new instructions, making Genie 3 suitable for applications ranging from education and training to video games and robotics.

Genie 3’s improvements over Genie 2 include frame-by-frame generation with memory tracking and dynamic scene creation without relying on pre-built 3D assets.

However, the AI model still has limits, including the inability to replicate real-world locations with geographic accuracy and restricted interaction capabilities. Multi-agent features are still in development.

Currently offered as a limited research preview to select academics and creators, Genie 3 will be made more widely available over time.

Google DeepMind has noted that safety and responsibility remain central concerns during the gradual rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia tackles online scams with AI and new cyber guidelines

Cybercrime involving financial scams continues to rise in Malaysia, with 35,368 cases reported in 2024, a 2.53 per cent increase from the previous year, resulting in losses of RM1.58 billion.

The situation remains severe in 2025, with over 12,000 online scam cases recorded in the first quarter alone, involving fake e-commerce offers, bogus loans, and non-existent investment platforms. Losses during this period reached RM573.7 million.

Instead of waiting for the situation to worsen, the Digital Ministry is rolling out proactive safeguards. These include new AI-related guidelines under development by the Department of Personal Data Protection, scheduled for release by March 2026.

The documents will cover data protection impact assessments, automated decision-making, and privacy-by-design principles.

The ministry has also introduced an official framework for responsible AI use in the public sector, called GPAISA, to ensure ethical compliance and support across government agencies.

Additionally, training initiatives such as AI Untuk Rakyat and MD Workforce aim to equip civil servants and enforcement teams with skills to handle AI and cyber threats.

In partnership with CyberSecurity Malaysia and Universiti Kebangsaan Malaysia, the ministry is also creating an AI-powered application to verify digital images and videos.

Instead of relying solely on manual analysis, the tool will help investigators detect online fraud, identity forgery, and synthetic media more effectively.


US urges Asia-Pacific to embrace open AI innovation over strict regulation

A senior White House official has urged Asia-Pacific economies to support an AI future built on US technology, warning against adopting Europe’s heavily regulated model. Michael Kratsios made the remarks during the APEC Digital and AI Ministerial Meeting in Incheon.

Kratsios said countries now face a choice between embracing American-led innovation and falling behind under regulatory burdens. He framed the US approach as one driven by freedom and open-source innovation rather than centralised control.

The US is offering South Korea partnerships that respect data concerns while enabling shared progress. Kratsios noted that open-weight models could soon shape industry standards worldwide.

He also held bilateral talks with South Korea’s science minister on AI cooperation. The US reaffirmed its commitment to supporting nations in building trustworthy AI systems based on mutual economic benefit.


Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

In one infamous case, an AI-powered chatbot suggested drug use to a fictional recovering addict, which experts cite as a warning of what can go wrong without strict safeguards. The law is named the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.


ChatGPT checkout could sideline major platforms

OpenAI is preparing to add a payment system into ChatGPT, allowing users to complete purchases without ever leaving the chatbot. Retail leaders are calling it a turning point in e-commerce, as it may significantly simplify how people shop online.

The company is expected to take a cut of transactions and work with platforms such as Shopify to streamline operations. With over 77 million users, ChatGPT has the reach to become a dominant shopping tool, potentially bypassing platforms like Amazon.

Executives worry visibility could depend on revenue-sharing, forcing brands to pay for prominence in the chatbot. Some fear this pay-to-play model could leave smaller retailers behind and limit consumer choice.

At the same time, personalised AI-driven recommendations may enhance user experiences while raising questions about data use and bias. Entrepreneurs on X are already predicting widespread AI-led shopping within a year.

Retailers are now adjusting strategies to remain visible in this new market. While some early adopters show success using AI to complete purchases, others highlight technical challenges in integration and website compatibility.

Observers say search engines could lose relevance as shoppers turn to AI instead. Regulators remain cautious, particularly in markets like Australia, where many consumers are open to AI-led transactions.

The industry faces a shift where chatbots may evolve into full-scale digital marketplaces. Brands are urged to act quickly, or risk losing out as AI commerce becomes the norm.


Microsoft offers $5 million for cloud and AI vulnerabilities

Microsoft is offering security researchers up to $5 million for uncovering critical vulnerabilities in its products, with a focus on cloud and AI systems. The Zero Day Quest contest will return in spring 2026, following a $1.6 million payout in its previous edition.

Researchers are invited to submit discoveries between 4 August and 4 October 2025, targeting Azure, Copilot, M365 and other major services. High-severity flaws are eligible for a 50% bonus payout, increasing the incentive for impactful findings.

Top participants will receive exclusive invitations to a live hacking event at Microsoft’s Redmond campus. The event promises collaboration with product teams and the Microsoft Security Response Centre.

Training from Microsoft’s AI Red Team and other internal experts will also be available. The company encourages public disclosure of patched findings to support the broader cybersecurity community.

The competition aligns with Microsoft’s Secure Future Initiative, which aims to make cloud and AI systems secure by design, by default, and in operation. Vulnerabilities will be disclosed transparently, even if no customer action is needed.

Full details and submission rules are available through the MSRC Researcher Portal. All reports will be subject to Microsoft’s bug bounty terms.


New malware steals 200,000 passwords and credit card details through fake software

Hackers are now using fake versions of familiar software and documents to spread a new info-stealing malware known as PXA Stealer.

First discovered by Cisco Talos, the malware campaign is believed to be operated by Vietnamese-speaking cybercriminals and has already compromised more than 4,000 unique IP addresses across 62 countries.

Instead of targeting businesses alone, the attackers are now focusing on ordinary users in countries including the US, South Korea, and the Netherlands.

PXA Stealer is written in Python and designed to collect passwords, credit card data, cookies, autofill information, and even crypto wallet details from infected systems.

It spreads via sideloading, hiding malicious code in files disguised as Microsoft Word documents or in ZIP archives that also contain legitimate-looking programs such as Haihaisoft PDF Reader.

The malware uses malicious DLL files to gain persistence through the Windows Registry and downloads additional harmful files via Dropbox. After infection, it uses Telegram to exfiltrate stolen data, which is then sold on the dark web.

Once activated, the malware even attempts to open a fake PDF in Microsoft Edge, though the file fails to launch and shows an error message; by that point, the damage is already done.

To avoid infection, users should avoid clicking unknown email links and should not open attachments from unfamiliar senders. Instead of saving passwords and card details in browsers, a trusted password manager is a safer choice.

Although antivirus software remains helpful, hackers in the campaign have used sophisticated methods to bypass detection, making careful online behaviour more important than ever.


Apple develops smart search engine to rival ChatGPT

Apple is developing its own AI-powered answer engine to rival ChatGPT, marking a strategic turn in the company’s AI approach. The move comes as Apple aims to close the gap with competitors in the fast-moving AI race.

A newly formed internal team, Answers, Knowledge and Information, is working on a tool to browse the web and deliver direct responses to users.

Led by former Siri head Robby Walker, the project is expected to expand across key Apple services, including Siri, Safari and Spotlight.

Job postings suggest Apple is recruiting talent with search engine and algorithm expertise. CEO Tim Cook has signalled Apple’s willingness to acquire companies that could speed up its AI progress.


Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring websites’ explicit instructions not to scrape their content.

According to the internet infrastructure company, Perplexity allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
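
The robots.txt mechanism is purely advisory: a crawler must choose to consult it and honestly identify itself. A minimal sketch of that check, using only Python’s standard library and an illustrative bot name and ruleset (not Perplexity’s actual ones):

```python
# Sketch: how a well-behaved crawler consults robots.txt before fetching.
# "ExampleAIBot" and the rules below are hypothetical, for illustration.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A bot identifying itself honestly is refused by the site's rules...
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))
# ...while the same rules permit a generic browser-style user agent,
# which is why spoofing the user agent defeats robots.txt on its own.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article"))
```

Because the check is entirely voluntary and keyed to the self-declared user agent, a bot that presents itself as an ordinary browser sidesteps it, which is the behaviour Cloudflare alleges.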

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.


Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce electricity usage during peak demand. These arrangements will help free up power for other grid customers when needed.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.
