White House launches Genesis Mission for AI-driven science

Washington prepares for a significant shift in research as the White House launches the Genesis Mission, a national push to accelerate innovation through advanced AI. The initiative utilises AI to enhance US US technological leadership in a competitive global landscape.

The programme puts the Department of Energy at the centre, tasked with building a unified AI platform linking supercomputers, federal datasets and national laboratories.

The goal is to develop AI models and agents that automate experiments, test hypotheses and accelerate breakthroughs in key scientific fields.

Federal agencies, universities and private firms will conduct coordinated research using shared data spaces, secure computing and standardised partnership frameworks. Priority areas cover biotechnology, semiconductors, quantum science, critical materials and next-generation energy.

Officials argue that the Genesis Mission represents one of the most ambitious attempts to modernise US research infrastructure. Annual reviews will track scientific progress, security, collaborations and AI-driven breakthroughs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can determine whether AI-generated music AI actually from an artist or a song they love. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA powers a new wave of specialised AI agents to transform business

Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models.

Open-source foundations, such as NVIDIA’s Neuron family, now allow organisations to combine internal knowledge with tailored architectures, leading to agents that understand the precise demands of each workflow.

Firms across cybersecurity, payments and semiconductor engineering are beginning to treat specialisation as the route to genuine operational value.

CrowdStrike is utilising Nemotron and NVIDIA NIM microservices to enhance its Agentic Security Platform, which supports teams by handling high-volume tasks such as alert triage and remediation.

Accuracy has risen from 80 to 98.5 percent, reducing manual effort tenfold and helping analysts manage complex threats with greater speed.

PayPal has taken a similar path by building commerce-focused agents that enable conversational shopping and payments, cutting latency nearly in half while maintaining the precision required across its global network of customers and merchants.

Synopsys is deploying agentic AI throughout chip design workflows by pairing open models with NVIDIA’s accelerated infrastructure. Early trials in formal verification show productivity improvements of 72 percent, offering engineers a faster route to identifying design errors.

The company is blending fine-tuned models with tools such as the NeMo Agent Toolkit and Blueprints to embed agentic support at every stage of development.

Across industries, strategic steps are becoming clear. Organisations begin by evaluating open models before curating and securing domain-specific data and then building agents capable of acting on proprietary information.

Continuous refinement through a data flywheel strengthens long-term performance.

NVIDIA aims to support the shift by promoting Nemotron, NeMo and its broader software ecosystem as the foundation for the next generation of specialised enterprise agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US US educators a secure workspace to plan lessons and utilise AI safely. The service is free for verified K–12 staff until June 2027. OpenAI states that its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing updates to follow later. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT unveils new shopping research experience

Since yesterday, ChatGPT has introduced a more comprehensive approach to product discovery with a new shopping research feature, designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real-time by marking products as unsuitable or requesting similar alternatives, enabling a more precise result.

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that search results are sourced from public data sources, rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan boosts Rapidus with major semiconductor funding

Japan will inject more than one trillion yen (approximately 5.5 billion €) into chipmaker Rapidus between 2026 and 2027. The plan aims to fortify national economic security by rebuilding domestic semiconductor capacity after decades of reliance on overseas suppliers.

Rapidus intends to begin producing 2-nanometre chips in late 2027 as global demand for faster, AI-ready components surges. The firm expects overall investment to reach seven trillion yen and hopes to list publicly around 2031.

Japanese government support includes large subsidies and direct investment that add to earlier multi-year commitments. Private contributors, including Toyota and Sony, previously backed the venture, which was founded in 2022 to revive Japan’s cutting-edge chip ambitions.

Officials argue that advanced production is vital for technological competitiveness and future resilience. Critics to this investment note that there are steep costs and high risks, yet policymakers view the Rapidus investment as crucial to keeping pace with technological advancements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Nvidia’s results fail to ease AI bubble fears

Record profits and year-on-year revenue growth above 60 percent have put Nvidia at the centre of debate over whether the surge in AI spending signals a bubble or a long-term boom.

CEO Jensen Huang and CFO Colette Kress dismissed concerns about the bubble, highlighting strong demand and expectations of around $65 billion in revenue for the next quarter.

Executives forecast global AI infrastructure spending could reach $3–4 trillion annually by the end of the decade as both generative AI and traditional cloud computing workloads increasingly run on GPUs.

Widespread adoption by major partners, including Meta, Anthropic and Salesforce, suggests lasting momentum rather than short-term hype.

Analysts generally agree that Nvidia’s performance remains robust, but questions persist over the sustainability of heavy investment in AI. Investors continue to monitor whether Big Tech can maintain this pace and if highly leveraged customers might expose Nvidia to future risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland confronts rising energy strain from data centres

Ireland faces mounting pressure over soaring electricity use from data centres clustered around Dublin. Facilities powering global tech giants have grown into a major energy consumer, accounting for over a fifth of national demand.

The load could reach 30 percent by 2030 as expanding cloud and AI services drive further growth. Analysts warn that rising consumption threatens climate commitments and places significant strain on grid stability.

Campaigners argue that data centres monopolise renewable capacity while pushing Ireland towards potential EU emissions penalties. Some local authorities have already blocked developments due to insufficient grid capacity and limited on-site green generation.

Sector leaders fear stalled projects and uncertain policy may undermine Ireland’s role as a digital hub. Investment risks remain high unless upgrades, clearer rules and balanced planning reduce the pressure on national infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot