Energy efficiency becomes a priority as Nvidia unveils next AI chip

Nvidia used CES in Las Vegas to signal its next push in AI hardware, with chief executive Jensen Huang unveiling a new AI chip designed to deliver more computing power with lower energy use. The chip, named Vera Rubin, is scheduled to ship in the second half of the year.

Huang said the Rubin platform would let companies train AI models with far fewer chips than earlier generations. The redesign is also intended to lower the cost and energy demands of running AI services.

The move comes as demand for AI infrastructure accelerates, straining power supplies and intensifying competition. Rivals and major customers developing their own chips are putting pressure on Nvidia to improve efficiency.

Alongside chips, Nvidia highlighted its growing focus on autonomous vehicles. The company said new AI software would support self-driving development for carmakers and mobility firms, with vehicles using the chipmaker’s technology expected to ship later this year.

Huang said AI, robotics, and autonomy are central to the company’s long-term strategy, as the company seeks to expand beyond data centres. Rising competition and geopolitical scrutiny remain challenges, but Nvidia is betting that more efficient chips will keep it at the centre of the AI boom.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CES 2026 shows AMD betting on on-device AI at scale

AMD used CES 2026 to position AI as a default feature of personal and commercial computing. The company said AI is no longer limited to premium systems. Instead, it is being built directly into processors for consumer PCs, business laptops, compact desktops, and embedded platforms.

Executives described the shift as a new phase in AI adoption. CEO Lisa Su said usage has grown from early experimentation to more than one billion active users worldwide. Senior vice president Jack Huynh added that AI is redefining the PC by embedding intelligence, performance, and efficiency across devices.

The strategy centres on the Ryzen AI 400 Series and Ryzen AI PRO 400 Series processors. These chips integrate neural processing units delivering up to 60 TOPS of local AI compute. Built on Zen 5 architecture and XDNA 2 NPUs, they target Copilot+ PCs and enterprise deployments.

AMD also expanded its Ryzen AI Max+ portfolio for ultra-thin laptops, mini-PCs, and small workstations. The processors combine CPU, GPU, and NPU resources in a unified memory design. Desktop users saw the launch of the Ryzen 7 9850X3D, while developers were offered the Ryzen AI Halo platform.

Beyond PCs, AMD introduced a new Ryzen AI Embedded processor lineup for edge deployments. The chips are aimed at vehicles, factories, and autonomous systems. AMD said single-chip designs will support real-time AI workloads in robotics, digital twins, smart cameras, and industrial automation.

Fantasy football players divided over use of AI advice

More players are now turning to AI tools to help manage their Fantasy Premier League squads. Several popular apps use AI to rate teams, predict player points, and suggest transfers, with developers reporting rapid growth in both free and paid users.

Fantasy football has long allowed fans to test their instincts by building virtual teams and competing against friends or strangers. In recent years, the game has developed a large ecosystem of content creators offering advice on transfers, tactics, and player performance.

Supporters of the tools say they make the game more engaging and accessible. Some players argue that AI advice is no different from following tips on podcasts or social media and see it as a way to support decision-making rather than replace skill.

Critics, however, say AI removes key elements of instinct, luck, and banter. Some fans describe AI-assisted play as unfair or against the spirit of fantasy football leagues, while others worry it leads to increasingly similar teams driven by the same data.

Despite the debate, surveys suggest a growing share of fantasy players plan to use AI this season. League organisers and game developers are experimenting with incentives to reward creative picks, as the role of AI in fantasy football continues to expand.

AI apprenticeships and short courses advance online learning

Diplo Academy strengthened its online education offer in 2025, placing AI and practical learning at the centre of training for diplomats, policymakers and public officials facing growing digital pressures.

A notable change was the transition from traditional eight-week online courses to more focused four-week formats. The ongoing shift aims to better match the schedules of working professionals without compromising academic rigour, applied learning or peer exchange.

Artificial intelligence featured prominently across the curriculum, particularly through the AI Apprenticeship. Three editions of the course were delivered in 2025, including one designed for Geneva-based international organisations.

The AI Apprenticeship emphasises informed, knowledgeable, effective and responsible use of AI in everyday professional contexts. Inspired by the Swiss apprenticeship tradition, it combines hands-on practice, mentorship and critical thinking, enabling participants to apply AI tools while retaining human judgement and accountability.

Diplo Academy 2025: Online courses in numbers

California launches DROP tool to erase data broker records

Residents in California now have a simpler way to force data brokers to delete their personal information.

The state has launched the Delete Requests and Opt-Out Platform, known as DROP, allowing residents to submit one verified deletion request that applies to every registered data broker instead of contacting each company individually.

The system follows the Delete Act, passed in 2023, and is intended to create a single control point for consumer data removal.

Once a resident submits a request, data brokers must begin processing it from August 2026 and will have 90 days to act. If their data is not deleted, residents may need to provide extra identifying details.

First-party data collected directly by companies can still be retained, while data from public records, such as voter rolls, remains exempt. Highly sensitive data may fall under separate legal protections such as HIPAA.

The California Privacy Protection Agency argues that broader data deletion could reduce identity theft, AI-driven impersonation, fraud risk and unwanted marketing contact.

Penalties for non-compliance include daily fines for brokers who fail to register or ignore deletion orders. The state hopes the tool will make data rights meaningful instead of purely theoretical.

The launch comes as regulators worldwide examine how personal data is used, traded and exploited.

California is positioning itself as a leader in consumer privacy enforcement, while questions continue about how effectively DROP will operate when the deadline arrives in 2026.

Cloud and AI growth fuels EU push for greener data centres

Europe’s growing demand for cloud and AI services is driving a rapid expansion of data centres across the EU.

Policymakers now face the challenge of supporting digital growth without undermining climate targets, yet reliable sustainability data remains scarce.

Operators are required to report on energy consumption, water usage, renewable sourcing and heat reuse, but only around one-third have submitted complete data so far.

Brussels plans to introduce a rating scheme from 2026 that grades data centres on environmental performance, potentially rewarding the most sustainable new facilities with faster approvals under the upcoming Cloud and AI Development Act.

Industry groups want the rules adjusted so operators using excess server heat to warm nearby homes are not penalised. Experts also argue that stronger auditing and stricter application of standards are essential so reported data becomes more transparent and credible.

Smaller data centres remain largely untracked even though they are often less efficient, while colocation facilities complicate oversight because customers manage their own servers. Idle machines also waste vast amounts of energy yet remain largely unmeasured.

Meanwhile, replacing old hardware may improve efficiency but comes with its own environmental cost.

Even if future centres run on cleaner power and reuse heat, the manufacturing footprint of the equipment inside them remains a major unanswered sustainability challenge.

Policymakers say better reporting is essential if the EU is to balance digital expansion with climate responsibility rather than allowing environmental blind spots to grow.

AI tool helps find new treatments for heart disease

A new AI system developed at Imperial College London could accelerate the discovery of treatments for heart disease by combining detailed heart scans with huge medical databases.

Cardiovascular disease remains the leading cause of death across the EU, accounting for around 1.7 million deaths every year, so researchers believe smarter tools are urgently needed.

The AI model, known as CardioKG, uses imaging data from thousands of UK Biobank participants, including people with heart failure, heart attacks and atrial fibrillation, alongside healthy volunteers.

By linking information about genes, medicines and disease, the system aims to predict which drugs might work best for particular heart conditions instead of relying only on traditional trial-and-error approaches.

Among the medicines highlighted were methotrexate, normally used for rheumatoid arthritis, and diabetes drugs known as gliptins, which the AI suggested could support some heart patients.

The model also pointed to a possible protective effect from caffeine among people with atrial fibrillation, although researchers warned that individuals should not change their caffeine intake based on the findings alone.

Scientists say the same technology could be applied to other health problems, including brain disorders and obesity.

Work is already under way to turn the knowledge graph into a patient-centred system that follows real disease pathways, with the long-term goal of enabling more personalised and better-timed treatment.

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, rather than Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and that it would cooperate with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been involved in deepfake creation, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.

Plaud unveils compact AI NotePin S and new meeting app

Hardware maker Plaud has introduced a new AI notetaking pin called the Plaud NotePin S alongside a Mac desktop app for digital meeting notes ahead of CES in Las Vegas.

The wearable device costs 179 dollars and arrives with several accessories so users can attach or wear it in different ways. A physical button allows quick control of recordings and can be tapped to highlight key moments during conversations.

The NotePin S keeps the same core specifications as the earlier model, including 64GB of storage and up to 20 hours of continuous recording.

Two MEMS microphones capture speech clearly within roughly three metres. Owners receive 300 minutes of transcription each month at no extra cost. Apple Find My support is also included, so users can easily locate a misplaced device.

Compared with the larger Note Pro, the new pin offers a shorter recording range and battery life, but the small size makes it easier to wear while travelling or working on the go.

Plaud says the device suits users who rely on frequent in-person conversations rather than long seated meetings.

Plaud has now sold more than 1.5 million notetaking devices. The company also aims to enter the AI meeting assistant market with a Mac desktop client that detects when a meeting is active and prompts users to capture audio.

The software records system sound and uses AI to organise the transcript into structured notes. Users can also add typed notes and images instead of relying only on audio.

Christians raise concerns over AI used for moral guidance

AI is increasingly used for emotional support and companionship, raising questions about the values embedded in its responses, particularly for Christians seeking guidance. Research cited by Harvard Business Review shows that therapy and companionship are now the leading uses of generative AI.

As Christians turn to AI for advice on anxiety, relationships, and personal crises, concerns are growing about the quality and clarity of its responses. Critics warn that AI systems often rely on vague generalities and may lack the moral grounding expected by faith-based users.

A new benchmark released by technology firm Gloo assessed how leading AI models support human flourishing from a Christian perspective. The evaluation examined seven areas, including relationships, meaning, health, and faith, and found consistent weaknesses in how models addressed Christian belief.

The findings show many AI systems struggle with core Christian concepts such as forgiveness and grace. Responses often default to vague spirituality rather than engaging directly with Christian values.

The authors argue that as AI increasingly shapes worldviews, greater attention is needed to how systems serve Christians and other faith communities. They call for clearer benchmarks and training approaches that allow AI to engage respectfully with religious values without promoting any single belief system.
