Automakers and freight partners join NVIDIA and Uber to accelerate level 4 deployments

NVIDIA and Uber are partnering on level 4-ready fleets built on DRIVE AGX Hyperion 10, aiming to scale a unified network of human and robot drivers from 2027 and to grow it to 100,000 vehicles over time. A joint AI data factory built on NVIDIA Cosmos will curate the training data.

DRIVE AGX Hyperion 10 is a reference compute and sensor stack for level 4 readiness across cars, vans, and trucks. Automakers can pair validated hardware with compatible autonomy software to speed safer, scalable, AI-defined mobility. Passenger and freight services gain faster paths from prototype to fleet.

Stellantis, Lucid, and Mercedes-Benz are preparing passenger platforms on Hyperion 10. Aurora, Volvo Autonomous Solutions, and Waabi are extending level 4 capability to long-haul trucking. Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve, and WeRide continue to build on NVIDIA DRIVE.

The production platform pairs dual DRIVE AGX Thor computers, built on the Blackwell architecture, with DriveOS and a qualified multimodal sensor suite. Cameras, radar, lidar, and ultrasonics deliver 360-degree coverage. A modular design, together with PCIe, Ethernet, confidential computing, and liquid cooling, supports upgrades and uptime.
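
To make the idea of a reference compute-and-sensor stack concrete, here is a minimal Python sketch of how such a platform manifest could be modelled. Everything in it (class names, sensor counts, fields) is a hypothetical illustration, not part of DriveOS, Hyperion 10, or any NVIDIA SDK.

```python
# Hypothetical sketch: simple data structures for describing a level 4
# reference platform's compute and sensor manifest. All names here are
# invented for illustration and do not correspond to any NVIDIA API.
from dataclasses import dataclass, field

@dataclass
class Sensor:
    kind: str        # "camera", "radar", "lidar" or "ultrasonic"
    position: str    # e.g. "front", "rear-left", "roof"
    fov_deg: float   # nominal horizontal field of view in degrees

@dataclass
class PlatformConfig:
    compute_units: int                       # e.g. 2 for a dual-SoC layout
    interconnects: tuple = ("PCIe", "Ethernet")
    sensors: list = field(default_factory=list)

    def coverage_by_modality(self) -> dict:
        """Sum nominal horizontal FOV per modality, a rough proxy for
        checking 360-degree coverage claims."""
        totals = {}
        for s in self.sensors:
            totals[s.kind] = totals.get(s.kind, 0.0) + s.fov_deg
        return totals

# Illustrative dual-compute layout with a handful of example sensors.
config = PlatformConfig(
    compute_units=2,
    sensors=[
        Sensor("camera", "front", 120), Sensor("camera", "rear", 120),
        Sensor("camera", "left", 60), Sensor("camera", "right", 60),
        Sensor("radar", "front", 90), Sensor("lidar", "roof", 360),
    ],
)
print(config.coverage_by_modality())
```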

NVIDIA is also launching Halos, a cloud-to-vehicle AI safety system backed by an ANSI-accredited inspection lab and certification program. A multimodal AV dataset and reasoning vision-language-action (VLA) models aim to improve urban driving, testing, and validation for deployments.

Grammarly becomes Superhuman with unified AI tools for work

Superhuman, formerly known as Grammarly, is bundling its writing tools, workspace platform, and email client with a new AI assistant suite. The company says the rebrand reflects a push to unify generative AI features that streamline workplace tasks and online communication for subscribers.

Grammarly acquired Coda and Superhuman Mail earlier this year and added Superhuman Go. The bundle arrives as a single plan. Go’s agents brainstorm, gather information, send emails, and schedule meetings to reduce app switching.

Superhuman Mail organises inboxes and drafts replies in your voice. Coda pulls data from other apps into documents, tables, and dashboards. An upcoming update lets Coda act on that data to automate plans and tasks.

CEO Shishir Mehrotra says the aim is ambient, integrated AI. Built on Grammarly’s infrastructure, the tools work in place without prompting or pasting. The bundle targets teams seeking consistent AI across writing, email, and knowledge work.

Analysts will watch how the brand sits alongside the existing Superhuman email app and how enterprise pricing shakes out. Success depends on trust, data controls, and measurable time savings versus point tools. Rollout specifics, including regions, will follow.

Experts caution that AI growth could double data centre energy use

AI’s rapid growth is fuelling a surge in electricity consumption across the United States, with data centres emerging as major contributors. Analysts warn that expanding AI infrastructure is pushing up national energy demand and could drive higher electricity bills for homes and businesses.

The US hosts more than 4,000 data centres, concentrated mainly in Virginia, Texas and California. Many now operate high-performance AI systems that consume up to 30 times more electricity than traditional facilities, according to energy experts.

The International Energy Agency reported that US data centres used a record 183 terawatt-hours of electricity in 2024, about 4% of national demand. That figure could more than double by 2030, reaching 426 terawatt-hours, as companies race to expand cloud and AI capacity.
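
As a quick sanity check on these figures, the arithmetic below (plain Python, using only the numbers quoted in the article) shows what the cited share implies for total US demand and confirms that the 2030 projection is indeed more than double the 2024 figure.

```python
# Back-of-envelope check using only the figures quoted above.
use_2024_twh = 183        # US data centre electricity use in 2024 (TWh)
share_of_demand = 0.04    # roughly 4% of national demand
projected_2030_twh = 426  # projected data centre use by 2030 (TWh)

implied_total_demand = use_2024_twh / share_of_demand  # ~4,575 TWh nationally
growth_factor = projected_2030_twh / use_2024_twh      # ~2.33x, i.e. more than double

print(f"Implied total US demand in 2024: ~{implied_total_demand:,.0f} TWh")
print(f"2030 projection vs 2024 use: {growth_factor:.2f}x")
```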

With around 60% of data centre energy use tied to servers and processing hardware, the shift toward AI-driven computing poses growing challenges for green energy infrastructure. Researchers say that without major efficiency gains, the nation’s power grid will struggle to keep pace with AI’s accelerating appetite for electricity.

Report reveals major barriers to UK workforce AI skills development

A new government analysis has identified deep-rooted barriers preventing widespread development of AI skills in the UK’s workforce. The research highlights systemic challenges across education, funding, and awareness, threatening the country’s ambition to build an inclusive and competitive AI economy.

UK experts found widespread confusion over what constitutes AI skills, with inconsistent terminology creating mismatches between training, qualifications, and labour market needs. Many learners and employers still conflate digital literacy with AI competence.

The report also revealed fragmented training provision, limited curriculum responsiveness, and fragile funding cycles that hinder long-term learning. Many adults lack even basic digital literacy, while small organisations and community programmes struggle to sustain AI courses beyond pilot stages.

Employers were found to have an incomplete understanding of their own AI skills needs, particularly within SMEs and public sector organisations. Without clearer frameworks, planning tools, and consistent investment, experts warn the UK risks falling behind in responsible AI adoption and workforce readiness.

Character.ai restricts teen chat access on its platform

The AI chatbot service Character.ai has announced that, from 25 November, teenagers will no longer be able to chat with its AI characters.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that such preventative measures should have been in place from the start.

Experts say the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.

Top institutes team up with Google DeepMind to spearhead AI-assisted mathematics

The AI for Math Initiative pairs Google DeepMind with five leading research institutes to apply advanced AI to open problems and proofs. Partners include Imperial, IAS, IHES, the Simons Institute at UC Berkeley, and TIFR. The goal is to accelerate discovery, tooling, and training.

Google’s support spans funding and access to Gemini Deep Think, AlphaEvolve for algorithm discovery, and AlphaProof for formal reasoning. Combined, these systems complement human intuition, scale exploration, and tighten feedback loops between theory and applied AI.

Recent benchmarks show rapid gains. Deep Think enabled Gemini to reach gold-medal IMO performance, perfectly solving five of the six problems for 35 of a possible 42 points. AlphaGeometry and AlphaProof earlier reached a silver-medal standard on Olympiad-style tasks.

AlphaEvolve pushed the frontiers of analysis, geometry, combinatorics, and number theory, improving the best known results on roughly a fifth of a set of 50 open problems. Researchers also uncovered a 4×4 matrix-multiplication scheme (for complex-valued matrices) that uses 48 scalar multiplications, beating the 49 implied by recursively applying Strassen’s 1969 algorithm.
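
The arithmetic behind both results is easy to check. The short sketch below assumes standard IMO scoring (six problems, seven points each) and compares the new multiplication count with the schoolbook method and with Strassen’s algorithm applied recursively to 4×4 matrices; it is an independent illustration, not DeepMind code.

```python
# IMO scoring: six problems worth 7 points each, so five perfect solutions
# give 35 of a possible 42 points.
print(5 * 7, "of", 6 * 7)

# 4x4 matrix multiplication: the schoolbook method needs 4**3 = 64 scalar
# multiplications; applying Strassen's 2x2 algorithm (7 multiplications)
# recursively gives 7**2 = 49; the newly reported scheme uses 48.
naive, strassen_recursive, new_scheme = 4 ** 3, 7 ** 2, 48
print(naive, strassen_recursive, new_scheme)
```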

Partners will co-develop datasets, standards, and open tools, while studying limits where AI helps or hinders progress. Workstreams include formal verification, conjecture generation, and proof search, emphasising reproducibility, transparency, and responsible collaboration.
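
For readers unfamiliar with formal verification, this workstream rests on proof assistants such as Lean, where a proof is itself code that a checker accepts or rejects. The toy Lean 4 example below is an independent illustration of that idea, not anything produced by AlphaProof or the initiative.

```lean
-- A machine-checked statement: the checker accepts this file only if the
-- proof term really establishes the claim.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Some facts hold by definition and are discharged directly:
example (n : Nat) : n + 0 = n := rfl
```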

Nordic ministers fund AI language model network

Nordic ministers for culture have approved funding for a new network dedicated to language models for AI. The decision, taken at a meeting in Stockholm on 29 October, aims to ensure AI development reflects the region’s unique linguistic and cultural traits.

It is one of the first projects for the recently launched Nordic-Baltic centre for AI, New Nordics AI.

The network will bring together national stakeholders to address shared challenges in AI language models. The initiative aims to protect smaller languages and ensure AI tools reflect Nordic linguistic diversity through knowledge sharing and collaboration.

Finland’s Minister for Research and Culture, Mari-Leena Talvitie, said the project is a key step in safeguarding the future of regional languages in digital tools.

Ministers also discussed AI’s broader cultural impact, highlighting issues such as copyright and the need for regional oversight. The network will identify collaboration opportunities and guide future investments in culturally and linguistically anchored Nordic AI solutions.

Humanoid robots set to power Foxconn’s new Nvidia server plant in Houston

Foxconn will add humanoid robots to a new Houston plant building Nvidia AI servers from early 2026. Announced at Nvidia’s developer conference, the move deepens their partnership and positions the site as a US showcase for AI-driven manufacturing.

Humanoid systems based on Nvidia’s Isaac GR00T N are built to perceive parts, adapt on the line, and work with people. Unlike fixed industrial arms, they handle delicate assembly and switch tasks via software updates. Goals include flexible throughput, faster retooling, and fewer stoppages.

AI models are trained in simulation using digital twins and reinforcement learning to improve accuracy and safety. On the line, robots self-tune as analytics predict maintenance and balance workloads, unlocking gains across logistics, assembly, testing, and quality control.
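
The pattern described here, training a policy in a simulated environment before running it on real hardware, can be illustrated with a deliberately tiny example. The sketch below uses tabular Q-learning on a toy one-dimensional world; every name and number is invented for illustration and has no connection to Isaac, GR00T, digital-twin tooling, or Foxconn’s actual systems.

```python
# Conceptual sketch of "train in simulation, then deploy the learned policy"
# using tabular Q-learning on a toy 1-D line world. Illustration only.
import random

N_STATES = 6        # positions 0..5, with the goal at position 5
ACTIONS = (-1, +1)  # move left or right

def simulate(state: int, action: int):
    """Simulated environment step: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else -0.01), done

# Train in the simulator.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2
for _ in range(500):
    s, done = 0, False
    while not done:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                  # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])   # exploit
        nxt, reward, done = simulate(s, a)
        best_next = max(q[(nxt, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = nxt

# "Deploy" the greedy policy (replayed here against the same simulator).
s, path = 0, [0]
while s != N_STATES - 1:
    a = max(ACTIONS, key=lambda x: q[(s, x)])
    s, _, _ = simulate(s, a)
    path.append(s)
print("Learned route to goal:", path)
```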

Texas offers proximity to a growing semiconductor and AI cluster, as well as policy support for domestic capacity. Foxconn also plans expansions in Wisconsin and California to meet global demand for AI servers. Scaling output should ease supply pressures around Nvidia-class compute in data centres.

Job roles will shift as routine tasks are automated and oversight becomes data-driven. Human workers will focus on design, line configuration, and AI supervision, with safety gates for collaboration. Analysts see a template for Industry 4.0 factories running near-continuously with rapid changeovers.

India deploys AI to modernise its military operations

In a move reflecting its growing strategic ambitions, India is rapidly implementing AI across its defence forces. The country’s military has moved from policy to practice, using tools from real-time sensor fusion to predictive maintenance to transform how it fights.

The shift has involved institutional change. India’s Defence AI Council and Defence AI Project Agency (established 2019) are steering an ecosystem that includes labs such as the Centre for Artificial Intelligence & Robotics of the Defence Research and Development Organisation (DRDO).

One recent example is Operation Sindoor (May 2025), a cross-border operation in which AI-driven platforms appeared in roles ranging from intelligence analysis to operational coordination.

This effort signals more than just a technological upgrade. It underscores a shift in warfare logic, where systems of systems, connectivity and rapid decision-making matter more than sheer numbers.

India’s incorporation of AI into capabilities such as drone swarming, combat simulation and logistics optimisation aligns with broader trends in defence innovation and digital diplomacy. The country’s strategy now places AI at the heart of its procurement demands and force design.

Microsoft faces Australian lawsuit over hidden AI subscription option

In a legal move that underscores growing scrutiny of digital platforms, the Australian Competition and Consumer Commission (ACCC) has filed a lawsuit in the Federal Court against Microsoft Corporation. The regulator accuses the company of misleading approximately 2.7 million Australian personal and family-plan subscribers of its Microsoft 365 service after it integrated its AI assistant Copilot.

According to the ACCC, Microsoft raised subscription prices by 45% for the Personal plan and 29% for the Family plan after bundling Copilot into them from 31 October 2024.

The regulator says Microsoft told consumers their only options were to pay the higher price with AI or cancel their subscription, while failing to clearly disclose a cheaper ‘Classic’ version of the plan without Copilot that remained available.

The ACCC argues Microsoft’s communications omitted the existence of that lower-priced plan unless consumers initiated the cancellation process. Chair Gina Cass-Gottlieb described this omission as ‘very serious conduct’ that deprived customers of informed choice.

The regulator is seeking penalties, consumer redress, injunctions and costs, with potential sanctions of AUD 50 million (or more) per breach.

This action signals a broader regulatory push into how major technology firms bundle AI features, raise prices and present options to consumers, an issue that ties into digital economy governance, consumer trust and platform accountability.
