CES 2026 showcases AI and robotics innovations

CES 2026 has already revealed a range of groundbreaking technologies, from AI-powered holograms to autonomous vehicles. The event highlights how AI and robotics are increasingly shaping both entertainment and everyday life.

Razer introduced an all-in-one anime waifu hologram for desktops, while ASUS showcased extended reality glasses that act as a 240Hz gaming monitor. LEGO unveiled a Smart Brick capable of lighting up, playing sounds, and detecting characters.

Robotics took centre stage, with Boston Dynamics revealing its next-generation Atlas robot integrated with Google DeepMind AI, signalling rapid progress in humanoid robotics.

NVIDIA announced Alpamayo, a reasoning AI for autonomous vehicles, while Lucid partnered with Uber and Nuro to showcase a robotaxi.

Health and lifestyle innovations were also prominent. Withings launched Body Scan 2, an at-home longevity station offering AI-powered insights on blood pressure and over 60 biomarkers. Gaming hardware included the 8BitDo FlipPad, a flip-style controller optimised for mobile gaming.

Samsung teased a slim 3D display that delivers depth without bulky hardware, signalling a new generation of immersive screens. Alongside it, a pen with three cameras showed advanced spatial tracking for precise motion capture and object scanning.

CES 2026 emphasises the blending of AI, robotics, and interactive devices, highlighting how technology is increasingly personal, intelligent, and integrated into everyday life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New York launches 2026 with new AI proposals

New York is beginning 2026 with a renewed push to shape how AI is used, focusing on consumer protection while continuing to attract major tech investment. The move follows the recent signing of the RAISE Act, a landmark law aimed at enhancing safety standards for advanced AI models, and signals that state leaders intend to remain active in AI governance this year.

Governor Kathy Hochul has unveiled a new package of proposals, primarily aimed at protecting children online. The measures would expand age verification requirements, set safer default settings on social media platforms for minors, limit certain AI chatbot features for children, and give parents greater control over their children’s financial transactions. The proposals, part of Hochul’s annual ‘State of the State’ agenda, must still pass the state legislature before becoming law.

At the same time, New York is positioning itself as a welcoming environment for AI and semiconductor development. Hochul recently announced a $33 million research and development expansion in Manhattan by London-based AI company ElevenLabs.

In addition, Micron is expected to begin construction later this month on a massive semiconductor facility in White Plains, part of a broader $100 billion investment plan that underscores the state’s ambitions in advanced technology and manufacturing.

Beyond child safety and economic development, state officials are also focusing on how algorithms affect everyday costs. Attorney General Letitia James is investigating Instacart over allegations that its pricing systems charge different customers different prices for the same products.

The probe follows the implementation of New York’s Algorithmic Pricing Disclosure Act, which took effect late last year, requiring companies to be more transparent about the use of automated pricing tools.

The attorney general’s office is also examining broader accountability issues tied to AI systems, including reports involving the misuse of generative AI. Together, these actions underscore New York’s commitment to addressing voter concerns regarding affordability, safety, and transparency, while also harnessing the economic potential of rapidly evolving AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global AI adoption reaches record levels in 2025

Global adoption of generative AI continued to rise in the second half of 2025, reaching 16.3 percent of the world’s population. Around one in six people now use AI tools for work, learning, and problem-solving, marking rapid progress for a technology still in its early years.

Adoption remains uneven, with the Global North growing nearly twice as fast as the Global South. Countries with early investments in digital infrastructure and AI policies, including the UAE, Singapore, and South Korea, lead the way.

South Korea saw the most significant gain, rising seven spots globally due to government initiatives, improved Korean-language models, and viral consumer trends.

The UAE maintains its lead, benefiting from years of foresight, including early AI strategy, dedicated ministries, and regulatory frameworks that foster trust and widespread usage.

Meanwhile, open-source platforms such as DeepSeek are expanding access in underserved markets, including Africa, China, and Iran, lowering financial and technical barriers for millions of new users.

While AI adoption grows globally, disparities persist. Policymakers and developers face the challenge of ensuring that the next wave of AI users benefits broader communities, narrowing divides rather than deepening them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI sovereignty test in South Korea reaches a critical phase

South Korea’s flagship AI foundation model project has entered a decisive phase after accusations that leading participants relied on foreign open-source components instead of building systems entirely independently.

The controversy has reignited debate over how ‘from scratch’ development should be defined within government-backed AI initiatives aimed at strengthening national sovereignty.

Scrutiny has focused on Naver Cloud after developers identified near-identical similarities between its vision encoder and models released by Alibaba, alongside disclosures that audio components drew on OpenAI technology.

The dispute now sits with the Ministry of Science and ICT, which must determine whether independence applies only to a model’s core or extends to all major components.

The outcome is expected to shape South Korea’s AI strategy, balancing deeper self-reliance against the realities of global open-source ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Gmail enters the Gemini era with AI-powered inbox tools

Google is reshaping Gmail around its Gemini AI models, aiming to turn email into a proactive assistant for more than three billion users worldwide.

With inbox volumes continuing to rise, the focus shifts towards managing information flows instead of simply sending and receiving messages.

New AI Overviews allow Gmail to summarise long email threads and answer natural language questions directly from inbox content.

Users can retrieve details from past conversations without complex searches, while conversation summaries roll out globally at no cost, with advanced query features reserved for paid AI subscriptions.

Writing tools are also expanding, with Help Me Write, upgraded Suggested Replies, and Proofread features designed to speed up drafting while preserving individual tone and style.

Deeper personalisation is planned through connections with other Google services, enabling emails to reflect broader user context.

A redesigned AI Inbox further prioritises urgent messages and key tasks by analysing communication patterns and relationships.

Powered by Gemini 3, these features begin rolling out in the US in English, with additional languages and regions scheduled to follow during 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act entered into force in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could optimise power grids and reduce energy waste

AI could help make power grids cleaner and more efficient while reducing energy waste, even as data centres powering generative models consume increasing amounts of electricity. Researchers are exploring ways to balance these demands with optimisation techniques.

Accurate AI-based predictions of renewable energy availability can help grid operators integrate solar and wind power more effectively. AI can also solve complex optimisation problems, managing power generation, battery use, and flexible loads quickly and accurately.
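The kind of dispatch optimisation described above can be sketched in its simplest form as a merit-order allocation: serve forecast demand from the cheapest available sources first. This is only an illustrative toy, not how an operator's real solver works; all generator names, capacities, and marginal costs below are hypothetical values.

```python
# Minimal merit-order dispatch sketch: serve demand at lowest cost.
# Generator names, capacities (MW), and marginal costs (EUR/MWh)
# are hypothetical illustration values, not real grid data.

def dispatch(generators, demand_mw):
    """Allocate output cheapest-first until demand is met."""
    plan = {}
    remaining = demand_mw
    # Sort by marginal cost so the cheapest sources are used first.
    for name, capacity, cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 0:
        raise ValueError(f"Unserved demand: {remaining} MW")
    return plan

generators = [
    ("gas_peaker", 200, 120.0),  # expensive but flexible
    ("wind", 300, 5.0),          # cheap when the forecast allows
    ("solar", 150, 4.0),
    ("battery", 100, 60.0),
]
print(dispatch(generators, 500))
# → {'solar': 150, 'wind': 300, 'battery': 50}
```

Real grid optimisation adds network constraints, ramp rates, and uncertainty in the renewable forecast, which is where the AI techniques discussed above come in; the greedy merit order here only shows the shape of the problem.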

Beyond real-time operations, AI can enhance grid planning and predictive maintenance, identifying potential faults before they lead to outages.

Application-specific AI models can support decarbonisation strategies and enable the integration of more renewable energy while keeping energy costs and environmental impacts in check.

Experts caution that not all AI development in the energy sector is equally beneficial. Large, general-purpose models are highly resource-intensive, whereas smaller, targeted models offer measurable advantages in terms of grid efficiency and sustainability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Portugal government backs AI with €400 million plan

Portugal has announced a €400 million investment in AI over the period 2026-2030, primarily funded by European programmes. The National Artificial Intelligence Agenda (ANIA) and its Action Plan (PAANIA) aim to strengthen Portugal’s position in AI research, industry, and innovation.

The government predicts AI could boost the country’s GDP by €18-22 billion in the next decade. Officials highlight Portugal’s growing technical talent pool, strong universities and research centres, renewable energy infrastructure, and a dynamic start-up ecosystem as key advantages.

Key projects include establishing AI gigafactories and supercomputing facilities to support research, SMEs, and start-ups, alongside a National Data Centre Plan aimed at simplifying licensing and accelerating the sector.

Early investments of €10 million target AI applications in public administration, with a total of €25 million planned.

Sectoral AI Centres will focus on healthcare and industrial robotics, leveraging AI to enhance patient care, improve efficiency, and support productivity, competitiveness, and the creation of skilled jobs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Netomi shows how to scale enterprise AI safely

Netomi has developed a blueprint for scaling enterprise AI, utilising GPT-4.1 for rapid tool use and GPT-5.2 for multi-step reasoning. The platform supports complex workflows, policy compliance, and heavy operational loads, serving clients such as United Airlines and DraftKings.

The company emphasises three core lessons. First, systems must handle real-world complexity, orchestrating multiple APIs, databases, and tools to maintain state and situational awareness across multi-step workflows.

Second, parallelised architectures ensure low latency even under extreme demand, keeping response times fast and reliable during spikes in activity.

Third, governance is embedded directly into the runtime, enforcing compliance, protecting sensitive data, and providing deterministic fallbacks when AI confidence is low.
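The "deterministic fallback" idea in the third lesson can be sketched generically. Netomi's actual runtime is not public, so the threshold, function names, and canned reply below are assumptions chosen purely for illustration.

```python
# Sketch of a confidence-gated fallback: when the model's self-reported
# confidence falls below a threshold, return a deterministic canned
# reply instead of the generated answer. The threshold and messages
# are hypothetical, not Netomi's real configuration.

FALLBACK_REPLY = "Let me connect you with a human agent for this request."
CONFIDENCE_THRESHOLD = 0.8

def answer(generated_text, confidence):
    """Return the AI answer only when confidence clears the threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"source": "model", "text": generated_text}
    return {"source": "fallback", "text": FALLBACK_REPLY}

print(answer("Your flight departs at 09:15.", 0.93)["source"])  # model
print(answer("I am not sure about that.", 0.41)["source"])      # fallback
```

The point of making the fallback deterministic is auditability: every low-confidence path produces the same reviewable behaviour, rather than a second, equally uncertain generation.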

Netomi demonstrates how agentic AI can be safely scaled, providing enterprises with a model for auditable, predictable, and resilient intelligent systems. These practices serve as a roadmap for organisations seeking to move AI from experimental tools to production-ready infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot