Nvidia CEO Jensen Huang said the company is not in active discussions to sell Blackwell-family AI chips to Chinese firms and has no current plans to ship them. He also clarified remarks about the US-China AI race, saying he intended to acknowledge China’s technical strength rather than predict an outcome.
Huang spoke in Taiwan ahead of meetings with TSMC, as Nvidia expands partnerships and pitches its platforms across regions and industries. The company has added roughly a trillion dollars in value this year and remains the world’s most valuable business despite recent share volatility.
US controls still bar sales of Nvidia’s most advanced data-centre AI chips into China, and a recent bilateral accord did not change that. Officials have indicated approvals for Blackwell remain off the table, keeping a potentially large market out of reach for now.
Analysts say uncertainty around China’s access to the technology feeds broader questions about the durability of hyperscale AI spending. Rivals, including AMD and Broadcom, are racing to win share as customers weigh long-term returns on data-centre buildouts.
Huang is promoting Nvidia’s end-to-end stack to reassure buyers that massive investments will yield productivity gains across sectors. He said he hopes policy environments eventually allow Nvidia to serve China again, but reiterated there are no active talks.
OpenAI has launched the Teen Safety Blueprint to guide responsible AI use for young people. The roadmap guides policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and promote opportunities.
The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.
OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Circle has submitted its comments to the US Department of the Treasury, outlining its support for the GENIUS Act and calling for clear, consistent rules to govern payment stablecoin issuers.
The company emphasised that effective rulemaking could create a unified national framework for both domestic and foreign issuers, providing consumers with safer and more transparent financial products.
The firm urged Treasury to adopt a cooperative supervisory approach that promotes uniform compliance and risk management standards across jurisdictions. Circle warned against excessive restrictions that could harm liquidity, cross-border payments, or interoperability.
It also called for closing potential loopholes that might allow unregulated entities to avoid oversight while benefiting from the US dollar’s trust and stability.
Circle proposed safeguards requiring stablecoins to be fully backed, independently audited, and supported by transparent public reports. The firm stressed recognising foreign regimes, applying equal rules to all issuers, and enforcing consistent penalties.
Circle described the GENIUS Act as a chance to strengthen the stability of digital finance in the US. The company believes transparent, fully backed stablecoins and recognised foreign issuers could strengthen US leadership in secure, innovative finance.
France’s competition authority has fined Doctolib €4.67 million for abusing its dominant position in online medical appointment booking and teleconsultation services. The regulator found that Doctolib used exclusivity clauses and tied selling to restrict competition and strengthen its market control.
Doctolib required healthcare professionals to subscribe to its appointment booking service to use its teleconsultation platform, effectively preventing them from using rival providers. Contracts also included clauses discouraging professionals from signing with competing services.
The French authority also sanctioned Doctolib for its 2018 acquisition of MonDocteur, describing it as a strategy to eliminate its main competitor. Internal documents revealed that the merger aimed to remove MonDocteur’s product from the market and reduce pricing pressure.
The decision marks the first application of the EU’s Towercast precedent to penalise a below-threshold merger as an abuse of dominance. Doctolib has been ordered to publish the ruling summary in Le Quotidien du Médecin and online.
Coca-Cola has released an improved AI-generated Christmas commercial after last year’s debut campaign drew criticism for its unsettling visuals.
The latest ‘Holidays Are Coming’ ads, developed in part by San Francisco-based Silverside, showcase more natural animation and a wider range of festive creatures, instead of the overly lifelike characters that previously unsettled audiences.
The new version avoids the ‘uncanny valley’ effect that plagued 2024’s ads. The use of generative AI by Coca-Cola reflects a wider advertising trend focused on speed and cost efficiency, even as creative professionals warn about its potential impact on traditional jobs.
Despite the efficiency gains, AI-assisted advertising remains labour-intensive. Teams of digital artists refine the content frame by frame to ensure realistic and emotionally engaging visuals.
Industry data show that 30% of commercials and online videos in 2025 were created or enhanced using generative AI, compared with 22% in 2023.
Coca-Cola’s move follows similar initiatives by major firms, including Google’s first fully AI-generated ad spot launched last month, signalling that generative AI is now becoming a mainstream creative tool across global marketing.
Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.
Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.
YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.
Instructors can also utilise Google Classroom’s free AI tools for lesson planning and administrative support, thereby freeing up time for direct student engagement.
Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.
It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.
The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.
OpenAI has published new recommendations for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.
The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.
According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.
The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.
To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.
It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.
OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.
According to the Business Software Alliance, India could add over $500 billion to its economy by 2035 through the widespread adoption of AI.
At the BSA AI Pre-Summit Forum in Delhi, the group unveiled its ‘Enterprise AI Adoption Agenda for India’, which aligns with the goals of the India–AI Impact Summit 2026 and the government’s vision for a digitally advanced economy by 2047.
The agenda outlines a comprehensive policy framework across three main areas: talent and workforce, infrastructure and data, and governance.
It recommends expanding AI training through national academies, fostering industry–government partnerships, and establishing innovation hubs with global companies to strengthen talent pipelines.
BSA also urged greater government use of AI tools, reforms to data laws, and the adoption of open industry standards for content authentication. It called for coordinated governance measures to ensure responsible AI use, particularly under the Digital Personal Data Protection Act.
BSA has introduced similar policy roadmaps in other major markets, including the US, Japan, and ASEAN countries, as part of its global effort to promote trusted and inclusive AI adoption.
Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.
Under the deal, Perplexity will pay Snap $400 million in cash and equity over a one-year period, tied to the global rollout. Revenue contribution is expected to begin in 2026. Snap points to its 943 million monthly active users and its reach of over 75% of 13–34-year-olds across more than 25 countries.
Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.
Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.
Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.
GEMS Education is rolling out Microsoft 365 Copilot to cut admin and personalise learning, with clear guardrails and transparency. Teachers spend less time on preparation and more time with pupils. The aim is augmentation, not replacement.
Copilot serves as a single workspace for plans, sources, and visuals. Differentiated materials arrive faster for struggling and advanced learners. More time goes to feedback and small groups.
Student projects are accelerating. A Grade 8 pupil built a smart-helmet prototype, using AI to guide circuitry, code, and documentation, moving quickly from idea to functional build.
The School of Research and Innovation opened in August 2025 as a living lab, hosting educator training, research partners, and student incubation. A Microsoft-backed stack underpins the campus.
Teachers are co-creating lightweight AI agents for curriculum and analytics. Expert oversight and safety patterns stay central. The focus is on measurable time savings and real-world learning.