Top institutes team up with Google DeepMind to spearhead AI-assisted mathematics

The AI for Math Initiative pairs Google DeepMind with five elite research institutes to apply advanced AI to open problems and proofs. Partners include Imperial College London, the IAS, IHES, the Simons Institute at UC Berkeley, and TIFR. The goal is to accelerate discovery, tooling, and training.

Google's support spans funding and access to Gemini Deep Think, AlphaEvolve for algorithm discovery, and AlphaProof for formal reasoning. Combined, these systems complement human intuition, scale exploration, and tighten feedback loops between theory and applied AI.

Recent benchmarks show rapid gains. Deep Think enabled Gemini to reach gold-medal IMO performance, perfectly solving five of six problems for 35 points. AlphaGeometry and AlphaProof earlier achieved silver-level competence on Olympiad-style tasks.

AlphaEvolve pushed the frontiers of analysis, geometry, combinatorics, and number theory, improving the best known results on roughly a fifth of the 50 open problems it tackled. Researchers also uncovered a 4×4 matrix-multiplication method that uses 48 scalar multiplications, surpassing Strassen's 1969 record of 49.
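For context on why multiplication counts matter, the sketch below is Strassen's classic 1969 construction for 2×2 matrices, which trades the naive 8 multiplications for 7; applied recursively to 4×4 matrices split into 2×2 blocks, it gives 7 × 7 = 49 scalar multiplications, the record the new 48-multiplication method beats. This is an illustration of the idea, not the new method itself.

```python
# Strassen's 1969 scheme: multiply two 2x2 matrices with 7 scalar
# multiplications instead of the naive 8. Recursing on 4x4 block
# matrices yields 7 * 7 = 49 multiplications, the long-standing record.

def strassen_2x2(A, B):
    """Multiply 2x2 matrices (lists of lists) using 7 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Agrees with the ordinary row-by-column product:
print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

Because additions are cheap relative to multiplications at scale, shaving even one multiplication from a fixed-size scheme compounds when the scheme is applied recursively to large matrices.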

Partners will co-develop datasets, standards, and open tools, while studying limits where AI helps or hinders progress. Workstreams include formal verification, conjecture generation, and proof search, emphasising reproducibility, transparency, and responsible collaboration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Humanoid robots set to power Foxconn’s new Nvidia server plant in Houston

Foxconn will add humanoid robots to a new Houston plant building Nvidia AI servers from early 2026. Announced at Nvidia’s developer conference, the move deepens their partnership and positions the site as a US showcase for AI-driven manufacturing.

Humanoid systems based on Nvidia’s Isaac GR00T N are built to perceive parts, adapt on the line, and work with people. Unlike fixed industrial arms, they handle delicate assembly and switch tasks via software updates. Goals include flexible throughput, faster retooling, and fewer stoppages.

AI models are trained in simulation using digital twins and reinforcement learning to improve accuracy and safety. On the line, robots self-tune as analytics predict maintenance and balance workloads, unlocking gains across logistics, assembly, testing, and quality control.

Texas offers proximity to a growing semiconductor and AI cluster, as well as policy support for domestic capacity. Foxconn also plans expansions in Wisconsin and California to meet global demand for AI servers. Scaling output should ease supply pressures around Nvidia-class compute in data centres.

Job roles will shift as routine tasks automate and oversight becomes data-driven. Human workers focus on design, line configuration, and AI supervision, with safety gates for collaboration. Analysts see a template for Industry 4.0 factories running near-continuously with rapid changeovers.

Alliance science pact lifts US–Korea cooperation on AI, quantum, 6G, and space

The United States and South Korea agreed on a broad science and technology memorandum to deepen alliance ties and bolster Indo-Pacific stability. The non-binding pact aims to accelerate innovation while protecting critical capabilities. Both sides cast it as groundwork for a new Golden Age of Innovation.

AI sits at the centre. Plans include pro-innovation policy alignment, trusted exports across the stack, AI-ready datasets, safety standards, and enforcement of compute protection. Joint metrology and standards work links the US Center for AI Standards and Innovation with the AI Safety Institute of South Korea.

Trusted technology leadership extends beyond AI. The memorandum outlines shared research security, capacity building for universities and industry, and joint threat analysis. Telecommunications cooperation targets interoperable 6G supply chains and coordinated standards activity with industry partners.

Quantum and basic research are priority growth areas. Participants plan interoperable quantum standards, stronger institutional partnerships, and secured supply chains. Larger projects and STEM exchanges aim to widen collaboration, supported by shared roadmaps and engagement in global consortia.

Space cooperation continues across civil and exploration programmes. Strands include Artemis contributions, a Korean cubesat rideshare on Artemis II, and Commercial Lunar Payload Services. The Korea Positioning System will be developed for maximum interoperability with GPS.

Wikipedia founder questions Musk’s Grokipedia accuracy

Speaking at the CNBC Technology Executive Council Summit in New York, Wikipedia founder Jimmy Wales expressed scepticism about Elon Musk's new AI-powered Grokipedia, suggesting that large language models cannot reliably produce accurate wiki entries.

Wales highlighted the difficulties of verifying sources and warned that AI tools can produce plausible but incorrect information, citing examples where chatbots fabricated citations and personal details.

He rejected Musk’s claims of liberal bias on Wikipedia, noting that the site prioritises reputable sources over fringe opinions. Wales emphasised that focusing on mainstream publications does not constitute political bias but preserves trust and reliability for the platform’s vast global audience.

Despite his concerns, Wales acknowledged that AI could have limited utility for Wikipedia in uncovering information within existing sources.

However, he stressed that substantial costs and the potential for errors keep the site from relying wholly on generative AI, and that he prefers careful testing before integrating new technologies.

Wales concluded that while AI may mislead the public with plausible but false content, the Wikipedia community's decades of expertise in evaluating information help safeguard accuracy. He urged continued vigilance and careful source evaluation as misinformation risks grow alongside AI capabilities.

Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics, including YouTube and digital rights groups, have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.

Apple fined over unfair iPhone sales contracts in France

A Paris court has ordered Apple to pay around €39 million to French mobile operators, ruling that the company imposed unfair terms in contracts governing iPhone sales more than a decade ago. The court also fined Apple €8 million and annulled several clauses deemed anticompetitive.

Judges found that Apple required carriers to sell a set number of iPhones at fixed prices, restricted how its products were advertised, and used operators’ patents without compensation. The French consumer watchdog DGCCRF had first raised concerns about these practices years earlier.

Under the ruling, Apple must compensate three of France’s four major mobile networks: Bouygues Telecom, Free, and SFR. The decision applies immediately despite Apple’s appeal, which will be heard at a later date.

Apple said it disagreed with the ruling and would challenge it, arguing that the contracts reflected standard commercial arrangements of the time. French regulators have increasingly scrutinised major tech firms as part of wider efforts to curb unfair market dominance.

Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Emergency cardiology gets a lift from AI-read ECGs, with fewer false activations

AI ECG analysis improved heart attack detection and reduced false alarms in a multicentre study of 1,032 suspected STEMI cases. Conducted across three primary PCI centres from January 2020 to May 2024, it points to quicker, more accurate triage, especially beyond specialist hospitals.

ST-segment elevation myocardial infarction occurs when a major coronary artery is blocked. Guideline targets call for reperfusion within 90 minutes of first medical contact. Longer delays are associated with roughly a 3-fold increase in mortality, underscoring the need for rapid, reliable activation.

The AI ECG model, trained to detect acute coronary occlusion and STEMI equivalents, analysed each patient’s initial tracing. Confirmatory angiography and biomarkers identified 601 true STEMIs and 431 false positives. AI detected 553 of 601 STEMIs, versus 427 identified by standard triage on the first ECG.

False positives fell sharply with AI. Investigators reported a 7.9 percent false-positive rate with the model, compared with 41.8 percent under standard protocols. Clinicians said that earlier, more precise identification could streamline transfers from non-PCI centres and help teams reach reperfusion targets.
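As a quick sanity check, the headline detection rates follow directly from the counts reported above; the variable names in this sketch are illustrative, not from the study:

```python
# Counts reported in the study: 601 angiography/biomarker-confirmed STEMIs,
# of which the AI model flagged 553 on the first ECG and standard triage 427.
true_stemis = 601
ai_detected = 553
standard_detected = 427

# Sensitivity = detected true cases / all true cases.
ai_sensitivity = ai_detected / true_stemis
std_sensitivity = standard_detected / true_stemis

print(f"AI sensitivity: {ai_sensitivity:.1%}")         # 92.0%
print(f"Standard sensitivity: {std_sensitivity:.1%}")  # 71.0%
```

So the model's roughly 92 percent sensitivity against about 71 percent for standard triage, alongside the drop in false positives, is what drives the improved-triage conclusion.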

An editorial welcomed the gains but urged caution. The model targets acute occlusion rather than STEMI, needs prospective validation in diverse populations, and must be integrated with clear governance and human oversight.

NVIDIA expands open-source AI models to boost global innovation

US tech giant NVIDIA has released open-source AI models and data tools across language, biology, and robotics to accelerate innovation and expand access to cutting-edge research.

The new model families (Nemotron, Cosmos, Isaac GR00T, and Clara) are designed to empower developers to build intelligent agents and applications with enhanced reasoning and multimodal capabilities.

The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.

Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.

Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.

The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent, and responsible AI.

Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny, but the reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that volume is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.
