UN Secretary-General António Guterres has submitted the composition of a new Independent International Scientific Panel on AI to the United Nations General Assembly, marking a step towards evidence-based global AI governance.
The panel brings together 40 experts from across regions and disciplines, selected through an open global call that attracted more than 2,600 applications, and members serve in a personal and independent capacity.
Guterres said the body would act as the first fully independent global scientific authority focused on closing the AI knowledge gap and assessing real-world impacts across economies and societies.
According to the UN chief, a reliable and unbiased understanding of AI has become essential as technologies reshape governance, labour markets, and social systems at an accelerating speed.
The panel will operate for an initial three-year term, aiming to provide a shared scientific foundation for international cooperation amid rising geopolitical tension and technological competition.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Medical AI promises faster analysis, more accurate pattern detection, and continuous availability, yet most systems still struggle to perform reliably in real clinical environments beyond laboratory testing.
Researchers led by Marinka Zitnik at Harvard Medical School identify contextual errors as a key reason why medical AI often fails when deployed in hospitals and clinics.
Models frequently generate technically sound responses that overlook crucial factors, such as medical speciality, geographic conditions, and patients’ socioeconomic circumstances, thereby limiting their real-world usefulness.
The study argues that training datasets, model architecture, and performance benchmarks must integrate contextual information to prevent misleading or impractical recommendations.
Improving transparency, trust, and human-AI collaboration could allow context-aware systems to support clinicians more effectively while reducing harm and inequality in care delivery.
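The study's call to integrate contextual information into model outputs can be illustrated with a minimal sketch. The `contextualise` helper and its field names below are hypothetical examples, not taken from the Harvard work: the idea is simply that a recommendation is flagged as incomplete when key patient context is absent.

```python
# Hypothetical sketch (not from the study): refuse to treat a recommendation
# as complete unless the contextual factors the researchers highlight
# (speciality, geography, socioeconomic circumstances) are present.
REQUIRED_CONTEXT = ("speciality", "region", "socioeconomic_status")

def contextualise(recommendation: str, context: dict) -> dict:
    """Return the recommendation plus a list of missing context fields."""
    missing = [field for field in REQUIRED_CONTEXT if not context.get(field)]
    return {
        "recommendation": recommendation,
        "context_complete": not missing,
        "missing_context": missing,
    }

result = contextualise(
    "Refer for specialist imaging",
    {"speciality": "cardiology", "region": None, "socioeconomic_status": "low-income"},
)
print(result["context_complete"])  # False: the patient's region is unknown
print(result["missing_context"])   # ['region']
```

A real system would draw these fields from clinical records rather than a hand-built dictionary, but the gating pattern is the same: surface what the model does not know instead of answering as if it did.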
Indian Railways has deployed an AI-powered Rail Robocop at Visakhapatnam Railway Station to strengthen passenger security. The system patrols platforms and monitors crowds.
The robot, named ASC Arjun, uses facial recognition to compare live camera images with a database of known criminals. Officials said the system recently identified a suspect during routine surveillance.
Once the match was detected, the AI system sent an instant alert to the Railway Protection Force CCTV control room, enabling officers to respond quickly.
Authorities say the Rail Robocop will support human staff rather than replace them, and similar AI deployments are expected at other major railway stations across India following the Visakhapatnam trials.
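The match-and-alert flow described above can be sketched in a few lines. This is an illustrative toy only: the actual system's models and interfaces are not public, and the `match_score` function, the watchlist and the alert format here are all hypothetical stand-ins.

```python
import math

# Illustrative toy only: real facial recognition uses learned embeddings and
# calibrated thresholds; here a 2-D vector and cosine similarity stand in.
MATCH_THRESHOLD = 0.90  # assumed confidence cut-off for raising an alert

def match_score(a, b):
    """Cosine similarity between two face embeddings (toy version)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def screen_frame(live_embedding, watchlist):
    """Return alert messages for watchlist entries the live face matches."""
    alerts = []
    for name, embedding in watchlist:
        score = match_score(live_embedding, embedding)
        if score >= MATCH_THRESHOLD:
            alerts.append(f"ALERT: possible match for {name} ({score:.2f})")
    return alerts

watchlist = [("suspect_A", [0.9, 0.1]), ("suspect_B", [0.1, 0.9])]
print(screen_frame([0.88, 0.12], watchlist))  # alerts only for suspect_A
```

In a deployed pipeline, the alert list would feed a notification channel to the control room rather than being printed, and the threshold would be tuned to balance false alarms against missed matches.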
Artificial intelligence is transforming car design by generating rapid concept images and exploring new ideas in seconds. Designers can test colours, materials, and forms faster than with traditional sketches.
AI excels at designing components, creating mood boards, and supporting research, yet it struggles with originality. Industry leaders emphasise that developing entirely new models still requires human imagination and creativity.
Many manufacturers have developed internal AI systems trained on their own designs to protect intellectual property. These tools help designers experiment with combinations they might not have considered, offering fresh perspectives while keeping confidential data secure.
While AI is unlikely to replace human designers, it has become an essential tool for staying competitive. By combining computational speed with creative vision, design teams can enhance efficiency, inspire innovation, and explore ideas beyond traditional limits.
Electronic Arts has entered a multi-year partnership with Stability AI to develop generative AI tools for game creation. The collaboration will support franchises such as The Sims, Battlefield and Madden NFL.
The company said the partnership centres on customised AI models that give developers more control over creative processes. Electronic Arts invested in Stability AI during its latest funding round in October.
Executives at Electronic Arts said concerns about job losses are understandable across the gaming industry. The company views AI as a way to enhance specific tasks and create new roles rather than replace staff.
Stability AI said similar technologies have historically increased demand for skilled workers. Electronic Arts added that active involvement in AI development helps the industry adapt rather than react to disruption.
Nvidia’s plans to export its H200 AI chips to China remain stalled nearly two months after US President Donald Trump approved the sales. A national security review is still underway before licences can be issued to Chinese customers.
Chinese companies have delayed new H200 orders while awaiting clarity on licence approvals and potential conditions, according to people familiar with the discussions. The uncertainty has slowed anticipated demand and affected production planning across Nvidia’s supply chain.
In January, the US Commerce Department eased H200 export restrictions to China but required licence applications to be reviewed by the departments of State, Defence, and Energy.
Commerce has completed its analysis, but inter-agency discussions continue, with the US State Department seeking additional safeguards.
The export framework, which also applies to AMD, introduces conditions related to shipment allocation, testing, and end-use reporting. Until the review process concludes, Nvidia and prospective Chinese buyers remain unable to proceed with confirmed transactions.
The European Commission has missed a key deadline to issue guidance on how companies should classify high-risk AI systems under the EU AI Act, fuelling uncertainty around the landmark law’s implementation.
Guidance on Article 6, which defines high-risk AI systems and stricter compliance rules, was due by early February. Officials have indicated that feedback is still being integrated, with a revised draft expected later this month and final adoption potentially slipping to spring.
The delay follows warnings that regulators and businesses are unprepared for the act’s most complex rules, due to apply from August. Brussels has suggested delaying high-risk obligations under its Digital Omnibus package, citing unfinished standards and the need for legal clarity.
Industry groups want enforcement delayed until guidance and standards are finalised, while some lawmakers warn repeated slippage could undermine confidence in the AI Act. Critics warn further changes could deepen uncertainty if proposed revisions fail or disrupt existing timelines.
The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be designated a very large online platform under the Digital Services Act.
OpenAI’s tool reported 120.4 million average monthly users in the EU as of October, far above the 45-million threshold that triggers the DSA’s more onerous obligations.
Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.
The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and major search engines.
ChatGPT’s reported user numbers largely stem from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.
The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.
A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.
Austria is advancing plans to bar children under 14 from social media when the new school year begins in September 2026, according to comments from a senior Austrian official. Poland’s government is drafting a law to restrict access for under-15s, using digital ID tools to confirm age.
Austria’s governing parties support protecting young people online but differ on how to verify ages securely without undermining privacy. In Poland, supporters of the draft argue that children’s early exposure to screens is an issue for both parents and platform enforcement.
Austria and Poland form part of a broader European trend as France moves to ban under-15s and the UK is debating similar measures. Wider debates tie these proposals to concerns about children’s mental health and online safety.
Proponents in both Austria and Poland aim to finalise legal frameworks by 2026, with implementation potentially rolling out in the following year if national parliaments approve the age restrictions.
A major international AI safety report warns that AI systems are advancing rapidly, with sharp gains in reasoning, coding and scientific tasks. Researchers say progress remains uneven, leaving systems powerful yet unreliable.
The report highlights rising concerns over deepfakes, cyber misuse and emotional reliance on AI companions. Experts note growing difficulty in distinguishing AI-generated content from human work.
Safeguards against biological, chemical and cyber risks have improved, though oversight challenges persist. Analysts warn advanced models are becoming better at evading evaluation and controls.
The impact of AI on jobs in the UK and the US remains uncertain, with mixed evidence across sectors. Researchers say labour disruption could accelerate if systems gain greater autonomy.