Purdue and Google collaborate to advance AI research and education

Purdue University and Google are expanding their partnership to integrate AI into education and research, preparing the next generation of leaders while advancing technological innovation.

The collaboration was highlighted at the AI Frontiers summit in Indianapolis on 13 November. The event brought together university, industry, and government leaders to explore AI’s impact across sectors such as health care, manufacturing, agriculture, and national security.

Leaders from both organisations emphasised the importance of placing AI tools in the hands of students, faculty, and staff. Purdue plans to introduce an AI competency requirement for incoming students in fall 2026, pending Board approval, ensuring all graduates gain practical experience with AI tools.

The partnership also builds on projects such as analysing data to improve road safety.

Purdue’s Institute for Physical Artificial Intelligence (IPAI), the nation’s first institute dedicated to AI in the physical world, plays a central role in the collaboration. The initiative focuses on physical AI, quantum science, semiconductors, and computing to equip students for AI-driven industries.

Google and Purdue emphasised responsible innovation and workforce development as critical goals of the partnership.

Speakers including representatives of Waymo and Google Public Sector, alongside US Senator Todd Young, discussed how AI technologies such as autonomous drones and smart medical devices are transforming key sectors.

The partnership demonstrates the potential of public-private collaboration to accelerate AI research and prepare students for the future of work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft expands AI model Aurora to improve global weather forecasts

Extreme weather displaced over 800,000 people worldwide in 2024, highlighting the importance of accurate forecasts for saving lives, protecting infrastructure, and supporting economies. Farmers, coastal communities, and energy operators rely on timely forecasts to prepare and respond effectively.

Microsoft is reaffirming its commitment to Aurora, an AI model designed to help scientists better understand Earth systems. Trained on vast datasets, Aurora can predict weather, track hurricanes, monitor air quality, and model ocean waves and energy flows.

The platform will remain open-source, enabling researchers worldwide to innovate, collaborate, and apply it to new climate and weather challenges.

Through partnerships with Professor Rich Turner at the University of Cambridge and initiatives like SPARROW, Microsoft is expanding access to high-quality environmental data.

Community-deployable weather stations are improving data coverage and forecast reliability in underrepresented regions. Aurora’s open-source releases, including model weights and training pipelines, will let scientists and developers adapt and build upon the platform.

The AI model has applications beyond research, with energy companies, commodity traders, and national meteorological services exploring its use.

By supporting forecasting systems tailored to local environments, Aurora aims to improve resilience against extreme weather, optimise renewable energy, and drive innovation across multiple industries, from humanitarian aid to financial services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Baidu launches new AI chips amid China’s self-sufficiency push

In a strategic move aligned with national technology ambitions, Baidu announced two newly developed AI chips, the M100 and the M300, at its annual developer and client event.

The M100, designed by Baidu’s chip subsidiary Kunlunxin Technology, targets inference efficiency for large models using mixture-of-experts techniques, while the M300 is engineered for training very large multimodal models comprising trillions of parameters.

The M100 is slated for release in early 2026 and the M300 in 2027, according to Baidu, which claims they will deliver ‘powerful, low-cost and controllable AI computing power’ to support China’s drive for technological self-sufficiency.

Baidu also revealed plans for clustered architectures such as the Tianchi256 stack in the first half of 2026 and the Tianchi512 in the second half of 2026, intended to boost inference capacity through large-scale interconnects of chips.

This announcement illustrates how China’s tech ecosystem is accelerating efforts to reduce dependence on foreign silicon, particularly amid export controls and geopolitical tensions. Domestically designed AI processors from Baidu and other firms such as Huawei Technologies, Cambricon Technologies and Biren Technology are increasingly positioned to substitute for Western hardware platforms.

From a policy and digital diplomacy perspective, the development raises questions about the global semiconductor supply chain, compute sovereignty, and how competition in AI hardware may reshape power dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Romania pilots EU Digital Identity Wallet for payments

In a milestone for the European digital identity ecosystem, Banca Transilvania and payments-tech firm BPC have completed the first pilot in Romania using the EU Digital Identity Wallet (EUDIW) for a real-money transaction.

The initiative lets a cardholder authenticate a purchase using the wallet rather than a conventional one-time password or card reader.

The pilot forms part of a large-scale testbed led by the European Commission under the eIDAS 2 Regulation, which requires all EU banks to accept the wallet for strong customer authentication and KYC (know-your-customer) purposes by 2027.

Banca Transilvania’s Deputy CEO Retail Banking, Oana Ilaş, described the project as a historic step toward a unified European digital identity framework that enhances interoperability, inclusivity and banking access.

From a digital governance and payments policy perspective, this pilot is significant. It shows how national banking systems are beginning to integrate digital-ID wallets into card and account-based flows, potentially reducing reliance on legacy authentication mechanisms (such as SMS OTP or hardware tokens) that are vulnerable to fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Explainable AI predicts cardiovascular events in hospitalised COVID-19 patients

In an article published in BMC Infectious Diseases, researchers developed predictive models using machine learning (LightGBM) to identify cardiovascular complications (such as arrhythmia, acute heart failure, and myocardial infarction) in 10,700 hospitalised COVID-19 patients across Brazil.

The study reports moderate discriminatory performance, with AUROC values of 0.752 and 0.760 for the two models, and high overall accuracy (~94.5%) due to the large majority of non-event cases.

However, due to the rarity of cardiovascular events (~5.3% of cases), the F1-scores for detecting the event class remained very low (5.2% and 4.2%, respectively), signalling that the models struggle to reliably identify the minority class despite efforts to rebalance the data.
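The gap between high accuracy and a very low event-class F1-score can be illustrated with a small arithmetic sketch. The confusion-matrix counts below are hypothetical, chosen only to mirror the reported pattern (roughly 5% event prevalence, ~94.5% accuracy), not taken from the study:

```python
# Hypothetical confusion matrix for 10,000 patients with ~5% event prevalence.
# A model that rarely predicts the minority class still scores high accuracy.
tp, fn = 20, 480      # events: 500 total, most of them missed
tn, fp = 9450, 50     # non-events: 9,500 total, mostly classified correctly

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy = {accuracy:.3f}")    # high, driven by the majority class
print(f"F1 (event class) = {f1:.3f}")  # low, despite the high accuracy
```

With these illustrative counts, accuracy is about 0.95 while the event-class F1 stays below 0.1, which is why the authors report F1 separately rather than relying on accuracy alone.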

Using SHAP (Shapley Additive exPlanations) values, the researchers identified the most influential predictors: age, urea level, platelet count and SatO₂/FiO₂ (oxygen saturation to inspired oxygen fraction) ratio.

The authors emphasise that while the approach shows promise for resource-constrained settings and contributes to risk stratification, the limitations around class imbalance and generalisability remain significant obstacles for clinical use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New AI platforms approved for Surrey Schools classrooms

Surrey Schools has approved MagicSchool, SchoolAI, and TeachAid for classroom use, giving teachers access through the ONE portal with parental consent. The district says the tools are intended to support instruction while maintaining strong privacy and safety safeguards.

Officials say each platform passes rigorous reviews covering educational value, data protection, and technical security before approval. Teachers receive structured guidance on appropriate use, supported by professional development aligned with wider standards for responsible AI in education.

A two-year digital literacy programme helps staff explore online identity, digital habits, and safe technology use as AI becomes more common in lessons. Students use AI to generate ideas, check code, and analyse scientific or mathematical problems, reinforcing critical reasoning.

Educators stress that pupils are taught to question AI outputs rather than accept them at face value. Leaders argue this approach builds judgment and confidence, preparing young people to navigate automated systems with greater agency beyond school settings.

Families and teachers can access AI safety resources through the ONE platform, including videos, podcasts and the ‘Navigating an AI Future’ series. Materials include recordings from earlier workshops and parent sessions, supporting shared understanding of AI’s benefits and risks across the community.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI credentials grow as AWS launches practical training pathway

AWS is launching four solutions to help close the AI skills gap as demand rises and job requirements shift. The company positions the new tools as a comprehensive learning journey, offering structured pathways that progress from foundational knowledge to hands-on practice and formal validation.

AWS Skill Builder now hosts over 220 free AI courses, ranging from beginner introductions to advanced topics in generative and agentic AI. The platform enables learners to build skills at their own pace, with flexible study options that accommodate work schedules.

Practical experience anchors the new suite. The Meeting Simulator helps learners explain AI concepts to realistic personas and refine communication with instant feedback. Cohorts Studio offers team-based training through study groups, boot camps, and game-based challenges.

AWS is expanding its credential portfolio with the AWS Certified Generative AI Developer – Professional certification. The exam helps cloud practitioners demonstrate proficiency in foundation models, RAG architectures, and responsible deployment, supported by practice tasks and simulated environments.

Learners can validate hands-on capability through new microcredentials that require troubleshooting and implementation in real AWS settings. Combined credentials signal both conceptual understanding and task-ready skills, with Skill Builder’s expansive course library offering a clear starting point for career progression.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens DSA probe into X

Ireland’s media watchdog has opened a formal investigation into X under the EU’s Digital Services Act. Regulators will assess appeal rights and internal complaint handling after reports of inaccessible processes for users.

Irish officials will examine whether users can challenge refusals to remove reported content and receive clear outcomes. Potential penalties reach up to 6% of global turnover for confirmed breaches.

The case stems from ongoing supervision, a user complaint, and information from HateAid, marking the first such probe by Ireland. Wider EU scrutiny of very large online platforms continues.

Other services, including Meta and TikTok, have faced DSA actions, underscoring tighter enforcement across the bloc. Remedial measures and transparency improvements could follow if non-compliance is found.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

ElevenLabs recreates celebrity voices for digital content

Matthew McConaughey and Michael Caine have licensed their voices to ElevenLabs, an AI company, joining a growing number of celebrities who are embracing generative AI. McConaughey will allow his newsletter to be translated into Spanish using his voice, while Caine’s voice is available on ElevenLabs’ text-to-audio app and Iconic Marketplace. Both stressed that the technology is intended to amplify storytelling rather than replace human performers.

ElevenLabs offers a range of synthetic voices, including historical figures and performers like Liza Minnelli and Maya Angelou, while claiming a ‘performer-first’ approach focused on consent and creative authenticity. The move comes amid debate in Hollywood, with unions such as SAG-AFTRA warning AI could undermine human actors, and some artists, including Guillermo del Toro and Hayao Miyazaki, publicly rejecting AI-generated content.

Despite concerns, entertainment companies are investing heavily in AI. Netflix utilises it to enhance recommendations and content, while directors and CEOs argue that it fosters creativity and job opportunities. Critics, however, caution that early investments could form a volatile bubble and highlight risks of misuse, such as AI-generated endorsements or propaganda using celebrity likenesses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!