In Cameroon, where career guidance often takes a back seat, a new AI platform is helping students plan their futures. Developed by mathematician and AI researcher Frédéric Ngaba, OSIA offers personalised academic and career recommendations.
The platform provides a virtual tutor trained on Cameroon’s curricula, offering 400 exam-style tests and psychometric assessments. Students can input grades and aspirations, and the system builds tailored academic profiles to highlight strengths and potential career paths.
OSIA already has 13,500 subscribers across 23 schools, with plans to expand tenfold. Subscriptions cost 3,000 CFA francs for locals and €10 for students abroad, making it an affordable solution for many families.
Teachers and guidance counsellors see the tool as a valuable complement, though they stress it cannot replace human interaction or emotional support. Guidance professionals insist that social context and follow-up remain key to students’ development.
The Secretariat for Secular Private Education of Cameroon has authorised OSIA to operate. Officials expect its benefits to scale nationwide as the government considers a national AI strategy to modernise education and improve success rates.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Routine hospital blood samples could help predict spinal cord injury severity and even mortality, a University of Waterloo study has found. Researchers used machine learning to analyse millions of data points from over 2,600 patients.
The models identified patterns in routine blood measurements, including electrolytes and immune cells, collected during the first three weeks following injury. These patterns forecast recovery outcomes even when neurological exams were unreliable or impossible.
Researchers said the models were accurate in predicting injury severity and mortality as early as one to three days after admission. Accuracy improved further as more blood test data became available over time.
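To make the reported approach concrete, here is a minimal sketch of training a classifier on early routine blood measurements; the feature names, severity label, data file and choice of gradient-boosted trees are illustrative assumptions, not the Waterloo team's actual pipeline.

```python
# Illustrative sketch only: predicting spinal cord injury severity from routine
# blood tests. Features, labels and data source are assumptions, not the study's.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per patient, blood values averaged over days 1-3 after admission.
df = pd.read_csv("blood_markers_days_1_3.csv")
features = ["sodium", "potassium", "calcium", "lymphocytes", "neutrophils", "monocytes"]
X, y = df[features], df["severe_injury"]  # 1 = severe injury, 0 = less severe

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In a setup like this, appending later blood draws as additional features is what would let accuracy improve as more test data accumulates over the first weeks.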
Unlike MRI or fluid-based biomarkers, which are not always accessible, routine blood tests are low-cost and widely available in hospitals. The approach could help clinicians make more informed and faster treatment decisions.
The team says its findings could reshape early critical care for spinal cord injuries. Predicting severity sooner could guide resource allocation and prioritise patients needing urgent intervention.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft, Drexel University, and the Broad Institute have developed a generative AI assistant to support genome sequencing. The study in ACM Transactions on Interactive Intelligent Systems demonstrates how AI can accelerate searching, filtering, and synthesising data in rare disease diagnosis.
Whole genome sequencing often takes weeks and yields a diagnosis in fewer than half of cases. Analysts must decide which unsolved cases to revisit as new research appears. The AI assistant flags cases for reanalysis and compiles new gene and variant data into a clear, usable format.
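The reanalysis-flagging idea can be pictured with a small, purely hypothetical sketch: it checks each unsolved case's candidate genes against newly published gene-disease findings and surfaces matches for an analyst to verify. The data structures and function names are illustrative assumptions, not the prototype's actual design.

```python
# Purely illustrative sketch: flag unsolved cases whose candidate genes appear in
# newly published gene-disease findings. Not the Microsoft/Drexel/Broad prototype's logic.
from dataclasses import dataclass

@dataclass
class UnsolvedCase:
    case_id: str
    candidate_genes: set[str]

def flag_for_reanalysis(cases: list[UnsolvedCase], new_findings: dict[str, str]) -> list[tuple[str, str, str]]:
    """Return (case_id, gene, finding) triples where new literature mentions a candidate gene."""
    flagged = []
    for case in cases:
        for gene in case.candidate_genes & new_findings.keys():
            flagged.append((case.case_id, gene, new_findings[gene]))
    return flagged

# Hypothetical inputs: two unsolved cases and one newly reported gene-disease association.
cases = [UnsolvedCase("case-001", {"SCN1A", "ABCD1"}), UnsolvedCase("case-002", {"MECP2"})]
new_findings = {"SCN1A": "new report linking SCN1A variants to an additional phenotype"}
for case_id, gene, finding in flag_for_reanalysis(cases, new_findings):
    print(f"Reanalyse {case_id}: {gene} ({finding})")
```

An analyst would then review each flagged case rather than acting on the match automatically, in keeping with the collaborative, human-verified workflow the researchers describe.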
The team interviewed 17 genetics professionals to map workflows and challenges before co-designing the prototype. Sessions focused on problems such as data overload, slow collaboration, and difficulty prioritising unsolved cases, helping ensure the tool addressed real-world pain points.
The prototype enables collaborative sensemaking, allowing users to edit and verify AI-generated content. It offers flexible filtering to surface the most relevant evidence while keeping a comprehensive view, saving time and improving decision-making.
Microsoft-led researchers plan to test the assistant in real-world environments to measure its effect on diagnostic yield and workflow efficiency. They emphasise that success will depend on collaboration among developers, genetic experts, and system designers to build trustworthy and explainable tools.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At Climate Week NYC 2025, UN Climate Chief Simon Stiell urged governments and industries to accelerate clean energy, embrace industrial and AI transformation, and prepare for decisive progress at COP30 in Belém.
He highlighted that renewable investment reached US$2 trillion last year and that most new renewable projects are cheaper than fossil fuels, showing that the transition is already underway instead of being dependent on breakthroughs.
Stiell warned, however, that the benefits remain uneven and too many industrial projects lie idle. He called on governments to align policy and finance with the Paris Agreement sector by sector while unlocking innovation to create millions of jobs.
On AI, he stressed the importance of harnessing its catalytic potential responsibly, using it to manage energy grids, map climate risks and guide planning, rather than allowing it to displace human skills.
Looking ahead, the UN Climate Chief pointed to the Baku to Belém Roadmap, a plan to mobilise at least US$1.3 trillion annually by 2035 to support climate action in developing countries. He said COP30 must respond to this roadmap, accelerate progress on national climate commitments and deliver for vulnerable communities.
Above all, he argued that climate cooperation is bending the warming curve and must not falter, continuing to deliver real-world improvements in jobs, health and energy access.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has launched its low-cost ChatGPT Go subscription in Indonesia, priced at 75,000 rupiah (about $4.50) per month. The new plan offers ten times the messaging capacity of the free version, image generation tools and double the memory.
The rollout follows last month’s successful launch in India, where ChatGPT subscriptions more than doubled. India has since become OpenAI’s largest market, accounting for around 13.5% of global monthly active users. The US remains second.
Nick Turley, OpenAI Vice President and head of ChatGPT, said Indonesia is already one of the platform’s top five markets by weekly activity. The new tier is aimed at expanding reach in populous, price-sensitive regions while ensuring broader access to AI services.
OpenAI is also strengthening its financial base as it pushes into new markets. On Monday, the company secured a $100 billion investment commitment from NVIDIA, joining Microsoft and SoftBank among its most prominent backers. The funding comes amid intensifying competition in the AI industry.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US General Services Administration (GSA) has launched a OneGov initiative with Meta to give federal agencies streamlined access to Llama, its open source AI models. The approach eliminates individual agency negotiations, saving time and reducing duplicated work across departments.
The initiative supports America’s AI Action Plan and federal memoranda, promoting the government’s accelerated and efficient use of AI. Rapid access to Llama aims to boost innovation, governance, public trust, and operational efficiency.
Open source Llama models allow federal teams to maintain complete control over data processing and storage. Agencies can build, deploy, and scale AI applications at lower cost, enhancing public services while delivering value to taxpayers.
Meta is providing the models at no cost, further enabling agencies to develop tailored solutions without relying on proprietary platforms.
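As a rough illustration of what self-hosted use of open-weight models can look like in practice, the sketch below loads a Llama model with the Hugging Face transformers library so that prompts and outputs stay on locally controlled hardware; the model identifier, gated-access approval and hardware are assumptions, and this is not the actual GSA or Meta deployment.

```python
# Illustrative sketch only: running an open-weight Llama model locally so that data
# never leaves the host environment. Model ID and access approval are assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical choice of open-weight model
    device_map="auto",                          # place weights on available local GPUs or CPU
)

prompt = "Summarise the following public notice in plain language: ..."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```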
Collaboration between GSA and Meta ensures federal requirements are met while providing consistent department access. The arrangement enhances the government’s ability to implement AI while promoting transparency, reproducibility, and flexible mission-specific applications.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Over 200 scientists, political leaders and cultural figures have signed a global appeal to set boundaries on AI use. The Global Call for AI Red Lines initiative aims to establish an international agreement on applications that should never be pursued.
Signatories include Nobel laureates, former heads of state, and leading AI researchers such as Geoffrey Hinton, Ian Goodfellow and Yoshua Bengio, as well as OpenAI co-founder Wojciech Zaremba and authors Yuval Noah Harari and Stephen Fry.
Supporters argue that unchecked AI development risks destabilising societies and violating human rights. They say consensus is urgently needed to prohibit applications that threaten democracy, security, or public safety.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI and NVIDIA have announced a strategic partnership to build at least 10 gigawatts of AI data centres powered by millions of NVIDIA GPUs.
The deal, supported by an investment of up to $100 billion from NVIDIA, aims to provide the infrastructure for OpenAI’s next generation of models, with the first phase scheduled for late 2026 on the NVIDIA Vera Rubin platform.
The companies said the collaboration will enable the development of AGI and accelerate AI adoption worldwide. OpenAI will treat NVIDIA as its preferred strategic compute and networking partner, coordinating both sides’ hardware and software roadmaps.
They will also continue working with Microsoft, Oracle, SoftBank and other partners to build advanced AI infrastructure.
OpenAI has grown to more than 700 million weekly users across businesses and developers globally. Executives at both firms described the new partnership as the next leap in AI computing power, one intended to fuel innovation at scale instead of incremental improvements.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new Pew Research Center survey shows Americans are more worried than excited about AI shaping daily life. Half of adults say AI’s rise will harm creative thinking and meaningful relationships, while only small shares see improvements.
Many want greater control over its use, even as most are willing to let it assist with routine tasks.
The survey of over 5,000 US adults found 57% consider AI’s societal risks to be high, with just a quarter rating the benefits as significant. Most respondents also doubt their ability to recognise AI-generated content, although three-quarters believe being able to tell human from machine output is essential.
Americans remain sceptical about AI in personal spheres such as religion and matchmaking, instead preferring its application in heavy data tasks like weather forecasting, fraud detection and medical research.
Younger adults are more aware of AI than older generations, yet they are also more likely to believe it will undermine creativity and human connections.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A leading AI developer has released the third iteration of its Frontier Safety Framework (FSF), aiming to identify and mitigate severe risks from advanced AI models. The update expands risk domains and refines the process for assessing potential threats.
Key changes include the introduction of a Critical Capability Level (CCL) focused on harmful manipulation. The update targets AI models with the potential to systematically influence beliefs and behaviours in high-stakes contexts, ensuring safety measures keep pace with growing model capabilities.
The framework also strengthens protocols for misalignment risks, addressing scenarios in which an AI could resist operators’ attempts to control or shut it down. Safety case reviews are now conducted not only before external launches but also before large-scale internal deployments once models reach critical capability thresholds.
The updated FSF sharpens risk assessments and applies safety and security mitigations in proportion to threat severity. It reflects a commitment to evidence-based AI governance, expert collaboration, and ensuring AI benefits humanity while minimising risks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!