AI-powered OSIA aims to boost student success rates in Cameroon

In Cameroon, where career guidance often takes a back seat, a new AI platform is helping students plan their futures. Developed by mathematician and AI researcher Frédéric Ngaba, OSIA offers personalised academic and career recommendations.

The platform provides a virtual tutor trained on Cameroon’s curricula, offering 400 exam-style tests and psychometric assessments. Students can input grades and aspirations, and the system builds tailored academic profiles to highlight strengths and potential career paths.

OSIA already has 13,500 subscribers across 23 schools, with plans to expand tenfold. Subscriptions cost 3,000 CFA francs for locals and €10 for students abroad, making it an affordable solution for many families.

Teachers and guidance counsellors see the tool as a valuable complement, though they stress it cannot replace human interaction or emotional support. Guidance professionals insist that social context and follow-up remain key to students’ development.

The Secretariat for Secular Private Education of Cameroon has authorized OSIA to operate. Officials expect its benefits to scale nationwide as the government considers a national AI strategy to modernise education and improve success rates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Go launches in Indonesia with $4.50 monthly plan

OpenAI has launched its low-cost ChatGPT Go subscription in Indonesia, pricing it at 75,000 rupiah ($4.50) per month. The new plan offers ten times the messaging capacity, image generation tools, and double the memory of the free version.

The rollout follows last month’s successful launch in India, where ChatGPT subscriptions more than doubled. India has since become OpenAI’s largest market, accounting for around 13.5% of global monthly active users. The US remains second.

Nick Turley, OpenAI Vice President and head of ChatGPT, said Indonesia is already one of the platform’s top five markets by weekly activity. The new tier is aimed at expanding reach in populous, price-sensitive regions while ensuring broader access to AI services.

OpenAI is also strengthening its financial base as it pushes into new markets. On Monday, the company secured a $100 billion investment commitment from NVIDIA, joining Microsoft and SoftBank among its most prominent backers. The funding comes amid intensifying competition in the AI industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Xueba 01 becomes first robot doctoral student in China

Standing 1.75 metres tall and weighing 32 kilograms, Xueba 01 has become the first robot doctoral student at the Shanghai Theatre Academy. Over the next four years, it will study digital performance design, focusing on traditional Chinese opera movements and techniques.

The programme, launched in partnership with the University of Shanghai for Science and Technology, combines technical and artistic training. USST provides technical guidance, while STA develops the robot’s artistic performance, focusing on interaction, expression, and cognitive growth.

Xueba 01 features an advanced tendon-based bionic structure, human-like facial technology, and the ability to perform over 100 lifelike expressions. Based on audience feedback, it can adjust its height and appearance, perform for extended periods, and adapt its performance in real time.

Motion capture technology helps it learn from professional performers to refine movements and gestures.

STA faculty highlight the robot’s role in exploring the intersection of art and technology. The initiative aims to integrate AI with traditional Chinese arts, preserve cultural heritage, and inspire contemporary artists to combine technological literacy with humanistic and interdisciplinary skills.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI government minister delivers first speech in Albanian parliament

Albania has made history by introducing the world’s first AI government minister, named Diella, who gave her inaugural address to parliament this week. Appearing in a video as a woman in traditional Albanian dress, Diella defended her appointment by stressing she was ‘not here to replace people, but to help them.’

She also dismissed accusations of being ‘unconstitutional,’ saying the real threat to the constitution comes from ‘inhumane decisions of those in power.’ Prime Minister Edi Rama announced that the AI minister will oversee all public tenders, promising full transparency and a corruption-free process.

The move comes as Albania struggles with corruption scandals, including the detention of Tirana’s mayor on charges of money laundering and abuse of contracts. Albania currently ranks 80th out of 180 countries on Transparency International’s corruption index.

The opposition, however, fiercely rejected the initiative. Former prime minister and Democratic Party leader Sali Berisha called the project a publicity stunt, warning that Diella cannot curb corruption and that it is unconstitutional. The opposition has vowed to challenge the appointment in the Constitutional Court after boycotting the parliamentary vote.

Despite the controversy, the government insists the AI minister reflects its commitment to reform and EU integration. Rama has set an ambitious goal of leading Albania, a nation of 2.8 million, into the European Union by 2030, with the fight against corruption at the heart of that mission.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Research shows AI complements, not replaces, human work

AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.

Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.

Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.

Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.

The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5-powered ChatGPT Edu comes to Oxford staff and students

The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.

ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.

Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.

The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.

A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google boosts AI and connectivity in Africa

Google has announced new investments to expand connectivity, AI access and skills training across Africa, aiming to accelerate youth-led innovation.

The company has already invested over $1 billion in digital infrastructure, including subsea cable projects such as Equiano and Umoja, enabling 100 million people to come online for the first time. Four new regional cable hubs are being established to boost connectivity and resilience further.

Alongside infrastructure, Google will provide college students in eight African countries with a free one-year subscription to Google AI Pro. The tools, including Gemini 2.5 Pro and Guided Learning, are designed to support research, coding, and problem-solving.

By 2030, Google says it intends to reach 500 million Africans with AI-powered innovations tackling issues such as crop resilience, flood forecasting and access to education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

JFTC study and MSCA shape Japan’s AI oversight strategy

Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.

The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.

The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.

The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.

With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI explains approach to privacy, freedom, and teen safety

OpenAI has outlined how it balances privacy, freedom, and teen safety in its AI tools. The company said AI conversations often involve personal information and deserve protection like privileged talks with doctors or lawyers.

Security features are being developed to keep data private, though critical risks such as threats to life or societal-scale harm may trigger human review.

The company is also focused on user freedom. Adults are allowed greater flexibility in interacting with AI, within safety boundaries. For instance, the model can engage in creative or sensitive content requests, while avoiding guidance that could cause real-world harm.

OpenAI aims to treat adults as adults, providing broader freedoms as long as safety is maintained. Teen safety is prioritised over privacy and freedom. Users under 18 are identified via an age-prediction system or, in some cases, verified by ID.

The AI will avoid flirtatious talk or discussions of self-harm, and in cases of imminent risk, parents or authorities may be contacted. Parental controls and age-specific rules are being developed to protect minors while ensuring safe use of the platform.

OpenAI acknowledged that these principles sometimes conflict and not everyone will agree with the approach. The company stressed transparency in its decision-making and said it consulted experts to establish policies that balance safety, freedom, and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!