Suzanne Somers lives on in an AI twin

Alan Hamel says he’s moving ahead with a ‘Suzanne AI Twin’ to honor Suzanne Somers’ legacy. The project mirrors plans the couple discussed for decades. He shared an early demo at a recent conference.

Hamel describes the prototype as startlingly lifelike, saying that, viewed side by side, he cannot tell the real Suzanne from the AI twin. The goal is to preserve Suzanne’s voice, look, and mannerisms.

Planned uses include archival storytelling, fan Q&As, and curated appearances. The team is training the model on interviews, performances, and writings. Rights and guardrails are being built in.

Supporters see a new form of remembrance. Critics warn of deepfake risks and consent boundaries. Hamel says fidelity and respect are non-negotiable.

Next steps include wider testing and a controlled public debut. Proceeds could fund causes Suzanne championed. ‘It felt like talking to her,’ Hamel says.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches ChatGPT Atlas web browser

OpenAI has launched ChatGPT Atlas, a web browser built around ChatGPT to help users work and explore online more efficiently. The browser lets ChatGPT operate directly on webpages, using past conversations and browsing context to assist with tasks without copying and pasting.

Early testers say it streamlines research, study, and productivity by providing instant AI support alongside the content they are viewing.

Atlas introduces browser memories, letting ChatGPT recall context from visited sites to improve responses and automate tasks. Users stay in control, with the ability to view, archive, or delete memories. 

Agent mode allows ChatGPT to perform tasks such as researching, summarising, or planning events while browsing. Safety is a priority, with safeguards to prevent unauthorised actions and options to operate in logged-out mode.

The browser is available worldwide on macOS for Free, Plus, Pro, and Go users, with Windows, iOS, and Android support coming soon. OpenAI plans to add multi-profile support, better developer tools, and improved app discoverability, advancing an agent-driven web experience with seamless AI integration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT unveils SEAL, a self-improving AI model

Researchers at the Massachusetts Institute of Technology (MIT) have unveiled SEAL, a new AI model capable of improving its own performance without human intervention. The framework allows the model to generate its own training data and fine-tuning instructions, enabling it to learn new tasks autonomously.

The model employs reinforcement learning, a method in which it tests different strategies, evaluates their effectiveness, and adjusts its internal processes accordingly. This allows SEAL to refine its capabilities and increase accuracy over time.
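The generate-evaluate-adapt cycle described above can be illustrated with a deliberately tiny sketch (this is not MIT's actual SEAL code, just a toy loop under assumed names): the "model" proposes its own edits, scores each one against data it holds, and keeps only the edits that raise the reward.

```python
import random

def evaluate(params, data):
    """Toy reward: how well the linear 'model' y = w*x fits the data
    (negative squared error, so higher is better)."""
    return -sum((params["w"] * x - y) ** 2 for x, y in data)

def self_improve(data, steps=200, seed=0):
    """Illustrative self-improvement loop: propose an edit, evaluate it,
    adopt it only if the reward improves -- the reinforcement-style
    cycle described in the article, in miniature."""
    rng = random.Random(seed)
    params = {"w": 0.0}
    best = evaluate(params, data)
    for _ in range(steps):
        # The 'model' generates its own candidate update.
        candidate = {"w": params["w"] + rng.gauss(0, 0.5)}
        reward = evaluate(candidate, data)
        if reward > best:  # keep edits that raise the reward
            params, best = candidate, reward
    return params

# Synthetic training data for the target rule y = 3x.
data = [(x, 3 * x) for x in range(1, 6)]
learned = self_improve(data)
print(learned["w"])  # converges near 3.0
```

The real framework operates on training data and fine-tuning instructions rather than a single weight, but the control flow, self-generated candidates filtered by a reward signal, is the same idea.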

In trials, SEAL outperformed GPT-4.1 by learning from the data it generated independently. The results demonstrate the potential of self-improving AI systems to reduce reliance on manually curated datasets and human-led fine-tuning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scouts can now earn AI and cybersecurity badges

In the United States, Scouting America, formerly known as the Boy Scouts, has introduced two new merit badges in AI and cybersecurity. The badges give scouts the opportunity to explore modern technology and understand its applications, while the organisation continues to adapt its programs to a digital era. Scouting America has around a million members and offers hundreds of merit badges across a wide range of skills.

The AI badge challenges scouts to examine AI’s effects on daily life, study deepfakes, and complete projects that demonstrate AI concepts. The cybersecurity badge teaches practical tools to stay safe online, emphasises ethical behaviour, and introduces scouts to a career field with thousands of unfilled positions.

Earlier this year, Scouting America launched Scoutly, an AI-powered chatbot designed to answer questions about the organisation and its merit badges. The initiative is part of Scouting America’s broader effort to modernise its programs and prepare young people for opportunities in an increasingly digital world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kenya leads the way in AI skilling across Africa

Kenya’s AI Skilling Initiative (AINSI) is offering valuable insights for African countries aiming to build digital capabilities. With AI projected to create 230 million digital jobs across Africa by 2030, coordinated investment in skills development is vital to unlock this potential.

Despite growing ambition, fragmented efforts and uneven progress continue to limit impact.

Government leadership plays a central role in building national AI capacity. Kenya’s Regional Centre of Competence for Digital and AI Skilling has trained thousands of public servants through structured bootcamps and online programmes.

Standardising credentials and aligning training with industry needs are crucial to ensure skilling efforts translate into meaningful employment.

Industry and the informal economy are key to scaling transformation. Partnerships with KEPSA and MESH are training entrepreneurs and SMEs in AI and cybersecurity while tackling affordability, connectivity, and data access challenges.

Education initiatives, from K–12 to universities and technical institutions, are embedding AI training into curricula to prepare future generations.

Civil society collaboration further broadens access, with community-based programmes reaching gig workers and underserved groups. Kenya’s approach shows how inclusive, cross-sector frameworks can scale digital skills and support Africa’s AI-driven growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands and China in talks to resolve Nexperia dispute

The Dutch Economy Minister has spoken with his Chinese counterpart to ease tensions following the Netherlands’ recent seizure of Nexperia, a major Dutch semiconductor firm.

China, where most of Nexperia’s chips are produced and sold, reacted by blocking exports, raising concern among European carmakers that rely on the firm’s components.

Vincent Karremans said he had discussed ‘further steps towards reaching a solution’ with Chinese Minister of Commerce Wang Wentao.

Both sides emphasised the importance of finding an outcome that benefits Nexperia, as well as the Chinese and European economies.

Meanwhile, Nexperia’s China division has begun asserting its independence, telling employees they may reject ‘external instructions’.

The firm remains a subsidiary of Shanghai-listed Wingtech, which has faced growing scrutiny from European regulators over national security and strategic technology supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Additionally, Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is the world ready for AI to rule justice?

AI is creeping into almost every corner of our lives, and it seems the justice system’s turn has finally come. As technology reshapes the way we work, communicate, and make decisions, its potential to transform legal processes is becoming increasingly difficult to ignore. The justice system, however, is one of the most ethically sensitive and morally demanding fields in existence. 

For AI to play a meaningful role in it, it must go beyond algorithms and data. It needs to understand the principles of fairness, context, and morality that guide every legal judgement. And perhaps more challengingly, it must do so within a system that has long been deeply traditional and conservative, one that values precedent and human reasoning above all else. Yet, from courts to prosecutors to lawyers, AI promises speed, efficiency, and smarter decision-making, but can it ever truly replace the human touch?

AI is reshaping the justice system with unprecedented efficiency, but true progress depends on whether humanity is ready to balance innovation with responsibility and ethical judgement.

AI in courts: Smarter administration, not robot judges… yet

Courts across the world are drowning in paperwork, delays, and endless procedural tasks, challenges that are well within AI’s capacity to solve efficiently. From classifying cases and managing documentation to identifying urgent filings and analysing precedents, AI systems are beginning to serve as silent assistants within courtrooms. 

The German judiciary, for example, has already shown what this looks like in practice. AI tools such as OLGA and Frauke have helped categorise thousands of cases, extract key facts, and even draft standardised judgments in air passenger rights claims, cutting processing times by more than half. For a system long burdened by backlogs, such efficiency is revolutionary.

Still, the conversation goes far beyond convenience. Justice is not a production line; it is built on fairness, empathy, and the capacity to interpret human intent. Even the most advanced algorithm cannot grasp the nuance of remorse, the context of equality, or the moral complexity behind each ruling. The question is whether societies are ready to trust machine intelligence to participate in moral reasoning.

The final, almost utopian scenario would be a world where AI itself serves as a judge: unbiased, tireless, and immune to human error or emotion. Yet even as this vision fascinates technologists, legal experts across Europe, including the EU Commission and the OECD, stress that such a future must remain purely theoretical. Human judges, they argue, must always stay at the heart of justice: AI may assist in the process, but it must never be the one to decide it. The idea is not to replace judges but to help them navigate the overwhelming sea of information that modern justice generates.

Courts may soon become smarter, but true justice still depends on something no algorithm can replicate: the human conscience. 

AI for prosecutors: Investigating with superhuman efficiency

Prosecutors today are also sifting through thousands of documents, recordings, and messages for every major case. AI can act as a powerful investigative partner, highlighting connections, spotting anomalies, and bringing clarity to complex cases that would take humans weeks to unravel. 

Especially in criminal law, cases can involve terabytes of documents, evidence that humans can hardly process within tight legal deadlines or between hearings, yet must be reviewed thoroughly. AI tools can sift through this massive data, flag inconsistencies, detect hidden links between suspects, and reveal patterns that might otherwise remain buried. Subtle details that might escape the human eye can be detected by AI, making it an invaluable ally in uncovering the full picture of a case. By handling these tasks at superhuman speed, AI could also help accelerate the notoriously slow pace of legal proceedings, giving prosecutors more time to focus on strategy and courtroom preparation. 
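The cross-referencing step, surfacing links between suspects across large document sets, can be sketched in a few lines (a hypothetical toy, not any real forensic tool; the names and documents are invented): count how often each pair of known entities appears in the same document, so recurring connections surface for a human reviewer.

```python
from collections import Counter
from itertools import combinations

def find_links(documents, entities):
    """Toy cross-referencing pass: for each document, note which known
    entities it mentions, then count every co-occurring pair. Pairs
    that recur across documents suggest links worth a closer look."""
    pairs = Counter()
    for doc in documents:
        present = [e for e in entities if e.lower() in doc.lower()]
        for a, b in combinations(sorted(present), 2):
            pairs[(a, b)] += 1
    return pairs.most_common()

docs = [
    "Invoice approved by Alice and forwarded to Carol.",
    "Alice met Carol at the warehouse; Bob was not present.",
    "Bob wired funds on his own account.",
]
print(find_links(docs, ["Alice", "Bob", "Carol"]))
# ('Alice', 'Carol') surfaces as the strongest link
```

Production systems work on terabytes with entity extraction and semantic search rather than substring matches, but the principle, aggregating weak signals into reviewable leads, is the same.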

More advanced systems are already being tested in Europe and the US, capable of generating detailed case summaries and predicting which evidence is most likely to hold up in court. Some experimental tools can even evaluate witness credibility based on linguistic cues and inconsistencies in testimony. In this sense, AI becomes a strategic partner, guiding prosecutors toward stronger, more coherent arguments. 

AI for lawyers: Turning routine into opportunity

AI’s potential may be greatest in the work of lawyers, where transforming information into insight and strategy is the core of the profession. AI can take over repetitive tasks such as reviewing contracts, drafting documents, and scanning case files, freeing lawyers to focus on the work AI cannot replace: strategic thinking, creative problem-solving, and personalised client support.

AI can be incredibly useful for analysing publicly available cases, helping lawyers see how similar situations have been handled, identify potential legal opportunities, and craft stronger, more informed arguments. By recognising patterns across multiple cases, it can suggest creative questions for witnesses and suspects, highlight gaps in the evidence, and even propose potential defence strategies. 

AI also transforms client communication. Chatbots and virtual assistants can manage routine queries, schedule meetings, and provide concise updates, giving lawyers more time to understand clients’ needs and build stronger relationships. By handling the mundane, AI allows lawyers to spend their energy on reasoning, negotiation, and advocacy.

Balancing promise with responsibility

AI is transforming the way courts, prosecutors, and lawyers operate, but its adoption is far from straightforward. While it can make work significantly easier, the technology also carries risks that legal professionals cannot ignore. Historical bias in data can shape AI outputs, potentially reinforcing unfair patterns if humans fail to oversee its use. Similarly, sensitive client information must be protected at all costs, making data privacy a non-negotiable responsibility. 

Training and education are therefore crucial. Legal professionals must understand not only what AI can do but also its limits: how to interpret its suggestions, check for hidden biases, and decide when human judgement must prevail. Without this understanding, AI risks being a tool that misleads rather than empowers.

The promise of AI lies in its ability to free humans from repetitive work, allowing professionals to focus on higher-value tasks. But its power is conditional: efficiency and insight mean little without the ethical compass of the human professionals guiding it.

Ultimately, the justice system is more than a process. It is about fairness, empathy, and moral reasoning. AI can assist, streamline, and illuminate, but the responsibility for decisions, for justice itself, remains squarely with humans. In the end, the true measure of AI’s success in law will be how it enhances human judgement, not how it replaces it.

So, is the world ready for AI to rule justice? The answer remains clear. While AI can transform how justice is delivered, the human mind, heart, and ethical responsibility must remain at the centre. AI may guide the way, but it cannot and should not hold the gavel.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026 after Meta’s new rules banning general-purpose AI chatbots on the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp doesn’t support chat exports. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!