Why AI won’t replace empathy at work

AI is increasingly being used to improve how organisations measure and support employee performance and well-being.

According to Dr Serena Huang, founder of Data with Serena and author of The Inclusion Equation, AI provides insights that go far beyond traditional annual reviews or turnover statistics.

AI tools can detect early signs of burnout, identify high-potential staff, and even flag overly controlling management styles. More importantly, they offer the potential to personalise development pathways based on employee needs and aspirations.

Huang emphasises, however, that ethical use is vital. Transparency and privacy must remain central to ensure AI empowers rather than surveils workers. Far from making human skills obsolete, Huang argues that AI increases their value.

With machines handling routine analysis, people are free to focus on complex challenges and relationship-building—critical skills in sales, leadership, and team dynamics. AI can assist, but it is emotional intelligence and empathy that truly drive results.

To ensure data-driven efforts align with business goals, Huang urges companies to ask better questions. Understanding what challenges matter to stakeholders helps ensure that any AI deployment addresses real-world needs. Regular check-ins and progress reviews help maintain alignment.

Rather than fear AI as a job threat, Huang encourages individuals to embrace it as a tool for growth. Staying curious and continually learning can ensure workers remain relevant in an evolving market.

She also highlights the strategic advantage of prioritising employee well-being. Companies that invest in mental health, work-life balance, and inclusion enjoy higher productivity and retention.

With younger workers placing a premium on wellness and values, businesses that foster a caring culture will attract top talent and stay competitive. Ultimately, Huang sees AI not as a replacement for people, but as a catalyst for more human-centric, data-informed workplaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI suite expands to help teachers plan and students learn

Google has unveiled a major expansion of its Gemini AI tools tailored for classroom use, launching over 30 features to support teachers and students. These updates include personalised AI-powered lesson planning, content generation, and interactive study guides.

Teachers can now create custom AI tutors, known as ‘Gems’, to assist students with specific academic needs using their own teaching materials. Google’s AI reading assistant is also gaining real-time support features through the Read Along tool in Classroom, enhancing literacy development for younger users.

Students and teachers will benefit from wider access to Google Vids, the company’s video creation app, enabling them to create instructional content and complete multimedia assignments.

Additional features aim to monitor student progress, manage AI permissions, improve data security, and streamline classroom content delivery using new Class tools.

By placing AI directly into the hands of educators, Google aims to offer more engaging and responsive learning, while keeping its tools aligned with classroom goals and policies. The rollout continues Google’s bid to take the lead in the evolving AI-driven edtech space.


The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students draft essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt,’ a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.

Participants who became accustomed to using AI found it more challenging to write without it later on. The reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the faculties defining human intelligence—creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.


Taiwan leads in AI defence of democracy

Taiwan has emerged as a global model for using AI to defend democracy, earning recognition for its success in combating digital disinformation.

The island joined a new international coalition led by the International Foundation for Electoral Systems to strengthen election integrity through AI collaboration.

Constantly targeted by foreign actors, Taiwan has developed proactive digital defence systems that serve as blueprints for other democracies.

Its rapid response strategies and tech-forward approach have made it a leader in countering AI-powered propaganda.

While many nations are only beginning to grasp the risks posed by AI to democratic systems, Taiwan has already faced these threats and adapted.

Its approach now shapes global policy discussions around safeguarding elections in the digital era.


AI governance through the lens of magical realism

AI today straddles the line between the extraordinary and the mundane, a duality that evokes the spirit of magical realism—a literary genre where the impossible blends seamlessly with the real. Speaking at the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, Jovan Kurbalija proposed that we might better understand the complexities of AI governance by viewing it through this narrative lens.

Like Gabriel García Márquez’s floating characters or Salman Rushdie’s prophetic protagonists, AI’s remarkable feats—writing novels, generating art, mimicking human conversation—are increasingly accepted without question, despite their inherent strangeness.

Kurbalija argues that AI, much like the supernatural in literature, doesn’t merely entertain; it reveals and shapes profound societal realities. Algorithms quietly influence politics, reshape economies, and even redefine relationships.

Just as magical realism uses the extraordinary to comment on power, identity, and truth, AI forces us to confront new ethical dilemmas: Who owns AI-created content? Can consent be meaningfully given to machines? And does predictive technology amplify societal biases?

The risks of AI—job displacement, misinformation, surveillance—are akin to the symbolic storms of magical realism: always present, always shaping the backdrop. Governance, then, must walk a fine line between stifling innovation and allowing unchecked technological enchantment.

Kurbalija warns against ‘black magic’ policy manipulation cloaked in humanitarian language and urges regulators to focus on real-world impacts while resisting the temptation of speculative fears. Ultimately, AI isn’t science fiction—it’s magical realism in motion.

As we build policies and frameworks to govern it, we must ensure this magic serves humanity, rather than distort our sense of what is real, ethical, and just. In this unfolding story, the challenge is not only technological, but deeply human.


Path forward for global digital cooperation debated at IGF 2025

At the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, policymakers, civil society, and digital stakeholders gathered to chart the future of global internet governance through the WSIS+20 review. With a high-level UN General Assembly meeting scheduled for December, co-facilitators from Kenya and Albania emphasised the need to update the World Summit on the Information Society (WSIS) framework while preserving its original, people-centred vision.

They underscored the importance of inclusive consultations, highlighting a new multistakeholder sounding board and upcoming joint sessions to enhance dialogue between governments and broader communities. The conversation revolved around the evolving digital landscape and how WSIS can adapt to emerging technologies like AI, data governance, and digital public infrastructure.

While some participants favoured WSIS as the primary global framework, others advocated for closer synergy with the Global Digital Compact (GDC), stressing the importance of coordination to avoid institutional duplication. Despite varied views, there was widespread consensus that the existing WSIS action lines, being technology-neutral, can remain relevant by accommodating new innovations.

Speakers from the government, private sector, and civil society reiterated the call to permanently secure the IGF’s mandate, praising its unique ability to foster open, inclusive dialogue without the pressure of binding negotiations. They pointed to IGF’s historical success in boosting internet connectivity and called for more tangible outputs to influence policymaking.

National-level participation, especially from developing countries, women, youth, and marginalised communities, was identified as crucial for meaningful engagement.

The session ended on a hopeful note, with participants expressing a shared commitment to a more inclusive and equitable digital future. As the December deadline looms, the global community faces the task of turning shared principles into concrete action, ensuring digital governance mechanisms remain cooperative, adaptable, and genuinely representative of all voices.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Digital rights under threat: Global Majority communities call for inclusive solutions at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a pivotal session hosted by Oxfam’s RECIPE Project shed light on the escalating digital rights challenges facing communities across the Global Majority. Representatives from Vietnam, Bolivia, Cambodia, Somalia, and Palestine presented sobering findings based on research with over 1,000 respondents across nine countries.

Despite the diversity of regions, speakers echoed similar concerns: digital literacy is dangerously low, access to safe and inclusive online spaces remains unequal, and legal protections for digital rights are often absent or underdeveloped.

The human cost of digital inequality was made clear from Bolivia to Palestine. In Bolivia, over three-quarters of respondents had experienced digital security incidents, and many reported targeted violence linked to their roles as human rights defenders.

In Somalia, where internet penetration is high, only a fraction understands how to protect their personal data. Palestine, meanwhile, faces systematic digital discrimination, marked by unequal infrastructure access and advanced surveillance technologies used against its population, exacerbated by ongoing occupation and political instability.

Yet amidst these challenges, the forum underscored a strong sense of resilience and innovation. Civil society organisations from Cambodia and Bolivia showcased bottom-up approaches, such as peer-led digital security training and feminist digital safety networks, which help communities protect themselves and influence policy.

Vietnam emphasised the need for genuine participation in policymaking, rather than formalistic consultations, as a path to more equitable digital governance. The session concluded with a shared call to action: digital governance must prioritise human rights and meaningful participation from the ground up.

Speakers and audience members highlighted the urgent need for multistakeholder cooperation—spanning civil society, government, and the tech industry—to counter misinformation and protect freedom of expression, especially in the face of expanding surveillance and online harm. As one participant from Zambia noted, digital safety must not come at the expense of digital freedom; the two must evolve together.


Le Chat leads AI privacy ranking report

A new report has revealed that Le Chat from Mistral AI is the most privacy-respecting generative AI, with ChatGPT and Grok close behind. The study by Incogni assessed nine popular services against 11 criteria covering data use, sharing and transparency.

Meta AI came last, flagged for poor privacy practices and extensive data sharing. According to the findings, Gemini and Copilot also performed poorly in protecting user privacy.

Incogni highlighted that several services, including ChatGPT and Grok, allow users to stop their data from being used for training. However, other providers, such as Meta AI, Pi AI and Gemini, offer no clear way to opt out.

The report warned that AI firms often share data with service providers, affiliates, researchers and law enforcement. Clear, readable privacy policies and opt-out tools are key to building trust.


Children’s safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, Ensuring Child Security in the Age of Algorithms, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.

Leanda Barrington-Leach, Executive Director of the Five Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption of children as ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly. At the same time, Grahn described a successful campaign in Sweden to help teens avoid criminal exploitation through cross-sector collaboration.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’


IGF leadership panel explores future of digital governance

As the Internet Governance Forum (IGF) prepares to mark its 20th anniversary, members of the IGF Leadership Panel gathered in Norway to present a strategic vision for strengthening the forum’s institutional role and ensuring greater policy impact.

The session explored proposals to make the IGF a permanent UN institution, improve its output relevance for policymakers, and enhance its role in implementing outcomes from WSIS+20 and the Global Digital Compact.

While the tone remained largely optimistic, Nobel Peace Prize laureate Maria Ressa voiced a more urgent appeal, calling for concrete action in a rapidly deteriorating information ecosystem.

Speakers emphasised the need for a permanent and better-resourced IGF. Vint Cerf, Chair of the Leadership Panel, reflected on the evolution of internet governance, arguing that ‘we must maintain enthusiasm for computing’s positive potential whilst addressing problems’.

He acknowledged growing threats like AI-driven disruption and information pollution, which risk undermining democratic governance and economic fairness online. Maria Fernanda Garza and Lise Fuhr echoed the call, urging that the IGF be integrated into the UN structure with sustainable funding and measurable performance metrics. Fuhr commended Norway’s effort to bring 16 ministers from the Global South to the meeting, framing it as a model for future inclusive engagement.

A significant focus was placed on integrating IGF outcomes with the WSIS+20 and Global Digital Compact processes. Amandeep Singh Gill noted that these two tracks are ‘complementary’ and that existing WSIS architecture should be leveraged to avoid duplication. He emphasised that budget constraints limit the creation of new bodies, making it imperative for the IGF to serve as the core platform for implementation and monitoring.

Garza compared the IGF’s role to a ‘canary in the coal mine’ for digital policy, urging better coordination with National and Regional Initiatives (NRIs) to translate global goals into local impact.

Participants discussed the persistent challenge of translating IGF discussions into actionable outputs. Carol Roach emphasised the need to identify target audiences and tailor outputs using formats such as executive briefs, toolkits, and videos. Lan Xue added, ‘to be policy-relevant, the IGF must evolve from a space of dialogue to a platform of strategic translation’.

He proposed launching policy trackers, aligning outputs with global policy calendars, and appointing liaison officers to bridge the gap between IGF and forums such as the G20, UNGA, and ITU.

Inclusivity emerged as another critical theme. Panellists underscored the importance of engaging underrepresented regions through financial support, capacity-building, and education. Fuhr highlighted the value of internet summer schools and grassroots NRIs, while Gill stressed that digital sovereignty is now a key concern in the Global South. ‘The demand has shifted’, he said, ‘from content consumption to content creation’.

Maria Ressa closed the session with an impassioned call for immediate action. She warned that the current information environment contributes to global conflict and democratic erosion, stating that ‘without facts, no truth, no trust. Without trust, you cannot govern’. Citing recent wars and digital manipulation, she urged the IGF community to move from reflection to implementation. ‘Online violence is real-world violence’, she said. ‘We’ve talked enough. Now is the time to act.’

Despite some differences in vision, the session revealed a strong consensus on key issues: the need for institutional evolution, enhanced funding, better policy translation, and broader inclusion. Bertrand de la Chapelle, however, cautioned against making the IGF a conventional UN body, instead proposing a ‘constitutional moment’ in 2026 to consider more flexible institutional reforms.

The discussion demonstrated that while the IGF remains a trusted forum for inclusive dialogue, its long-term relevance depends on its ability to produce concrete outcomes and adapt to a volatile digital environment. As Vint Cerf reminded participants in closing, ‘this is an opportunity to make this a better environment than it already is and to contribute more to our global digital society’.
