Why AI won’t replace empathy at work

AI is increasingly being used to improve how organisations measure and support employee performance and well-being.

According to Dr Serena Huang, founder of Data with Serena and author of The Inclusion Equation, AI provides insights that go far beyond traditional annual reviews or turnover statistics.

AI tools can detect early signs of burnout, identify high-potential staff, and even flag overly controlling management styles. More importantly, they offer the potential to personalise development pathways based on employee needs and aspirations.

Huang emphasises, however, that ethical use is vital. Transparency and privacy must remain central to ensure AI empowers rather than surveils workers. Far from making human skills obsolete, Huang argues that AI increases their value.

With machines handling routine analysis, people are free to focus on complex challenges and relationship-building—critical skills in sales, leadership, and team dynamics. AI can assist, but it is emotional intelligence and empathy that truly drive results.

To ensure data-driven efforts align with business goals, Huang urges companies to ask better questions. Understanding what challenges matter to stakeholders helps ensure that any AI deployment addresses real-world needs. Regular check-ins and progress reviews help maintain alignment.

Rather than fear AI as a job threat, Huang encourages individuals to embrace it as a tool for growth. Staying curious and continually learning can ensure workers remain relevant in an evolving market.

She also highlights the strategic advantage of prioritising employee well-being. Companies that invest in mental health, work-life balance, and inclusion enjoy higher productivity and retention.

With younger workers placing a premium on wellness and values, businesses that foster a caring culture will attract top talent and stay competitive. Ultimately, Huang sees AI not as a replacement for people, but as a catalyst for more human-centric, data-informed workplaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK urged to prepare for agentic AI in government

Agentic AI, a new generation of AI that goes beyond automation to deliver full task orchestration, could change how government operates. Sharon Moore, CTO Public Sector UK at IBM, argues the UK Government must adopt this technology to drive operational efficiency and better public services.

Departments using AI agents have already recorded significant savings, such as 3,300 hours saved in HR tasks by East and North Hertfordshire NHS Trust and 800 hours monthly by a New Jersey agency. IBM itself has cut development costs by billions, showcasing the potential for large-scale productivity gains.

Agentic systems integrate multiple AI models and tools, solving complex problems with minimal human intervention. Unlike traditional chatbots, these systems handle end-to-end tasks and adapt across use cases, from citizen services to legacy software modernisation.
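
To make that distinction concrete, here is a minimal Python sketch of the orchestration pattern, with a hypothetical planner and tool registry standing in for the AI models and services a real deployment would use. It illustrates the loop only; it is not any vendor’s implementation.

```python
# Minimal sketch of an agentic orchestration loop (all names hypothetical).
# A single-turn chatbot answers one prompt; an agent decomposes a task,
# routes each step to a tool, and carries results forward until done.

from typing import Callable

# Registry of tools the agent may call; a real system would wrap APIs,
# databases, or other AI models behind the same interface.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_record": lambda query: f"record for '{query}'",
    "draft_reply": lambda context: f"draft based on {context}",
}

def plan(task: str) -> list[tuple[str, str]]:
    """Stand-in planner: a production system would use an AI model to
    decompose the task into (tool, input) steps."""
    return [("lookup_record", task), ("draft_reply", task)]

def run_agent(task: str) -> str:
    context = task
    for tool_name, arg in plan(task):
        result = TOOLS[tool_name](arg)      # execute one step
        context = f"{context} | {result}"   # feed the result into the next step
    return context

print(run_agent("citizen housing-benefit query"))
```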

To implement these systems safely, the UK must address risks like data leaks, hallucinations, and compliance failures. Moore emphasises that future governance must shift from overseeing individual models to managing entire AI systems, built on transparency, security, and performance oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Springer machine learning book faces fake citation scandal

A Springer Nature book on machine learning has come under scrutiny after researchers discovered that many of its citations were fabricated or erroneous.

A review of 18 citations in Mastering Machine Learning: From Basics to Advanced revealed that two-thirds either referenced nonexistent papers or misattributed authorship and publication sources.

Several academics whose names were included in the book confirmed they did not write the cited material, while others noted inaccuracies in where their actual work was supposedly published. One researcher was alerted by Google Scholar to multiple fake citations under his name.
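
Part of such a review can be automated. The sketch below queries the public Crossref API (a real endpoint that indexes scholarly works) to check whether a cited title resolves to an indexed record; the similarity threshold is an illustrative simplification, and a failed match should only trigger manual follow-up, not an automatic verdict.

```python
# Hedged sketch: check whether a cited title matches anything indexed by
# Crossref. The API endpoint is real; the fuzzy-match threshold is a
# simplification, and a weak match only warrants manual review.

import requests
from difflib import SequenceMatcher

def citation_exists(cited_title: str, threshold: float = 0.9) -> bool:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 1},
        timeout=10,
    )
    items = resp.json()["message"]["items"]
    if not items or not items[0].get("title"):
        return False
    found = items[0]["title"][0]
    similarity = SequenceMatcher(
        None, cited_title.lower(), found.lower()
    ).ratio()
    return similarity >= threshold

print(citation_exists("Attention Is All You Need"))  # a real, indexed paper
```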

Govindakumar Madhavan, the author, has not confirmed whether AI tools were used in producing the content, though his book discusses ethical concerns around AI-generated text.

Springer Nature has acknowledged the issue and is investigating whether the book breached its AI use policies, which require authors to declare AI involvement beyond basic editing.

The incident has reignited concerns about publishers’ quality control, with critics pointing to the increasing misuse of large language models in academic texts. As AI tools become more advanced, ensuring the integrity of published research remains a growing challenge for both authors and editors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Student builds AI app to help farmers tackle crop issues

A student is developing an AI-powered app designed to help farmers detect and address crop problems. Soj Gamayon, a communications technology management student at Ateneo de Manila University, was inspired by his family’s farming struggles and his experiences abroad to build AgriConnect PH.

The app uses smart sensors to monitor conditions such as water levels, moisture, and pests, then sends the data to the cloud where it is analysed by AI. Farmers receive real-time alerts with a colour-coded system indicating the severity of risks, helping them respond before crops are damaged.
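
The colour-coded alerting described above can be pictured with a short sketch; the field names and thresholds below are hypothetical stand-ins, not AgriConnect PH’s actual rules.

```python
# Illustrative sketch of colour-coded severity alerts (thresholds and
# field names are hypothetical): each sensor reading is mapped to a
# green/amber/red risk level.

READING_LIMITS = {
    # field: (warning threshold, critical threshold)
    "soil_moisture_pct": (30, 15),   # readings below these values are risky
    "water_level_cm": (10, 4),
}

def classify(field: str, value: float) -> str:
    warn, critical = READING_LIMITS[field]
    if value <= critical:
        return "red"      # urgent: crops at immediate risk
    if value <= warn:
        return "amber"    # early warning: act soon
    return "green"        # normal range

for field, value in {"soil_moisture_pct": 12, "water_level_cm": 11}.items():
    print(field, "->", classify(field, value))
```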

Gamayon aims to move farmers from reactive responses to proactive management. With updates available at least twice a day and instant alerts for urgent threats, the system offers timely intervention to reduce losses.

Currently supporting cereal crops like rice and corn, the app is set to expand to vegetables and livestock. While the technology is still in development, Gamayon believes AI can revolutionise agriculture and provide Filipino farmers with better tools for resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI suite expands to help teachers plan and students learn

Google has unveiled a major expansion of its Gemini AI tools tailored for classroom use, launching over 30 features to support teachers and students. These updates include personalised AI-powered lesson planning, content generation, and interactive study guides.

Teachers can now create custom AI tutors, known as ‘Gems’, to assist students with specific academic needs using their own teaching materials. Google’s AI reading assistant is also gaining real-time support features through the Read Along tool in Classroom, enhancing literacy development for younger users.

Students and teachers will benefit from wider access to Google Vids, the company’s video creation app, enabling them to create instructional content and complete multimedia assignments.

Additional features aim to monitor student progress, manage AI permissions, improve data security, and streamline classroom content delivery using new Class tools.

By placing AI directly into the hands of educators, Google aims to offer more engaging and responsive learning, while keeping its tools aligned with classroom goals and policies. The rollout continues Google’s bid to take the lead in the evolving AI-driven edtech space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students with essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt’, a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.
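
For readers curious how alpha and beta activity is typically quantified, a standard approach (not necessarily MIT’s exact pipeline) is to estimate the signal’s power spectral density and integrate it over each band, as in this minimal sketch with synthetic data:

```python
# Standard band-power estimation for EEG (illustrative, with synthetic
# data): estimate the power spectral density and integrate it over the
# alpha (8-12 Hz) and beta (13-30 Hz) bands.

import numpy as np
from scipy.signal import welch

def band_power(signal: np.ndarray, fs: int, lo: float, hi: float) -> float:
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 2-second windows
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band

# Synthetic stand-in for one EEG channel: a 10 Hz (alpha) rhythm in noise.
fs = 256  # samples per second, typical for consumer EEG
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

print("alpha power:", band_power(eeg, fs, 8, 12))
print("beta power:", band_power(eeg, fs, 13, 30))
```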

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.

Participants who became accustomed to using AI found it more challenging to write without it later on. This reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the very faculties that define human intelligence: creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta launches AI superintelligence lab to compete with rivals

Meta has launched a new division called Meta Superintelligence Labs to accelerate its AI ambitions and close the gap with rivals such as OpenAI and Google.

The lab will be led by Alexandr Wang, former CEO of Scale AI, following Meta’s $14.3 billion investment in the data-labelling company. Former GitHub CEO Nat Friedman and SSI co-founder Daniel Gross will also hold key roles in the initiative.

Mark Zuckerberg announced the new effort in an internal memo, stating that Meta is now focused on developing superintelligent AI systems capable of matching or even outperforming humans. He described this as the beginning of a new era and reaffirmed Meta’s commitment to leading the field.

The lab’s mission is to push AI to a point where it can solve complex tasks more effectively than current models.

To meet these goals, Meta has been aggressively recruiting AI researchers from top competitors. Reports suggest that OpenAI employees have been offered signing bonuses as high as $100 million to join Meta.

New hires include talent from Anthropic and Google, although Meta has reportedly avoided deeper recruitment from Anthropic due to concerns over culture fit.

Meta’s move comes in response to the lukewarm reception of its Llama 4 model and mounting pressure from more advanced AI products released by competitors.

The company hopes that by combining high-level leadership, fresh talent and massive investment, its new lab can deliver breakthrough results and reposition Meta as a serious contender in the race for artificial general intelligence (AGI).

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s Facebook uses phone photos for AI if users allow it

Meta has introduced a new feature that allows Facebook to access and analyse users’ photos stored on their phones, provided they give explicit permission.

The move is part of a broader push to improve the company’s AI tools, especially after the underwhelming reception of its Llama 4 model. Users who opt in agree to Meta’s AI Terms of Service, which grant the platform the right to retain and use personal media for content suggestions.

The new feature, currently being tested in the US and Canada, is designed to offer Facebook users creative ideas for Stories by processing their photos and videos through cloud infrastructure.

When enabled, users may receive suggestions such as collages or travel highlights based on when and where images were captured, as well as who or what appears in them. However, participation is strictly optional and can be turned off at any time.
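
The grouping behind such suggestions can be illustrated simply. The sketch below is not Meta’s implementation; it just splits a photo timeline into candidate ‘events’ wherever there is a long gap between shots, using hypothetical timestamp and place metadata.

```python
# Purely illustrative (not Meta's implementation): a 'travel highlights'
# suggestion needs photos grouped into events, which can be as simple as
# splitting the timeline wherever there is a long gap between shots.

from datetime import datetime, timedelta

photos = [  # (timestamp, place) pairs a camera roll's metadata might yield
    (datetime(2025, 6, 1, 9, 0), "Lisbon"),
    (datetime(2025, 6, 1, 11, 30), "Lisbon"),
    (datetime(2025, 6, 14, 15, 0), "Porto"),
    (datetime(2025, 6, 14, 16, 45), "Porto"),
]

def group_into_events(photos, max_gap=timedelta(hours=12)):
    photos = sorted(photos)
    events, current = [], [photos[0]]
    for shot in photos[1:]:
        if shot[0] - current[-1][0] > max_gap:  # long silence = new event
            events.append(current)
            current = []
        current.append(shot)
    events.append(current)
    return events

for event in group_into_events(photos):
    places = {place for _, place in event}
    print(f"{event[0][0]:%d %b}: {len(event)} photos in {', '.join(places)}")
```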

Facebook clarifies that the media analysed under the feature is not used to train AI models in the current test. Still, the system does upload selected media to Meta’s servers on an ongoing basis, raising privacy concerns.

The option to activate these suggestions can be found in the Facebook app’s settings, where users are asked whether they want camera roll data to inform sharing ideas.

Meta has been actively promoting its AI ambitions, with CEO Mark Zuckerberg pushing for the development of ‘superintelligence’. The company recently launched Meta Superintelligence Labs to lead these efforts.

Despite facing stiff competition from OpenAI, DeepSeek and Google, Meta appears determined to deepen its use of personal data to boost its AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Africa risks being left behind in global AI development

Africa is falling far behind in the global race to develop AI, according to a new report by Oxford University.

The study mapped the location of advanced AI infrastructure and revealed that only 32 countries, roughly 16% of the world’s nations, currently operate major AI data centres.

These facilities are essential for training and developing modern AI systems. In contrast, most African nations remain dependent on foreign technology providers, limiting their control over digital development.

Rather than building local capacity, Africa has essentially been treated as a market for AI products developed elsewhere. Regional leaders have often focused on distributing global tech tools instead of investing in infrastructure for homegrown innovation.

One notable exception is Strive Masiyiwa’s Cassava Technologies, which recently partnered with Nvidia to launch the continent’s first AI factory, located in South Africa. The project aims to expand across Egypt, Kenya, Morocco and Nigeria.

Unlike typical data centres, an AI factory is explicitly built to support the full AI lifecycle, from raw data to trained models. Nvidia’s GPUs will power the facility, enabling ‘AI as a service’ to be used by governments, businesses, and researchers across the continent.

Cassava’s model offers a more sustainable vision, where African data is used to create local solutions, instead of exporting value abroad.

Experts argue that Africa needs more such initiatives to reduce dependence and participate meaningfully in the AI economy. An AI Fund supported by leading African nations could help finance new factories and infrastructure.

With time running out, leaders must move beyond surface-level engagement and begin coordinated action to address the continent’s growing digital divide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenInfra Summit Europe brings focus on AI and VMware alternatives

The OpenInfra Foundation and its global community will gather at the OpenInfra Summit Europe from 17 to 19 October in Paris-Saclay to explore how open source is reshaping digital infrastructure.

It will be the first summit since the Foundation joined the Linux Foundation, uniting major projects such as Linux, Kubernetes and OpenStack under the OpenInfra Blueprint. The agenda includes a strong focus on digital sovereignty, VMware migration strategies and infrastructure support for AI workloads.

Taking place at École Polytechnique in Palaiseau, the summit arrives at a time when open source software is powering nearly $9 trillion of economic activity.

With over 38% of the global OpenInfra community based in Europe, the event will focus on regional priorities like data control, security, and compliance with new EU regulations such as the Cyber Resilience Act.

Developers, IT leaders and business strategists will explore how projects like Kata Containers, Ceph and RISC-V integrate to support cost-effective, scalable infrastructure.

The summit will also mark OpenStack’s 15th anniversary, with use cases shared by the UN, BMW and nonprofit Restos du Coeur.

Attendees will witness a live VMware migration demo featuring companies like Canonical and Rackspace, highlighting real-world approaches to transitioning away from proprietary platforms. Sessions will dive into topics like CI pipelines, AI-powered infrastructure, and cloud-native operations.

As a community-led event, OpenInfra Summit Europe remains focused on collaboration.

With sponsors including Canonical, Mirantis, Red Hat and others, the gathering offers developers and organisations an opportunity to share best practices, shape open source development, and strengthen the global infrastructure ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!