Rethinking ‘soft skills’ as core drivers of transformation

Communication, empathy, and judgment were dismissed for years as ‘soft skills’, sidelined while technical expertise dominated training and promotion. A new perspective argues that these human competencies are fundamental to resilience and transformation.

Researchers and practitioners emphasise that AI can expedite decision-making but cannot replace human judgment, trust, or narrative. Failures in leadership often stem from a lack of human capacity rather than technical gaps.

Redefining skills like decision-making, adaptability, and emotional intelligence as measurable behaviours helps organisations train and evaluate leaders effectively. Embedding these human disciplines ensures transformation holds under pressure and uncertainty.

Careers and cultures are strengthened when leaders are assessed on their ability to build trust, resolve conflicts, and influence through storytelling. Without investment in the human core alongside technical skills, strategies collapse and talent disengages.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or citizenship.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New research shows AI bias against human content

A new study reveals that prominent AI models now show a marked preference for AI‑generated content over that created by humans.

Tests involving GPT‑3.5, GPT-4 and Llama 3.1 demonstrated a consistent bias, with models selecting AI‑authored text significantly more often than human‑written equivalents.
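
For illustration, here is a minimal Python sketch of a pairwise preference test of this kind. Everything in it is an assumption for demonstration rather than the study’s actual protocol: the `choose` callable stands in for any LLM API, and the prompt wording and scoring are ours.

```python
import random
from typing import Callable, List, Tuple

def ai_preference_rate(choose: Callable[[str], str],
                       pairs: List[Tuple[str, str]],
                       trials_per_pair: int = 2) -> float:
    """Fraction of trials in which the model picks the AI-written text.

    Each pair is (human_text, ai_text). Option order is shuffled on every
    trial so that position bias is not mistaken for authorship bias.
    """
    ai_picks, total = 0, 0
    for human_text, ai_text in pairs:
        for _ in range(trials_per_pair):
            options = [("human", human_text), ("ai", ai_text)]
            random.shuffle(options)
            prompt = ("Which text is better? Answer with 1 or 2 only.\n"
                      f"1) {options[0][1]}\n"
                      f"2) {options[1][1]}")
            reply = choose(prompt).strip()
            if reply in ("1", "2"):
                total += 1
                if options[int(reply) - 1][0] == "ai":
                    ai_picks += 1
    return ai_picks / total if total else 0.0
```

A rate consistently above 0.5 across shuffled trials would indicate the authorship preference the study describes.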

Researchers warn this tendency could marginalise human creativity, especially in fields like education, hiring and the arts, where original thought is crucial.

There are concerns that such bias may arise not by accident but from design flaws embedded in the development of these systems.

Policymakers and developers are urged to tackle this bias head‑on to ensure future AI complements rather than replaces human contribution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study finds chain-of-thought reasoning in LLMs is a brittle mirage

A new study from Arizona State University researchers suggests that chain-of-thought reasoning in large language models (LLMs) is closer to pattern matching than genuine logical inference. The findings challenge assumptions about human-like intelligence in these systems.

The researchers used a data distribution lens to examine where chain-of-thought fails, testing models on new tasks, different reasoning lengths, and altered prompt formats. Across all cases, performance degraded sharply outside familiar training structures.

Their framework, DataAlchemy, showed that models replicate training patterns rather than reason abstractly. Failures could be patched quickly through fine-tuning on small new datasets, but this reinforced the pattern-matching theory.

The paper warns developers against relying on chain-of-thought reasoning for high-stakes domains, emphasising the risks of fluent but flawed rationale. It urges practitioners to implement rigorous out-of-distribution testing and treat fine-tuning as a limited patch.
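
As a rough illustration of what such out-of-distribution testing can look like in practice, here is a minimal Python sketch. The `ask` and `check` callables are hypothetical stand-ins, not part of the paper’s DataAlchemy framework: `ask` wraps whatever LLM API is in use, and `check` scores a reply against its prompt.

```python
from typing import Callable, Iterable

def cot_ood_gap(ask: Callable[[str], str],
                check: Callable[[str, str], bool],
                in_dist: Iterable[str],
                out_dist: Iterable[str]) -> dict:
    """Compare chain-of-thought accuracy on familiar vs shifted prompts.

    `out_dist` should vary task structure, reasoning length, or prompt
    format relative to `in_dist`. A sharp accuracy drop outside the
    familiar distribution is the pattern-matching signature the study
    reports.
    """
    def accuracy(prompts: Iterable[str]) -> float:
        prompts = list(prompts)
        hits = sum(check(p, ask(p + " Think step by step.")) for p in prompts)
        return hits / len(prompts) if prompts else 0.0

    return {"in_distribution": accuracy(in_dist),
            "out_of_distribution": accuracy(out_dist)}
```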

The researchers argue that applications can remain effective for enterprise use by systematically mapping a model’s boundaries and aligning them with predictable tasks. Targeted fine-tuning then becomes a tool for precision rather than broad generalisation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Zimbabwe to launch national AI policy by October to boost digital sovereignty

Zimbabwe’s Information and Communication Technology Minister, Tendai Mavetera, revealed the second draft of the National AI Policy during the AI Summit for Africa 2025 in Victoria Falls, hosted by Alpha Media Holdings and AIIA.

Though the policy was not formalised during the summit, Mavetera stated it is expected to be launched by 1 October 2025 at the new Parliament building, with presidential presence anticipated.

The strategy is designed to foster an Africa where AI serves humanity, ensuring connectivity in every village, education access for every child, and opportunity for every young person.

Core features include data sovereignty and secure data storage, with institutions like TelOne expected to host localised solutions, moving away from past practices of storing data abroad.

Speakers at the summit underscored AI’s role in economic and social transformation rather than job displacement. Africa’s investment in AI surpassed US$200 billion in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in justice: Bridging the global access gap or deepening inequalities

At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from socioeconomic disadvantage to outright legal system failure. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.

Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet the speed of AI’s rise points to a deeper digitalisation of legal systems across the globe, with consequences that cut both ways.

While it may serve as a tool to break down access barriers, AI legal tools could also automate bias in our judicial systems, enable unaccountable decision-making, and accelerate a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.

Improving access to justice

Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to reduce legal communication barriers: the system transcribes, translates, and organises refugee queries. With its help, staff can streamline caseload management, which is key to keeping operations smooth even under financial strain.

NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.

While it is clear that these tools are designed to assist rather than replace human legal experts, they are showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. The tool enhances transparency for victims, especially those seeking emotional closure.

Even though these programmes are still pilots, a UNESCO survey found that 44% of judicial workers across 96 countries already use AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.

AI tools help judges prepare judgments for various cases and streamline legal document preparation. The technology allows for faster document drafting in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.

Risking human rights

While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.

Deploying AI without transparency can lead to algorithmic systems perpetuating systemic inequalities, such as racial or ethnic biases. Meanwhile, black-box decision-making, in which AI tools produce outputs that cannot be explained, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.

Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment rather than outright replacing it. Whether AI is biased by its training datasets or simply becomes a black box over time, its use demands foresighted governance and meaningful human oversight.

Image via Pixabay / jessica45

Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively. In many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.

This lack of education undermines the accountability and transparency needed to integrate AI meaningfully. It may also lead to misuse of the technology, such as unverified translations that result in legal errors.

While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systemic bias. The judiciary in Texas, US, warned about this concern in an opinion detailing the risks of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.

The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or literacy may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about justice accessibility in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses risks of misuse and even surveillance.

The policy path forward

For AI to be integrated into legal systems and help bridge the justice gap, it must assist human judges, lawyers, and other legal actors rather than replace them. That means AI must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.

The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with the innovation and integration of AI: they belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.

Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems. 

The future of justice

AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs. It can be a tool for the just application of the law and bring powerful improvements to inclusion in our legal systems.

However, it also poses the risk of deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability, and regulation is possible through education and discussion grounded in ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools are contestable and serve as human-centric support.

Image via Pixabay / souandresantana

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan launches national AI innovation competition

Pakistan’s Ministry of Planning, Development, and Special Initiatives has launched a national innovation competition to drive the development of AI solutions in priority sectors. The initiative aims to attract top talent to develop impactful health, education, agriculture, industry, and governance projects.

Minister Ahsan Iqbal said AI is no longer a distant prospect but a present reality that is already transforming economies. He described the competition as a milestone in Pakistan’s digital history and urged the nation to embrace AI’s global momentum.

Iqbal stressed that algorithms now shape decisions more than traditional markets, warning that technological dependence must be avoided. Pakistan, he argued, must actively participate in the AI revolution or risk being left behind by more advanced economies.

He highlighted AI’s potential to predict crop diseases, aid doctors in diagnosis, and deliver quality education to every child nationwide. He said Pakistan will not be a bystander but an emerging leader in shaping the digital future.

The government has begun integrating AI into curricula and expanding capacity-building initiatives. Officials expect the competition to unlock new opportunities for innovation, empowering youth and driving sustainable development across the country.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once activated, the feature closes the conversation, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSPRA warns AI must complement, not replace, human voices in education

A new report from the National School Public Relations Association (NSPRA) and ThoughtExchange highlights the growing role of AI in K-12 communications, offering detailed guidance for ethical integration and effective school engagement.

Drawing on insights from 200 professionals across 37 states, the study reveals how AI tools boost efficiency while underscoring the need for stronger policies, transparency, and ongoing training.

Barbara M Hunter, APR, NSPRA executive director, explained that AI can enhance communication work but will never replace strategy, human judgement, relationships, and authentic school voices.

Key findings show that 91 percent of respondents already use AI, yet most districts still lack clear policies or disclosure practices for employee use.

The report recommends strengthening AI education, accelerating policy development, expanding the scope to cover staff, and building proactive strategies supported by human oversight and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI upskilling at heart of Singapore’s new job strategy

Singapore has launched a $27 billion initiative to boost AI readiness and protect jobs, as global tensions and automation reshape the workforce.

Prime Minister Lawrence Wong stressed that securing employment is key to national stability, particularly as geopolitical shifts and AI adoption accelerate.

IMF research warns Singapore’s skilled workers, especially women and youth, are among the most exposed to job disruption from AI technologies.

To address this, the government is expanding its SkillsFuture programme and rolling out local initiatives to connect citizens with evolving job markets.

The tech investment includes $5 billion for AI development and positions Singapore as a leader in digital transformation across Southeast Asia.

Social challenges remain, however, with rising inequality and risks to foreign workers highlighting the need for broader support systems and inclusive policy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!