Meta has introduced AI-powered translation tools for creators on Instagram and Facebook, allowing reels to be dubbed into other languages with automatic lip syncing.
The technology uses the creator’s voice instead of a generic substitute, ensuring tone and style remain natural while lip movements match the dubbed track.
The feature currently supports English-to-Spanish and Spanish-to-English, with more languages expected soon. On Facebook, it is limited to creators with at least 1,000 followers, while all public Instagram accounts can use it.
Viewers automatically see reels in their preferred language, although translations can be switched off in settings.
Through Meta Business Suite, creators can also upload up to 20 custom audio tracks per reel, offering manual control instead of relying only on automated translations. Audience insights segmented by language allow performance tracking across regions, helping creators expand their reach.
Meta has advised creators to prioritise face-to-camera reels with clear speech instead of noisy or overlapping dialogue.
The rollout follows a significant update to Meta’s Edits app, which added new editing tools such as real-time previews, silence-cutting and over 150 fresh fonts to improve the Reels production process.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low and middle-income countries, millions face barriers to justice, ranging from their socioeconomic position to the legal system failure. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.
Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet, the rise of AI in legal systems across the globe suggests the digitalisation of our legal systems.
While it may serve as a tool to break down access barriers, AI legal tools could also introduce the automation of bias in our judicial systems, unaccountable decision-making, and act as an accelerant to a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.
Improving access to justice
Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent to improve legal communication barriers in Jordan. AI transcribes, translates, and organises refugee queries. With its help, users can streamline their caseload management, which is key to keeping operations smooth even under financial strain.
NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.
While it is clear that these tools are designed to assist rather than replace human legal experts, they are showing they have the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. This tool enhances transparency for victims, especially those seeking emotional closure.
Even as these programmes are only being piloted, a UNESCO survey found that 44% of judicial workers across 96 countries are currently using AI tools, like ChatGPT, for tasks such as drafting and translating documents. For example, the Morrocan judiciary has already integrated AI technology into its legal system.
AI tools help judges prepare judgments for various cases, as well as streamline legal document preparation. The technology allows for faster document drafting in a multilingual environment. Soon, AI-powered case analysis, based on prior case data, may also provide legal experts with predictive outcomes. AI tools have the opportunity and are already beginning to, break down barriers to justice and ultimately improve the just application of the law.
Risking human rights
While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.
Deploying AI without transparency can lead to algorithmic systems perpetuating systematic inequalities, such as racial or ethnic biases. Meanwhile, the risk of black box decision-making, through the use of AI tools with unexplainable outputs, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.
Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment, rather than outright replacing it. Whether AI is biased by its training datasets or simply that it becomes a black box over time, AI usage is in need of foresighted governance and meaningful human oversight.
Image via Pixabay / jessica45
Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Legal professionals lack necessary training and skills needed to effectively use AI tools. In many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.
However, this lack of education undermines the necessary accountability and transparency needed to integrate AI meaningfully. It may lead to misuse of the technology, such as unverified translations, which can lead to legal errors.
While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systematic bias. The judiciary in Texas, US, warned about this concern in an opinion that detailed the fear of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.
The incorporation of AI into the legal system threatens to derail the public’s faith that is left. Meanwhile, those without access to digital connectivity or literacy education may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about justice accessibility in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses a risk of misuse and even surveillance.
The policy path forward
As already stated, for AI to be integrated into legal systems and help bridge the justice gap, it must take on the role of assisting to human judges, lawyers, and other legal actors, but it cannot replace them. In order for AI to assist, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.
The focus of legal AI education must be to improve AI literacy and to teach bias awareness, as well as inform users of digital rights. Legal actors must keep pace with the innovation and integration level of AI. They are the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.
Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems.
The future of justice
AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing cost. AI has the potential to be a tool for the application of justice and create powerful improvements to inclusion in our legal systems.
However, it also poses the risk of deepening inequalities and decaying public trust. AI integration must be governed by human rights norms of transparency and accountability. Regulation is possible through education and discussion predicated on adherence to ethical frameworks. Now is the time to invest in digital literacy to create legal empowerment, which ensures that AI tools are developed to be contestable and serve as human-centric support.
Image via Pixabay / souandresantana
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Zoom has unveiled its Virtual Agent for Zoom Phone, a 24/7 AI concierge designed to replace or support human receptionists. The tool can greet callers naturally, process requests, and initiate next steps without human intervention, aiming to reduce missed calls and waiting times.
The AI agent is initially available in English, Spanish, French, German, Portuguese, and Japanese, with more languages planned.
Companies can set up the system without coding expertise by training it with existing documents or company websites, allowing for a faster, personalised, and scalable customer experience.
Zoom highlighted use cases across sectors, including booking appointments in healthcare, checking stock and answering retail product queries, and providing financial service updates. The Virtual Agent promises to handle these tasks autonomously, giving businesses greater efficiency and flexibility.
In addition, Zoom has enhanced its AI Companion tool to manage meeting scheduling. The agent can coordinate invites, track responses, and suggest alternatives, freeing teams to focus on discussions rather than logistics.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Online questionnaires are being increasingly swamped by AI-generated responses, raising concerns that a vital data source for researchers is becoming polluted. Platforms like Prolific, which pay participants to answer questions, are widely used in behavioural studies.
Researchers at the Max Planck Institute noticed suspicious patterns in their work and began investigating. They found that nearly half of the respondents copied and pasted answers, strongly suggesting that many were outsourcing tasks to AI chatbots.
Analysis showed clear giveaways, including overly verbose and distinctly non-human language. The researchers concluded that a substantial proportion of behavioural studies may already be compromised by chatbot-generated content.
In follow-up tests, they set traps to detect AI use, including invisible text instructions and restrictions on copy-paste. The measures caught a further share of participants, highlighting the scale of the challenge facing online research platforms.
Experts say the responsibility lies with both researchers and platforms. Stronger verification methods and tighter controls are needed for online behavioural research to remain credible.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Nexon launched an investigation after players spotted several suspicious adverts for The First Descendant on TikTok that appeared to have been generated by AI.
One advertisement allegedly used a content creator’s likeness without permission, sparking concerns about the misuse of digital identities.
The company issued a statement acknowledging ‘irregularities’ in its TikTok Creative Challenge, a campaign that lets creators voluntarily submit content for advertising.
While Nexon confirmed that all videos had been verified through TikTok’s system, it admitted that some submissions may have been produced in inappropriate circumstances.
Nexon apologised for the delay in informing players, saying the review took longer than expected. It confirmed that a joint investigation with TikTok is underway to determine what happened, and it was promised that updates would be provided once the process is complete.
The developer has not yet addressed the allegation from creator DanieltheDemon, who claims his likeness was used without consent.
The controversy has added to ongoing debates about AI’s role in advertising and protecting creators’ rights.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.
The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.
According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.
Once activated, Claude AI will be closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.
The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.
Anthropic added that the feature is experimental and may be adjusted based on user feedback.
The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report from the National School Public Relations Association (NSPRA) and ThoughtExchange highlights the growing role of AI in K-12 communications, offering detailed guidance for ethical integration and effective school engagement.
Drawing on insights from 200 professionals across 37 states, the study reveals how AI tools boost efficiency while underscoring the need for stronger policies, transparency, and ongoing training.
Barbara M Hunter, APR, NSPRA executive director, explained that AI can enhance communication work but will never replace strategy, human judgement, relationships, and authentic school voices.
Key findings show that 91 percent of respondents already use AI, yet most districts still lack clear policies or disclosure practices for employee use.
The report recommends strengthening AI education, accelerating policy development, expanding the scope to cover staff, and building proactive strategies supported by human oversight and trust.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The African Development Bank has strengthened Africa’s digital journey by backing a landmark AI training initiative linked to Agenda 2063. The effort aims to accelerate the continent’s long-term strategy, ‘The Africa We Want,’ by equipping states with practical expertise.
Through its Joint Secretariat Support Office, the Bank gave both technical and financial backing to the 5th Annual Training Workshop. The event focused on applying AI to monitoring, evaluation, and reporting under the Second Ten-Year Plan of Agenda 2063.
The Lusaka workshop, co-hosted by the African Union Commission and the African Capacity Building Foundation, featured sessions with Ailyse, ChatGPT, Google AI Studio, Google Gemini, and Perplexity. Delegates explored embedding AI insights into analytics for stronger policymaking and accountability.
By investing in institutional capacity, the AfDB and partners aim to advance AI-enabled solutions that improve policy interventions, resource allocation, and national priorities. The initiative reflects a broader effort to integrate digital tools into Africa’s governance structures.
The workshop also fostered peer learning, allowing delegates to share best practices in digital monitoring frameworks. By driving AI adoption in planning and results delivery, the AfDB underlines its role as a partner in Africa’s transformation.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.
Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free experiences instead of tablets or smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.
Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.
Developers say these toys foster personalised learning and emotional bonds instead of replacing human engagement entirely.
The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.
Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.
The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support instead of solely relying on traditional toys.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!