Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.
In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.
At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from socioeconomic disadvantage to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.
Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI promises to improve efficiency and accessibility. Yet its rapid uptake also signals a broader digitalisation of legal systems across the globe.
While they may break down access barriers, AI legal tools could also automate bias in judicial systems, enable unaccountable decision-making, and accelerate a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.
Improving access to justice
Across the globe, AI legal assistance pilot programmes are underway. The UNHCR has piloted an AI agent in Jordan to overcome legal communication barriers: it transcribes, translates, and organises refugee queries, helping staff streamline caseload management and keep operations running smoothly even under financial strain.
NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.
While these tools are designed to assist rather than replace human legal experts, they are showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. This enhances transparency for victims, especially those seeking emotional closure.
Even as these programmes remain at the pilot stage, a UNESCO survey found that 44% of judicial workers across 96 countries already use AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI into its legal system.
There, AI tools help judges prepare judgments and streamline legal document preparation, allowing faster drafting in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.
Risking human rights
While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.
Deploying AI without transparency can lead to algorithmic systems perpetuating systemic inequalities, such as racial or ethnic biases. Meanwhile, black-box decision-making, where AI tools produce outputs that cannot be explained, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.
Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment rather than replacing it outright. Whether AI is biased by its training data or simply becomes a black box over time, its use requires foresighted governance and meaningful human oversight.
Additionally, AI will greatly affect economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively: in many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.
This lack of education undermines the accountability and transparency needed to integrate AI meaningfully, and can lead to misuse of the technology, such as unverified translations that result in legal errors.
While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systemic bias. The judiciary in Texas, US, warned of this concern in an opinion detailing the risks of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in it in 2024.
Incorporating AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or digital literacy may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about the accessibility of justice in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, posing risks of misuse and even surveillance.
The policy path forward
As noted above, for AI to be integrated into legal systems and help bridge the justice gap, it must assist human judges, lawyers, and other legal actors, not replace them. To do so, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.
Legal AI education must focus on improving AI literacy, teaching bias awareness, and informing users of their digital rights. Legal actors must keep pace with AI innovation and integration, and they belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.
Other actors also have a role in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to effective and workable policy. Closing the justice gap with AI hinges on the public’s access to the technology and its understanding of how the technology is used in the legal system. Solutions that demystify black-box decisions will be key to maintaining and improving public confidence in legal systems.
The future of justice
AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing cost. It can be a tool for the just application of the law and bring powerful improvements to inclusion in our legal systems.
However, it also risks deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability, with regulation built on education and discussion grounded in ethical frameworks. Now is the time to invest in digital literacy for legal empowerment, ensuring that AI tools are developed to be contestable and to serve as human-centric support.
Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.
The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.
Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.
The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.
Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.
The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.
Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfer, deletion, takedown procedures and transparency reporting.
These changes will take effect on 15 September 2025 without a public feedback period.
The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.
Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.
Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround but are often blocked. Over the summer, further limits included mobile internet shutdowns and penalties for specific online searches.
Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.
With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.
WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with authorities and require pre-installation on all smartphones sold in Russia.
Brazilian President Luiz Inácio Lula da Silva has confirmed that his government is preparing new legislation to regulate social media, a move he defended despite criticism from US President Donald Trump. Speaking at an event in Pernambuco, Lula stressed that ‘laws also apply to foreigners’ operating in Brazil, underlining his commitment to hold international platforms accountable.
The draft proposal, which has not yet been fully detailed, aims to address harmful content such as paedophilia, hate speech, and disinformation that Lula said threaten children and democracy. According to government sources, the bill would strengthen penalties for companies that fail to remove content flagged as especially harmful by Brazil’s Justice Department.
Trump has taken issue with Brazil’s approach, criticising the Supreme Court for ruling that platforms could be held responsible for user-generated content and denouncing the 2024 ban of X, formerly Twitter, after Elon Musk refused to comply with court orders. He linked these disputes to his imposition of a 50% tariff on certain Brazilian imports, citing what he called the political persecution of former president Jair Bolsonaro.
Lula pushed back on Trump’s remarks, insisting Bolsonaro’s trial for an alleged coup attempt is proceeding with full legal guarantees. On trade, he signalled that Brazil is open to talks over tariffs but emphasised negotiations would take place strictly on commercial, not political, grounds.
Dame Diana Johnson, the UK policing minister, has reassured the public that the expanded deployment of live facial recognition vans is being carried out in a measured and proportionate manner.
She emphasised that the tools aim only to assist police in locating high-harm offenders, not to create a surveillance society.
Addressing concerns raised by Labour peer Baroness Chakrabarti, who argued the technology was being introduced outside existing legal frameworks, Johnson firmly rejected such claims.
She stated that UK public acceptance would depend on a responsible and targeted application.
By framing the technology as a focused tool for effective law enforcement rather than pervasive monitoring, Johnson seeks to balance public safety with civil liberties and privacy.
Elon Musk’s AI chatbot Grok was briefly suspended from X, then returned without its verification badge and with a controversial video pinned to its replies. Confusing and contradictory explanations appeared in multiple languages, leaving users puzzled.
English posts blamed hateful conduct and Israel-Gaza comments, while French and Portuguese messages mentioned crime stats or technical bugs. Musk called the situation a ‘dumb error’ and admitted Grok was unsure why it had been suspended.
Grok’s suspension follows earlier controversies, including antisemitic remarks and introducing itself as ‘MechaHitler.’ xAI blamed outdated code and internet memes, revealing that Grok often referenced Musk’s public statements on sensitive topics.
The company has updated the chatbot’s prompts and promised ongoing monitoring, amid internal tensions and staff resignations.
Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.
Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.
Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.
The dispute highlights growing tensions as AI companies compete for prominence on major platforms.
Apple and Musk’s xAI have not yet responded to requests for comment.
The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.
Elon Musk’s accusation that Apple favours OpenAI’s ChatGPT over other AI applications on the App Store has drawn a strong response from OpenAI CEO Sam Altman.
Altman alleged that Musk manipulates the social media platform X for his benefit, targeting competitors and critics. The exchange adds to their history of public disagreements since Musk left OpenAI’s board in 2018.
Musk’s claim centres on Apple’s refusal to list X or Grok (xAI’s AI app) in the App Store’s ‘Must have’ section, despite X being the top news app worldwide and Grok ranking fifth.
Although Musk has not provided evidence for antitrust violations, a recent US court ruling found Apple in contempt for restricting App Store competition. The EU also fined Apple €500 million earlier this year over commercial restrictions on app developers.
OpenAI’s ChatGPT currently leads the App Store’s ‘Top Free Apps’ list for iPhones in the US, while Grok holds the fifth spot. Musk’s accusations highlight ongoing tensions in the AI industry as big tech companies battle for app visibility and market dominance.
The situation emphasises how regulatory scrutiny and legal challenges shape competition within the digital economy.
A report by Article 19 reveals that Belarusian authorities have misused the country’s ‘anti-extremism’ laws to prosecute individuals for leaving online comments, making donations, or sharing songs or memes that appear to carry messages critical of the government.
Since the 2020–2021 protests, Belarusian de facto authorities have reportedly initiated at least 22,500 criminal cases related to ‘anti-extremism’. ‘In collaboration with our partner Human Constanta, we present a joint analysis highlighting this alarming trend, which further intensifies the widespread repression of civil society,’ the organisation said.
Article 19 states in its report that such actions restrict digital rights and violate international human rights law, including the right to freedom of expression and the right to seek, receive, and impart information.
Additionally, Article 19 notes that Belarus’s ‘anti-extremism’ laws lack the clarity required under international human rights standards, employing vague terms broadly interpreted to suppress digital expression and create a chilling effect.
In practice, this chilling effect means people are discouraged or prevented from legitimate expression or behaviour due to fear of legal punishment or other negative consequences.