Minnesota weighs AI free speech limits

The National Constitution Center reports that Minnesota lawmakers are considering a constitutional amendment to exclude AI systems from free speech protections. The proposal would clarify that such rights apply to people, not machines.

According to the National Constitution Center, the amendment would add language stating that AI does not have the right to speak, write or publish sentiments freely. Human free speech protections would remain unchanged under the proposal.

The article highlights ongoing debate around the measure, with supporters arguing it distinguishes human rights from technological tools, while critics warn it could affect how AI-generated content is treated under the law.

The National Constitution Center notes that the proposal reflects broader tensions over how legal systems should address AI and free expression as the issue develops in Minnesota.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN experts focus on human-centred AI governance

The UN’s Independent International Scientific Panel on AI has begun work on a global study examining how AI is reshaping economies and societies. The 40-member panel aims to assess AI’s risks and opportunities, with a focus on maintaining human oversight in decision-making.

Human-centred design stands at the core of the panel’s approach. Members are exploring how AI can complement rather than replace human capabilities, an idea often described as ‘augmented intelligence’.

Research will examine impacts across key sectors, including labour markets and healthcare, while also addressing inclusivity challenges such as language diversity and access to digital infrastructure.

Concerns over trust, ethics and accountability are driving the initiative. Warnings from UN leadership have highlighted the dangers of unregulated AI, reinforcing the need for governance frameworks that reflect social and human rights principles.

Proposals under consideration include tools such as AI watermarking to improve transparency and distinguish between human and machine-generated content.

Findings from the study are expected to inform global policy discussions, with a first report scheduled for presentation at an international dialogue on AI governance in Geneva. Long-term outcomes will depend on aligning technological innovation with ethical safeguards and inclusive development goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Civil society urges stronger EU AI Act protections

ARTICLE 19, alongside more than 40 civil society organisations, has raised concerns about proposed changes to the European Union AI Act under the so-called AI Omnibus package. The groups argue the revisions could weaken existing protections.

According to ARTICLE 19, the proposal risks reducing safeguards for people affected by high-risk AI systems, including areas such as biometric identification and education. The organisations say the changes could leave individuals without adequate protection.

The organisations also criticise the legislative process, stating that the European Commission did not follow standard procedures such as impact assessments or public consultation, raising concerns about transparency and accountability.

ARTICLE 19 is calling on the European Union’s institutions to restore key safeguards, particularly transparency requirements and oversight powers, to ensure fundamental rights are protected across the European Union.

This position contrasts with that of leading industry representatives, who are calling for more relaxed AI Act rules to keep EU businesses competitive, highlighting the ongoing tension between innovation and regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK tests AI transcripts to improve access to justice and reduce court costs

The UK Ministry of Justice, alongside HM Courts & Tribunals Service, has launched a study examining how AI can be used to generate court transcripts more efficiently.

The initiative aims to reduce the cost and time required for accessing official court records.

Currently, transcript fees can be prohibitively expensive, limiting access for victims seeking clarity on court proceedings. The proposed use of AI-based systems, including in-house tools such as Justice Transcribe, could lower these barriers while maintaining required accuracy standards.

The policy forms part of broader efforts in the UK to modernise the justice system and enhance transparency. It aligns with legislative developments, including the Victims and Courts Bill, and plans to provide free access to sentencing remarks in Crown Court cases from 2027.

By improving access to legal records, the initiative seeks to strengthen accountability and support victims’ understanding of judicial processes, contributing to a more accessible and responsive justice system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global data governance efforts expand as UNESCO supports policy capacity for AI systems

UNESCO and the United Nations Development Programme (UNDP) have launched a joint initiative to support governments in developing rights-based data governance frameworks for AI. The programme reflects growing global efforts to align digital transformation with public interest objectives.

The ‘Data governance for inclusive digital and AI futures’ initiative provides policymakers with practical tools to design transparent and accountable data systems, with a focus on safeguarding rights and enabling inclusive AI deployment.

It responds to increasing demand for structured governance approaches as countries expand the use of data-driven technologies.

Participants from multiple regions applied governance frameworks to areas including healthcare, digital identity, and social protection. These projects demonstrate how data governance can improve public service delivery while strengthening accountability and citizen trust.

Hosted at ITU Academy and supported by the EU Global Gateway initiative, the programme also promotes cross-country collaboration and knowledge exchange, reinforcing international coordination in data governance.

The initiative highlights the importance of building institutional capacity to ensure that AI systems operate within clear legal and ethical frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital Public Goods Alliance roadmap incorporates UNESCO Open Solutions

UNESCO announced that its Open Solutions have been included in the Digital Public Goods Alliance’s roadmap as part of its membership.

Roadmap activities focus on Open Solutions supporting knowledge ecosystems and information resilience by advancing Open Educational Resources as digital public goods, mainstreaming equitable open access to knowledge ecosystems, unlocking open data for research and learning, and strengthening Free and Open Source Software, according to UNESCO.

Mariya Gabriel, UNESCO’s Assistant Director-General for Communication and Information, said: ‘The inclusion of UNESCO’s Open Solutions (Open Educational Resources, Open Access, Open Data and Free and Open Source Software) in the Digital Public Goods Alliance roadmap underscores our commitment to knowledge as a public good and to multilateral cooperation. Through these open systems, UNESCO supports Member States in expanding access to information and advancing the Sustainable Development Goals.’

UNESCO said its Open Solutions support the discovery, use, and adaptation of digital public goods that help reduce structural barriers to knowledge. It added that they prioritise multilingual access, equitable participation, and the reuse of educational, scientific, and public-interest resources.

UNESCO described the Digital Public Goods Alliance as a multistakeholder initiative that supports the achievement of the Sustainable Development Goals by advancing the discovery, development, use, and investment in digital public goods. It said these include open source software, open data, open AI models, and open content that adhere to applicable laws and best practices, are designed to do no harm, and contribute to sustainable development.

Liv Marte Nordhaug, Chief Executive Officer of the Digital Public Goods Alliance Secretariat, said: ‘Through its Open Solutions, UNESCO is advancing open and inclusive knowledge ecosystems while strengthening the development and adoption of digital public goods that expand access to shared, interoperable resources and enable equitable participation in the digital age.’

UNESCO also said its engagement in the alliance contributes to implementing the UN Global Digital Compact and the United Nations Pact for the Future, reaffirming that knowledge, and the digital systems that underpin it, must remain a global public good, governed in the public interest, anchored in international human rights standards, and accessible to all without discrimination.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New law strengthens protections for healthcare patients in Brazil

Brazil has introduced a new legal framework establishing a nationwide Statute of Patients’ Rights through Law No. 15.378. The law sets out protections and responsibilities for healthcare patients across public, private, and insurance services.

The statute guarantees key patient rights, including non-discriminatory treatment, access to clear and sufficient medical information, confidentiality of health data, and informed consent before treatment decisions.

Additional protections include the right to a companion during care, access to interpreters or accessibility support, and the ability to seek a second medical opinion. Patient responsibilities are also formalised under the law.

Individuals are expected to provide accurate medical history and follow prescribed treatments. They must ask questions when needed, respect healthcare rules, and inform professionals of any changes in their condition or decision to discontinue treatment.

Compliance measures include publicising these rights, assessing healthcare quality, promoting research, and providing complaint channels. Violations are treated as human rights infringements, reinforcing the law’s legal and ethical importance in Brazil’s healthcare system.

By embedding principles such as informed consent, non-discrimination, privacy, and access to information into law, the statute strengthens individual autonomy and dignity in medical decision-making.

In broader terms, it reinforces the idea that access to safe, transparent, and respectful healthcare is an essential component of fundamental human rights, not a discretionary service.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN commissioner calls for human rights-centred digital governance at GANHRI conference

United Nations High Commissioner for Human Rights Volker Türk told the annual conference of the Global Alliance of National Human Rights Institutions in Geneva that digital technologies are affecting human rights in areas including conflict, surveillance, online violence, and civic space, while protections have not kept pace.

Türk said ‘while our rights fully apply online, the systems to protect them have yet to keep pace.’ He referred to social media hate speech, surveillance, online violence against women in public life, and the use of digital technologies in conflict.

The speech set out two priorities for national human rights institutions: using digital tools in their own work, and strengthening protection of human rights in digital spaces. Türk said this includes documenting the human rights impact of digital technologies, using existing laws for accountability, and helping shape new legal frameworks.

On AI, Türk said: ‘This evidence should be used to push for accountability under existing laws. It should also inform the development of new legal frameworks, in line with the Global Digital Compact’s vision of inclusive and accountable digital governance, based on human rights.’ He added: ‘This also means advocating for mandatory human rights due diligence in the design, development, and deployment of AI systems.’

Türk also said the Office of the United Nations High Commissioner for Human Rights is launching the Human Rights Data Exchange, which he described as a way to bring together fragmented data on human rights violations and support earlier and more coordinated action. He also referred to a new Global Alliance for Human Rights (GAHRI), which he said seeks to place human rights at the centre of global debate and decision-making.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!