The National Artificial Intelligence Code of Ethics of Jordan
August 2022
Author: Ministry of Digital Economy and Entrepreneurship
The National Artificial Intelligence Code of Ethics of Jordan, approved by the Council of Ministers on 3 August 2022, guides the ethical development, use, and governance of AI technologies in the Kingdom. It was formulated by the Ministry of Digital Economy and Entrepreneurship in coordination with stakeholders from the public and private sectors, academia, civil society, and security agencies. The code is rooted in human, religious, and Jordanian societal values and aligns with international standards such as the UNESCO Recommendation on the Ethics of Artificial Intelligence.
Purpose and foundations
The code aims to:
- Establish a unified ethical foundation for AI that upholds human dignity, rights, and societal stability.
- Promote responsible innovation that balances technological advancement with ethical safeguards.
- Address challenges such as bias, privacy violations, accountability, and misinformation.
Legal reference
It draws authority from:
- The Telecommunications Law No. 13 (1995)
- The National Communications and IT Policy (2018)
- The Jordan Artificial Intelligence Policy (2020)
Ethical principles
The code is structured around eight core principles for responsible AI use:
- Humanity and society
  - Respecting human rights and dignity
  - Preventing AI from overriding human judgement
  - Avoiding psychological or societal harm
- Inclusiveness and justice
  - Preventing bias and ensuring fairness in datasets and algorithms
  - Ensuring equitable access to AI benefits for marginalised groups
  - Avoiding reinforcement of social discrimination
- Privacy and data
  - Adhering to laws governing data usage, sharing, and protection
  - Ensuring informed consent and data ownership rights
  - Avoiding surveillance and misuse of AI outputs
- Transparency
  - Clearly explaining AI decisions and processes
  - Using open-source models where applicable
  - Disclosing system performance and limitations
- Responsibility and accountability
  - Assigning clear roles and holding individuals accountable
  - Offering grievance mechanisms for those affected by AI
  - Placing legal responsibility with humans, not systems
- Reliability
  - Ensuring technical soundness and consistency
  - Conducting performance evaluations and risk assessments
- Integrity and authenticity
  - Avoiding exaggeration of AI's capabilities
  - Preventing monopolisation and promoting open competition
- Environment and sustainability
  - Encouraging energy-efficient, eco-friendly AI practices
  - Promoting AI for sustainable development
Special contexts
- Virtual environment and metaverse: Emphasises equal opportunity, mental health considerations, user data rights, and transparency in virtual interactions.
- Scientific research ethics: Calls for transparency, reproducibility, multidisciplinary collaboration, and pre-emptive risk evaluation before publication.
Risks of non-compliance
Failure to follow the code may lead to:
- Violation of rights and reputational harm
- Increased unemployment and social inequality
- Data abuse and privacy breaches
- Societal destabilisation and security threats
Recommendations for compliance
- Embed ethics in educational curricula
- Promote public awareness and best practices
- Encourage organisations to officially adopt the code
- Establish certification and awards for ethical compliance
- Create complaint mechanisms and oversight bodies
- Facilitate access to anonymised data for public benefit