International Artificial Intelligence Federation
The Ethical Guidelines Adopted by the International Artificial Intelligence Federation
The International Artificial Intelligence Federation adopts ethical guidelines derived from internationally recognized principles, including those previously established in Dubai. These guidelines emphasize a commitment to fairness, transparency, accountability, and explainability in the design and operation of AI systems. The purpose of these guidelines is to offer practical steps and clear recommendations to help stakeholders uphold ethical responsibility when utilizing AI technologies. The federation’s recommended principles address key issues such as fairness in algorithms, clarity of processes, distribution of responsibilities, and the explainability of decisions made by intelligent systems. The federation aims to continuously develop these guidelines into a globally applicable framework that ensures the ethical and safe use of artificial intelligence. It also encourages all stakeholders to actively participate in this dialogue and values all feedback and contributions that help refine these principles and expand their applicability.
- Responsibility Beyond the System Itself:
AI systems alone should not bear full responsibility for outcomes; accountability must be shared among developers and operators, who hold the legal and ethical responsibility for system performance.
- Proactive Risk Monitoring and Mitigation:
Developers must carefully assess potential risks arising from system design and implement effective measures to minimize the likelihood of harm or unintended negative impacts.
- External Auditing (In Progress):
Systems making sensitive decisions should undergo review and auditing by independent and neutral bodies to increase transparency and build user and public trust.
- Right to Appeal:
Individuals affected by AI decisions should have the right to challenge those decisions, with the option to opt out of automated systems whenever possible, protecting their rights and freedom to decide.
- No Decisions Without Prior Consent:
AI systems should not make significant decisions on behalf of individuals without obtaining their explicit and prior consent, ensuring respect for personal will and privacy.
- Multidisciplinary Teams:
Sensitive AI systems must be designed and developed by diverse teams combining technical expertise and domain-specific knowledge to ensure comprehensive consideration of all relevant aspects.
- Awareness of System Nature:
Organizations operating AI systems must fully understand their characteristics to ensure suitability for the intended environment, enhancing transparency and effective accountability.
At the International Artificial Intelligence Federation, we believe that fairness is fundamental to ensuring communities’ trust in our intelligent systems. From this perspective, we are committed to ensuring that AI systems operate in an equitable manner that reflects the needs of everyone without exception.
Here are the principles we follow to achieve this goal:
- Accurate Representation of Data
The data feeding AI systems must reflect the actual reality of the groups affected by their outcomes.
- Monitoring and Analyzing Bias
Continuous verification is required to detect any bias in decision-making processes within intelligent systems.
- Fairness of Critical Decisions
Important decisions made by AI must be fair and unbiased.
- Equal Accessibility for Users
Organizations operating AI systems must guarantee easy access to, and fair use of, these systems by all groups without discrimination.
To achieve transparency, we commit to the following:
- Traceability
Organizations operating AI systems must provide clear mechanisms to trace the reasons and logic behind any important decision made by intelligent systems, especially those that may cause harm or losses.
- Informing Individuals of Their Interaction with Systems
It should be clearly communicated when and where users interact with AI systems, so they are aware that the decision or information originates from an automated system.
We believe that a clear understanding of how intelligent systems operate enhances transparency and increases individuals’ trust in them. Therefore, we are committed to making these systems as explainable as possible within the limits of current technical capabilities.
Our principles to achieve this include:
- General Explanation of System Operation
Operating organizations can provide simplified explanations that broadly clarify how the AI systems they use work.
- Providing the Right to Request Explanations
Affected individuals should be able to request clarification of automated decisions that impact them, within the limits of technical capabilities and the business model.
- Easy Access to Explanations
When explanations are available, they should be easily accessible, prompt, free of charge, and presented in clear language understandable to users.
Participation in Building a Better Future
We believe that transparency begins with listening, and that improvement cannot happen without constructive criticism. We therefore invite you to be part of this journey by sharing your feedback, perspectives, and suggestions; they are not merely comments, but building blocks with which we create a stronger and more inclusive ethics framework.
