Since the rollout of ChatGPT, technology news has largely focused on the capabilities of AI and its potential when used for all the right and wrong reasons. AI being a fairly new technology (in terms of its use cases), its ability to mimic or exceed human intelligence has rightly been questioned, especially when it comes to aspects such as fairness, transparency, bias, and ethics.
Today, organizations across various sectors are leveraging or exploring the power of AI to drive innovation, enhance efficiency, and improve decision-making in their applications and businesses.
With great power comes great responsibility. The ethical implications of AI technologies have prompted the development of various frameworks and standards aimed at ensuring “responsible AI” practices.
Among these, the recently published ISO/IEC 42001 stands out as a comprehensive set of guidelines that can guide organizations towards building ethical AI.
Who Should Be Covered Under ISO/IEC 42001:2023
Whether it is ISO 42001, the EU AI Act, or any similar framework, these standards are designed to apply to all organizations involved in the development, deployment, and use of AI technologies. This includes, but is not limited to:
– Technology companies developing AI software and hardware.
– Enterprises integrating AI into their operations, products, or services.
– Research institutions conducting AI-related studies and experiments.
– AI consumers and third parties engaged by organizations for AI development.

The key aspects of Responsible AI
Below are some of the key characteristics and aspects usually expected when building responsible AI. It is crucial to ensure that AI systems are built with accountability and the values below, because AI harms can be quite adverse and detrimental to society.
1. Security and Privacy
Privacy is a fundamental aspect of responsible AI, and individuals’ sensitive information should be protected throughout the AI lifecycle. ISO 42001 emphasizes the importance of incorporating privacy considerations into AI systems from inception to deployment. For example, consider a healthcare organization developing an AI-driven diagnostic tool. By adhering to ISO 42001 guidelines, the organization would consider implementing data anonymization techniques to safeguard patient data, and such risks would be covered in the AI risk assessment.
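To make the anonymization point concrete, here is a minimal Python sketch of pseudonymizing patient records before they enter an AI pipeline. The field names and salt handling are purely illustrative assumptions, not something ISO 42001 prescribes:

```python
import hashlib

def pseudonymize(record, sensitive_fields=("name", "ssn")):
    """Replace direct identifiers with salted one-way hashes so the
    record can feed model training without exposing identity."""
    salt = "rotate-me-per-dataset"  # illustrative; manage salts securely in practice
    safe = dict(record)
    for field in sensitive_fields:
        if field in safe:
            digest = hashlib.sha256((salt + str(safe[field])).encode()).hexdigest()
            safe[field] = digest[:16]  # truncated hash as a stable pseudonym
    return safe

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "blood_pressure": 128}
print(pseudonymize(patient))
```

Note that pseudonymization like this is weaker than full anonymization (hashes can be re-linked if the salt leaks), which is exactly the kind of residual risk an AI risk assessment should record.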
2. Fairness
Fairness focuses on mitigating biases and ensuring fair outcomes for all individuals. ISO 42001 encourages fairness by advocating the use of high-quality, representative datasets. For example, a financial institution utilizing AI for credit scoring can adopt ISO 42001 principles to detect and rectify biases. The planning phases of ISO 42001 (Clauses 4–7) would normally take fairness principles and their risks into account.
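One simple check a credit-scoring team could run is the disparate impact ratio between two applicant groups. The sketch below, using hypothetical approval outcomes, applies the common “four-fifths” rule of thumb; the data, groups, and threshold are all assumptions for illustration:

```python
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of approval rates between two applicant groups.
    Ratios below 0.8 are commonly flagged (the 'four-fifths rule')."""
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical credit-scoring outcomes: 1 = approved, 0 = declined
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% approval
group_b = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]  # 70% approval

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: investigate features and training data")
```

A ratio alone does not prove discrimination, but it is the kind of quantitative signal that feeds the fairness risks identified during planning.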
3. Transparency
Transparency strengthens trust and accountability in AI systems by demonstrating how decisions are made and the rationale behind them. ISO 42001 encourages organizations to provide clear documentation and explanations of AI processes. For example, a retail company employing AI-powered recommendation systems adheres to ISO 42001 guidelines by disclosing how customer data is utilized to generate personalized recommendations.
4. Bias
Consider a recruitment agency leveraging AI for candidate screening. The agency can include bias detection mechanisms to prevent the perpetuation of gender or racial biases in hiring decisions, thereby promoting diversity and inclusion in the workforce. Bias is one of the most serious risks and should be documented in an organization’s AI risk register.
5. Continuous Improvement
Continuous improvement is integral to responsible AI, especially when it comes to unsupervised learning. Dataset quality, algorithms, and other factors are dynamic in AI development, and they should be continuously monitored for deviations. Correction and re-learning should be an integral part of your AI development.
Clause 10 of ISO 42001, in particular, explicitly calls for continuous improvement in the AI development lifecycle.
For example, a social media platform regularly reviews its AI algorithms for content moderation, incorporating user feedback and emerging best practices. In fact, there was recent news of an individual taking a social media company to court for wrongly terminating his account based on an AI-powered content moderation tool; his child’s photo had been tagged incorrectly by the system.
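Continuous monitoring can start very simply. The sketch below flags drift in a single model input feature by comparing the current batch mean against a baseline distribution; the numbers and threshold are made up for illustration, and real systems would use richer statistics (e.g. the population stability index):

```python
from statistics import mean, stdev

def drift_score(baseline, current):
    """Standardized shift of the current batch mean relative to the
    baseline; larger values suggest the input data has drifted."""
    return abs(mean(current) - mean(baseline)) / stdev(baseline)

baseline = [0.48, 0.52, 0.50, 0.47, 0.53, 0.49, 0.51, 0.50]  # feature values at launch
current = [0.61, 0.66, 0.63, 0.64, 0.60, 0.65]               # latest production batch

THRESHOLD = 3.0  # illustrative; tune per feature
score = drift_score(baseline, current)
if score > THRESHOLD:
    print(f"Drift detected (score={score:.1f}); trigger review and retraining")
```

Wiring such a check into the release process is one concrete way to satisfy the Clause 10 expectation of ongoing correction.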
6. Enhanced Governance
ISO 42001 promotes robust governance structures that oversee AI development, deployment, and monitoring processes. It is recommended to establish an AI governance council in the organization and to secure leadership support. Centralized governance and oversight help ensure that AI risks are managed for all stakeholders and that the system is used ethically.
7. Increased Stakeholder Confidence
Responsible AI practices bolster stakeholder confidence by demonstrating a commitment to ethical principles and societal well-being, reassuring customers, regulators, and investors of the organization’s ethical approach.
8. Systematic Risk Assessment
AI risk assessments can be strategic, system-level, or AI component-level. ISO 42001 guides organizations in conducting systematic risk assessments across the various stages of the AI lifecycle.
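As an illustration of what a systematic record might look like, here is a hypothetical AI risk register entry in Python with a simple likelihood-times-impact score. The field names and scoring scale are assumptions for this sketch, not a format prescribed by ISO 42001:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    risk_id: str
    description: str
    lifecycle_stage: str  # e.g. "data collection", "deployment", "operation"
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple qualitative scoring; organizations often use their own matrix
        return self.likelihood * self.impact

register = [
    AIRisk("R-001", "Training data carries demographic bias", "data collection", 4, 5),
    AIRisk("R-002", "Model drift degrades accuracy in production", "operation", 3, 3),
]
# Review highest-scoring risks first
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.risk_id} [{r.lifecycle_stage}] score={r.score}: {r.description}")
```

Even a lightweight structure like this makes risks reviewable and comparable across lifecycle stages, which is the point of a systematic assessment.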
In a nutshell,
ISO 42001 may not be a holistic framework, but it serves as a valuable resource at this point, providing guidelines and best practices for working towards AI ethics. I believe Infosys was one of the early players to be certified for ISO 42001.
Hope this article helped you understand the context of responsible AI. For any guidance, implementation, or training on AI frameworks, feel free to DM or reach out to info@rivedix.com
