There’s nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already affecting everyone today through automated predictions and decisions. However, these digital brains can exhibit unpredictable behaviors that are disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems you can trust.
Educate on the criticality of a ‘people and ethics first’ approach
AI systems often operate in opaque, invisible ways that can harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, adverse economic impact, threats to the security of critical infrastructure, and long-term damage to social well-being.
The “technology and monetization first” approach to AI needs to evolve into a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems, intended to ensure such systems serve humanity’s values and ethical principles.
Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems, including the IEEE, the World Economic Forum, the Future of Life Institute, the Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can start educating everyone, internally and externally, about the promise and perils of AI and the need for an AI ethics council.
These councils can then educate everyone on the need to carefully design and control AI systems so that they reflect the company’s core values and industry norms while providing transparency, human interpretability, and audit trails – the features that build trust and allow intervention when things go wrong.
Activate by creating a balanced set of actions
Ensure there are at least two AI ethics councils – internal and external – and that both have clear goals. Both groups need to outline their charters around a clear problem statement: building AI you can trust.
The external council could comprise experts from a variety of fields – social scientists, customer advocates, regulators, AI experts, legal experts, auditors, anthropologists, etc. External advisory groups need to focus on a variety of topics: providing insights and expertise on the art of using AI responsibly; helping create a code of ethics that puts guardrails on AI design and development from a societal, regulatory, and human-first perspective; and, in some cases, attaching conditions to some deals limiting the use of the company’s technology.
The internal council could include not just technologists (the CIO/CTO/CDO, software developers, and machine learning experts) but also business and product owners, lawyers, and customer service, design, and marketing experts. Their goal should be to ensure the company is designing, deploying, and controlling AI systems that are explainable, so human users can understand and trust their decisions. The internal group also needs a sub-group that provides a means of remediation when AI causes damage or harm.
Scale by institutionalizing good practices while learning quickly and preparing for things to get worse before they get better
It’s important for both internal and external councils to ensure that the ethical design, development, and implementation of AI technologies are guided by the following five trusted AI principles:
- Data Rights: Do you have rights to the data?
- Explainability: Is your AI transparent?
- Fairness: Is your AI unbiased and fair?
- Robustness: Is your AI robust and secure?
- Compliance: Is your AI appropriately governed?
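To make these questions operational, a council could encode them as a deployment review gate. The following is a minimal, purely illustrative sketch in Python; the `ReviewItem` structure, the evidence fields, and the `gate` function are assumptions for illustration, not an established framework.

```python
from dataclasses import dataclass, field

# The five trusted AI principles above, phrased as review questions.
PRINCIPLES = {
    "data_rights": "Do you have rights to the data?",
    "explainability": "Is your AI transparent?",
    "fairness": "Is your AI unbiased and fair?",
    "robustness": "Is your AI robust and secure?",
    "compliance": "Is your AI appropriately governed?",
}

@dataclass
class ReviewItem:
    """One principle's sign-off, with evidence the council can audit."""
    principle: str
    approved: bool = False
    evidence: list = field(default_factory=list)  # e.g. links to audits, data licenses

def gate(review: dict) -> bool:
    """Block deployment until every principle is approved with evidence."""
    missing = [p for p in PRINCIPLES
               if p not in review or not review[p].approved or not review[p].evidence]
    if missing:
        print("Blocked pending sign-off on:", ", ".join(missing))
        return False
    return True

# Example: a review that still lacks fairness and robustness sign-off.
review = {p: ReviewItem(p) for p in PRINCIPLES}
review["data_rights"] = ReviewItem("data_rights", True, ["data-license.pdf"])
review["explainability"] = ReviewItem("explainability", True, ["model-card.md"])
review["compliance"] = ReviewItem("compliance", True, ["governance-assessment.md"])
gate(review)  # -> Blocked pending sign-off on: fairness, robustness
```

The key design choice here is that missing evidence blocks deployment by default, keeping the burden of proof on the system’s builders rather than on its reviewers.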
After establishing these councils, organizations must focus on learning and scaling good practices by taking a series of steps:
- Hire company ethicists and form AI review boards
- Implement ethical and responsible AI training programs
- Work with the external advisory board and clients to publish an AI code of ethics
- Require a design thinking approach to AI that includes an audit trail (a minimal audit-trail sketch follows this list)
- Attach conditions to some deals limiting how the company’s technology may be used
- Provide a means of remediation when AI causes damage or harm
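To make the audit-trail requirement above concrete, here is a minimal sketch, assuming a hypothetical prediction service; the `log_decision` helper and its record fields are illustrative assumptions, not a standard, and a production trail would add tamper-evident storage and access controls.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output, explanation: str,
                 path: str = "audit_trail.jsonl") -> dict:
    """Append one record per automated decision to an append-only log.

    Hashing the inputs lets auditors verify records without storing
    raw personal data alongside the decision log.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash rather than store raw inputs, to respect data rights.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "explanation": explanation,  # human-readable reason for the decision
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a (hypothetical) loan decision so a reviewer can trace it later.
log_decision(
    model_id="credit_scoring",
    model_version="2.3.1",
    inputs={"income": 52000, "debt_ratio": 0.31},
    output="approved",
    explanation="Debt ratio below policy threshold of 0.35",
)
```

Pairing each decision with a hash of its inputs, the model version, and a plain-language explanation gives the internal council something concrete to audit when remediation is requested.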
Last but not least, proactively prepare a public relations crisis strategy and a hotline to address incidents. It is not a matter of if, but when: there will be many disasters on the way to putting AI to use for good, and we need to be ready.
The “people and ethics first” approach will usher in a new movement, one in which AI is used fruitfully and pervasively in our daily lives, under clear guidelines, to address some of the most complex challenges facing us as a society.
Article Credit: Forbes