Isn’t any corporate reference to ethics limited to a ‘nice-to-have’ statement on the company’s ‘About Us’ website page?
I recall my days at KPMG: when presenting a project proposal to a new client, the first item they typically tried to cut was Change Management. These costs were considered unnecessary padding that big System Integrators would include to inflate their revenues. Needless to say, those who removed them ultimately ended up with failed projects.
Not developing a practical Artificial Intelligence (AI) Ethical Framework will have the same consequences: organizations that dismiss such a framework as unnecessary will fare no better than those who ignored the importance of Change Management.
Scaling AI also scales risks
In a 2020 article published in Harvard Business Review, Reid Blackman rightly noted that while companies were scaling their use of big data and AI for competitive advantage, they were also “scaling their reputational, regulatory, and legal risks” (Blackman, 2020).
While AI ethics would have been considered the domain of academia a few years ago, early adopters of AI now accept that failing to incorporate such a framework directly impacts their bottom line. Without a practical ethical framework, these early adopters have experienced costly inefficiencies in developing and deploying their AI, especially around data bias and privacy breaches.
Amazon and Optum are two notable cases concerning data bias
Amazon had to scrap an expensive HR program after three years of costly development because the embedded AI systematically discriminated against women, a result of the poor, biased data used in the model’s training and testing (Dastin, 2018). In the case of Optum, the company was investigated by regulators because its algorithm allegedly led doctors and nurses to pay more attention to white patients than to black patients. As reported in the Washington Post, this bias was unintentional: the algorithm predicted how much patients would cost the healthcare system in the future and expressly excluded race as a variable. “The issue was that cost wasn’t a race-neutral measure”: in this instance, black patients had incurred lower medical costs per year than white patients with similar conditions; “thus, the algorithm scored white patients as equally at risk of future health problems as black patients who had many more diseases” (Johnson, 2019).
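To make the Optum mechanism concrete, here is a toy simulation (not Optum’s actual model; all numbers are invented) of how a score built on cost can rank a disadvantaged group as lower-risk at the same level of illness, and how a simple group-level audit surfaces the problem even when the group variable is excluded from the features:

```python
# Illustrative sketch only -- not Optum's actual model. It shows how a
# cost-based target can encode group bias even when the group variable
# itself is excluded from the model's features.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # 0 or 1; never used as a feature
illness = rng.poisson(3, n)      # true health need (count of conditions)

# Historical cost: group 1 incurs lower cost for the same illness burden,
# e.g. because of unequal access to care.
cost = illness * np.where(group == 1, 800, 1000) + rng.normal(0, 200, n)

df = pd.DataFrame({"group": group, "illness": illness, "cost": cost})

# A "risk score" trained to predict cost would track cost itself, so use
# cost directly as the score for this illustration.
df["risk_score"] = df["cost"]

# Audit: at the same illness burden, the disadvantaged group gets a lower
# score, i.e. looks "healthier" to the allocation rule.
print(df.groupby(["illness", "group"])["risk_score"].mean().unstack().head(8))
```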
Regarding data privacy and security breaches, we have the cases of the City of Los Angeles suing IBM for deceptively mining the private location data of users of its Weather Channel application and selling this information to advertising and marketing companies (Dean, 2022), and of Facebook providing access to the personal data of more than 50 million users to Cambridge Analytica, the digital consultants to the 2016 Trump presidential campaign (Confessore, 2018).
Guiding principles of an Ethical Framework
Governments worldwide have issued guidelines for implementing an Ethical Framework for AI. While the details vary from country to country, the principles effectively focus on five key areas and are designed to achieve safer, more reliable outcomes from the application of AI and to reduce its potential negative impact on those it affects (DISR, 2019; Gov.UK, 2021; WEF, 2021).
1. Human-centeredness
Any AI application must respect human rights, diversity, and individual autonomy. These applications should not undertake actions that potentially deny personal autonomy, including deception, inappropriate surveillance, or non-alignment with the stated purpose and use of the AI. An AI application should benefit not only those immediately impacted but also future generations and the environment (DISR, 2019).
2. Fairness
Fairness refers to AI being non-discriminatory, whether intentionally or unintentionally. AI needs to be fair and enable inclusion throughout its lifecycle. These systems also need to be user-centric, allowing anyone interacting with them relevant and equal access to any associated products or services. Those implementing AI need to put appropriate controls in place so that AI outputs comply with all anti-discrimination legislation.
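One such control, sketched below under assumed column names, is the “four-fifths rule” used in US employment contexts as a rough screen for disparate impact: compare each group’s favorable-outcome rate against the best-off group’s.

```python
# Minimal sketch of a disparate-impact check on model outputs, assuming a
# binary approve/deny decision and a protected attribute retained for
# auditing (not used as a model feature). Column names are hypothetical.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, decision: str, protected: str) -> pd.Series:
    """Approval rate of each group divided by the highest group's rate.
    Values below ~0.8 are a common red flag (the 'four-fifths rule')."""
    rates = df.groupby(protected)[decision].mean()
    return rates / rates.max()

decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "gender":   ["f", "f", "m", "m", "f", "m", "f", "f", "m", "m"],
})
print(disparate_impact_ratio(decisions, "approved", "gender"))
```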
3. Transparency & Explainability
AI cannot be a ‘black box’ solution; adequate transparency and disclosure must be provided to all stakeholders, giving them a clear understanding of the AI’s impact on them. This transparency needs to ensure those affected by the AI clearly understand what the system is intended for and why. Any required disclosures must be provided in a clear and timely manner, allowing individuals or groups impacted by AI to challenge its outcomes.
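As a minimal illustration of explainability tooling, the sketch below uses scikit-learn’s permutation importance on synthetic data to show which features a model actually relies on; a real deployment would pair this with plain-language explanations for the individuals affected.

```python
# A minimal sketch of one explainability technique: permutation importance,
# using scikit-learn. The data and feature names are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade accuracy? Large drops mean
# the model leans heavily on that feature -- a starting point for the
# explanations owed to affected stakeholders.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean_imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {mean_imp:.3f}")
```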
4. Accountability
Those responsible for any phase in the AI lifecycle need to be identifiable and accountable for that aspect of the AI. These stakeholders need an adequate understanding of the possible negative impacts of their part of the AI development or use to ensure they take appropriate control and oversight measures.
It’s important to note that without clear accountability and protocols in place to identify, evaluate, and mitigate risks, there will always be a high probability of departmental issues going undetected. AI-associated risks must be integrated into the enterprise’s overall risk management strategy.
5. Privacy & Security
AI systems need to respect and uphold privacy rights and data protection by ensuring that the security of the data used or generated complies with all relevant legislation. These systems must identify potential security vulnerabilities and ensure resilience to adversarial attacks. Measures should also account for unintended applications of the AI and the risk of abuse, with appropriate mitigation processes in place should an incident arise (DISR, 2019).
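One concrete privacy technique among many, sketched here on a hypothetical patient count, is the Laplace mechanism from differential privacy; a production deployment would also need to track a privacy budget across all released statistics.

```python
# One privacy technique among many: a differentially private count using
# the Laplace mechanism. A sketch, not a full DP deployment (which would
# also track a privacy budget across every query answered).
import numpy as np

rng = np.random.default_rng()

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise; the sensitivity of a count is 1,
    so the noise scale is 1 / epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

patients_with_condition = 412   # hypothetical true value
print(dp_count(patients_with_condition, epsilon=0.5))
```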
Five critical success factors in implementing an Ethical Framework
1. Executive leadership
Senior executives must establish an AI governance strategy tailored to their organization that incorporates appropriate measures and processes to ensure adherence to the above principles. Further, this strategy needs to identify clear owners and stakeholders of the AI. AI should be viewed as augmenting human-centric decision-making rather than a wholesale replacement of it. Senior executives need to clearly define and communicate which decisions will be automated and which ones will require human input.
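A minimal sketch of one way such a policy could be encoded, with an illustrative confidence threshold: predictions the model is unsure about are routed to a human reviewer rather than decided automatically.

```python
# Sketch of routing decisions between automation and human review based on
# model confidence. The threshold and labels are illustrative assumptions,
# to be set by the organization's AI governance strategy.
from dataclasses import dataclass

AUTO_APPROVE_THRESHOLD = 0.90   # policy choice, not a technical constant

@dataclass
class Decision:
    outcome: str        # "approved", "denied", or "needs_human_review"
    automated: bool

def route(score: float) -> Decision:
    """score: model probability that the application should be approved."""
    if score >= AUTO_APPROVE_THRESHOLD:
        return Decision("approved", automated=True)
    if score <= 1 - AUTO_APPROVE_THRESHOLD:
        return Decision("denied", automated=True)
    # The uncertain middle band is where human judgment is required.
    return Decision("needs_human_review", automated=False)

for s in (0.97, 0.55, 0.04):
    print(s, route(s))
```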
2. Build organizational awareness and obtain buy-in
AI is not simply the domain of the Data Scientist, Engineer, or Analyst; senior executives need to ensure they have enterprise-wide buy-in from all stakeholders whose roles will be impacted by the implementation/deployment of AI. A vital aspect of this buy-in is providing these non-technical stakeholders with adequate transparency into the development/deployment of AI, so they are comfortable accepting their relevant accountability.
3. AI needs to have a customer-centric focus
Think about AI being used to assess an individual’s eligibility for funding by a financial institution or access to specific medical treatments in healthcare. It is increasingly important for organizations to understand exactly how the AI makes decisions and to be able to explain those decisions.
With any deployment of AI, organizations need to be transparent and clear with those impacted by the AI about what data was used to build the model, what data is being collected through its use, and how that data is used.
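One practical way to capture these disclosures is a structured “model card”; the sketch below uses illustrative field names and values (published schemas, such as the “Model Cards for Model Reporting” proposal from Google researchers, are considerably richer).

```python
# A sketch of recording the disclosures described above in a structured,
# machine-readable "model card". All field names and values are
# hypothetical examples, not a formal standard.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    purpose: str
    training_data: str          # what data the model was built on
    data_collected_in_use: str  # what is gathered from users at run time
    how_data_is_used: str
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="loan-eligibility-v3",
    purpose="Rank applications for manual underwriting review",
    training_data="2015-2021 applications, de-identified",
    data_collected_in_use="Application form fields only; no device data",
    how_data_is_used="Scoring only; never sold or shared with third parties",
    known_limitations=["Sparse data for applicants under 21"],
)
print(card)
```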
4. Leverage appropriate technologies
AI is not a one-off exercise; it is an ongoing, iterative process. To accommodate this form of development and to deliver transparency, interpretability, and usability, enterprises can ill afford a ‘black box’ solution that depends on costly and scarce specialized technical resources to manage it. Instead, organizations should look at developing their AI in an integrated, cloud-based, open-source environment in which ownership and accountability can be shared across all relevant stakeholders.
Further, the design of these systems is critical, especially around the user experience; as Ben Shneiderman from the University of Maryland notes, “AI applications can bring many benefits, but they are more likely to succeed when user-experience designers have a leading role in shaping human control of highly automated systems” (Rainie, Anderson and Vogels, 2021).
5. Eliminate bias that will have an adverse impact on AI output
AI can analyze vast amounts of data, and enterprises need to exercise stewardship over any AI development, including implementing a standard for evaluating the data used and ingested by their AI models. A holistic, transparent, and traceable data set is essential to implementing AI successfully.
We don’t live in a world of perfect data; organizations must strike a balance between the historical data available and the potential need for human manipulation of that data to build a specific model. In such circumstances, additional diligence is required to ensure no bias creeps into the model.
Finally, AI may require combining structured and unstructured data to train a particular model; in such instances, there is an equal potential for unintentional bias to be introduced in the merged data.
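A simple pre-training sanity check, sketched below with hypothetical source names, is to compare outcome rates across the merged sources, since a skewed source can quietly shift the combined training distribution.

```python
# Sketch of a pre-training check when merging data sources: compare outcome
# rates per source, since a skewed source can quietly inject bias into the
# combined training set. Column names and sources are hypothetical.
import pandas as pd

structured = pd.DataFrame({"label": [1, 0, 0, 1, 0], "source": "crm"})
unstructured = pd.DataFrame({"label": [1, 1, 1, 1, 0], "source": "support_emails"})

merged = pd.concat([structured, unstructured], ignore_index=True)
rates = merged.groupby("source")["label"].agg(["mean", "count"])
print(rates)  # a large gap in 'mean' between sources warrants investigation
```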
How can Genetica assist?
Genetica’s Cortex Cognitive AI Platform is a cloud-based, end-to-end AI development and lifecycle management solution. The Platform has been developed on a scalable, open-source cloud clustering architecture with an intuitive user interface and a zero-programming environment to ensure reliability, scalability, and usability.
Once a model is deployed, our predictive models run autonomously, leveraging real-time data feeds from IoT devices and alerting different users, at different times, with information and recommendations specific to their roles.
Our Platform is ideally positioned to deliver on the practical aspects of adhering to an AI ethical framework while simultaneously reducing the cost and time to deploy by a factor of ten.
Reference List
Blackman, R., 2020. A Practical Guide to Building Ethical AI. [online] Harvard Business Review. Available at: https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai.
Confessore, N., 2018. Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times, [online] Available at: https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html.
Dastin, J., 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, [online] Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
Dean, S., 2022. LA is suing IBM for illegally gathering and selling user data through its Weather Channel app. Los Angeles Times, [online] Available at: https://www.latimes.com/business/technology/la-fi-tn-city-attorney-weather-app-20190104-story.html.
Department of Industry, Science and Resources (DISR), 2019. Australia’s Artificial Intelligence Ethics Framework. [online] Available at: https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles.
Gov.UK (Department for Digital, Culture, Media & Sport), 2021. Ethics, Transparency and Accountability Framework for Automated Decision-Making. [online] Available at: https://www.gov.uk/government/publications/ethics-transparency-and-accountability-framework-for-automated-decision-making.
Johnson, C., 2019. Racial bias in a medical algorithm favors white patients over sicker black patients. Washington Post, [online] Available at: https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/.
Rainie, L., Anderson, J. and Vogels, E., 2021. Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade. [online] Pew Research Center. Available at: https://www.pewresearch.org/internet/2021/06/16/experts-doubt-ethical-ai-design-will-be-broadly-adopted-as-the-norm-within-the-next-decade/.
World Economic Forum (WEF), 2021. AI Ethics Framework. [online] Available at: https://www.weforum.org/projects/ai-ethics-framework.