Is your organization using AI/machine learning across many of its products, or planning to use AI models extensively in upcoming ones? Do you have AI guiding principles in place for stakeholders such as product managers and data scientists/machine learning researchers, to ensure that safe and unbiased AI (as appropriate) is used when developing AI-based solutions? Are you planning to create AI guiding principles for AI stakeholders, including business stakeholders, customers, partners, etc.?
If the answer to any of the above is not in the affirmative, it is recommended that you start thinking, sooner rather than later, about laying down AI guiding principles to help stakeholders such as the executive team, product management, and data scientists plan, build, test, deploy, and govern AI-based products. The rapidly growing capabilities of AI-based systems have started inviting questions from business stakeholders, including customers and partners, about the impact, governance, ethics, and accountability of AI-based products that are made part of different business processes/workflows. No longer can a company afford to hide such details behind IP-related or privacy concerns.
In this post, you will learn about some of the AI guiding principles that you could set for your business. These guiding principles are based on the AI principles Google has published for developing AI-based products. The following is the list of these AI principles:
The following diagram represents the AI guiding principles for Ethical AI:
AI/machine learning models should be built to solve complex business problems while ensuring that the models' benefits outweigh the risks they pose. The following are examples of the different types of risks posed by such models:
AI/ML models often get trained on datasets under the assumption, or out of ignorance, that the selected data is unbiased. The reality is often different. While building models, both the feature set and the data associated with these features need to be checked for bias. Bias needs to be tested during both:
Let's take a few examples to understand bias in training datasets:
One must understand that there are two different kinds of bias, depending on whether the bias stems from experience or from discrimination. A doctor could use her experience to classify a patient as suffering from a disease or not; this can be called good bias. Alternatively, a model which fails to recognize people of skin colors other than white is discriminatory in nature; this could be called bad bias. The goal is to detect bad bias and eliminate it, either during the model training phase or after the model is built, as in the sketch below.
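To make this concrete, here is a minimal sketch in Python (pandas assumed available) of one way to screen a training dataset for bad bias: comparing positive-label rates across groups of a sensitive feature. The column names, the data, and the 0.2 gap threshold are all hypothetical illustrations, not a prescribed standard.

```python
# Minimal sketch: compare positive-label rates across groups of a
# sensitive feature to flag potential discriminatory (bad) bias.
import pandas as pd

# Hypothetical training data; in practice, load your own dataset.
df = pd.DataFrame({
    "skin_tone": ["light", "light", "dark", "dark", "light", "dark"],
    "approved":  [1, 1, 0, 0, 1, 1],
})

# Positive-label rate per group; large gaps suggest biased labels or
# sampling that should be investigated before training a model.
rates = df.groupby("skin_tone")["approved"].mean()
print(rates)

# Simple demographic-parity style check with a hypothetical threshold.
gap = rates.max() - rates.min()
if gap > 0.2:
    print(f"Warning: label-rate gap of {gap:.2f} across groups; investigate for bias.")
```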
Model performance should be examined to minimize false positives/negatives appropriately, so that the associated business functions are free from risk. Let's take the example of a machine learning model (in the accounts receivable domain) which predicts whether a buyer's order can be delivered based on the buyer's credit score. If the model incorrectly predicts that the order should be delivered, the supplier is at risk of not receiving invoice payments on time, which would further impact revenue collection. Such models should not be moved into production, primarily because they could impact the business negatively and lead to loss of revenue. The sketch below illustrates one such check.
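Here is a hedged sketch of how such an examination could look in practice, using scikit-learn's confusion matrix. The labels, the business interpretation of false positives/negatives, and the false-positive-rate threshold are made-up illustrations for the accounts receivable example above, not a prescribed policy.

```python
# Minimal sketch: inspect false positives/negatives before promoting
# a model to production. In the accounts receivable example, a false
# positive (predicting "deliver" for a buyer who won't pay) is costly.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = buyer actually pays on time (made up)
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]  # 1 = model says "deliver the order"

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives (risky deliveries): {fp}")
print(f"false negatives (lost sales):       {fn}")

# Hypothetical governance gate: flag the model if the false positive
# rate exceeds a threshold the business has agreed upon.
fpr = fp / (fp + tn)
if fpr > 0.25:
    print(f"FPR {fpr:.2f} too high; model not cleared for production.")
```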
Models should be trustworthy or explainable. Customers relying on a model's prediction could ask which features contributed to it. Keeping this in mind, one should either be able to explain how the prediction was made, or avoid black-box models where it is difficult to explain the predictions and instead use simpler linear models, as in the sketch below.
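As one possible illustration of preferring an explainable model, the sketch below fits a simple logistic regression so that each feature's contribution to a prediction can be read directly off the coefficients. The feature names and data are hypothetical.

```python
# Minimal sketch: a linear model whose coefficients make each
# prediction explainable, feature by feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["credit_score", "past_due_invoices", "order_amount"]
X = np.array([[720, 0, 1000], [580, 3, 5000], [690, 1, 2000], [550, 4, 800]])
y = np.array([1, 0, 1, 0])  # 1 = deliver order (made-up labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Per-feature contribution to a single prediction's log-odds,
# which can be shared with a customer who asks "why?".
x_new = np.array([640, 2, 1500])
contributions = model.coef_[0] * x_new
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.3f}")
```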
As part of governance practice, customer data privacy should be respected. If customers are informed that their data privacy will be maintained and that their data won't be used for any business-related purpose without notifying them and obtaining their permission, this should be respected and enforced as part of ML model review practices. Businesses should set up a QA or audit team which makes sure that the customer data privacy agreement is always respected.
The machine learning model life-cycle includes aspects related to some of the following:
As part of AI guiding principles, continuous governance controls should be put in place to audit all of the above aspects. The following represents some of the governance controls:
It must be ensured that AI models are built using the best tools and frameworks. In addition, it must also be ensured that people involved in building AI models are trained at regular intervals on best practices, using up-to-date educational materials. The tools and frameworks must ensure some of the following:
In this post, you learned about the AI guiding principles which you may want to consider setting for your AI/ML team and business stakeholders, including executive management, customers, and partners, for developing and governing AI-based solutions. Some of the most important AI guiding principles include safety, bias, and trustability/explainability.