This is my first post here with a focus on building Responsible AI. If you find this useful, read on and share it with friends. If not, please unsubscribe below.
We spoke about three AI Ethics principles in the CareerPivot to AI Ethics course at the Business School of AI. I want to build on that to show you what these three principles mean when you build AI products along the data science and machine learning modeling lifecycle.
What are AI Ethics Principles?
AI Ethics principles give people working together a common framework and a common language to address all aspects of ethics.
Three AI Ethics Principles I teach in my classes
I teach the principles of fairness, trust and transparency in all my courses.
Fairness is about diversity and inclusion, and about removing data and algorithmic bias. This means removing bias at every stage of data collection, labeling, and the ML modeling lifecycle. This leads to the concept of inclusive AI.
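One simple way to make "removing bias" concrete is to measure it. Below is a minimal sketch of a demographic-parity check, a common fairness metric that compares the rate of positive predictions across demographic groups. The data, group names, and `positive_rate` helper are all illustrative, not from any real product.

```python
# Hypothetical model outputs: (demographic group, predicted positive?).
# Group names and values are made up for illustration.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(preds, group):
    """Share of positive predictions the model gives one group."""
    outcomes = [p for g, p in preds if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate(predictions, "group_a")  # 0.75
rate_b = positive_rate(predictions, "group_b")  # 0.25
parity_gap = abs(rate_a - rate_b)               # 0.5
print(f"demographic parity gap: {parity_gap:.2f}")
```

A gap near zero suggests the model treats the groups similarly on this one metric; a large gap, as in this toy data, is a signal to audit the training data and labels. Demographic parity is only one of several fairness definitions, and they can conflict with each other.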
See AI Ethicist Susanna Raj’s data bias types below, by way of Lionbridge.ai (we have no affiliation). Click the image below to watch the WeeklyWed session (30-minute video) about data bias, from minute 2:57 to minute 30.
Feel free to subscribe to my YouTube or join us at WeeklyWed at Business School of AI for our live speaker lounge for more learning pathways to AI and AI Ethics.
Trust is typically synonymous with respecting users’ privacy, and underpins GDPR laws and compliance. There is plenty of research on building human-centered interfaces that earn the trust of users. This leads to the concept of Trustworthy AI. Trustworthy AI does not necessarily mean ethical AI, according to ethicist and philosopher Adewale Babalola.
Transparency is about letting the user know how the underlying AI makes decisions. This leads to the developing field of explainable AI.
Transparency is typically discussed as algorithmic transparency, and that aspect of AI algorithms is called explainability.
Algorithms are often built as black boxes: neural networks with many layers that deliver a point decision. Examples of such point decisions include recognizing a person from a photo of a face, suggesting an auto-correction, or up-selling a purchase on an ecommerce site.
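A common explainability technique for such black boxes is perturbation: nudge one input at a time and watch how the model's output moves. The sketch below is a toy version of that idea; the `black_box` scoring function, the feature names, and the applicant values are all hypothetical stand-ins for a real opaque model.

```python
# A minimal perturbation-based explanation sketch. In practice the
# model's internals would be hidden; this stand-in is hypothetical.
def black_box(features):
    # Illustrative hidden logic: a weighted score over the features.
    return 0.6 * features["income"] + 0.1 * features["age"] - 0.3 * features["debt"]

def explain(model, features, delta=1.0):
    """Estimate each feature's influence by nudging it and re-scoring."""
    base = model(features)
    influence = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        influence[name] = model(perturbed) - base
    return influence

applicant = {"income": 5.0, "age": 40.0, "debt": 2.0}
print(explain(black_box, applicant))
# income raises the score, debt lowers it, age barely matters
```

Production tools such as LIME and SHAP build on the same perturb-and-observe idea, with much more statistical care. The point here is only that transparency can be engineered, not just promised.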
Are data ethics and AI ethics the same?
As we dig deeper to understand AI Ethics, it is easy to get dragged into a discussion about data consent, privacy, and what is ethical.
It is important to understand that data is the language of AI. AI is trained using data, and AI communicates with the user using data. So bias in data can feed and build a biased AI that is not fair, can be a black box lacking transparency, and can thereby lose users’ trust. Data bias is the classic example where all three AI ethics principles intersect.
It is important to understand data bias and its mitigation in the AI lifecycle separately from the product design decisions you will be complicit in making, such as genderizing an AI and the many assumptions that take agency away from some group of people, each calling on you to do the right thing.
Can you think of a product design that combines trust and transparency? You can find many technology products built using AI where how the algorithm makes decisions, what data is collected, and how that data is used all remain a black box.
Add a comment below with an example.
Also, tell me what you want to learn about next.
Do you want to dig deeper into the path of data bias, or peel back each of the three AI ethics principles to understand them more deeply?
Would you like to pick an industry product and analyze it against the three principles, or would you like to learn about research in the area of AI Ethics and how it ties back to building ethical AI in industry, to help you understand your role in it?
Sign up now so you don’t miss the first issue.
In the meantime, tell your friends!
Till next time!