Evolution of AI: eliminating bias must be a priority


Artificial intelligence can make life and work much easier in the years to come. But flaws exist and require careful monitoring.

Just as electricity transformed the last century, AI will transform this era. It can help society reach new heights, making us healthier, more prosperous and more sustainable.

But as we celebrate and anticipate the enormous potential of AI for economic and social good, there are questions and concerns. People worry about the way AI makes decisions. What information does it use? Is it objective and fair or is it biased? And how do we know that?

These concerns need to be resolved, because no matter how big or how exciting its potential, AI cannot succeed if society decides not to trust it. In IBM’s 2021 Global AI Adoption Index, 86% of companies surveyed believe consumers are more likely to choose AI services from a company that uses an ethical framework and provides transparency about its data and business models.

I believe establishing fundamentals is the starting point for fairer, more responsible and more inclusive AI. At IBM, we use our three guiding principles of trust and transparency to shape how we develop and deploy AI.

• First, AI systems must be transparent and explainable. When humans develop AI systems and collect the data used to train them, they can consciously or unconsciously inject their own biases into their work, resulting in unfair recommendations. Such biases must be mitigated with appropriate procedures and processes.

• Second, the purpose of AI is to augment human intelligence. AI is not about man versus machine, but man and machine. AI should make us all better at our jobs, and the benefits of the AI age should reach the many, not just the elite.

• Third, data and insights belong to their creator. Our customers’ data, and the insights drawn from it, belong to them, not to us.

Among companies using AI, 91% say their ability to explain how it arrived at a decision is essential. Such transparency can help reduce the bias in AI systems that is cause for concern. Bias can have serious consequences when it influences recommendations in sensitive areas such as recruitment or court decisions.

IBM worked with a bank that wanted to use AI in its loan decision-making process. The bank provided its loan data, which showed that men, all other factors being equal, were more likely to get loans than women.

Letting go of legacy bias

This disparity reflected historical societal prejudice, not real financial measures. We were able to mitigate this bias, but had it gone undetected, the AI system would have learned from that data and perpetuated the bias, meaning fewer women getting loan approvals.
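As a minimal illustration of how such a disparity can be surfaced before training, the sketch below computes a disparate impact ratio on a toy table. The column names, the toy data and the 0.8 rule-of-thumb threshold are assumptions made for this example, not the bank’s actual data or IBM’s actual method.

```python
import pandas as pd

# Hypothetical historical loan data; column names are assumptions for illustration.
loans = pd.DataFrame({
    "gender":   ["M", "F", "M", "F", "M", "F", "M", "F"],
    "approved": [1,    0,   1,   1,   1,   0,   0,   0],
})

# Approval rate per group.
rates = loans.groupby("gender")["approved"].mean()

# Disparate impact: unprivileged group's approval rate divided by the
# privileged group's. A common rule of thumb flags values below 0.8.
disparate_impact = rates["F"] / rates["M"]

print(f"Approval rates:\n{rates}")
print(f"Disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Warning: possible bias against female applicants in the training data.")
```

A check like this only flags a raw disparity; in practice one would also control for the legitimate financial factors mentioned above before concluding the data is biased.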

A lack of diversity within teams developing AI makes it difficult for developers to anticipate biases and their potential impact. Gather diverse teams and you will reduce blind spots and increase the chances of detecting bias.

Educating and training developers is essential, not only on tools and methodologies, but also to raise awareness of their own biases. Another way to mitigate bias is to make sure AI decisions are transparent and explainable. For example, if a patient or healthcare professional wants to know how an AI system came to a given conclusion about a diagnosis or treatment, this should be explained in clear language and terms to anyone who asks.
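As one hedged sketch of what such a plain-language explanation could look like, the example below fits a simple logistic regression on invented data and translates each feature weight into a readable statement. The feature names and data are hypothetical, and a linear model stands in here for whatever explainability technique a real clinical system would use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented example features for a toy diagnosis model; names are assumptions.
feature_names = ["age", "blood_pressure", "cholesterol"]
X = np.array([[50, 130, 200], [60, 150, 240], [35, 110, 180],
              [70, 160, 260], [45, 120, 190], [65, 155, 250]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = condition present

model = LogisticRegression(max_iter=1000).fit(X, y)

# Turn each learned coefficient into a plain-language statement for the asker.
for name, coef in zip(feature_names, model.coef_[0]):
    direction = "raises" if coef > 0 else "lowers"
    print(f"Higher {name} {direction} the predicted risk (weight {coef:+.3f}).")
```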

Of course, achieving equity in AI is not just a technical issue; it also requires the right governance structures, the commitment of company management and a willingness to do the right thing. IBM has established an internal AI ethics committee. It supports initiatives aimed at operationalizing our principles of trust and transparency.

Raising the level of trust in AI systems isn’t just a moral imperative, it’s good business sense. If customers, employees and stakeholders don’t trust AI, our society cannot reap the benefits it can offer. This is an opportunity not to be missed.
