Artificial Intelligence might be biased against us, but are we biased against AI?

The massive introduction of AI tools for the general public that we are witnessing these days suggests we are quickly entering the era of AI democratization. As this transition takes place, it is critical that we prepare ourselves to assess AI's opportunities and risks as objectively as possible.

Interestingly, the topic of AI bias has generated a great deal of debate and study. However, we have been overlooking the importance of asking the opposite question: AI might be biased against us, but are we biased against AI, and what are the consequences?

Unintentional biases toward AI

Unconscious biases are beliefs and stereotypes that influence our attitudes and behaviors toward people without our awareness. The same can apply to technology, and we are generally unaware of it.

It is important to recognize that unconscious biases toward AI can lead to emotions, attitudes, and behaviors such as fear, avoidance, or resistance to adoption, but also to an overestimation of AI's capabilities.

For instance, science fiction depicting dangerous robots and AI might play an important role in maintaining these biases, encouraging thoughts such as “Robots will destroy humanity”, “AI is dangerous”, or “AI will control us”.

There are many types of biases that might influence how we relate to AI. Confirmation bias, for instance, is our inclination to seek out or interpret information in a way that is consistent with our pre-existing beliefs. If we already hold a very negative conception of AI, we might tend to pay attention only to news that reinforces the idea that AI is dangerous and disregard the positive stories. The same can happen in reverse: if we have an overly positive perception of AI, we might tend to underestimate information about its risks and threats.

Biases toward technology shape our interactions with AI, but more critically, they can also influence how AI evolves within society, and even the outcomes we obtain from these tools.

What can we do about it?

Becoming aware that we might hold these biases is a good remedy in itself. The next time you read news or articles about AI, ask yourself: Am I interpreting this through a biased or an objective lens? Then read it again, reflect, and notice any differences.

Finally, it is critical to keep in mind that being afraid of AI and being aware of its risks are two different things. Being afraid of AI prevents you from seeing its opportunities. Being aware of its risks and limitations helps you choose the right opportunities. Gaining maturity in this regard will ultimately strengthen our capacity to evaluate the opportunities and risks of AI more objectively.

For more comprehensive preparation

If you are seeking corporate training on addressing AI biases within your organization and building a healthy, inclusive AI culture, explore these currently open training programs:

Workshop: Building a Healthy Culture of AI for the Organization

Trainer Development: Coaching around AI