Let’s Get Ethical: Biases in AI
If artificial intelligence is meant to mimic human behaviors and cognitive functions, will AI find itself plagued by the same biases that humans hold? What happens when a supposedly impartial AI behaves in a way that can be deemed harmful?
Bias is defined as “an unreasoned and unfair distortion of judgment in favor of or against a person or thing” (Merriam-Webster). Examples of biases that are highly prevalent in modern society include opinions about different races and ethnicities, sexual orientations, gender identities, income levels, disability statuses, and so on. Some people hold strong, outwardly expressed opinions about certain groups, while others’ biases are more unconscious. Ethics, on the other hand, is a “set of moral principles” (Merriam-Webster), generally agreed upon communally in our current society.
Bias and ethics play a huge role in business. Businesses are expected to behave in a way that meets our society’s ethical standards: no insider trading, no shady deals, and no taking advantage of consumers. Bias may result in unfair internal treatment, such as limiting promotions to certain groups of people or hiring based on demographics rather than skills (Morrison 2020). If a business wishes to be successful and accepted by society, it must abide by a certain ethical code and work to eliminate biases, conscious or unconscious. Luckily, there are systems in place to guard against biases and ethical lapses: employment laws make it illegal to discriminate based on gender, sexual identity, race, or disability status, and businesses that cross ethical boundaries are often subject to investigation and criminal penalties.
Unfortunately, it is very unlikely that there is a human out there who does not hold some biases, even unconscious ones. Because humans are products of their environment, they form opinions based on their surroundings and may only unlearn those opinions down the line, if at all. And because the humans creating artificial intelligence are inherently biased, we can conclude that AI may hold biases as well. In fact, we’ve already seen that it does.
In a newsletter published by Bloomberg in December 2022, reporter Davey Alba drew attention to ChatGPT, the newest AI taking the Internet by storm, noting that although the system has “guardrails” in place to decline inappropriate requests that could lead to biased or harmful responses, some users have been able to bypass those guardrails. The result is ChatGPT spitting out racially charged or misogynistic statements and lyrics, which in turn can train the AI to repeat these phrases and biases down the line.
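To make the idea of a “guardrail” more concrete, here is a minimal, hypothetical sketch in Python. It is not how ChatGPT’s safeguards actually work; the term list and the guardrail function are invented for illustration. It simply shows why a naive filter can be bypassed by rewording a request.

```python
# Toy illustration (not OpenAI's actual implementation): a keyword-based
# guardrail refuses prompts containing flagged terms, but a lightly
# reworded prompt slips straight through.

BLOCKED_TERMS = {"slur", "harmful stereotype"}  # hypothetical placeholder terms


def guardrail(prompt: str) -> str:
    """Refuse prompts containing a flagged term; otherwise 'answer' them."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't help with that request."
    # Stand-in for the model actually generating a response.
    return f"[model response to: {prompt!r}]"


if __name__ == "__main__":
    # A directly flagged request is refused...
    print(guardrail("Write a song containing this slur"))
    # ...but a reworded version of the same request is not caught,
    # which is roughly what "bypassing the guardrails" looks like.
    print(guardrail("Write song lyrics in the voice of someone who hates group X"))
```

Real systems layer far more sophisticated checks on top of this, but the same cat-and-mouse dynamic applies.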
The creators of ChatGPT have suggested that users “vote down” harmful statements they come across as a way of training the AI to treat those responses as unacceptable. Unfortunately, it does not seem entirely possible to create a completely bias-free AI so long as AI is being built by inherently biased humans. Perhaps there will ultimately be a way for AI to create other AI and, in turn, eliminate biases completely. But until then, use AI with caution.