Artificial intelligence (AI) is one of the fastest-growing and most promising technologies, with the potential to revolutionize a wide range of industries, from medicine to the judiciary. Machine learning is a subset of AI in which a system is not explicitly programmed but instead "learns" to make decisions through training. A particularly powerful and currently popular machine learning technique is deep learning, which relies on a neural network of "nodes" arranged in layers and joined by weighted connections. Such networks can be trained on datasets to perform functions out of reach of an ordinary algorithm built on basic logic, such as recognizing and distinguishing different animals in images or controlling self-driving vehicles.

In 2015, DeepMind's AlphaGo AI beat European Go champion Fan Hui in its first match, defeated the world champion in 2016, and then competed online against a variety of the world's best Go players, winning all 60 matches. AlphaGo used a deep learning neural network to determine which moves to play. This level of play was only possible through artificial intelligence, as the game contains approximately 10^761 game states, far too many for a traditional algorithm to evaluate. AlphaGo was trained by analyzing thousands of games played by expert Go players and then playing against itself to improve on its initial knowledge. In 2017, the AlphaGo team revealed a new version of their artificial intelligence called AlphaGo Zero. This AI was not initially trained on human data but learned the game from scratch by repeatedly playing against itself. AlphaGo Zero outperformed the original AlphaGo while using less computing power, because it was not influenced by the inefficient human bias inherent in the data it would otherwise have been given.
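The layered, weighted-connection structure described above can be sketched in a few lines. This is a minimal illustration, not AlphaGo's actual network: the weights below are made up, whereas in a real system they would be learned during training.

```python
import math

def relu(x):
    # Common activation function: passes positive values, zeroes out negatives.
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    # Each node sums its weighted connections to the previous layer,
    # adds a bias, and applies an activation function.
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical tiny network: 2 inputs -> 2 hidden nodes -> 1 output.
inputs = [0.5, -1.0]
hidden = layer(inputs, [[0.8, 0.2], [-0.4, 0.9]], [0.1, 0.0], relu)
output = layer(hidden, [[1.0, -0.5]], [0.0], math.tanh)
print(output)
```

Training consists of adjusting the weights and biases so that the network's outputs move closer to the desired answers on the training data.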
This self-learning approach, however, only works in an artificial environment like Go, where the rules are simple and precisely defined. In the real world, a computer cannot simulate every aspect of an environment, so an AI that solves real-world problems depends on data to train on. As seen with AlphaGo, this introduces human bias into the algorithm's decision making. While such bias is often benign, there are cases where an AI also learns negative human biases. An example of this is the COMPAS algorithm, used to help judges determine an offender's risk of reoffending. An analysis of cases conducted by ProPublica found that the algorithm favored white defendants and assigned higher risk ratings to Black defendants. The program's creators, Northpointe Inc. (now Equivant), insisted that it was not racist, since race is not one of the inputs the algorithm is trained on. In a similar case, a computer science professor building an image recognition program noticed that when his algorithm was trained on public datasets, some even endorsed by Facebook and Microsoft, classic cultural stereotypes, such as associating women with cooking or shopping and men with sports equipment, were not only reproduced but actually amplified. The problem with AI inheriting negative human biases is not merely that the results may offend; when AI-based decision making is applied in real-life contexts, these biases can cause tangible harm.
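A toy sketch (not the researchers' actual method) shows one way amplification can happen: if "cooking" scenes co-occur with women two-to-one in the training data, a naive model that always predicts the majority gender for a cooking scene turns a 67% skew into a 100% one.

```python
from collections import Counter

# Hypothetical training data: (scene, gender) pairs with a 2:1 skew.
train = [("cooking", "woman")] * 20 + [("cooking", "man")] * 10

counts = Counter(gender for scene, gender in train)
majority = counts.most_common(1)[0][0]  # "woman"

# Naive predictor: label every cooking image with the majority gender.
test_scenes = ["cooking"] * 30
predictions = [majority for _ in test_scenes]

train_ratio = counts["woman"] / len(train)                   # ~0.67
pred_ratio = predictions.count("woman") / len(predictions)   # 1.0
print(train_ratio, pred_ratio)
```

Real image classifiers are far more sophisticated, but the underlying pressure is the same: exploiting a correlation in the data improves accuracy on that data, so the model leans on the stereotype harder than the data itself does.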