The Russian chess player Garry Kasparov in 2005 (not during his match against the chess computer Deep Blue).
In 1997, the computer Deep Blue defeated Garry Kasparov, then the best chess player in the world. In 2011, the computer Watson won the TV quiz show Jeopardy!, and in 2016 AlphaGo defeated the world champion at the board game Go. Humans are rapidly losing out to computers.
Algorithms are advancing in many more places than just games. They check your tax return, determine what you see online and process payments. As a result, everyone deals with algorithms, whether they notice it or not. That often has advantages, but it can also go wrong. Algorithms can discriminate and make mistakes that no human would make. Should we let computers rule our world? In this Q&A you will learn what an algorithm is, what algorithms can and cannot do, and how that can lead to discrimination.
1. What is an Algorithm?
The word algorithm derives from the name of the 9th-century Persian mathematician Muhammad ibn Musa al-Khwarizmi, who described systematic series of calculations.
An algorithm is a set of rules that you execute to get a certain result. That covers a lot: a recipe for apple pie, the rules that tell you how to multiply numbers, or a computer program for a traffic light that allows traffic from only one direction at a time onto an intersection.
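The traffic-light rule can even be sketched in a few lines of code. This is a toy illustration, not a real traffic controller; the direction names and the fixed rotation are invented for the example.

```python
# Toy sketch of an algorithm: a fixed set of rules for a traffic light.
# The rule is simple: give each direction a green light in turn, and
# never let more than one direction be green at the same time.

DIRECTIONS = ["north", "east", "south", "west"]

def light_states(green_direction):
    """Return the light for every direction: exactly one may be green."""
    return {d: ("green" if d == green_direction else "red")
            for d in DIRECTIONS}

def traffic_light_cycle(steps):
    """Yield the lights for a number of steps, rotating through directions."""
    for step in range(steps):
        yield light_states(DIRECTIONS[step % len(DIRECTIONS)])

for lights in traffic_light_cycle(2):
    print(lights)
```

However simple, this already has everything that makes something an algorithm: fixed rules, executed step by step, producing a predictable result.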
Usually, when we speak of algorithms, we mean applications in technology, such as apps or computer programs. Algorithms are used, for example, to sort, categorize, match or filter information. You will find them in a search engine that gives you the most relevant information on a certain subject, or in the computer of a self-driving car that distinguishes a cyclist from a car. Think also of a dating app that searches for a match based on profiles, or a speech recognition algorithm that isolates your voice from distracting ambient noise.
2. What are algorithms used for?
The articles you read online, the people you come into contact with via social media, the product recommendations in online stores, the management of your finances and the monitoring of your health: it is no exaggeration to say that algorithms have penetrated almost every aspect of our lives. NEMO Kennislink spoke to several people in whose lives algorithms play very different roles. Here we highlight three who benefit, or have benefited, from algorithms.
Carlijn van der Kulk, for example, buys almost everything online: from weekly groceries to bikinis and candles. Algorithms run the web shops and make recommendations based on what Van der Kulk bought earlier. In addition, various companies track her purchasing behaviour, so that she also sees advertisements on other websites for items she viewed online. Algorithms are involved in the payments too: they make a secure connection with the bank and sometimes check the customer’s creditworthiness in advance.
Gerita Groenbos is in weekly contact with fellow sufferers via a group on Facebook. Her daughter has the rare Snijders Blok-Campeau syndrome, caused by a gene mutation that occurs in only a few hundred people worldwide. In the protected environment of the Facebook group, members ask each other questions and keep each other informed about their personal development. It doesn’t matter that they don’t speak each other’s languages; Facebook’s algorithm takes care of the translation.
For years, Pieter Hoexum took his daughters to school on foot, but when they entered secondary school, that source of exercise disappeared. His wife gave him a pedometer that motivates him to take 250 steps every hour. A simple notification from the app is enough to keep him moving. “Even if it’s just putting out the bin,” he says. “I move a lot more because of my pedometer.”
3. What can algorithms do and what not?
From recommending your future partner to spotting tumors on a medical scan, from closing a flood barrier at exactly the right moment to trading shares on the stock exchange at lightning speed: there seem to be fewer and fewer things that algorithms can’t do, and at the things they can do, they are in many ways better than humans.
Are you starting to worry that people will soon no longer be needed for anything? There are some caveats to the examples above. An algorithm is only very good at the one specific task it was designed for. An algorithm that controls a self-driving car would incur huge losses on the stock market, and letting your dating app operate a storm surge barrier is a recipe for disaster.
An algorithm has no awareness of the world. It effortlessly recognizes a horse in a photo, but has no idea what a horse actually is. It is good at what it was trained for: recognizing objects in a photo. If a computer beats a human in a knowledge game, as happened in the American TV quiz Jeopardy! in 2011 (https://www.nemokennislink.nl/publicaties/watson-weet-het-beter/), it is because the algorithm could quickly process an enormous amount of information and recognize in the question what information the quizmaster was looking for. The algorithm does not possess ‘general intelligence’ like a human’s, the kind that makes it possible to think outside the box. In fact, an algorithm is very bad at that. It won’t sound the alarm the moment it encounters something strange, unless programmers have explicitly built that function into it.
The photo was produced by a so-called ‘smart blending’ algorithm, which combines an image of a skier with a background photo. The algorithm recognizes only part of the skier.
4. How can an algorithm discriminate?
Discriminatory algorithms have already been identified in various places, such as in processing tax returns or in calculating the chance that a person convicted of a crime will reoffend. But how can an algorithm discriminate without awareness and without prejudice? An algorithm usually looks at different types of information, combines them in a predetermined way and produces a result. Discrimination can enter that process through the way the algorithm makes its decisions, and through bias in the information itself.
The first shortcoming was at the root of the Dutch childcare benefits scandal, of which Sarah Alarda-Hensen was also a victim. Between 2004 and 2019, the tax authorities wrongly labeled tens of thousands of Dutch people as fraudsters using an algorithm that was supposed to detect fraud with childcare benefits. An investigation by the Dutch Data Protection Authority showed that the algorithm took into account factors such as origin and dual nationality. That was unjust and led to discriminatory decisions: a person’s origin should play no role in determining whether someone is committing fraud.
But the information an algorithm works with can itself contain prejudices, as British mathematician Hannah Fry shows in her book Algorithms in Power. In the United States, the Compas algorithm calculates the probability that a convicted person will reoffend, and judges use that information in sentencing. Afterwards (that is, once the actual recidivism was known) it turned out that the recidivism risk of Black defendants had been systematically overestimated. One reason was that Black Americans are disproportionately targeted by the police for some offenses. The algorithm thus perpetuates the discrimination already in the system, even when factors like race are not taken into account: prejudice still creeps in through factors such as where someone lives, which can correlate with race.
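This mechanism, where bias survives even though race is never an input, can be shown with a toy risk score. Everything below (the postcodes, arrest rates and weights) is invented for illustration; this is emphatically not the real Compas model.

```python
# Toy illustration of proxy discrimination (all numbers are made up):
# a risk score that never looks at race can still treat groups
# differently when it uses a feature, like postcode, that correlates
# with race in the historical data it was built on.

# Invented historical arrest rates per postcode. Suppose area "A" was
# policed far more heavily, so recorded arrests are inflated there.
ARRESTS_PER_1000 = {"A": 120, "B": 40}

def risk_score(prior_convictions, postcode):
    """Score built only from 'neutral' features - race is never used."""
    return 2 * prior_convictions + ARRESTS_PER_1000[postcode] / 10

# Two people with identical behaviour, in different neighbourhoods:
person_a = risk_score(prior_convictions=1, postcode="A")
person_b = risk_score(prior_convictions=1, postcode="B")
print(person_a, person_b)  # the resident of "A" scores much higher
```

The score contains no forbidden attribute at all, yet it reproduces the over-policing baked into the arrest statistics: the bias enters through the data, not through an explicit rule.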
It seems difficult to make neutral technology. According to Marleen Stikker, founder of the Waag research institute, it doesn’t even exist. In the forthcoming book Down to Earth by Lianne Tijhaar she says: “It is often said that it depends on what you do with a technology: you can build a house with a hammer, but you can also smash someone’s head in with it. Yet that hammer has properties that make it possible to deal hard blows. There is meaning embedded in it: the use is already largely in the design. This is even more evident with software and algorithms. Managers switch to data-driven policies on the assumption that the data is objective. The reality is that ‘the data’ does not simply exist: you always make a selection. If you want to measure the safety of a municipality, you first need a definition of safety. That is why it is important to know who creates the database. Who pays that person? What interests lie behind it?”
In the theme Your data and you, NEMO Kennislink collaborates with NEMO in the exhibition Bits of You. This exhibition for adults, in which you experience how the data traces we leave behind influence our lives, can be seen until January 9, 2022 in De Studio van NEMO on the Marineterrein in Amsterdam.
Source: Kennislink by www.nemokennislink.nl.