Dr. Pieter Buteneers in an interview about machine learning
Alexander Görlach: Your expertise is in machine learning; what can algorithms do? What can’t they do?
Pieter Buteneers: Given enough examples, these algorithms can do whatever you want them to do. This means that ML algorithms are in theory capable of doing everything humans do, even the most complicated tasks. The catch is that they need a lot, and I really mean a lot, of examples. Where a human needs to look at something only once to learn it, these algorithms need hundreds of examples in the best case, and often tens of thousands.
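The sample-efficiency point can be illustrated with a toy sketch (not from the interview): a simple nearest-centroid classifier on made-up synthetic data. With only one labelled example per class, its estimate of each class centre is noisy; with hundreds of examples, the centroids stabilise and accuracy becomes reliably high. All data and class positions below are invented for illustration.

```python
import random

# Toy illustration: how many labelled examples does a simple
# nearest-centroid classifier need? Synthetic 2-D data, two classes.
random.seed(0)

def sample(cls, n):
    # Class 0 is centred at (0, 0), class 1 at (3, 3), unit Gaussian noise.
    c = 0.0 if cls == 0 else 3.0
    return [(random.gauss(c, 1), random.gauss(c, 1)) for _ in range(n)]

def centroid(pts):
    # Mean of a list of 2-D points.
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def accuracy(n_train):
    # Estimate each class centre from n_train examples, then score
    # on a fresh test set of 250 points per class.
    c0 = centroid(sample(0, n_train))
    c1 = centroid(sample(1, n_train))
    test = [(p, 0) for p in sample(0, 250)] + [(p, 1) for p in sample(1, 250)]

    def predict(p):
        d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
        d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
        return 0 if d0 < d1 else 1

    return sum(predict(p) == y for p, y in test) / len(test)

acc_few = accuracy(1)     # one example per class: noisy centroid estimate
acc_many = accuracy(200)  # hundreds of examples per class: stable estimate
print(acc_few, acc_many)
```

The human analogue of one-shot learning corresponds to `n_train=1` here; the gap between the two accuracies is the "a lot of examples" cost the interview describes.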
Everything that is repetitive enough will be automated by these algorithms in the near future, which means that repetitive tasks, and even entire jobs made up of them, will be replaced in the not so distant future.
Luckily, there are quite a few tasks that these algorithms have trouble ‘understanding’: specifically, those tasks that require context. We humans are genetically preprogrammed to be good at quite a lot of things, empathy for example. We are really good at feeling what another person feels even if we only hear them on the phone. Since ML algorithms have to learn everything from examples, it will take a lot of trial and error before they can respond in a way that makes the other person feel correctly understood.
Another example of something humans are really good at is creating new things from a combination of only slightly related things. Obviously we also make mistakes when we invent new things, but we are able to steer our creativity in such a way that we need only a couple of tries at most to get it right. For ML algorithms to do this, they would have to try on the order of a million times to come even close.
Alexander Görlach: The example of the world turning into one giant coffee plantation because an algorithm goes wild has become the go-to example for those who want to illustrate the catastrophic impact AI could have. Is it an exaggeration, or will AI kill us all if we don’t act now?
Pieter Buteneers: Luckily we don’t have to act now. Yet. AI is not at a stage yet where it can improve itself in a way that is beyond human control. We are still at least a decade away from the point in time that we call the singularity. But once we reach the point where AI can build AI faster than a human can, we are doomed.
So, in a way we have to prepare now for when that time comes so that we can make sure future AI behaves morally.
Alexander Görlach: To what extent can algorithms be moral, or let’s say, be ...