The Forum for Artificial Fear or Artificial Intelligence?

Published on Wednesday, December 4th, 2019


There’s one thing all people have in common: we scare easily, bordering on the traumatic. Negative experiences leave deeper marks than anything else we feel, and that’s not something you can reason away. There is a biological explanation, though: our sense of fear is regulated by a tiny almond-shaped part of the brain called the amygdala. It sits in one of the oldest parts of the brain, evolutionarily speaking. Thinking and reasoning take place elsewhere, in the prefrontal cortex, which is relatively new and has nothing to do with the amygdala’s processes. So when we feel the urge to run away, or when we are too petrified to move, it’s the amygdala that rules, reducing our reason to a voice crying in the wilderness.

Fear has many layers; it’s a bit like an onion. People are keen to peel the layers off one by one and conquer the fear. But once they have dealt with their fears on one issue, a new set arises, with layers of its own, and the process starts all over again. Apparently we like to worry: it’s easy to stimulate in others, and it’s even a little addictive, especially in the West, where we don’t have to think about survival and are mainly focused on maintaining our skills and achievements.

Fear is also a dominant factor in the societal debate on artificial intelligence (AI). This got worse when AI entrepreneur (!) Elon Musk decided to warn us at regular intervals about its dangers. ‘AI may turn into a malicious, many-headed monster that can destroy our society,’ or words to that effect. On the other hand, artificial intelligence has an endless capacity for further optimizing the performance of our technology, and that would have a positive impact.

The ideas behind AI go back as far as the 1940s, when the first computers were being developed. Today AI is surrounded by hype, which makes it the perfect candidate for the public at large to develop a collective fear around. I’m not counting those who think AI is the best thing since sliced bread, the people who have blind faith in any futuristic gadget that promises to make the world a ‘much better’ place. These are the same people who, until a few years ago, preached the gospel of blockchain. Plenty of reason, therefore, to organize a forum where supporters and opponents can have a fair and open debate.

Incidentally, even after all these years AI is still in its infancy, as is blockchain. We have no idea which parts of it we will be using in, say, ten years’ time. The onion of fear of Artificial Intelligence has plenty of layers to keep us peeling for ages, and there will always be something with which to frighten each other.

But here’s a reality check for scaredy-cats and futurists alike (and for all of us, really): fear, doom scenarios, ignorance of the issues and even malice have always been with us. I’ll give you two examples. Over the past few decades, banks mass-produced faulty financial products based on algorithms. The sky was the limit. The banks’ profit was the driving force; not many people at the top understood the details of these products, but they were only too happy to market them. The public in turn was only too happy to buy them, until the financial crisis burst that bubble.

Since we in the Netherlands exchanged the carriage for the car, tens of thousands of people have died in traffic. Cars are indisputably mass murderers, but never for a second does that keep us from driving them. After an initial decline, the road death toll is rising again. Yet just one fatal accident involving a self-driving car, read: ‘dangerous technology’, is enough to make headlines across the world and to let fear prevail.

Artificial intelligence, machine learning, deep learning, robotics, reinforcement learning, generative adversarial networks, data science: for the moment it’s donkey work, interesting to only a select group, and our organizations are nowhere near ready to start applying it. Maybe in a testbed environment, but that’s it. These are not new futuristic gadgets; they are logical steps in a long evolution of information technology that began with the calculator, or even further back, with an abacus in a cave.

What we in the Netherlands should focus on is getting really good at continuously improving our digital transformation performance, by managing the transitory nature of the technology into which we want to integrate AI later on. That way we can deliver a minimal set of functionality for a maximum effect on the population’s well-being. Once we can do that, and we have our data and regulations in order, and we can be (and are allowed to be) transparent about them, things will improve and citizens will benefit. Meanwhile we’ll have time to optimize privacy and security. And then, once it is useful and we are ready, we can start applying artificial intelligence to our data.

I’m not even sure I will witness real AI in my lifetime. First we’ll have to find a solution for the overcompensation AI shows in its learning processes. People can feel when they’ve learned ‘enough’ to get by in any given situation. AI can learn within a framework, but if it is not given that framework, it will make itself redundant in the end. The question we should answer now is: do we create a forum for Artificial Intelligence, or for Fear?