28 AUG, 2019 by Niamh Reed, author for Datafloq
For a technology ruled by logic, AI presents a series of illogical conflicts. Indeed, we’re still grappling with how to define what AI is and what it means, even as we develop and deploy it at an unprecedented scale.
Here’s a closer look at four of the most hotly discussed AI paradoxes, and what they mean for today’s artificial intelligence.
Paradox 1: Moravec’s paradox
Moravec’s paradox revolves around the abilities of AI tools. It observes that ‘high-level reasoning’ requires less computation than ‘low-level sensorimotor skills’.
In other words, the tricky things like advanced mathematics and logic take less computation for AI to pick up. We have put conscious effort into learning these tasks ourselves, and so we know how to teach them to AI.
But when it comes to ‘simple’ skills — those we learn naturally as babies and toddlers — it’s a different story. These are skills such as sight, speech, comprehension and movement. And having an AI do these things is much harder, requiring more computation and effort.
This is why we already have AI that can handle complex mathematics, yet we’re only now starting to see AI capable of ‘seeing’ images (image recognition). Or why we’ve had AI capable of winning logical games against us since the ’90s, but it’s only now starting to understand our speech (natural language processing).
Paradox 2: The AI effect paradox
The AI effect paradox is essentially that what is AI, isn’t AI.
Also known simply as the AI effect, this paradox sees AI tools lose their AI label over time, usually on the grounds that they’re not ‘real’ intelligence (despite, that is, no change to the technology behind them). So, what was once considered AI isn’t AI anymore.
There are a few reasons why, in the past, AI tools have lost their AI title. For instance, there was a time when the ‘AI’ label held a stigma. Funding for AI development had dried up, and many thought the field an empty promise. As a result, tools once considered AI adopted new names.
Or, as another example, the catch-all nature of the term meant that more specific terms replaced the vague ‘AI’ label. Think ‘machine learning’ and ‘facial recognition’, rather than ‘AI’ covering both tools.
As for today’s AI, there’s the question as to whether any of it will ‘count’ as AI in the future.
Paradox 3: The decision-making paradox
When fed the same problem and data, different decision-making methods will yield different results. This is the decision-making paradox: it forces us to choose the ‘best’ decision-making method, which is itself a decision-making problem.
While not directly related to artificial intelligence, it does pose an issue for AI technology. Namely, it means that in theory, two AI tools could offer different outputs (decisions) from the same input (problem).
So, there’s no way to say with any certainty that an AI-sourced decision is the ‘best’ decision. That’s without mentioning the potential for some AI tools to learn bias and incorporate it into their decision-making.
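To make the paradox concrete, here’s a minimal sketch (the options, scores and weights are all hypothetical) of two common decision rules ranking the same data differently. A weighted-sum rule rewards the highest overall total, while a maximin rule rewards the option with the strongest worst case, and the same two options come out in a different order under each.

```python
# Two hypothetical options, each scored on two criteria (higher is better).
options = {
    "A": [9, 2],  # excels on criterion 1, weak on criterion 2
    "B": [5, 5],  # balanced across both criteria
}

def weighted_sum(scores, weights=(0.5, 0.5)):
    """Pick the option with the highest weighted total score."""
    return max(scores, key=lambda o: sum(w * s for w, s in zip(weights, scores[o])))

def maximin(scores):
    """Pick the option whose worst criterion score is highest."""
    return max(scores, key=lambda o: min(scores[o]))

print(weighted_sum(options))  # "A" wins: 5.5 beats B's 5.0
print(maximin(options))       # "B" wins: worst score of 5 beats A's 2
```

Neither answer is wrong; each method encodes a different idea of ‘best’, which is exactly why two AI systems built on different decision rules can disagree on identical inputs.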
This relates to some of the calls for AI decision-making to be utterly transparent. We must understand how an AI reached a decision, so we can decide whether it’s the best decision.
Paradox 4: The AI paradox
As AI gets better and settles into more of our jobs, it’s easy to see why many people believe that robots will take all jobs, leaving us humans out of work. But by entering the workplace and taking over the robotic tasks, AI is driving humans to upskill and take on work that is human-oriented. That is, the things that AI cannot do.
For instance, higher-level decision making or problem-solving. Or human interactions in customer service and care. Or innovating and inventing the future.
So, paradoxically, more robots mean less robotic work for humans.
The paradoxes in AI
Artificial intelligence has more than its share of paradoxes to work with, around and against. Some of AI’s paradoxes have happened, some might, and others are already being overcome.
The future of AI could bring with it the discovery of more paradoxes like these. Or, it might bury these paradoxes as problems of the past. Only time will tell.
Niamh Reed works in content creation at Parker Software, a leading UK software house that offers live chat software and business process automation to businesses worldwide. She spends most of her time writing articles spanning topics such as programming, software development, customer service and user experience. During her downtime, she writes fiction, plays the violin, and hip-throws people twice her size in jiu-jitsu.