Today, Machine Learning systems can learn on their own from the data they are given. The next step in AI's evolution towards human-level intelligence is machine reasoning: the ability to apply prior knowledge to new situations.

Since ancient times, humans have been interested in finding systematic approaches to reasoning and logical thinking.

Now, we want to make machines “think” like us and endow them with the reasoning ability that, unfortunately, we don’t quite understand ourselves.

But, why do we need machines that can deconstruct truths and validate reasons like we do?

One of our most recent AI-related posts discusses the story of an AI system that can detect skin cancer more accurately than dermatologists.

No doubt, this is a big deal, since early diagnosis is one of the most effective routes to successful cancer treatment.

It's much easier to build AI software that recognizes a set of data patterns well enough to diagnose skin cancer than AI that understands what skin cancer actually is.

We want Machine Reasoning AI that not only solves the problem but, before that, understands what the problem is.

Machine Learning Systems Aren’t Smart Enough

Many different AI systems can achieve performance comparable to that of humans without having to imitate human intelligence processes.

Let’s take game AI as an example.

The most advanced game-playing AI systems like Google’s AlphaGo can outperform humans, but can’t show human-like intelligence.

These systems mainly consist of a well-optimized game-tree search algorithm that assesses possible moves and chooses the best one in response to the opponent's move.

While the end result looks like “intelligence”, in the background, it’s only a powerful combinatorial search engine. Human cognition doesn’t work this way.
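The game-tree idea described above can be sketched with plain minimax over a toy hand-coded tree. This is a deliberately minimal illustration of combinatorial search, not how AlphaGo actually works (AlphaGo combines a far more sophisticated Monte Carlo tree search with neural networks); the tree and scores below are invented.

```python
def minimax(node, maximizing):
    """Return the best achievable score from this node.

    `node` is either a numeric leaf score (game over) or a list of
    child nodes. Players alternate: one maximizes, the other minimizes.
    """
    if isinstance(node, (int, float)):  # leaf: outcome is known
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizing player picks a branch, the opponent replies.
tree = [
    [3, 5],   # branch A: opponent picks the minimum -> 3
    [2, 9],   # branch B: opponent picks the minimum -> 2
]
print(minimax(tree, maximizing=True))  # -> 3: branch A is safest
```

The "intelligence" here is nothing but exhaustive lookahead: the engine never knows what the game *is*, only which branch scores best.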

Read More: The Difference Between AI, Machine Learning, and Deep Learning

Machine Learning systems can learn on their own, but only by recognizing patterns in large datasets and making decisions based on similar situations.

Machine Learning is dependent on large amounts of data to be able to predict outcomes.

If there are few or no structured inputs to extract patterns from, Machine Learning systems can't solve a new problem that has no apparent relation to their prior knowledge.
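The "match new cases to similar past cases" behavior described above can be sketched with a toy 1-nearest-neighbour classifier: with relevant prior data it predicts sensibly, and with none it has nothing to generalize from. The data and labels are invented for illustration.

```python
def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`.

    `train` is a list of (value, label) pairs; `query` is a number.
    """
    if not train:
        raise ValueError("no prior data: nothing to generalize from")
    _, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

# Past "experience": body-temperature readings with known outcomes.
readings = [(36.6, "healthy"), (37.0, "healthy"), (39.2, "fever")]
print(nearest_neighbour(readings, 38.9))  # -> fever (closest to 39.2)
```

The prediction is only as good as the resemblance between the new input and the stored examples; remove the data and the system is helpless, which is exactly the limitation the article describes.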

Even Deep Neural Networks that try to replicate the way the brain works only have a distant similarity to the structure of our brains.

Our concept of a true AI is a synthetic brain with a faculty for cognition. That's not far from what the research community is after, minus the "anthropomorphic" part.

We need machines that can generate and process data and learn from past experiences to face new challenges, like humans do, but not necessarily the exact way they do it.

Anyone who’s stubbed their toe or walked into a room and forgotten the reason for being there knows that our brains have flaws on every level. With a synthetic brain, these are flaws that can be changed, improved on, or just plain deleted.

From Learning Machines to Reasoning Machines

We have seen AI algorithms (Deep Blue, AlphaGo, and AlphaGo Zero) that can perform "reasoning" within the very limited frames of strategy games like chess or Go.

The AlphaGo algorithm was designed to play Go, and it's proven its chops in that regard. AlphaGo Zero is far superior to the AlphaGo that beat the world's human champion.

Yet no version of AlphaGo can move a single pawn on a chessboard, because it has no chess game tree to draw moves from.

We’re still far from machines capable of generic reasoning in a way that enables them to build on and optimize their existing knowledge to solve new problems.

Another example of a widely-used Machine Learning system is Facebook’s News Feed, which is good at personalizing individual feeds based on the member’s past interactions.

However, with a brand-new account where the member has yet to set any preferences or perform any activity, the system would be in the dark about which content to surface in their feed.

Without structured input data, and lots of it, there'd be no patterns for Machine Learning systems to identify and base predictions on.

Reasoning Machines, on the other hand, train on and learn from available data, like Machine Learning systems, but tackle new problems with a deductive and inductive reasoning approach.

This is what sets Machine Reasoning apart from Machine Learning.

In a paper on Machine Reasoning, Léon Bottou, one of Facebook’s AI Research experts, gives us this definition:

“A plausible definition of ‘reasoning’ could be algebraically manipulating previously acquired knowledge in order to answer a new question.”

Computer scientist Jerry Kaplan, in his book "Artificial Intelligence: What Everyone Needs to Know," describes Reasoning AI as systems that deconstruct tasks requiring expertise into two components: a "knowledge base" – a collection of facts, rules, and relationships about a specific domain of interest, represented in symbolic form – and a general-purpose "inference engine" that describes how to manipulate and combine these symbols.

Kaplan thinks that reasoning AI can be programmed relatively easily using facts and rules, and that "knowledge engineers" would build such systems by interviewing practitioners and "incrementally incorporating their expertise into computer programs."
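Kaplan's split between a symbolic knowledge base and a general-purpose inference engine can be sketched as a toy forward-chaining rule engine: keep firing rules whose premises hold until no new facts appear. The facts and rules below (loosely echoing the skin-cancer example) are invented for illustration, not taken from any real diagnostic system.

```python
# Knowledge base: symbolic facts plus (premises, conclusion) rules.
facts = {"lesion_is_asymmetric", "lesion_has_irregular_border"}
rules = [
    ({"lesion_is_asymmetric", "lesion_has_irregular_border"},
     "lesion_is_suspicious"),
    ({"lesion_is_suspicious"}, "refer_to_dermatologist"),
]

def infer(facts, rules):
    """Inference engine: forward-chain rules to a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("refer_to_dermatologist" in infer(facts, rules))  # -> True
```

Note how the engine knows nothing about dermatology: the domain expertise lives entirely in the knowledge base, which is exactly the division of labor Kaplan describes.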

At the moment, all of these systems are nothing but future plans and pipe dreams. However, for Industry 4.0 to further develop, our AI systems need to become more adaptive, intuitive, and flexible in their uses and abilities. It’s hard to say when we will see the first successful Machine Reasoning system, but it’s likely that it’s not as far away as you think.

Read More: AI 101: Why AI is the Next Step in our Evolution

How far could we be from AI that “thinks”, not necessarily in a human-like way?
