Differentiating Between Artificial Intelligence, Machine Learning and Deep Learning – Part 3/7

In this blog series, we’ve been exploring how Artificial Intelligence (AI) is used in industrial markets to improve quality and drive innovation. What exactly do we mean by “AI,” and how does it differ from machine learning and deep learning? Let’s take a deeper dive into these terms.

Artificial Intelligence
AI is conceptual in nature. It is a term used in a general way to describe the capacity of machines to imitate some aspect of human intelligence. Accordingly, AI is a label often applied to software programs that perform cognitive tasks that humans can also do, like filtering junk email, interpreting handwriting, detecting fraud, translating speech, and recommending products.
But these applications can perform their functions with or without the aid of advanced analytics and behavioral models, and with or without any capacity to "learn," that is, to independently improve their performance over time.
Certainly, their performance can be greatly enhanced by such attributes, and many would argue that the use of models and the capacity to "learn" are defining characteristics of "true" AI.

Regardless of the level of precision one wishes to attach to AI, however, machine learning is inarguably a core – if not the core – technology used to implement AI strategies today for all types of tasks. But what, precisely, is machine learning?

Machine Learning
Machine learning (ML) is a body of techniques that enables computers to learn how to execute tasks without having every step and every possible option scripted via software programs (i.e., without relying on rules-based programming). This is accomplished by processing data using ML algorithms.

An algorithm is simply a discrete, step-by-step set of procedures, like a recipe, that enables a computer to solve a specific problem or perform a particular task. While algorithms are used for both traditional software programming and ML, the "task" an ML algorithm performs is to build a (reusable) model that describes or predicts something about the world.

Inherent in the “learning” part of the definition is the idea that the models will improve over time with exposure to new data or feedback about the accuracy of predictions or descriptions.
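To make this concrete, here is a minimal sketch in Python using scikit-learn. The sensor features, values, and labels are all hypothetical, purely for illustration; the point is that the algorithm's output is a reusable model that can keep improving as new data arrives:

```python
# A minimal sketch: an ML algorithm turns data into a reusable model.
# All feature names and values below are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Each row: [temperature, vibration]; each label: 0 = normal, 1 = malfunction.
X_initial = np.array([[70.0, 0.2], [72.0, 0.3], [95.0, 1.4], [98.0, 1.6]])
y_initial = np.array([0, 0, 1, 1])

# The algorithm's "task" is to build a model from the data.
model = SGDClassifier(random_state=0)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# The model is reusable: it can predict something about new readings.
print(model.predict([[96.0, 1.5]]))  # ideally [1], i.e., malfunction

# And it can "learn": exposure to new data incrementally improves it.
X_new = np.array([[69.0, 0.1], [99.0, 1.8]])
y_new = np.array([0, 1])
model.partial_fit(X_new, y_new)
```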

A baseline ML model can be developed using training sets of real or simulated data prepared under the supervision of a human being (hence the term "supervised learning").

For example, in the case of specific types of equipment malfunctions, a scientist may feed a computer a set of carefully annotated documents detailing past malfunctions. These documents are used to train the computer to identify patterns associated with various types of malfunctions, thereby enabling it to detect possible future instances of these malfunctions.

Alternatively, a scientist may use simulation software to generate a very large number of simulated malfunction scenarios (alone or in tandem with real, historic data) to help train a malfunction-detection algorithm.
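As a hedged illustration of this supervised workflow, the sketch below stands in for both approaches: it generates simulated "healthy" and "malfunction" records (the feature names and distributions are invented for this example), then trains a classifier on the labeled data:

```python
# Supervised learning sketch: labeled (simulated) malfunction records train
# a classifier to flag future malfunctions. All values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Simulated records reduced to numeric features:
# [pressure, temperature, vibration]; labels: 0 = healthy, 1 = malfunction.
healthy = rng.normal(loc=[100, 70, 0.2], scale=[5, 2, 0.05], size=(200, 3))
faulty = rng.normal(loc=[130, 95, 1.5], scale=[8, 4, 0.3], size=(200, 3))
X = np.vstack([healthy, faulty])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Supervised": the human-provided labels guide the learning.
clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```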

In unsupervised learning, the emphasis shifts from training a computer with labeled real or synthetic examples to equipping it to sift through mountains of data and detect meaningful signals on its own (for example, identifying clusters of performance anomalies for a piece of equipment without being trained to look for specific malfunctions).
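
A minimal sketch of that unsupervised scenario, with entirely hypothetical readings: no labels are supplied, and a clustering algorithm (k-means here) is left to discover groupings on its own:

```python
# Unsupervised sketch: cluster unlabeled equipment readings and let a small,
# unusual cluster surface on its own. All values are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=1)

# Unlabeled readings [pressure, vibration]: mostly normal, a few anomalies.
normal = rng.normal(loc=[100, 0.2], scale=[3, 0.05], size=(300, 2))
odd = rng.normal(loc=[140, 1.8], scale=[3, 0.10], size=(15, 2))
X = np.vstack([normal, odd])

# KMeans groups the data without being told what a "malfunction" looks like.
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)
print(np.bincount(km.labels_))  # the small cluster is worth investigating
```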

Unsupervised learning works best with very large data sets, and therefore computer-generated virtual data – which can be produced in massive quantities – is particularly useful. Virtual data is also valuable for a particular type of ML framework known as deep learning, which can be unsupervised, supervised, or semi-supervised, but always requires an extremely large data set.

Deep Learning
Deep learning is an ML framework patterned after the neural networks of the brain. It involves processing extremely large amounts of data in steps, or layers, with the results of each layer passed to the next to cumulatively build a complex and (hopefully) accurate digital model of a real-world behavior, like playing a game of chess or driving an autonomous car.
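
As a toy illustration of that layer-by-layer idea (nothing like production scale; layer sizes and data are invented for this sketch), scikit-learn's small feed-forward neural network passes each layer's output to the next:

```python
# Deep learning in miniature: a multi-layer network where each layer's
# output feeds the next. Layer sizes and data here are arbitrary toys.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=2)
X = rng.normal(size=(1000, 8))                  # e.g., 8 sensor channels
y = (X[:, 0] + X[:, 3] ** 2 > 1.0).astype(int)  # a nonlinear target

# Three hidden layers, processed in sequence, cumulatively build the model.
net = MLPClassifier(hidden_layer_sizes=(32, 16, 8), max_iter=500,
                    random_state=2)
net.fit(X, y)
print(f"training accuracy: {net.score(X, y):.2f}")
```

Real deep learning systems use far deeper networks and vastly more data; the sketch only shows the layered structure.
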
With the rise of the IoT/IIoT (Internet of Things/Industrial Internet of Things), millions of new sensors are being deployed daily to help gather the big data that deep learning thrives on, like periodic or streaming measures of pressure, volume, temperature, direction, etc. This sensor data – especially when combined with virtual data and with other types of real data like images, audio files and text documents – can fuel high-value AI applications.

In pursuit of such value, B2B investment in IoT/IIoT solutions is estimated to reach $267B by 2020, with 50% of that spending likely to be driven by discrete manufacturing, transportation and logistics, and utilities (Boston Consulting Group, 2017).

Part 1: Why AI Matters to Industrial Markets

Part 2: Industrial applications of Artificial Intelligence and Machine Learning

Part 3: Differentiating Between Artificial Intelligence, Machine Learning and Deep Learning

Part 4: Benefits of Machine Learning in Industrial Contexts

Part 5: Key Challenges of Artificial Intelligence in Industrial Sectors

Part 6: Realizing the Value of Artificial Intelligence in Industrial Sectors

Part 7: Artificial Intelligence and Machine Learning at Dassault Systèmes

Read our White Paper on Artificial Intelligence In Industrial Markets

Join our User Communities to stay on top of the latest industry news, ask questions and collaborate with peers:

Learn more about EXALEAD on the 3DEXPERIENCE platform.