Can Machines Really Learn? And Should We Be Afraid?

This post originally appeared in the Navigate the Future blog.


Science fiction books and movies love to portray machines that grow so smart that they have no use for flawed humans and want to either enslave or eliminate them. With the growth of Artificial Intelligence and Machine Learning – is that future fast approaching?

When you buy a few different items from Amazon and the ads in your web searches start showing items you may be interested in – that’s machine learning. Analytical programs look for patterns and relationships in your purchase history, and develop what you might call theories, to identify likely future purchases. In machine learning, algorithms learn directly from data and adjust their future analysis and actions based on the information they have analyzed. Traditional programming performs exactly the same way every time, following explicit instructions, and is not affected by experience – it does not learn. In essence, a learning-enabled machine creates (modifies, actually) its own programming.
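
To make that distinction concrete, here is a minimal sketch in Python: a hard-coded recommender next to one that adjusts with every order it observes. The class, function, and item names are invented for illustration, not taken from any real recommendation system.

    from collections import Counter, defaultdict

    # Traditional programming: a fixed rule that behaves identically forever.
    def recommend_fixed(purchase):
        rules = {"tent": "sleeping bag", "camera": "memory card"}
        return rules.get(purchase)  # never changes, no matter what customers do

    # Machine learning (in miniature): behavior adjusts from observed data.
    class CoPurchaseRecommender:
        """Learns which items are bought together, updating with each order."""
        def __init__(self):
            self.counts = defaultdict(Counter)

        def observe(self, order):
            # Every pair of items in an order strengthens an association.
            for a in order:
                for b in order:
                    if a != b:
                        self.counts[a][b] += 1

        def recommend(self, item):
            # The "theory" here is simply the most frequent co-purchase so far.
            best = self.counts[item].most_common(1)
            return best[0][0] if best else None

    model = CoPurchaseRecommender()
    model.observe(["tent", "sleeping bag", "headlamp"])
    model.observe(["tent", "sleeping bag"])
    print(model.recommend("tent"))  # "sleeping bag" -- learned, not hard-coded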

Artificial Intelligence (AI) implies that a computer can perform tasks in a way that emulates how the human brain operates. Machine learning may seem akin to AI, but it does not imply human-like smarts – only the ability to use experience to adapt behavior. Machine learning is useful and powerful in many applications that bear little or no resemblance to anything human, such as emotions, opinions or morality. So there is no need to worry about your new smart CNC machine joining forces with your MES system to overthrow the shop foreman and take control of the factory.

Machine learning is, indeed, making its way into the plant and will soon be a factor in management software such as ERP and supply chain management systems. Machine learning is the key to harvesting useful intelligence and business benefit from the rapidly increasing volume of data from the Industrial Internet of Things (IIoT) – data that is already overwhelming existing software and human decision-makers.

The best that existing systems can do with that torrent of data is to cram (some portion of) it into current static models (programs) and perhaps execute those instructions more quickly or more efficiently because of better data input. They can also display the results of analytics in colorful, summarized forms that seek to make the data understandable to human users.

Machine learning would allow systems to thoroughly analyze detailed data for patterns and cause-and-effect relationships, and to react to what they “learn” by refining their algorithms and making better decisions and recommendations. All of this can happen in real time, so the systems can catch deviations quickly and correct undesirable conditions before bad parts are made or deadlines are missed.
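
As a rough illustration of that idea, here is a hypothetical sketch in Python of a monitor that refines a simple statistical model of a measurement stream as readings arrive, and flags deviations before more bad parts are made. The class name, tolerance, and readings are assumptions for illustration, not a real MES or ERP interface.

    class DriftMonitor:
        """Tracks a measurement stream using Welford's online algorithm."""
        def __init__(self, tolerance=3.0):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0          # running sum of squared deviations
            self.tolerance = tolerance

        def update(self, x):
            # Refine the learned model with each new observation.
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

        def is_deviation(self, x):
            # Flag values far outside what the stream has taught us so far.
            if self.n < 10:
                return False  # not enough experience yet to judge
            std = (self.m2 / (self.n - 1)) ** 0.5
            return abs(x - self.mean) > self.tolerance * std

    monitor = DriftMonitor()
    readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0, 14.7]
    for reading in readings:
        if monitor.is_deviation(reading):
            print(f"deviation at {reading}: inspect before making more parts")
        else:
            monitor.update(reading)  # only normal readings refine the model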

Machine learning will continue to make programs and automation smarter, faster, and more efficient, and it is especially important in a world of Big Data, where information arrives in volumes too large and too fast for traditional programming to handle. But you don’t have to worry about machines taking over the world… at least not yet.

Katie Corey

Katie is the Editor of the SIMULIA blog; she also manages SIMULIA’s social media and is an expert in online communities and SEO. As a writer and technical communicator, she is interested in and passionate about creating an impactful user experience. Katie has a BA in English and Writing from the University of Rhode Island and an MS in Technical Communication from Northeastern University. She is also a proud SIMULIA advocate, passionate about democratizing simulation for all audiences. Katie is a native Rhode Islander and loves telling others about all it has to offer. As a self-proclaimed nerd, she enjoys a variety of hobbies including history, astronomy, science/technology, science fiction, geocaching, true crime, fashion and anything associated with nature and the outdoors. She is also mom to a 2-year-old budding engineer and two crazy rescue pups.