This is the first of a three-part series of posts that will examine artificial intelligence and machine learning, and the role they play in cybersecurity.

Artificial intelligence (AI) and machine learning (ML) have been hot topics in the world of technology and cybersecurity for years. The concepts seem to have tremendous potential, and simultaneously spark debates about Skynet and The Singularity and the future (or lack thereof) of mankind. They are also often used as marketing buzzwords and treated as if they were interchangeable synonyms, all of which muddies the waters and makes it that much more difficult to understand what separates artificial intelligence from machine learning (and vice versa) and what value they have in real-world applications.

Defining Artificial Intelligence and Machine Learning

The relationship between artificial intelligence and machine learning is like the relationship between whales and dolphins, or between turtles and tortoises, or between Major League Baseball and the Houston Astros. Just as all dolphins are whales but not all whales are dolphins, and all tortoises are turtles but not all turtles are tortoises, and all Houston Astros are Major League Baseball players but not all Major League Baseball players are Houston Astros, all machine learning is artificial intelligence but not all artificial intelligence is machine learning.

Merriam-Webster defines artificial intelligence as:

1 : a branch of computer science dealing with the simulation of intelligent behavior in computers

2 : the capability of a machine to imitate intelligent human behavior

Machine learning is a type or subset of artificial intelligence. One of the clearest definitions of machine learning is, “Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.”

Put another way, artificial intelligence is the ability of a machine to emulate human intelligence to perform tasks such as speech recognition, decision making, or visual perception, and machine learning is a method of refining or improving artificial intelligence.

Getting Past the Buzzword Hype

Tech and cybersecurity companies have used artificial intelligence and machine learning in marketing hype for years. If you walk the aisles of the expo floor at any cybersecurity event, like the RSA Conference or Black Hat, you will find that a majority of vendors claim to be using either artificial intelligence or machine learning, or both. Some confuse or conflate the two terms. Most that make the claim really are using one or the other to some extent.

The operative part of that statement, though, is "to some extent." The challenge is separating reality from false claims and getting past the buzzword hype. You have to look deeper and ask the right questions to determine whether a vendor's use of artificial intelligence or machine learning is actually effective from a cybersecurity perspective, and whether it adds value to the products and services being offered.

Most of the implementations of artificial intelligence in cybersecurity are actually machine learning. Because all machine learning is a subset of artificial intelligence, though, a company could technically claim to be doing either or both.

However, not all machine learning is created equal. Machine learning uses algorithms to learn or improve by analyzing input data—but the quality of the machine learning results is dependent on the quality of both the data and the algorithm. Great data combined with a bad algorithm produces bad machine learning output. Poor data combined with an awesome algorithm produces bad machine learning output.
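The data-quality point can be sketched with a toy anomaly detector. Everything below is an illustrative assumption, not any vendor's actual implementation: it "learns" a traffic baseline as mean plus three standard deviations, and shows how training on data polluted with unlabeled attack traffic causes a real attack burst to slip past the learned threshold.

```python
import statistics

def train_threshold(samples):
    # "Learn" a baseline from training data: flag anything more than
    # three standard deviations above the mean (illustrative rule).
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return mean + 3 * stdev

def is_anomaly(value, threshold):
    # A value is anomalous if it exceeds the learned threshold.
    return value > threshold

# Good data: requests per minute from normal traffic only.
clean = [100, 105, 98, 102, 99, 101, 97, 103]

# Poor data: the same baseline polluted with unlabeled attack traffic.
noisy = clean + [900, 950, 1000]

attack = 800  # an attack burst we want to flag

print(is_anomaly(attack, train_threshold(clean)))  # True: flagged
print(is_anomaly(attack, train_threshold(noisy)))  # False: missed
```

The algorithm never changed; only the training data did. The poisoned baseline inflates both the mean and the standard deviation, so the same attack that the clean model flags is silently absorbed into "normal."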

In the next post in this series, I will dig into why machine learning—at least effective machine learning—is a better way to do anomaly and threat detection.

About the Author
Fortra's Alert Logic
