Each week we find a new topic for our readers to learn about in our AI Education column.
This week on AI Education we’re going to explain another foundational artificial intelligence topic: machine learning, often abbreviated as ML. If we were to read the most popular AI-related material published to the web, we might emerge convinced that machine learning is something entirely distinct from artificial intelligence. A common abbreviation, AI/ML, seems to imply that distinction. Let’s go ahead and throw that assumption out the window in our first paragraph today. Machine learning is artificial intelligence: all machine learning is artificial intelligence, but not all artificial intelligence is machine learning.
We’re still short of a definition of machine learning, but to really understand the concept, we should think about computers and software as a whole. Until the last decade or so, for a computer to know something, a user like you or me had to impart that information in a language the computer could already comprehend: code. A human user was responsible for finding the important information they wanted the computer to use to complete tasks, and then for translating that information and feeding it to the computer. So if we wanted to extract a piece of data from shareholder letters, 30 years ago we would have had to read each letter and enter that information manually. 15 years ago, we would at least have had to make sure all of the shareholder letters were in the same format, and then tell the computer exactly where to find that bit of information.
Machine learning helps eliminate human intervention from the equation: computers are now capable of identifying, learning and using information without being programmed by a human user to do so each and every time. Per IBM, machine learning “focuses on using data and algorithms to enable AI to imitate the way that humans learn, gradually improving its accuracy.”
How Does It Work?
Machine learning, as a process, can be divided into three steps, according to IBM: a decision process, an error function, and a model optimization process. In the decision process, algorithms are used to make a prediction or classification. So when a user feeds data into a machine learning model, the AI finds patterns in the data and makes inferences based on those patterns. In the error function, the program’s predictions are evaluated for accuracy. In the model optimization process, the AI adjusts itself to try to make better predictions or classifications. This process is repeated until a threshold for accuracy is met.
In other words, computers are learning to make decisions autonomously through trial and error. The code in machine learning exists to help computers understand data and complete tasks on their own, not to spell out every decision in advance.
Machine learning models are trained on preprocessed data sets. To simplify, a model is fed a set of inputs and their related outputs and asked to infer the patterns or relationships between them, similar to solving for different variables in algebra and calculus. The model then tests its predictions against the training data, adjusting until the difference between its predictions and the actual outputs is negligible. Over time, the software’s reliability and accuracy should approach (but never quite reach) 100%.
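To make that loop concrete, here is a minimal sketch in plain Python of the three steps described above: a decision process, an error function, and model optimization. The data, learning rate and accuracy threshold are all invented for illustration; real models have vastly more parameters, but the rhythm is the same.

```python
# Minimal sketch of the machine learning loop: decision process,
# error function, and model optimization (toy data, for illustration only).

# Toy training data: inputs and the outputs we want the model to learn.
# The hidden relationship here is y = 2x + 1.
inputs  = [1.0, 2.0, 3.0, 4.0]
outputs = [3.0, 5.0, 7.0, 9.0]

# The model starts with arbitrary parameters (a slope and an intercept).
w, b = 0.0, 0.0
learning_rate = 0.01

for step in range(5000):
    # 1. Decision process: use the current parameters to make predictions.
    predictions = [w * x + b for x in inputs]

    # 2. Error function: measure how far the predictions are from reality
    #    (mean squared error).
    errors = [p - y for p, y in zip(predictions, outputs)]
    mse = sum(e * e for e in errors) / len(errors)

    # 3. Model optimization: nudge the parameters in the direction that
    #    reduces the error (gradient descent).
    grad_w = 2 * sum(e * x for e, x in zip(errors, inputs)) / len(inputs)
    grad_b = 2 * sum(errors) / len(errors)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

    # Stop once the error falls below a chosen accuracy threshold.
    if mse < 1e-6:
        break

print(f"Learned y = {w:.3f}x + {b:.3f} after {step + 1} steps")
# Prints something close to y = 2.000x + 1.000
```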
We’ve already given several examples of where machine learning is being used. It powers the computer vision that helps sort and label images and videos and underpins autonomous vehicles. It powers the recommendation engines used by platforms like Amazon, Netflix, Spotify and Pandora. It drives autonomous manufacturing and robotics. In finance, it’s already widely used as a fraud detection tool, and it is quickly being adopted to power automated investing and trading.
Categories of Machine Learning
There are three primary categories of machine learning: supervised, unsupervised and reinforcement learning.
Supervised machine learning, according to IBM, uses large, labeled datasets, specifically curated for AI training, to teach algorithms to classify data or predict outcomes accurately. The model adjusts how it weights individual pieces of data as it takes in more input over time. A lot of the AI-related technology we already encounter in the wild is trained with supervised machine learning, like email spam filters.
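As a sketch of what supervised training looks like in code, here is a toy spam filter built with scikit-learn. The handful of labeled messages below is invented for illustration; a real filter would learn from an enormous curated corpus.

```python
# Toy supervised learning example: a spam filter trained on labeled messages.
# The tiny dataset below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labeled training data: each message comes with a human-assigned label.
messages = [
    "win a free prize now",
    "claim your free cash reward",
    "meeting moved to 3pm tomorrow",
    "please review the attached shareholder letter",
]
labels = ["spam", "spam", "ham", "ham"]

# Turn text into word-count features the algorithm can weight.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

# Fit a classifier that learns which words signal spam.
model = MultinomialNB()
model.fit(features, labels)

# Classify a message the model has never seen.
new_message = vectorizer.transform(["free cash prize waiting for you"])
print(model.predict(new_message))  # ['spam']
```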
In unsupervised machine learning, algorithms analyze and organize unlabeled data, finding relationships and patterns with no human intervention. In unsupervised learning, the data can come from almost anywhere: some models are trained on gigantic websites like Wikipedia, or on even larger segments of the web. Unsupervised machine learning underpins a lot of the newfangled professional-grade fintech and wealthtech available today, like the customer segmentation features in a CRM. It also powers image and pattern recognition programs, according to IBM.
Reinforcement learning is often described as a close cousin of supervised learning. Unlike supervised models, reinforcement models are not pre-trained on labeled data sets; they learn as they go through trial and error, guided by a reinforcement algorithm. The algorithm rewards behaviors that help the model achieve its objective and penalizes behaviors that hinder it, a little like using incentives and penalties to teach a pet to do tricks. Reinforcement learning is used to train autonomous vehicles and to power game-playing artificial intelligence, like DeepMind’s AlphaGo.
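To show what that reward-and-penalty loop looks like, here is a toy Q-learning sketch (Q-learning is one common reinforcement algorithm) in which an agent learns to walk to the right end of a short corridor. The five-cell environment, the rewards and the learning parameters are all invented for illustration.

```python
# Toy reinforcement learning: tabular Q-learning in a 5-cell corridor.
# The agent starts at cell 0 and earns a reward only at the rightmost cell.
# Environment, rewards and hyperparameters are invented for illustration.
import random

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
alpha, gamma, epsilon = 0.1, 0.9, 0.2

# Q-table: the agent's running estimate of how good each action is
# in each state. It starts out knowing nothing.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward reaching the goal; slightly penalize every other step.
        reward = 1.0 if next_state == N_STATES - 1 else -0.01

        # Q-learning update: nudge the estimate toward the observed reward
        # plus the best value achievable from the next state.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should step right in every cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

No one told the agent that “go right” was the answer; it discovered that policy purely from the rewards and penalties it collected along the way.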
Types of Machine Learning Models
Machine learning models include neural networks, which we’ve already spent some time explaining on AI Education, but there are other paths toward self-learning software, many of which follow theoretical models of human learning and cognition.
Regression analysis, for example, is used to make predictions. Linear regression can be used to predict numerical values, while logistic regression helps computers predict answers to yes-or-no questions.
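Here is a minimal sketch of both, using scikit-learn; the numbers, and the loan-default framing, are made up for illustration.

```python
# Toy regression examples; the data below is invented for illustration.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Linear regression: predict a numerical value (say, revenue in $M
# from years in business).
years = [[1], [2], [3], [4], [5]]
revenue = [1.2, 2.1, 2.9, 4.2, 5.0]
linear = LinearRegression().fit(years, revenue)
print(linear.predict([[6]]))  # roughly [6.0]

# Logistic regression: predict the answer to a yes-or-no question
# (say, will a loan default, based on a single risk score).
scores = [[0.1], [0.3], [0.4], [0.6], [0.8], [0.9]]
defaulted = [0, 0, 0, 1, 1, 1]
logistic = LogisticRegression().fit(scores, defaulted)
print(logistic.predict([[0.7]]))  # [1], i.e. "yes"
```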
A decision tree looks like a flow chart: a branching sequence of decisions that can be easily represented in a diagram. AI uses decision trees to make predictions and to classify data. A random forest model uses a group of decision trees, combining their results to make a prediction.
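Here is a toy comparison of the two, again in scikit-learn, with an invented account-risk dataset for illustration.

```python
# Toy decision tree vs. random forest; the dataset is invented for illustration.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Features per account: [age of account in years, number of flagged transactions]
# Label: 1 = risky account, 0 = not risky
X = [[1, 5], [2, 4], [8, 0], [10, 1], [1, 0], [9, 6]]
y = [1, 1, 0, 0, 0, 1]

# A single decision tree: one branching sequence of if/else questions.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# A random forest: many trees trained on random slices of the data,
# voting together on the final answer.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_account = [[3, 4]]
print(tree.predict(new_account), forest.predict(new_account))
# Both will likely flag it: [1] [1]
```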
Clustering uses unsupervised learning to find patterns so that data can be grouped. While humans are skilled at classification and taxonomy, machine learning can surface relationships between pieces of data that humans overlook.
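To close with a sketch of clustering in action, here is a toy k-means example that groups customers, the sort of segmentation a CRM feature might surface. The customer data and the choice of three clusters are invented for illustration.

```python
# Toy unsupervised clustering: grouping customers with k-means.
# The customer data and cluster count are invented for illustration.
from sklearn.cluster import KMeans

# Features per customer: [average monthly balance ($K), trades per month]
customers = [
    [2, 1], [3, 2], [2.5, 1.5],      # small, inactive accounts
    [50, 2], [60, 1], [55, 3],       # large, buy-and-hold accounts
    [10, 40], [12, 35], [9, 45],     # small, very active traders
]

# No labels are supplied; k-means finds the groups on its own.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1 2 2 2]
```

Note that no labels were supplied: the groupings the model prints are ones it discovered entirely on its own.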