The second key stage came in the 1980s, when effective methods were developed to train neural networks, which had continued to evolve in the meantime, on real-world data. This was made possible by a new class of algorithms that allowed computers to learn patterns and decision rules from data without explicit rule-based programming. Such systems could generalize from observed examples and make predictions or decisions on previously unseen inputs.
Thus emerged machine learning: the ability of machines to learn automatically from experience, fulfilling a prediction made by Alan Turing, who in 1947 imagined a future with “machines that learn from experience.”
Thirty years later, starting around 2010, it became possible to build deep artificial neural networks with many layers of neurons. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton introduced AlexNet, a deep neural network capable of recognizing images with unprecedented accuracy. Its description and results were published in one of the most influential research papers in the history of computer vision, cited more than 130,000 times as of 2023, paving the way for the widespread use of deep learning in visual recognition.
Deep learning established itself as a powerful tool capable of processing unstructured data such as images, videos, and audio, driving many of the most spectacular advances in AI. During the same years, GP-GPUs (General-Purpose Graphics Processing Units) became widespread, providing the computational power to perform the vast numbers of parallel calculations required to train neural networks and dramatically reducing processing times. This brought deep learning to maturity, enabling it to process enormous quantities of digital information, so-called Big Data, and ushering in a new season: the spring of AI.