Revolution AI

7 November 2025

The Seasons of AI

Artificial intelligence is undergoing an unprecedented phase of progress, establishing itself as one of the main driving forces behind the transformation of modern society. In recent years, there has been a significant increase in investment in the field, recognition of its strategic value at the international level, and the creation of new regulations to ensure its ethical and safe use.
At the same time, AI has become increasingly pervasive in everyday life: from production and decision-making processes to communication, marketing, education, and even social relationships. This transformation affects both the individual and collective dimensions and is also having a profound impact on scientific research.

 

To date, the story of AI spans roughly seventy years and has been rather discontinuous.

The term was coined in 1955 by computer scientist John McCarthy, who used it in the proposal for a summer workshop held at Dartmouth College in 1956, an event generally considered the official birth of AI. McCarthy is also known for developing the Lisp programming language (1958), which within a few years became the most widely used programming language in AI research.

Also in 1958, psychologist Frank Rosenblatt built an electronic device with an evocative name, the Perceptron. It was the first artificial neural model with learning capabilities, the ancestor of modern neural networks (computational models inspired by the human brain). Unfortunately, the machine's performance was disappointing, or at least its enormous potential went unrecognized, and funding and confidence in the newborn discipline declined. This period came to be known as the first AI winter.
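Rosenblatt's learning rule is simple enough to sketch in a few lines of modern code. The toy example below (Python with NumPy, invented data; a conceptual sketch, not Rosenblatt's original electronic machine) shows the essence of the perceptron: the weights are nudged only when the model makes a mistake.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt-style rule: adjust weights only on misclassified examples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred        # -1, 0, or +1
            w += lr * error * xi         # no update when the prediction is right
            b += lr * error
    return w, b

# Toy, linearly separable data: output 1 only when both inputs are 1 (logical AND).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]
```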

 

Illustration of AI © iStock

The second key stage came in the 1980s, when effective methods were developed to train neural networks, which had evolved in the meantime, on real-world data. This was made possible by a new class of algorithms that allowed computers to learn patterns and decision rules from data, without explicit rule-based programming. These systems were able to generalize from observed examples and make predictions or decisions on previously unseen inputs.

Thus emerged machine learning: the ability of machines to learn automatically from experience, fulfilling a prediction made by Alan Turing, who in 1947 imagined a future with “machines that learn from experience.”
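To make the idea concrete, here is a minimal sketch (Python with scikit-learn, entirely invented data) of a program that learns a decision rule from labeled examples instead of having the rule written by hand, and then applies it to an input it has never seen:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented examples: [temperature °C, humidity %] -> 1 if it rained, 0 otherwise.
X_train = [[30, 20], [25, 30], [10, 85], [12, 90], [8, 80], [28, 25]]
y_train = [0, 0, 1, 1, 1, 0]

# No rain/no-rain rule is programmed explicitly; it is induced from the data.
model = DecisionTreeClassifier().fit(X_train, y_train)

# The learned rule generalizes to a previously unseen input.
print(model.predict([[11, 88]]))  # likely [1]: cool and humid resembles the rainy examples
```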

Thirty years later, starting around 2010, it became possible to build deep artificial neural networks with many layers of neurons. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton introduced AlexNet, a deep neural network capable of recognizing images with unprecedented accuracy. The AlexNet paper, published in 2012, became one of the most influential in the history of computer vision, cited over 130,000 times by 2023, and paved the way for the widespread use of deep learning in visual recognition.
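Purely for illustration, here is a miniature convolutional network in the spirit of AlexNet, written in PyTorch. It is a sketch of the architecture family, not AlexNet itself: the real network is far deeper, with about 60 million parameters.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Toy two-layer CNN: convolutions learn local image filters, pooling
    downsamples, and a final linear layer produces class scores."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),            # the activation AlexNet helped popularize
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)                  # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))  # raw class scores

scores = TinyConvNet()(torch.randn(1, 3, 32, 32))  # one random 32x32 RGB "image"
print(scores.shape)  # torch.Size([1, 10])
```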

Deep learning established itself as a powerful tool capable of processing unstructured data such as images, videos, and audio, driving many of the most spectacular advances in AI. In the same years, GPGPUs (general-purpose graphics processing units) became widespread, providing the computational power for the vast numbers of parallel calculations required to train neural networks and dramatically reducing processing times. This brought deep learning to maturity, enabling it to digest enormous quantities of digital information (big data) and ushering in a new season: the spring of AI.

 

However, the true leap in AI occurred when several crucial technological conditions came together: fast networks, large datasets, sufficient computing power through dedicated hardware architectures, and new algorithms.
The cornerstone of AI is now the supercomputer, which supports an unprecedented shift: the creation of a new digital reality. The field has moved from discriminative AI models, designed to distinguish and classify existing data, to generative AI models capable of autonomously producing highly realistic new content; among these, diffusion models have become dominant (the idea behind them is sketched below). Training such models requires massive computational power and unprecedented data availability, enabling algorithms to be trained within months and released to the market shortly after.
When fundamental research, technological development, and market confidence, with the massive investments it brings, all converge, the revolution we are experiencing today begins.
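The idea behind diffusion models, mentioned above, can itself be sketched in a few lines (a simplified illustration with a standard but arbitrary noise schedule, not any production system): training progressively corrupts data with noise, and a network learns to undo the corruption, so that running the process backwards turns pure noise into new, realistic data.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)  # stand-in for a flattened image

# Variance schedule: alpha_bar decays from ~1 (clean data) to ~0 (pure noise).
T = 1000
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))

def add_noise(x0, t):
    """Forward diffusion step: x_t = sqrt(a_t)*x0 + sqrt(1 - a_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

print(add_noise(x0, 10))   # early step: still mostly signal
print(add_noise(x0, 999))  # late step: essentially pure noise
# A neural network is trained to predict eps from x_t; generation then runs
# the chain in reverse, denoising step by step from random noise.
```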

CERN Data Center (© Bennett, Sophia Elizabeth)

This leads us to wonder about the deeper nature of these transformations: are we living through an era of great change, or rather the change of an era? A fourth industrial revolution, founded on cyber-physical systems and artificial intelligence, is reshaping the future. The answer will be given by future generations, of humans and, perhaps, of artificial intelligences.

Digital city, illustration of AI © iStock

Socioeconomic Impact of AI

A key to understanding complex phenomena is to look at their economic and sociocultural impact. The year 2024 marked a record high for private investment in artificial intelligence: globally, total AI investment reached $252.3 billion, a 26% increase over 2023, with global investment in generative AI amounting to $33.9 billion (+18.7% compared to 2023).
(Source: AI Index Report 2025, Stanford University HAI)
According to estimates from the new report “Generative AI Market Size, Share, Growth Trends and Forecast 2025–2032” published by DataM Intelligence, the global generative AI market is expected to grow from $45.56 billion in 2024 to $1.022 trillion by 2032, with a compound annual growth rate (CAGR) of 47.53%. This acceleration is being driven primarily by Asia, particularly China, Japan, South Korea, and India. (Source: DataM Intelligence)
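As a quick consistency check (our own arithmetic, not taken from the report), compounding the 2024 figure over the eight years to 2032 at the stated rate does reproduce the forecast:

$$ 45.56 \times (1 + 0.4753)^{8} \approx 45.56 \times 22.4 \approx 1022 \ \text{billion USD} \approx \$1.022 \ \text{trillion}, $$

or, equivalently, $\mathrm{CAGR} = (1022 / 45.56)^{1/8} - 1 \approx 47.5\%$.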

From a cultural perspective, 2024 also brought major recognition to AI in the form of Nobel Prizes closely linked to its development. The 2024 Nobel Prize in Chemistry was awarded in part to Demis Hassabis and John M. Jumper (who shared it with David Baker) for developing an AI model capable of predicting the three-dimensional structure of proteins, while the 2024 Nobel Prize in Physics went to John J. Hopfield and Geoffrey E. Hinton for “foundational discoveries and inventions that enable machine learning with artificial neural networks.” Hopfield introduced, in 1982, the neural network model that still bears his name.
Language itself reflects this transformation: in just over a decade, “artificial intelligence” has evolved from a purely specialist concept into an everyday expression, just as AI itself has become a daily presence in our lives, and likely one of the most frequently mentioned terms of 2025.
This unprecedented acceleration promises to revolutionize research at an even deeper level, including in physics. One of AI’s frontiers may soon be the ability to discover new physical laws or guide new research directions, thanks to its capacity to detect hidden correlations, both in data analysis and in the computational simulation of physical systems.

Preliminary study of the ACE2 folding intermediate, for anti-COVID-19 research © INFN - Sibylla Biotech

AI and Particle Physics

In fundamental physics research, one of today’s biggest challenges is managing and analyzing the massive volumes of data generated by international scientific collaborations. Machine learning algorithms have become essential tools for identifying meaningful signals within experimental data, dramatically speeding up analysis compared to traditional approaches.
These techniques are also used for particle identification in high-energy collisions, offering greater efficiency than conventional methods. Moreover, machine learning enables the development and implementation of increasingly complex physical models, making numerical simulations faster, more detailed, and more realistic.
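To give a flavor of how this works, here is a minimal sketch (Python with scikit-learn, fully synthetic data; not any experiment's real analysis chain) of a classifier learning to separate a narrow "signal" peak from a smooth background using two event-level features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 5000
# Synthetic features per event: [invariant mass, transverse momentum], arbitrary units.
background = np.column_stack([rng.uniform(50, 200, n), rng.exponential(30, n)])
signal = np.column_stack([rng.normal(125, 3, n), rng.exponential(50, n)])  # narrow peak
X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = background, 1 = signal

clf = GradientBoostingClassifier().fit(X, y)

# Score a new event: estimated probability that it is signal rather than background.
print(clf.predict_proba([[124.8, 60.0]])[0, 1])  # high for a peak-like event
```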
The roots of artificial intelligence are themselves deeply connected to physics. The theoretical foundations of AI stem from statistical physics and the theory of complex systems. The earliest conceptual models date back to the post–World War II era, but it was not until the late 20th century that the first practical applications became truly possible, thanks to advances in computer science and growing computational power.
As early as the 1980s, physicists began using neural networks and other algorithms to analyze experimental data. Such techniques played a key role, for example, in measuring the elements of the Cabibbo–Kobayashi–Maskawa matrix, which describes quark mixing and the breaking of the symmetry between matter and antimatter (work recognized by the 2008 Nobel Prize in Physics awarded to Makoto Kobayashi and Toshihide Maskawa), and in the discovery of the Higgs boson, which explains the origin of the mass of elementary particles (2013 Nobel Prize in Physics to Peter Higgs and François Englert).
At CERN in Geneva and at Fermilab in the United States, scientists were already using machine learning by the late 1980s to analyze data and recognize the traces of heavy particles. A 1992 paper discussed the use of neural networks to identify b and c quarks (two of the six “flavors” of quarks described by the Standard Model) in data from the DELPHI experiment at CERN’s LEP accelerator. Despite the technological limitations of the time, AI proved competitive with other analytical methods and earned its place among the standard tools for data analysis in particle physics.

AI at the Heart of CERN Research

In 2015, during preparations for a new data-taking cycle (run) at the Large Hadron Collider (LHC), a small group of physicists at CERN began exploring the use of deep learning techniques for analyzing experimental data. The first attempts were based on AlexNet for particle recognition, particularly in data from the LHC and from neutrino experiments.
From 2017 onward, the ATLAS and CMS experiments at CERN started adopting neural models better suited to complex data structures, such as Graph Neural Networks (GNNs) and, more recently, Transformers, the same architecture behind ChatGPT.
These networks can capture intricate relationships among signals, improving particle or rare-event identification by an order of magnitude.
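As a rough illustration of this family of architectures (a PyTorch sketch with invented dimensions and features, not the actual ATLAS or CMS code), a Transformer encoder can take a variable-length set of reconstructed particles and let self-attention relate every particle to every other before classification:

```python
import torch
import torch.nn as nn

d_model = 64
embed = nn.Linear(4, d_model)  # 4 invented features per particle, e.g. pt, eta, phi, energy
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, 1)   # e.g. a "signal vs background" score

particles = torch.randn(1, 17, 4)           # one event containing 17 particles
h = encoder(embed(particles))               # self-attention over all particle pairs
score = torch.sigmoid(head(h.mean(dim=1)))  # pool over the set, then classify
print(score.shape)  # torch.Size([1, 1])
```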

The CMS experiment © CERN

This progress led to the creation of an AI-focused community within the LHC collaborations, applying machine learning to challenges once addressed only through classical methods, such as detector behavior simulation, automatic identification of interesting events, and particle reconstruction from recorded signals.
AI has thus become a standard tool, now regarded as essential for research.
In the Run 3 phase, the CMS experiment took a decisive step by introducing deep neural networks into its real-time data selection system (trigger).
Starting in 2024, CMS has also been building a dedicated archive of anomalous events, aiming to identify potential signals of new, previously unobserved physical phenomena, possibly missed by traditional analysis methods.
This strategy marks a significant shift: AI is no longer just a support tool; it has become an active protagonist in frontier research. Should new discoveries emerge from these events, AI’s role in particle physics could become even more central than it already is.
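One widely used tool in such anomaly searches is the autoencoder: trained only on ordinary events, it learns to reconstruct them accurately, so a large reconstruction error flags an event as unusual. The sketch below (PyTorch, synthetic data) illustrates the principle only; it is not CMS's actual implementation.

```python
import torch
import torch.nn as nn

# Compress 16 event features down to 3 and back; the bottleneck forces the
# network to learn only the structure of "ordinary" events.
autoencoder = nn.Sequential(nn.Linear(16, 3), nn.ReLU(), nn.Linear(3, 16))
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

ordinary = torch.randn(2048, 16)  # stand-in for well-understood physics events
for _ in range(200):
    loss = nn.functional.mse_loss(autoencoder(ordinary), ordinary)
    opt.zero_grad()
    loss.backward()
    opt.step()

def anomaly_score(event):
    """Reconstruction error: large for events unlike anything seen in training."""
    with torch.no_grad():
        return nn.functional.mse_loss(autoencoder(event), event).item()

print(anomaly_score(torch.randn(1, 16)))       # typical event: low score
print(anomaly_score(10 * torch.randn(1, 16)))  # outlier: much higher score
```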

Aerial view of the Virgo interferometer (© EGO-INFN)

Gravitational Waves and AI: a New Frontier for Multimessenger Research

The detection of gravitational waves represents one of the most fascinating and innovative areas of modern physics. After the initial discovery phase, gravitational waves are now essential for studying the universe from both an astrophysical and cosmological perspective.
However, the signals recorded by interferometers such as Virgo, LIGO, and KAGRA are extremely faint, often at the very limit of instrument sensitivity. Under these conditions, the ability to distinguish genuine cosmic signals from noise (caused by environmental or instrumental sources) is crucial, especially in multimessenger astronomy, where the goal is to combine data from different astrophysical sources.

The timely classification of experimental signals is therefore a complex process that requires advanced methods beyond traditional approaches. This is where artificial intelligence plays an increasingly vital role: since the data are often represented as time–frequency images (spectrograms), convolutional neural networks (CNNs) are particularly effective at identifying and classifying real signals with high precision.
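The pipeline can be sketched end to end (an illustrative toy, with a chirp-like test signal and an untrained network, not a real detection analysis): compute a spectrogram from a strain time series, then feed it to a small CNN that scores it as noise or signal.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 4096                                       # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
chirp = np.sin(2 * np.pi * (50 + 150 * t) * t)  # toy signal with rising frequency
strain = 0.5 * chirp + np.random.default_rng(0).standard_normal(t.size)

# Time-frequency representation: the spectrogram becomes a single-channel image.
f, tt, Sxx = spectrogram(strain, fs=fs, nperseg=256)
img = torch.tensor(np.log1p(Sxx), dtype=torch.float32)[None, None]

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                            # two scores: [noise, real signal]
)
print(cnn(img).shape)  # torch.Size([1, 2])
```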

In the same field, a study published in Science in September 2024 presented “Deep Loop Shaping”, a new AI-based method essential for observing gravitational waves with next-generation, more sensitive instruments such as the Einstein Telescope (ET). The method was tested on the LIGO interferometer in Livingston and is the result of a collaboration among scientists from the Gran Sasso Science Institute (L’Aquila), INFN, the California Institute of Technology (Pasadena), and Google DeepMind (London). Deep Loop Shaping reduces noise and improves control of gravitational observatory systems by a factor of 30 to 100. If applied to the current detectors (LIGO in the United States, Virgo in Italy, and KAGRA in Japan), it could help astronomers detect and record hundreds more gravitational-wave events per year, with far greater precision.

 

Integration of AI in INFN Research

With the technological and algorithmic evolution of recent decades, AI has become pervasive in many areas of research carried out by the Italian National Institute for Nuclear Physics (INFN).
There is now hardly any complex data analysis that does not involve an AI-based approach, from event simulation to particle reconstruction from raw electronic signals, from data-analysis techniques to the study of elementary particles, astroparticles, and gravitational waves. In many cases, AI shortens analysis times and thus reduces costs. Machine learning is also successfully used in particle phenomenology, and there are promising prospects for applying generative AI to more theoretical problems.

Equally widespread and effective is the use of AI in various fields of applied physics. In medical physics, for example, AI is used extensively, from protein folding to medical imaging, from digital twins to precision medicine.
The INFN is not only engaged in developing and applying AI and machine learning systems in the various branches of physics, but also promotes the growth of skills and infrastructures dedicated to these technologies. In recent years, the Institute has launched the national initiative ML_INFN, which coordinates and supports the widespread use of machine learning in INFN research, from particle physics to nuclear physics, from theoretical physics to interdisciplinary applications. The goal of the project is to strengthen INFN researchers’ expertise in the use of AI-driven technologies, such as machine learning and deep learning, by providing a shared, expandable hardware platform and encouraging knowledge sharing within the scientific community.

Governance and Infrastructures for Artificial Intelligence

At the institutional level, there is now growing international momentum toward defining shared rules and principles for the governance of artificial intelligence. Organizations such as the OECD, the European Union, the United Nations, and the African Union have developed frameworks promoting transparency, security, and trustworthiness in the use of AI-based technologies.
In this context, Italy is preparing to take on a leading role in Europe. The European Union will establish in our country one of its AI factories, the IT4LIA AI Factory, designed to support the development and adoption of advanced technologies across the continent.

Data center at CNAF - Tecnopolo DAMA, Bologna

IT4LIA AI Factory represents the evolution of a strategy launched in 2017 and strengthened through joint European and national contributions, particularly from Italy’s Ministry of Universities and Research (MUR), with the goal of making Italy a strategic hub for innovation in high-performance computing and artificial intelligence.
At the heart of the project will be a next-generation supercomputer, optimized for AI applications, to be installed at the DAMA Tecnopolo in Bologna. This infrastructure stands out as a European reference point for supercomputing, big data, artificial intelligence, and quantum computing, and also hosts the ICSC – National Research Center in High-Performance Computing, Big Data, and Quantum Computing.

 

The FAIR network, funded by Italy’s National Recovery and Resilience Plan (PNRR), is dedicated to the multidisciplinary applications of AI. It includes participation from INFN and Alma-Human Artificial Intelligence, the interdisciplinary center of the University of Bologna, as described by Professor Michela Milano in the interview published in the current issue of Particle Chronicle, INFN’s newsletter.

Among the most pressing challenges to address is the energy sustainability of artificial intelligence. The high energy consumption required to power the computing capacity needed to run models on supercomputers demands a rethinking of development strategies. From this perspective, it becomes essential to adopt a “sustainability-by-design” approach, integrating environmental sustainability from the earliest stages of algorithm and infrastructure design.

The issue of sustainable computing is the focus of Project Spectrum, discussed in our feature on Physics and Sustainability.

Toward a New Understanding of Artificial Intelligence

When the steam engine was invented, thermodynamics, the science that explains its principles, had not yet been developed. In a similar way, we now find ourselves in a historical moment in which artificial intelligence works, but we do not always fully understand why or how.

From this realization arises one of the most fascinating and crucial areas of contemporary research: explainable AI, the study of mechanisms that allow us to interpret and explain the results produced by algorithms. This will be one of the great challenges for the researchers of tomorrow, who can already begin to prepare through dedicated academic programs. In Italy, several excellent programs are active, including the PhD in Data Science and Computation at the University of Bologna and the Italian National PhD Program in Artificial Intelligence, a joint initiative involving the Università Campus Bio-Medico di Roma, the University of Naples Federico II, the University of Pisa, Sapienza University of Rome, the Polytechnic University of Turin, and the National Research Council (CNR).
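One of the simplest explainability tools gives a taste of what the field studies: permutation importance, which measures how much a model's accuracy degrades when a single input feature is randomly scrambled. A minimal sketch (Python with scikit-learn, synthetic data, feature names invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))              # columns: [relevant, weak, noise]
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # the label ignores the third column

model = RandomForestClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Scrambling "relevant" should hurt accuracy most; scrambling "noise" barely at all.
for name, imp in zip(["relevant", "weak", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```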

3D rendering of a digital landscape © mik38/iStock

The future of AI, as with every major technological revolution of the past, will therefore depend not only on technological innovation, but also on a new understanding of this phenomenon, and on our ability to guide its development according to principles of fairness, safety, and transparency. Achieving these goals will require a shared commitment among the scientific community, institutions, businesses, and citizens. AI is not only a technological frontier: it is a social, cultural, and political challenge that involves us all.

 


Content by the INFN Communications Office (Ufficio Comunicazione INFN, Comunicazione Istituzionale e Media): https://www.infn.it/en/infn-offices/communications-office/


 
