n.3 | June 2025

High performance, minor impact:
the future of computing

In recent years, awareness of sustainability as a fundamental goal for our future has spread increasingly among the general public, scientific communities, and institutions. The urgency of mitigating climate change has led scientific communities to approach sustainability not only as a phenomenon to study but also as a crucial issue to build into all major future scientific projects. From the development of energy-efficient technologies to the eco-design of new large-scale research infrastructures and the creation of tools for understanding and managing global environmental challenges, the physics community is engaging with the issue both directly and indirectly. The topic of computing for sustainability and, conversely, the sustainability of computing is becoming ever more relevant. While the models produced through supercomputing and artificial intelligence allow us to tackle complex phenomena such as epidemics or the impacts of climate change, the computing centers that generate them rely on increasingly powerful machines. These computers consume vast amounts of energy, have a significant environmental footprint, and their energy demand is expected to keep rising in the coming years. So while they represent a resource for sustainability, they also pose a sustainability challenge themselves. We talk in depth about supercomputing, future prospects, and the management of growing energy demands in data centers in our interview with Marco Aldinucci, coordinator of the High-Performance Center for Artificial Intelligence (HPC4AI) at the University of Turin and co-leader of Spoke 1 of the National Center for Research in High Performance Computing, Big Data, and Quantum Computing (ICSC).

Marco Aldinucci

Marco Aldinucci is a professor of Computer Science and coordinator of the Parallel Computing research group at the University of Turin. He founded the HPC4AI@UNITO laboratory and the national HPC laboratory of the CINI consortium, of which he is the director. He is co-leader of Spoke 1 of the National Center for Research in High Performance Computing, Big Data and Quantum Computing (ICSC), dedicated to developing highly innovative hardware and software technologies for future supercomputers and computing systems.

Interview with Marco Aldinucci

Interview with Marco Aldinucci, coordinator of the High-Performance Center for Artificial Intelligence (HPC4AI) at the University of Turin and co-leader of Spoke 1 of ICSC, dedicated to High Performance Computing (HPC) and Big Data.

What are we talking about when we refer to HPC?

High Performance Computing means using extremely high computing power either to solve a problem faster or to solve a bigger problem in the same amount of time. In the first case, HPC comes into play for scientific or industrial problems where the value of the information degrades over time. Examples include weather forecasting and simulations of natural phenomena that trigger operational scenarios, as well as pharmaceutical chemistry and materials science, where the computational complexity is enormous and the analysis needs to be completed in a reasonable time.

In the second case, HPC allows us to solve problems where the computational grid becomes denser, increasing both the number of calculations and the size of the problem. Staying with the example of weather forecasting, I might need to move from a 1 km by 1 km grid to a 100-meter by 100-meter or even a 1-meter by 1-meter grid, for instance to predict a flash flood or to determine whether a self-driving car will encounter ice at a specific intersection. Or, looking at Artificial Intelligence (AI), I might want to increase the number of parameters in my Large Language Model (LLM) to make it capable of addressing more complex tasks, moving from translating a text to reasoning or understanding irony. To do this, I will need a larger model: I'll have to go from 7 to 70, or even 700 billion parameters, which means a matrix with 700 billion cells, a nonlinear increase in computation time (the complexity of the basic operation, multiplying dense matrices, is cubic), and a corresponding multiplication of the space occupied.

For a standard computer, a laptop, power is achieved through miniaturisation; but for a computer that is a million times more powerful than a laptop, we do not have the technology to miniaturise a million times further. So, to solve this problem, we place many systems side by side, covering an area as large as a soccer field. Sometimes, jokingly, when someone asks me what HPC is, I say: "If you can see it from a satellite, it's HPC," but I could also say, "If it consumes more than 1 megawatt, it's HPC." In short, it's a big deal, and an energy-hungry one too.
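To make these orders of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. The 100 km by 100 km forecast domain, the 2-bytes-per-parameter (fp16) storage figure, and the simplification of treating all parameters as a single square matrix are illustrative assumptions added for this example, not figures from the interview.

```python
# Back-of-the-envelope scaling sketch; all figures are illustrative assumptions.

def grid_cells(domain_km: float, cell_m: float) -> float:
    """Cells in a square 2D domain of side domain_km covered by square cells of side cell_m."""
    side = domain_km * 1000.0 / cell_m   # cells along one side of the domain
    return side ** 2

def dense_matmul_flops(n: int) -> float:
    """Roughly 2*n^3 floating-point operations to multiply two dense n x n matrices."""
    return 2.0 * n ** 3

# Refining a hypothetical 100 km x 100 km forecast domain: 1 km -> 100 m -> 1 m cells.
for cell_m in (1000.0, 100.0, 1.0):
    print(f"cell size {cell_m:>6.0f} m -> {grid_cells(100.0, cell_m):.1e} grid cells")

# Growing an LLM from 7 to 70 to 700 billion parameters.
for params in (7e9, 70e9, 700e9):
    memory_gb = params * 2 / 1e9        # assuming 2 bytes per parameter (fp16)
    n = int(params ** 0.5)              # side of one square matrix holding all parameters
    print(f"{params:.0e} parameters -> ~{memory_gb:,.0f} GB in fp16, "
          f"one {n:,} x {n:,} matmul ~ {dense_matmul_flops(n):.1e} FLOPs")
```

Under these assumptions, refining the grid from 1 km to 1 m cells takes the domain from about 10^4 to 10^10 cells, and growing the model from 7 to 700 billion parameters multiplies the dense matrix-multiply cost by roughly a factor of 1,000 while the memory footprint climbs past a terabyte: the nonlinear blow-up described above.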

Read the interview

SUBSCRIBE TO THE INFN NEWSLETTER

To subscribe to the free INFN newsletter please fill in the fields below and submit. The fields with * are mandatory. You can unsubscribe at any time by writing to grafica@list.infn.it with subject “Unsubscribe INFN Newsletter”.

"*" indicates required fields

This field is for validation purposes and should be left unchanged.

Particle Chronicle © 2025 INFN

Newsletter archive

EDITORIAL BOARD
Coordinator Martina Galli;
Project and contents Martina Bologna, Cecilia Collà Ruvolo, Eleonora Cossi,
Francesca Mazzotta, Antonella Varaschin;
Design and mailing coordinator Francesca Cuicchio; ICT service SSNN INFN