Interview with Michela Milano

7 November 2025

Interview with Michela Milano, Professor at the University of Bologna, where she directs the Alma Mater Research Institute on Human-Centred Artificial Intelligence (ALMA AI), an interdepartmental centre, and one of the leading figures in major strategic initiatives on AI at the national and European levels

 

How did ALMA AI come about, and with what objectives?
ALMA AI is an interdepartmental centre involving 28 departments of the University of Bologna and over 520 researchers and lecturers, in addition to a large number of PhD students working in this field. It was established in 2020 with the aim of building a critical mass capable of working comprehensively on topics related to artificial intelligence. Specifically, we set ourselves the challenge of addressing artificial intelligence, within a generalist university such as ours, not only in terms of technological and scientific developments, but also in its strong connection with applications in medicine, agriculture, mechatronics and aerospace, and with the humanities – from the social sciences to economics, from law to ethics to psychology. On the one hand, we wanted to analyse the impact of these technologies on society, the law and the market; on the other, to contribute to the development of AI models that take human dynamics and complexities into account. From this blending of knowledge a unique centre was born, populated not only by engineers, computer scientists, physicists and mathematicians, but also by scholars with very diverse expertise. Naturally, the first need was to foster mutual understanding, since in a university with nearly 100,000 students and thousands of lecturers it is difficult to know what others are doing. From this process, extraordinary collaborations have emerged, which have made our participation in Horizon Europe calls for proposals particularly effective in projects requiring a strong SSH (Social Sciences and Humanities) component, with an acceptance rate of 67%.

How does ALMA AI dialogue with European initiatives dedicated to artificial intelligence?
We are an active part of all the major European initiatives, among which it is worth mentioning the AI-on-Demand Platform. This European Commission initiative, also known as the “one-stop shop for AI solutions in Europe”, is a platform that aggregates not only all the artificial intelligence tools developed in Europe – or elsewhere, provided they comply with the European regulation on AI, the AI Act – but also the results and the researchers active in the field, as a genuine network. It is not the only one, of course. ALMA AI has in fact taken part in several European networks of excellence, dedicated, for example, to the human-AI interface and to trustworthy AI. At the national level too, we are part of a major extended partnership, the FAIR Foundation – Future Artificial Intelligence Research – funded under the PNRR with 115 million euros and managed by the CNR, the Italian National Research Council. The project is based on the hub-and-spoke model, which promotes collaboration between a coordinating centre and several partner universities. ALMA AI is the lead institution of Spoke 8, dedicated to Pervasive AI, that is, to the study of the pervasiveness of artificial intelligence: a topic that perfectly reflects the multidisciplinary approach that characterises our centre.

What activities are you carrying out within Spoke 8 of FAIR?
AI is entering every field of our society, in offices, schools, hospitals, cities: at Spoke 8 we started from this observation, which poses a series of technological and applicative challenges. From a technological point of view, we are working on three main fronts: the management of heterogeneous data, coming both from infrastructures and from human and social dynamics, which are less controllable; the management of decisions on different spatial and temporal scales, from real time to long-term strategic planning; and the management of different computing infrastructures, from the edge model (in which data are processed near where they are generated) to the cloud and supercomputers. This heterogeneity of data, scales, and infrastructures has given rise to a number of research lines. The first line manages the entire data chain, from sensors to strategic planning. Another line concerns the reliability of these systems, especially very large foundation models, which sometimes produce opaque results that seem plausible but are factually incorrect. The third line concerns the management of multimodality, that is, the integration of textual, visual, numerical, and temporal data. On the applicative side, Spoke 8 addresses the main social and cultural challenges of artificial intelligence, structured into several work packages.
One is dedicated to social challenges, such as the acceptance of AI systems in safety-critical contexts (for example in healthcare); another deals with education and training, understood both as AI supporting teaching, assessment, and student recruitment, and as AI education in a broader sense, for the upskilling and reskilling of professionals already employed in companies who find themselves interacting with these new systems without knowing their opportunities and risks; a work package explores the relationship between AI and creativity, studying how human-AI co-creation can open up new artistic pathways; and one work package focuses on legal implications: on the one hand, how to use artificial intelligence to support regulatory and compliance activities, and on the other, how to regulate AI itself. In this area we have also contributed to the definition of the European AI Act, and we are currently managing the very important EUSAIR project on regulatory sandboxes for AI: safe experimentation environments promoted by the AI Office of the European Commission. These environments are designed to test, stress where necessary (for instance through cyberattacks), and validate AI systems before their release onto the market or their deployment.

But going back to Spoke 8: all the activities mentioned are supported by a more experimental work package, managed by INFN, which helps us translate research into concrete use cases. Obviously, since these artificial intelligence solutions work in close contact with people, these use cases are built with great care, using models that are fair, inclusive, just, explainable, robust, secure, and privacy-respecting.

How are such complex models built?
Let’s take a practical example. A few years ago, we collaborated with the Emilia-Romagna Region, as part of the e-Policy project, to design incentives for the use of renewable energy, particularly for the installation of photovoltaic panels. Our task was to estimate the adoption rate of these solutions given incentive mechanisms. To obtain reliable forecasts, we integrated a classical AI model (which merely correlated economic availability with willingness to adopt) with data derived from several sociological studies. According to these studies, behind the decision to adopt a new technology there are barriers or drivers that have nothing to do with economic aspects. A potential user, for example, may be influenced by the behaviour of a neighbour who has already installed panels, by the desire to reduce their environmental impact, or by the idea of becoming energy independent. Conversely, they may be discouraged by distrust in incentives, fear of bureaucracy, or simply the inconvenience of having technicians in the house for installation. One would not imagine having to take these human and social aspects into account in the design of incentives, yet when we introduced them into our simulators, we obtained extremely accurate predictions.
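The kind of simulator described above can be illustrated with a toy agent-based model. This is a hypothetical sketch, not the e-Policy system itself: the drivers, weights, thresholds, and the `simulate_adoption` function are all illustrative stand-ins for the economic term plus the non-economic drivers (peer influence, environmental motivation) and barriers (distrust, bureaucracy) mentioned in the interview.

```python
import random

def simulate_adoption(n_agents=1000, incentive=0.3, steps=10, seed=42):
    """Toy agent-based adoption model: each agent weighs an economic
    term (capacity plus incentive) against hypothetical social drivers
    and barriers; adoption spreads over discrete time steps."""
    rng = random.Random(seed)
    agents = [
        {
            "income": rng.random(),          # normalised economic capacity
            "green": rng.random(),           # desire to cut environmental impact
            "distrust": rng.random() * 0.5,  # distrust of incentives, bureaucracy
            "adopted": False,
        }
        for _ in range(n_agents)
    ]
    for _ in range(steps):
        # share of peers who have already installed (neighbour effect)
        peer_share = sum(a["adopted"] for a in agents) / n_agents
        for a in agents:
            if a["adopted"]:
                continue
            utility = (
                a["income"] + incentive    # economic term
                + 0.4 * peer_share         # peer influence driver
                + 0.3 * a["green"]         # environmental driver
                - a["distrust"]            # non-economic barriers
            )
            if utility > 1.0:              # illustrative adoption threshold
                a["adopted"] = True
    return sum(a["adopted"] for a in agents) / n_agents

rate = simulate_adoption()
```

Even in this crude form, the model reproduces the qualitative point made above: the predicted adoption rate responds not only to the incentive level but also to social feedback, since each agent's decision shifts the peer-influence term seen by the others.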

What happens when the data on which models are built is polarised or reproduces stereotypes?
There are two ways to intervene: through dataset augmentation – that is, artificially expanding the dataset with new variants of existing data – or by constructing algorithms that, even when starting from biased data, generate models that are no longer biased. We adopted the first approach, dataset augmentation, in a collaboration with the paediatric dermatology department of Sant’Orsola Polyclinic, as part of the AEQUITAS project: we had developed for them a skin lesion recognition system that achieved 95% accuracy on light skin tones but remained below 50% on dark skin. This was because the original dataset consisted almost entirely of images of light skin. By using generative AI, we therefore created images of lesions on different skin tones and retrained the system with these new data. The result was a model with practically equivalent performance across all skin tones. As for the second approach, it acts directly on the model, introducing layers or constraints that impose fairness conditions. In practice, the model is “guided” not to generate distorted outputs with respect to certain sensitive attributes – such as gender, age, or ethnicity – even if the initial data contain biases. In this way, even starting from unbalanced information, the system is able to produce more representative results.
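The rebalancing step behind the first approach can be sketched as follows. This is a simplified stand-in for the AEQUITAS pipeline: where the project used generative AI to synthesise lesion images on under-represented skin tones, the hypothetical `augment_to_balance` function below merely oversamples the minority group with small perturbations, to show the shape of the idea.

```python
import numpy as np

def augment_to_balance(X, groups, rng=None):
    """Oversample under-represented groups until every group has as
    many samples as the largest one; small noise on the copies stands
    in for genuinely generated variants."""
    rng = rng or np.random.default_rng(0)
    labels, counts = np.unique(groups, return_counts=True)
    target = counts.max()
    X_out, g_out = [X], [groups]
    for lab, cnt in zip(labels, counts):
        deficit = target - cnt
        if deficit == 0:
            continue
        # draw (with replacement) from the minority group and perturb
        idx = rng.choice(np.where(groups == lab)[0], size=deficit)
        synthetic = X[idx] + rng.normal(scale=0.01, size=X[idx].shape)
        X_out.append(synthetic)
        g_out.append(np.full(deficit, lab))
    return np.concatenate(X_out), np.concatenate(g_out)

# Illustrative imbalance echoing the interview: 95 light-skin samples
# vs. 5 dark-skin samples (16 made-up features each)
X = np.random.default_rng(1).normal(size=(100, 16))
groups = np.array(["light"] * 95 + ["dark"] * 5)
X_bal, g_bal = augment_to_balance(X, groups)
```

After rebalancing, both groups contribute equally to training, which is what allows per-group accuracy to converge; the second approach mentioned above would instead leave the data untouched and add fairness constraints to the model itself.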

In which directions can Italy lead innovation in AI by leveraging its specific strengths?
Certainly Italy, like all countries belonging to the European Union, must take the AI Act into consideration. In Europe some uses of artificial intelligence are prohibited, and it is a good thing that they are. The challenge now is to translate the legal provisions into technical requirements, a very difficult task that still lies ahead. As with the GDPR, which was much criticised and is now adopted everywhere, including in the United States, we have in this new regulation an opportunity. It is also worth delving deeper into the field of edge AI – the implementation of AI algorithms and models directly on local edge devices – in which Italy is truly very strong. To implement these models on devices with very limited computing and memory resources, distillation and compression techniques are used to shrink the models. A small model has many advantages: it is more sustainable from an energy point of view, more flexible (that is, more easily customisable), more easily integrable with other systems, and more efficient in response times. At ALMA AI we are already pushing towards the miniaturisation of models, independently of edge deployment, because it is an approach well suited to our industrial fabric, composed of small and medium-sized enterprises. It makes no sense for us to chase the fashionable American or Chinese models, ever larger and omniscient. If tomorrow morning all European companies (small, medium, and large, along with start-ups) decided to use such models for all their business activities – from administration to production optimisation to maintenance – we would not have enough energy to run them. Not to mention that in Italy companies often do not know the potential of AI. We must therefore focus on the simplification and specialisation of models, using large models as “teachers” to build small models capable of performing a specific industrial application.
A company that chooses, for example, a model capable of predictive maintenance does not need it to know everything about history, music, or art. The idea is to discard what is unnecessary in the specific context, thus also supporting companies in their adoption. And last but not least, Italy should move towards translating into technical requirements not only legal regulations but also ethical principles, such as fairness, starting by defining precisely what is meant by “fairness”. There are many different metrics applicable to AI systems, so it is important to define them, to be able to measure them, and to implement systems that meet these metrics, assessing from the outset methods to mitigate issues if their performance proves unsatisfactory. These, in my opinion, are the directions towards which we should channel our efforts.
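The “large models as teachers” idea is, at its core, knowledge distillation: a small student is trained to match the teacher’s temperature-softened output distribution rather than hard labels. The sketch below is a deliberately minimal, hypothetical illustration of that mechanism; both “models” are linear, the data are random, and none of this reflects any specific system discussed in the interview.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the class axis."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
T = 2.0                                    # distillation temperature
X = rng.normal(size=(256, 8))              # inputs for the target task
W_teacher = rng.normal(size=(8, 4))        # stand-in for a large "teacher"
soft_targets = softmax(X @ W_teacher, T)   # softened teacher predictions

# Train a small "student" by gradient descent on the soft cross-entropy;
# for softmax cross-entropy the gradient in the logits is (p - q) / T.
W_student = np.zeros((8, 4))
for _ in range(500):
    p = softmax(X @ W_student, T)
    grad = X.T @ (p - soft_targets) / (T * len(X))
    W_student -= 0.5 * grad

# Fraction of inputs on which student and teacher pick the same class
agreement = np.mean(
    np.argmax(X @ W_student, axis=1) == np.argmax(X @ W_teacher, axis=1)
)
```

In a realistic setting the student would be a compressed network and the soft targets would come from a large pretrained model queried on task-specific data, so the student keeps only the behaviour relevant to that task, which is exactly the specialisation argument made above.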

 


BIO

Michela Milano is Professor at the University of Bologna, where she directs the Alma Mater Research Institute on Human-Centred Artificial Intelligence (ALMA AI), an interdepartmental centre. She also directs the Digital Societies centre at FBK (Fondazione Bruno Kessler), and has been Vice-President of the European Association on Artificial Intelligence and Executive Advisor to the Association for the Advancement of Artificial Intelligence. She was part of the group of experts that drafted the national strategy on AI and is a member of the Italian delegation in the Horizon Europe Programme Committee for Cluster 4. She is the author of over 180 papers in international journals and conference proceedings, has won numerous awards and competitive projects, and is involved in major strategic initiatives on AI at the national and European levels.
