“The impact of AI and Automated Decision-Making on care relationships: New challenges for the sociology of health” is a paper published in Studi di Sociologia by Riccardo Pronzato, researcher and member of the Alma-Aging Team, and Marta Gibin, researcher at the Department of Sociology and Business Law of the University of Bologna. The research examines the implications of AI and automated decision-making (ADM) systems for care relationships, bridging critical algorithm studies with the sociology of health.
The increasing adoption of AI and ADM technologies in healthcare presents fundamental challenges for social research, as these tools promise efficiency and clinical support but simultaneously raise concerns about the quality of care relationships, the reproduction of inequalities, and the transformation of professional practices. The study offers a theoretical framework connecting two fields that are often seen as distinct: the sociology of health and critical algorithm studies.
Pronzato and Gibin analyse how the introduction of AI and ADM systems in healthcare settings reshapes not only the adoption of new technologies but also relational dynamics, professional roles, and organizational logics.
Algorithmic systems are not neutral tools; rather, they are products of social and cultural contexts that influence their use and impact. Therefore, it is necessary to critically examine how these technologies are designed, implemented, and interpreted in specific healthcare environments characterized by expectations, power asymmetries, and inequalities.
The authors argue that digital innovation should not be understood as a neutral “solution” but rather as a sociocultural and processual phenomenon requiring critical understanding.
The paper pays particular attention to the risk of reproducing biases and inequalities in care processes, drawing on examples from the literature that document discrimination in access to transplants, diagnoses, and clinical resources. It further highlights how digital technologies can affect the quality of interactions between healthcare professionals and patients, facilitating processes of disintermediation or automation that may weaken trust-based relationships.
In light of this analysis, the study proposes a multilevel research perspective: analysing social representations of technologies, providing critical training for the actors involved, and advocating for regulation attentive to transparency and legal accountability in the use of automated decision-making tools in healthcare.
Moreover, it shows that the increasing adoption of automated systems in healthcare services requires a perspective that goes beyond a mere focus on technical efficiency.
When care is mediated by algorithmic systems, the context in which expectations, timing, responsibilities, and trust are negotiated is reshaped. For this reason, it is essential to ask who designs these technologies, how they are introduced into clinical contexts, and which visions of care they embody.