Algorithmic Decision Systems and Social Inequality: A Comprehensive Review Across Healthcare, Education, and Social Services

1. Introduction

Algorithmic decision systems (ADS) have become increasingly prevalent in resource allocation contexts such as healthcare, education, and social services. These systems are designed to optimize the distribution of limited resources, often leveraging advanced data analytics and machine learning to make more informed and efficient decisions. Integrating ADS into these sectors, however, poses significant challenges: the interactions between ADS and existing institutional logics, professional discretion, and ethical frameworks can produce distributional outcomes that either reinforce or disrupt existing social inequalities. Understanding these interactions is crucial for ensuring that ADS promote fairness and equity.

This report aims to provide a comprehensive review of the literature on ADS in resource allocation contexts, focusing on the ways in which these systems interact with institutional logics, professional discretion, and ethical frameworks. The objectives of this research are to identify the key factors that influence the design and implementation of ADS, to analyze the distributional outcomes of these systems, and to explore the ethical implications of their use. By examining these interactions, the report seeks to offer insights and recommendations for policymakers, practitioners, and researchers to develop and deploy ADS in a way that mitigates the risk of reinforcing social inequalities and enhances the ethical integrity of resource allocation processes.

The scope of this research includes a detailed examination of the historical context of ADS in healthcare, education, and social services, as well as an analysis of the institutional logics that shape their implementation. The role of professional discretion in the use of ADS is also a central focus, as it can significantly affect the outcomes of these systems. Finally, the report will explore the ethical frameworks that guide the design and deployment of ADS, including consequentialist, deontological, and virtue-based approaches, and discuss how these frameworks can be integrated into the development of ADS to ensure that they are both effective and fair.

Having outlined the objectives and scope of the research, the following section will delve into the historical context of ADS in various resource allocation contexts.

2. Literature Review

This literature review synthesizes existing academic research on algorithmic decision systems (ADS) in resource allocation contexts, such as healthcare, education, and social services. It explores how ADS interact with institutional logics, professional discretion, and ethical frameworks, ultimately influencing distributional outcomes that can either reinforce or disrupt existing social inequalities. The review begins by examining the historical context of ADS in these sectors, followed by an analysis of the institutional logics that shape their implementation and impact. Next, it delves into the role of professional discretion in the use of ADS, and concludes with a discussion of the ethical frameworks that guide their design and deployment.

2.1. Historical Context

The historical development of algorithmic decision systems (ADS) in resource allocation contexts has been marked by significant advancements and evolving applications across healthcare, education, and social services. In healthcare, the integration of ADS has been driven by the need to optimize resource allocation while balancing utilitarian and egalitarian objectives. For instance, Hynninen, Vilkkumaa, and Salo (2021) developed a two-phase optimization model to address the challenges of resource allocation in healthcare, which can be extended to other sectors. The first phase involves dynamic programming to determine the optimal testing and treatment strategies for each patient segment, defined by different risk levels, at various cost levels. This ensures that the strategies are tailored to the specific characteristics of each segment, enhancing transparency and accountability by providing clear, data-driven recommendations for resource use. The second phase employs binary linear programming to allocate resources across segments to maximize the chosen policy-level objective, whether it is utilitarian (maximizing aggregate health) or egalitarian (minimizing health differences among segments) [2].
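The two-phase structure can be sketched at a toy scale. The following illustration enumerates budget-feasible allocations of cost levels to patient segments and selects one under a utilitarian and an egalitarian objective; all segment data, cost levels, and the budget are invented for illustration and are not taken from Hynninen et al.:

```python
from itertools import product

# Illustrative inputs (assumptions, not the paper's data): each segment has a
# baseline health level and a benefit obtained at each of three spending levels.
costs = [0, 10, 20]                     # selectable cost levels per segment
baseline = [9, 6, 3]                    # baseline health per segment
benefit = [                             # benefit[segment][cost_level]
    [0, 1, 2],                          # low-risk segment
    [0, 2, 4],                          # medium-risk segment
    [0, 4, 9],                          # high-risk segment
]
budget = 30

def health(choice):
    """Resulting health per segment for a given assignment of cost levels."""
    return [baseline[s] + benefit[s][c] for s, c in enumerate(choice)]

def feasible():
    """Enumerate every budget-feasible assignment of cost levels (phase 2)."""
    for choice in product(range(len(costs)), repeat=len(benefit)):
        if sum(costs[c] for c in choice) <= budget:
            yield choice

# Utilitarian objective: maximize aggregate health.
best_util = max(feasible(), key=lambda ch: sum(health(ch)))
# Egalitarian objective: minimize the health gap between segments
# (ties broken in favour of higher aggregate health).
best_egal = max(feasible(), key=lambda ch: (-(max(health(ch)) - min(health(ch))),
                                            sum(health(ch))))
print(best_util)  # spends where marginal benefit is highest
print(best_egal)  # spends to close the gap between segments
```

Even in this tiny instance the two objectives select different allocations: the utilitarian solution concentrates spending on the high-benefit segment, while the egalitarian solution accepts lower aggregate health to equalize outcomes.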

Similarly, in the realm of medical appointment scheduling, the use of ADS has been shown to impact racial inequalities. The paper "Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling" provides empirical evidence of this impact. The authors conducted simulations to compare two objective functions: UOF-R (race-aware) and UOF-MM (race-unaware). The results indicate that UOF-R significantly reduces racial disparity in waiting times while maintaining a similar schedule cost to traditional methods. In contrast, UOF-MM, although it reduces racial disparity to some extent, does so at the expense of increased schedule cost and still fails to match the performance of UOF-R. The simulations show that black patients typically wait over 30% longer than non-black patients when using traditional scheduling methods, and this disparity is not mitigated by removing socio-economic indicators from the data. The paper suggests that race-aware methodologies, like UOF-R, are effective in reducing racial disparities in healthcare access, but raises ethical questions about the appropriateness of using race in scheduling decisions [7].
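The trade-off the authors quantify can be illustrated with a toy scoring function. The schedules, group labels, and disparity weighting below are invented for illustration and do not reproduce the paper's UOF-R or UOF-MM formulations; the sketch only shows how penalizing the between-group waiting-time gap changes which schedule is preferred:

```python
# Toy model: score candidate schedules by total expected waiting time plus a
# penalty on the between-group gap, mimicking the idea behind a race-aware
# objective. All numbers are illustrative assumptions.
def group_means(waits, groups):
    """Mean waiting time per group."""
    out = {}
    for g in set(groups):
        vals = [w for w, gg in zip(waits, groups) if gg == g]
        out[g] = sum(vals) / len(vals)
    return out

def objective(waits, groups, disparity_weight=0.0):
    """Total wait plus a weighted penalty on the largest between-group gap."""
    means = group_means(waits, groups)
    disparity = max(means.values()) - min(means.values())
    return sum(waits) + disparity_weight * disparity

groups = ["black", "non-black", "black", "non-black"]
schedule_a = [40, 10, 35, 15]   # lower total wait, large racial gap
schedule_b = [27, 23, 28, 24]   # slightly higher total, near-equal waits

for lam in (0.0, 5.0):          # disparity-blind vs disparity-aware weighting
    a = objective(schedule_a, groups, lam)
    b = objective(schedule_b, groups, lam)
    print(lam, "prefers", "A" if a < b else "B")
```

With no disparity penalty the cheaper but unequal schedule A wins; once the gap is penalized, the near-equal schedule B is preferred at a modest cost increase, mirroring the qualitative finding reported for race-aware objectives.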

In education, the historical development of ADS has focused on improving resource allocation to enhance student outcomes and address inequalities. Early applications of ADS in education were primarily concerned with optimizing the allocation of limited resources, such as funding, teacher assignments, and classroom space. Over time, these systems have become more sophisticated, incorporating data from student performance, socio-economic backgrounds, and other relevant factors to tailor resource allocation more effectively. However, the integration of ADS in educational settings has also raised concerns about the potential reinforcement of existing inequalities, particularly in terms of access to quality education and resources for marginalized groups.

In social services, the evolution of ADS has been influenced by the need to manage large volumes of data and make efficient, fair decisions. Social service agencies have increasingly adopted ADS to allocate resources such as housing, food assistance, and welfare benefits. These systems aim to streamline processes, reduce administrative burdens, and ensure that resources are distributed based on need. However, the historical context reveals that the design and implementation of ADS in social services have often been shaped by institutional logics and professional discretion, which can either mitigate or exacerbate social inequalities. For example, the prioritization of cost-efficiency and administrative convenience may sometimes overshadow the need for equitable distribution of resources.

The historical development of ADS in these sectors underscores the importance of considering institutional logics, professional discretion, and ethical frameworks in their design and implementation. In healthcare, for example, the competing logics of professionalism, which centers clinicians and their evidence-based expertise in health services delivery, and managerialism, the 'business-like healthcare logic' oriented toward integration, standardization, cost-efficiency, and accountability, have shaped how ADS are adopted and used [3]. These logics are examined in detail in the following section.

Having explored the historical context of ADS in various resource allocation contexts, the following section will delve into the institutional logics that influence their implementation and impact.

2.2. Institutional Logics

The institutional logics that govern resource allocation decisions play a pivotal role in the implementation and impact of algorithmic decision systems (ADS). These logics are deeply embedded in the organizational structures and practices of public institutions, influencing how ADS are designed, deployed, and evaluated. One critical aspect of these logics is the phenomenon of organizational ignoring, which can lead to the blackboxing of ADS, thereby preventing stakeholders from recognizing and addressing the negative consequences of these systems [8].

Organizational ignoring practices can manifest in various ways, such as avoiding engagement with the detrimental outcomes of ADS, failing to respond adequately to algorithmic errors, and maintaining a blind eye to violations of applicable regulations and legislation. This multilayered blackboxing extends beyond the technical opacity of ADS to include social and institutional dimensions, making it challenging for stakeholders to hold public institutions accountable. A case study from Gothenburg, Sweden, illustrates this issue in the context of public school placements. The Public School Administration (PSA) used an ADS for school placements, leading to widespread breaches of applicable rules and regulations. Despite significant protests and evidence of violations, the PSA initially failed to address these issues, and the court system did not correct the injustices. This failure is attributed to organizational ignoring practices, which contributed to institutional blindness and prevented the recognition and correction of ADS errors [8].

In the healthcare sector, two prominent institutional logics are professionalism and managerialism. Professionalism emphasizes the central role of clinicians in health services delivery, where IT should support clinicians in their patient care, legitimized by their evidence-based knowledge, extensive training, and clinical experience. Clinicians determine their own information needs, functionality requirements, and IT design specifications, and IT and data exchange should be tailored to these requirements. Managerialism, or the 'business-like healthcare logic,' focuses on hospital integration and standardization through information sharing, promoting overall cost-efficiencies, accountability, fulfillment of government requirements, and patient satisfaction. The role of IT professionalism is also noted as a critical factor in IT governance, given the rapid technological developments and healthcare's increasing dependency on IT. IT professionals influence IT governance decisions, and their institutionalized beliefs and values must be considered alongside those of managers and clinicians [3].

The interaction between ADS and institutional logics in resource allocation contexts can result in significant social and legal injustices. Public institutions must be vigilant and proactive in addressing the negative consequences of ADS to ensure that distributional outcomes do not reinforce existing social inequalities. For instance, in the context of palliative care triage, professional discretion plays a crucial role in the implementation and use of ADS. The study by Lotfivand et al. (2024) highlights the importance of integrating medical expertise into the development and training of a convolutional neural network (CNN) used for patient prioritization. The CNN was trained on a dataset annotated by medical experts, ensuring that the model could accurately identify levels of patient urgency from complex clinical data. This approach underscores the necessity of professional input to enhance the precision and reliability of ADS in high-stakes environments like palliative care [4].

Moreover, the study emphasizes that while AI triage can significantly improve the efficiency and accuracy of patient prioritization, it does not replace the judgment and discretion of healthcare professionals. Instead, it serves as a supportive tool to assist clinicians in making rapid and informed decisions, particularly in scenarios where the demand for healthcare services exceeds the supply of medical personnel. The integration of AI into triage processes is seen as a means to alleviate the pressure on healthcare resources and reduce the risk of cross-infection during outbreaks or pandemics, without compromising the quality of care. The role of professional discretion in validating and fine-tuning the AI model is essential for fostering trust and ensuring that the ADS aligns with the ethical standards and professional practices of the healthcare sector [4].

In summary, the institutional logics that shape the use of ADS in resource allocation contexts, such as public school placements and healthcare, can either mitigate or exacerbate social inequalities. Organizational ignoring practices can lead to the blackboxing of ADS, preventing the recognition and correction of algorithmic injustices. Conversely, the integration of professional discretion and the alignment of ADS with institutional logics can enhance the precision, reliability, and ethical integrity of these systems. The following section will explore the role of professional discretion in the use of ADS in more detail.

2.3. Professional Discretion

The role of professional discretion in the implementation and use of algorithmic decision systems (ADS) is a critical factor in determining the distributional outcomes of these systems. Professional discretion refers to the autonomy and judgment that practitioners exercise in their decision-making processes, which can either mitigate or exacerbate the potential biases and errors introduced by ADS.

The Gothenburg school placement case illustrates this dynamic. The ADS assigned thousands of children to schools in violation of applicable regulations and legislation, yet despite massive protests, most violations initially went unaddressed by the Public School Administration (PSA) and subsequently by the court system. The authors attribute this failure to practices of organizational ignoring, which blinded institutional actors to the ADS and its consequences and produced a form of social and institutional blackboxing in which the errors and injustices the system generated were neither recognized nor corrected. The case underscores that professional discretion can either mitigate or exacerbate distributional outcomes, depending on whether professionals actively engage with such issues or ignore them [8].

In professional services, particularly auditing, integrating AI into decision-making involves a delicate balance between AI's formal rationality and the practitioner's substantive rationality. AI systems can be regarded as agentic artifacts that afford new forms of interaction and collaboration between humans and technology. Practitioners can delegate complex decision-making tasks to AI, but this requires careful consideration to ensure accountability and responsible decision-making. AI's role spans the decision-making tasks of diagnosis, inference, and treatment, with inference as the core process and diagnosis and treatment acting as intermediaries. Traditional IT perspectives view AI tools merely as mechanisms to enhance information processing, an approach that fails to recognize the limitations of individual knowledge. An AI-proactive approach to diagnosis, by contrast, has AI not just assimilating information but proactively identifying anomalies and irregularities. AI can exceed practitioners' performance in discovering problems, yet it may lack the ability to contextualize its 'knowledge' to specific cases, leading to biased judgments and information overload. Practitioners should therefore act as contextual interpreters, providing AI with insights to refine its problem identification and reduce false positives [6].

In the healthcare sector, the synergy between AI's analytical capabilities and the practitioner's contextual understanding is essential for enhancing the transparency and interpretability of AI-driven decisions. AI operates on formal rationality, using data, algorithms, and mathematical models, while humans rely on substantive rationality, drawing on personal experience, intuition, and context. The ideal approach to inference involves parallel processing by both AI and humans, followed by a comparative analysis of their conclusions; discrepancies between AI and human inferences can then be scrutinized, leading to corrections or recalibrations that enhance the transparency and interpretability of AI-driven decisions. The palliative care triage study by Lotfivand et al. (2024), discussed in the previous section, exemplifies this synergy: training the CNN on expert-annotated data enabled it to identify levels of patient urgency reliably, underscoring the necessity of professional input in high-stakes environments [4].

Practitioners must carefully manage the delegation of tasks to AI, ensuring that they maintain control and interpretability. They should serve as contextual interpreters, bridging the gap between AI's formal rationality and the unique, complex nature of professional service contexts. The synergy between AI's analytical capabilities and the practitioner's contextual understanding can lead to more nuanced and effective professional decision-making. Governance structures and policies are needed to promote accountable decision-making with AI in professional services firms. Balancing the empowerment of practitioners with AI and preserving managerial control is a critical challenge that warrants further research [6].

In summary, professional discretion plays a pivotal role in the implementation and use of ADS in resource allocation contexts. Practitioners must actively engage with and address the issues arising from ADS to ensure that distributional outcomes align with ethical and professional standards. The integration of professional discretion can help mitigate the risks of biased judgments and information overload, enhancing the transparency and interpretability of AI-driven decisions. Having examined the role of professional discretion, the following section will delve into the ethical frameworks that guide the design and deployment of ADS.

2.4. Ethical Frameworks

The ethical considerations and frameworks that guide the design and deployment of algorithmic decision systems (ADS) in resource allocation contexts are multifaceted and crucial for ensuring that these systems do not inadvertently reinforce existing social inequalities. Three major ethical frameworks—consequentialist, deontological, and virtue-based—provide a comprehensive basis for addressing the ethical issues associated with ADS.

Consequentialist Framework: This framework emphasizes the future effects of actions, focusing on achieving the greatest good. Decision-makers should consider the outcomes and their likelihood, aiming to produce the best consequences for all affected parties. However, predicting outcomes accurately can be challenging, especially in complex and uncertain environments. For instance, in the context of credit scoring, biased algorithms can lead to significant disparities in credit limits offered to individuals with identical financial positions and credit risks, as seen in the case of the Apple credit card algorithm [5]. Addressing such biases requires the development of fairness metrics and the use of experimental data to ensure that the system's outcomes are just and equitable.
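One widely used fairness metric of this kind is the demographic parity gap, the difference in approval rates between groups. The sketch below computes it on invented toy data (the decisions and group labels are assumptions for illustration, not drawn from the Apple card case):

```python
def demographic_parity_gap(decisions, groups):
    """Difference in approval rates between groups (0 means parity)."""
    rates = {}
    for g in set(groups):
        sel = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(sel) / len(sel)
    return max(rates.values()) - min(rates.values())

# Assumed toy data: 1 = credit approved, 0 = denied.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(demographic_parity_gap(decisions, groups))  # 0.5: one group approved 75%, the other 25%
```

Metrics like this only flag disparate outcomes; deciding whether a measured gap is unjust, and what intervention is appropriate, remains a consequentialist judgment about the system's effects.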

Deontological Framework: This framework is rule-based, emphasizing the obligations and duties of individuals and organizations to "do the right thing." It involves defining ethical obligations and identifying actions that should never be taken. While treating everyone equally is ethically sound, it may lead to negative outcomes for some individuals. For example, in the healthcare sector, the use of race-unaware methodologies in medical appointment scheduling can reduce racial disparity to some extent but at the expense of increased schedule costs and still fail to match the performance of race-aware methodologies [7]. Therefore, a deontological approach must be balanced with other ethical considerations to ensure that the system's design and implementation are both fair and effective.

Virtue-Based Framework: This framework relies on virtuous traits and behaviors to guide ethical decisions. It focuses on the character of the decision-maker and what a virtuous person would do in similar circumstances. However, defining and agreeing on what constitutes virtue can be difficult, as different communities and individuals may have varying views on virtuous traits. In the context of human-AI interaction, the cultivation of moral virtues in developers and users is essential. Interdisciplinary ethics teams and AI ethics committees can collectively deliberate and make decisions, thereby enhancing the ethical alignment of AI systems [1]. These teams can help bridge the gap between technical capabilities and ethical responsibilities, ensuring that ADS are designed and used in a manner that reflects the values and norms of the community.

The SPED framework, which is particularly relevant in contexts with high surveillance and analytics sophistication and low levels of individual privacy, suggests that all three normative approaches—consequentialist, deontological, and virtue-based—should be used together to provide a comprehensive and holistic view of the ethical issues. This combination helps address the complexities and uncertainties inherent in such situations, though each framework has its own potential weaknesses [9].

Contextual Factors: The importance of context in ethical decision-making cannot be overstated. Key factors include the Place (where the ADS is deployed), Purpose (the intended use of the ADS), and Expectations of Privacy Rights (the legal and cultural norms regarding privacy). These context factors are crucial for determining the ethical justifiability of ADS innovations. For example, in the healthcare sector, the ethical responsibility dimension of the XAIOR framework ensures that advanced methods for transforming data into insights comply with societal expectations, including ethical, legal, and frugal norms [5].

Fairness and Bias Prevention: Ethical responsibility in ADS emphasizes the importance of fair analytics to avoid discrimination during the method development and decision-making process. Research in this domain, such as the work by De-Arteaga et al. (2022), investigates biased decision-making to understand and prevent it. Biased data can lead to poor performance and questionable attributability, as seen in credit risk modeling and historical job hiring data [5]. Measures to mitigate these biases include using experimental data and carefully designed data collection processes, which can help ensure that the system's outcomes are reliable and fair.

Trust and Transparency: Ethical responsibility also involves creating solutions that external stakeholders trust. This trust is built through transparency and the ability to explain the outcomes of the algorithm. For example, the General Data Protection Regulation (GDPR) mandates that individuals have the right to receive an explanation of any decision made by an algorithm, ensuring legal compliance and enhancing ethical responsibility [5]. Transparent and explainable AI can lead to more informed and accountable decision-making, which is essential for maintaining public trust and ensuring that ADS are used ethically.

Multi-Stakeholder Collaboration: The involvement of diverse stakeholders, including managers, data scientists, and policymakers, is essential for responsible AI adoption and to avoid reinforcing existing social inequalities. Interdisciplinary ethics teams and AI ethics committees can help establish an ethical ecosystem for AI, leading to more transparent and fair outcomes in resource distribution [1]. Multi-stakeholder collaboration ensures that the ethical dimensions of ADS are thoroughly considered and integrated into the system's design and implementation.

Having analyzed the ethical frameworks that guide the design and deployment of ADS in resource allocation contexts, the following section will explore the methodologies used to implement these systems, including the technical and practical considerations involved.

3. Methodology

The methodology section describes the research methods used to gather and analyze data, focusing on the sources of academic literature and the criteria for selecting and synthesizing the information. The primary goal of this section is to provide a transparent and systematic approach to the research, ensuring that the findings are robust and reliable.

Sources of Academic Literature

The academic literature reviewed for this study was sourced from a variety of peer-reviewed journals, conference proceedings, and books. Key sources include the survey "A survey of contextual optimization methods for decision-making under uncertainty" by Utsav Sadana, Abhilash Chenreddy, Erick Delage, Alexandre Forel, Emma Frejinger, and Thibaut Vidal, which provides a comprehensive overview of the methodologies used in algorithmic decision systems (ADS) [10]. Additionally, the paper "Operationalization of Utilitarian and Egalitarian Objectives for Optimal Allocation of Health Care Resources" by Yrjänä Hynninen, Eeva Vilkkumaa, and Ahti Salo (2021) offers insights into the integration of equity considerations in resource allocation models [2]. The study "Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling" by Samorani, Harris, Blount, Lu, and Santoro (2022) provides empirical evidence of the impact of ADS on social inequalities in healthcare [7]. Lastly, the research by Lotfivand et al. (2024) on the use of convolutional neural networks (CNNs) in palliative care triage highlights the importance of integrating medical expertise into the development and training of ADS [4].

Criteria for Selecting and Synthesizing Information

The selection and synthesis of academic literature were guided by several criteria to ensure that the research is relevant, robust, and aligned with the study's objectives. These criteria include:

  1. Relevance to Contextual Stochastic Optimization (CSO): Literature was selected based on its relevance to the three main frameworks for learning policies from data: decision rule optimization, sequential learning and optimization (SLO), and integrated learning and optimization (ILO) [10].
  2. Empirical Performance: Studies that demonstrate the effectiveness of decision rules through empirical data were prioritized. This criterion ensures that the methodologies are not only theoretically sound but also practically viable [10].
  3. Robustness: Research that addresses the robustness of models to avoid overfitting and ensure reliable decision-making was emphasized. This is crucial for the development of ADS that can perform well in real-world, uncertain environments [10].
  4. Integration of ML and Stochastic Programming: Sources that integrate machine learning (ML) techniques with stochastic programming to enhance decision-making under uncertainty were particularly valuable. This integration is essential for developing sophisticated ADS that can handle complex data and make informed decisions [10].

Methodological Approaches

The methodological approaches used in this study are designed to address the interaction between ADS and institutional logics, professional discretion, and ethical frameworks. Specifically, the following frameworks were examined:

  1. Decision Rule Optimization: This approach involves using a parameterized mapping as the decision rule and identifying the parameter that achieves the best empirical performance based on available data. The decision rule can be a linear combination of functions of the covariates or a deep neural network (DNN). Regularization techniques are applied to prevent overfitting, especially when data is limited [10].
  2. Sequential Learning and Optimization (SLO): SLO is a two-stage procedure that first predicts a conditional distribution for uncertain parameters given the covariates using a trained model, and then solves an associated contextual stochastic optimization (CSO) problem to determine the optimal action. Robustification techniques, such as regularization during training or adjustments in the CSO problem formulation, are employed to mitigate post-decision disappointment due to model overfitting or misspecification [10].
  3. Integrated Learning and Optimization (ILO): ILO combines the prediction and optimization stages to search for the predictive model that guides the CSO problem toward the best-performing actions. This framework is motivated by the idea that high precision predictors are not always necessary when the primary interest is in the quality of the prescribed action. ILO has been explored in various applications, including logistics and energy management [10].
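The practical difference between the SLO and ILO philosophies can be seen in a stripped-down newsvendor example. This sketch omits covariates and uses invented demand data and cost parameters: the SLO-style approach fits a point forecast and orders that amount, while an ILO-style search picks the order quantity that directly minimizes empirical downstream cost.

```python
# Toy contrast between predict-then-optimize (SLO-style) and decision-focused
# search (ILO-style) in a newsvendor setting with asymmetric costs.
# All data and cost parameters are illustrative assumptions.
import random

random.seed(0)
UNDERAGE, OVERAGE = 9.0, 1.0          # cost per unit short vs. per unit over

def newsvendor_cost(order, demand):
    return UNDERAGE * max(demand - order, 0) + OVERAGE * max(order - demand, 0)

demands = [random.gauss(100, 15) for _ in range(2000)]

# SLO-style: fit a point forecast (here, the sample mean) and order that amount.
slo_order = sum(demands) / len(demands)

# ILO-style: search order quantities directly for the lowest average cost.
candidates = [q / 2 for q in range(100, 400)]   # 50.0 .. 199.5
ilo_order = min(candidates,
                key=lambda q: sum(newsvendor_cost(q, d) for d in demands))

avg = lambda q: sum(newsvendor_cost(q, d) for d in demands) / len(demands)
print(f"SLO order {slo_order:.1f}, avg cost {avg(slo_order):.2f}")
print(f"ILO order {ilo_order:.1f}, avg cost {avg(ilo_order):.2f}")
```

Because shortage is nine times costlier than overage, the decision-focused search orders well above the mean forecast and achieves lower average cost, illustrating why a high-precision predictor is not always necessary when the quality of the prescribed action is what matters.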

Adapting Methodological Approaches to Institutional Logics, Professional Discretion, and Ethical Frameworks

While the primary methodological frameworks provide a strong technical foundation, they can be adapted to consider institutional logics, professional discretion, and ethical frameworks. For instance:

  • Institutional Logics: By incorporating contextual information that reflects institutional practices and norms, these frameworks can better align with the institutional environment in which decisions are made. This is particularly important in sectors such as healthcare and education, where institutional logics can significantly influence the outcomes of ADS [10].
  • Professional Discretion: Decision rules can be designed to allow for human intervention and discretion, ensuring that professional judgment is integrated into the decision-making process. This is crucial in high-stakes environments like healthcare, where the final decision should reflect both the technical recommendations of the ADS and the professional expertise of clinicians [10].
  • Ethical Frameworks: Robustification techniques and the focus on empirical performance can help ensure that the models adhere to ethical standards, reducing the risk of reinforcing social inequalities. For example, in the context of healthcare, race-aware methodologies can be used to minimize racial disparities in waiting times, achieving both efficiency and fairness [7].

Data Collection and Analysis

Data collection involved a systematic review of the literature, focusing on empirical studies and theoretical frameworks that address the interaction between ADS and institutional logics, professional discretion, and ethical frameworks. The data was analyzed using a combination of qualitative and quantitative methods, including thematic analysis for qualitative data and statistical analysis for quantitative data. Thematic analysis helped identify common themes and patterns in the literature, while statistical analysis provided insights into the empirical performance of different methodologies [10].

Case Studies

To illustrate the practical application of these methodological approaches, several case studies were included. For example, the study by Hynninen, Vilkkumaa, and Salo (2021) demonstrates the use of a two-phase optimization model in healthcare to achieve both utilitarian and egalitarian objectives [2]. The research by Samorani et al. (2022) on medical appointment scheduling provides empirical evidence of the impact of race-aware and race-unaware methodologies on racial disparities [7]. The case study by Lotfivand et al. (2024) on a convolutional neural network (CNN)-based decision support system (DSS) in palliative care highlights the importance of integrating medical expertise into the development and training of ADS [4].

Ethical Considerations in Methodology

Ethical considerations were integrated throughout the methodology to ensure that the research adheres to high standards of fairness and accountability. This included the use of fairness metrics to evaluate the performance of ADS, the development of race-aware methodologies to address racial disparities, and the inclusion of multi-stakeholder perspectives to enhance the ethical alignment of the systems. The ethical responsibility dimension of the XAIOR framework was particularly influential, ensuring that advanced methods for transforming data into insights comply with societal expectations, including ethical, legal, and frugal norms [5].
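As one concrete example of the kind of fairness metric referred to here, a demographic parity difference compares positive-decision rates across groups. This is a generic illustration with synthetic data, not a metric prescribed by the XAIOR framework [5]:

```python
# Illustrative fairness metric: demographic parity difference, the gap
# in positive-decision rates between two groups. Decisions are synthetic.

def positive_rate(decisions):
    """Fraction of cases that received the resource (1 = allocated)."""
    return sum(decisions) / len(decisions)

def demographic_parity_diff(group_a, group_b):
    """0.0 means equal allocation rates; larger values mean more disparity."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = resource allocated, 0 = denied (synthetic audit sample).
group_a = [1, 1, 0, 1, 0]  # 60% allocated
group_b = [1, 0, 0, 0, 1]  # 40% allocated

print(demographic_parity_diff(group_a, group_b))  # absolute gap in rates
```

A metric like this can be computed on each audit of an ADS's outputs; a persistent nonzero gap is a signal to investigate, not proof of bias on its own.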

Having described the research methods and criteria for selecting and synthesizing information, the following section will present the results of the study, including the distributional outcomes of ADS in various resource allocation contexts.

4. Results

The findings from the literature review reveal significant interactions between algorithmic decision systems (ADS) and institutional logics, professional discretion, and ethical frameworks in resource allocation contexts. These interactions can either reinforce or disrupt existing social inequalities, depending on the specific design and implementation of ADS.

In the context of public school placements in Gothenburg, Sweden, the Public School Administration (PSA) utilized an ADS system that resulted in widespread breaches of applicable rules and regulations. Despite substantial protests and evidence of violations, the PSA initially failed to address these issues, and the court system did not correct the injustices. This failure is attributed to organizational ignoring practices, which contributed to institutional blindness and prevented the recognition and correction of ADS errors [8]. The study provides a theoretical framework that explains how social and legal implications of ADS can remain unaddressed by public institutions, highlighting the need for robust oversight and accountability mechanisms.

In healthcare, the paper "Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling" offers empirical evidence of the impact of ADS on racial inequalities. The authors conducted simulations to compare two objective functions: UOF-R (race-aware) and UOF-MM (race-unaware). The results indicate that UOF-R significantly reduces racial disparity in waiting times while maintaining a schedule cost similar to traditional methods. In contrast, UOF-MM, although it reduces racial disparity to some extent, does so at the expense of increased schedule cost and still fails to match the performance of UOF-R. Black patients typically wait over 30% longer than non-Black patients when using traditional scheduling methods, and this disparity is not mitigated by removing socio-economic indicators from the data. The paper suggests that race-aware methodologies, like UOF-R, are effective in reducing racial disparities in healthcare access, but raises ethical questions about the appropriateness of using race in scheduling decisions [7].
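The disparity figure above can be made concrete with a simple relative waiting-time gap. The calculation below is an illustrative sketch with synthetic numbers; the roughly 30% figure in the study refers to real scheduling data, not these values [7]:

```python
# Illustrative sketch: quantify a racial gap in mean waiting times.
# The waiting times below are synthetic, for demonstration only.

def mean(xs):
    return sum(xs) / len(xs)

def relative_gap(group_a, group_b):
    """Relative excess of group_a's mean wait over group_b's,
    e.g. 0.30 means group_a waits 30% longer on average."""
    return mean(group_a) / mean(group_b) - 1.0

black_waits = [40, 55, 35, 50]      # minutes (synthetic)
non_black_waits = [30, 40, 28, 38]  # minutes (synthetic)

gap = relative_gap(black_waits, non_black_waits)
print(f"Group A waits {gap:.0%} longer on average")
```

The key point the study makes is that such a gap persists under "colorblind" scheduling: a metric like this must be measured by group, because dropping the group attribute from the input data does not remove the disparity from the output.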

The ethical management of human-AI interaction in resource allocation contexts is another critical area of focus. The document "Ethical management of human-AI interaction: Theory development review" by Heyder, Passlack, and Posegga (2023) emphasizes the importance of integrating duty ethics and virtue ethics to ensure ethical management of ADS. Duty ethics involves adhering to established guidelines and norms, both internal and external, which set boundaries for developers and guide the development of the system. Virtue ethics focuses on the ethical development and use of AI through the cultivation of moral virtues in individuals, including developers and users who provide feedback. The interplay between these two ethical perspectives is crucial, as duty ethics can be misaligned or contradicted by the virtue ethics of individual users, especially in widely used systems like ChatGPT. To address this, the authors suggest the formation of interdisciplinary ethics teams and AI ethics committees to collectively deliberate and make decisions, thereby enhancing the ethical alignment of AI systems. They also highlight the need for tailored ethical measures that reflect the organizational culture, goals, and priorities, as ethics is partly subjective and must be adapted to local conditions. The involvement of diverse stakeholders, including managers, data scientists, and policymakers, is essential for responsible AI adoption and to avoid reinforcing existing social inequalities [1].

The document "Ethical decision-making frameworks for surveillance analytics" provides insights into the ethical frameworks that should guide the design and deployment of ADS. It outlines three major ethical frameworks: Consequence, Duty, and Virtue. The Consequence framework emphasizes the future effects of actions, focusing on achieving the greatest good. However, predicting outcomes accurately can be challenging, especially in complex and uncertain environments. The Duty framework is rule-based, emphasizing the obligations and duties of individuals and organizations to "do the right thing." While treating everyone equally is ethically sound, it may lead to negative outcomes for some individuals. The Virtue framework relies on virtuous traits and behaviors to guide ethical decisions, focusing on the character of the decision-maker and what a virtuous person would do in similar circumstances. However, defining and agreeing on what constitutes virtue can be difficult, as different communities and individuals may have varying views on virtuous traits. The SPED framework suggests that in situations with high surveillance and analytics sophistication and low levels of individual privacy, all three normative approaches—Consequence, Duty, and Virtue—should be used together to provide a comprehensive and holistic view of the ethical issues. This combination helps address the complexities and uncertainties inherent in such situations, though each framework has its own potential weaknesses [9].

These findings underscore the importance of considering institutional logics, professional discretion, and ethical frameworks in the design and implementation of ADS. Public institutions must be vigilant and proactive in addressing the negative consequences of ADS to ensure that distributional outcomes do not reinforce existing social inequalities. The integration of professional discretion and the alignment of ADS with institutional logics can enhance the precision, reliability, and ethical integrity of these systems. Ethical frameworks, particularly the combination of Consequence, Duty, and Virtue, provide a robust basis for ensuring that ADS are designed and used in a manner that reflects the values and norms of the community.

Having presented the key findings from the literature review, the following section will delve into the discussion of these results and their implications for resource allocation and social equality.

5. Discussion

The findings from the literature review highlight the intricate and multifaceted interactions between algorithmic decision systems (ADS) and institutional logics, professional discretion, and ethical frameworks in resource allocation contexts. These interactions have significant implications for policy and practice, particularly in terms of how ADS can either reinforce or disrupt existing social inequalities.

In the context of public school placements in Gothenburg, Sweden, the use of ADS led to widespread breaches of applicable rules and regulations, assigning thousands of children to schools in violation of relevant laws. Despite massive protests and evidence of violations, the Public School Administration (PSA) and the court system initially failed to address these issues, a phenomenon attributed to organizational ignoring practices. These practices contributed to institutional blindness, preventing the recognition and correction of ADS errors. This case underscores the need for robust oversight and accountability mechanisms to ensure that ADS do not inadvertently perpetuate social injustices [8].

In healthcare, the empirical evidence from "Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling" shows that race-aware scheduling (UOF-R) substantially reduces racial disparities in waiting times at a schedule cost similar to traditional methods, whereas the race-unaware alternative (UOF-MM) achieves smaller reductions at higher cost. The practical implication is that colorblind approaches are insufficient, yet race-aware methodologies raise ethical questions about the explicit use of race in operational decisions. Balancing these concerns with the need for equitable healthcare access is a critical challenge that policymakers and practitioners must address [7].

The ethical management of human-AI interaction is another critical area of focus. Heyder, Passlack, and Posegga (2023) argue that duty ethics (adherence to established guidelines and norms) and virtue ethics (the cultivation of moral virtues in developers and users) must be integrated, because the two can conflict in practice, particularly in widely used systems such as ChatGPT. The practical implications are organizational: interdisciplinary ethics teams and AI ethics committees should deliberate and decide collectively; ethical measures should be tailored to organizational culture, goals, and priorities, since ethics is partly subjective and must be adapted to local conditions; and diverse stakeholders, including managers, data scientists, and policymakers, should be involved to support responsible AI adoption and avoid reinforcing existing social inequalities [1].

The ethical frameworks outlined in "Ethical decision-making frameworks for surveillance analytics" carry a clear implication for ADS design: no single framework suffices. Consequence-based reasoning depends on outcome predictions that are unreliable in complex and uncertain environments; duty-based rules that treat everyone equally can still produce negative outcomes for some individuals; and virtue-based judgment depends on contested notions of what counts as virtuous. The SPED framework therefore recommends that, in situations of high surveillance and analytics sophistication and low individual privacy, all three normative approaches be applied together to obtain a comprehensive and holistic view of the ethical issues [9].

The integration of professional discretion and the alignment of ADS with institutional logics can enhance the precision, reliability, and ethical integrity of these systems. In the healthcare sector, the study by Lotfivand et al. (2024) on the use of convolutional neural networks (CNNs) in palliative care triage highlights the importance of integrating medical expertise into the development and training of ADS. The CNN was trained on a dataset annotated by medical experts, ensuring that the model could accurately identify levels of patient urgency from complex clinical data. This approach underscores the necessity of professional input to enhance the precision and reliability of ADS in high-stakes environments like palliative care. Professional discretion in validating and fine-tuning the AI model is essential for fostering trust and ensuring that the ADS aligns with the ethical standards and professional practices of the healthcare sector [4].

In summary, the findings suggest that the design and implementation of ADS in resource allocation contexts must be carefully managed to avoid reinforcing social inequalities. Public institutions should be vigilant and proactive in addressing the negative consequences of ADS, and the integration of professional discretion and ethical frameworks is crucial for ensuring that these systems are both effective and fair. The combination of Consequence, Duty, and Virtue ethical frameworks provides a robust basis for guiding the development and use of ADS, reflecting the values and norms of the community. Having examined these implications, the following section will conclude the report by summarizing the key findings and suggesting directions for future research.

6. Conclusion

This report has systematically explored the interactions between algorithmic decision systems (ADS) and institutional logics, professional discretion, and ethical frameworks in resource allocation contexts such as healthcare, education, and social services. The findings reveal that these interactions can either reinforce or disrupt existing social inequalities, depending on the specific design and implementation of ADS.

In the historical context, the evolution of ADS has been marked by significant advancements and evolving applications across these sectors. In healthcare, ADS have been developed to optimize resource allocation while balancing utilitarian and egalitarian objectives. For instance, a two-phase optimization model was developed to address the challenges of resource allocation in healthcare, enhancing transparency and accountability [2]. Similarly, in education, ADS have focused on improving resource allocation to enhance student outcomes and address inequalities, though concerns remain about the potential reinforcement of existing inequalities. In social services, ADS have been adopted to streamline processes and ensure equitable distribution of resources, but the design and implementation have often been shaped by institutional logics and professional discretion, which can either mitigate or exacerbate social inequalities.

The institutional logics that govern resource allocation decisions play a pivotal role in the implementation and impact of ADS. Organizational ignoring practices can lead to the blackboxing of ADS, preventing stakeholders from recognizing and addressing negative consequences [8]. In healthcare, the logics of professionalism and managerialism influence the use of ADS, with the former emphasizing the central role of clinicians and the latter focusing on cost-efficiencies and standardization [3]. These logics can either enhance the precision and reliability of ADS or contribute to social and legal injustices.

Professional discretion is another critical factor in the implementation and use of ADS. Practitioners' autonomy and judgment can mitigate or exacerbate the potential biases and errors introduced by ADS. In the context of public school placements in Gothenburg, the initial failure to address ADS errors was attributed to organizational ignoring practices, highlighting the importance of professional engagement in recognizing and correcting algorithmic injustices [8]. In healthcare, the integration of medical expertise into the development and training of ADS, such as a convolutional neural network (CNN) for palliative care triage, ensures that the model aligns with ethical standards and professional practices [4].

Ethical frameworks are essential for guiding the design and deployment of ADS. The report examines three major ethical frameworks—consequentialist, deontological, and virtue-based—each offering unique insights into the ethical management of ADS. The consequentialist framework emphasizes achieving the greatest good, but accurate outcome prediction is challenging. The deontological framework focuses on adhering to ethical rules and obligations, though treating everyone equally may sometimes lead to negative outcomes. The virtue-based framework relies on the cultivation of moral virtues, but defining and agreeing on virtuous traits can be difficult. The SPED framework suggests that all three approaches should be used together to address the complexities and uncertainties inherent in ADS [9].

Methodologically, the research has identified key approaches for integrating institutional logics, professional discretion, and ethical frameworks into the design and implementation of ADS. Decision rule optimization, sequential learning and optimization (SLO), and integrated learning and optimization (ILO) provide a strong technical foundation, but they must be adapted to consider the broader institutional and ethical context. For example, in healthcare, race-aware methodologies can reduce racial disparities in waiting times, but they raise ethical questions that must be addressed [7].
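The sequential learning and optimization (SLO) pattern named above can be sketched as a two-step "predict, then optimize" routine. The linear demand model and proportional allocation below are illustrative assumptions, not the formulation in the survey [10]:

```python
# Minimal predict-then-optimize sketch (SLO): first fit a model that
# predicts demand from context, then allocate a fixed budget against
# those predictions. Data and model form are synthetic assumptions.

def fit_slope(xs, ys):
    """Least-squares slope through the origin: y ~ b * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def allocate(budget, predicted_demand):
    """Give each site capacity proportional to its predicted demand."""
    total = sum(predicted_demand)
    return [budget * d / total for d in predicted_demand]

# Step 1 (learning): historical context -> demand pairs (synthetic).
contexts = [1.0, 2.0, 3.0, 4.0]
demands = [2.1, 3.9, 6.2, 7.8]
b = fit_slope(contexts, demands)

# Step 2 (optimization): predict demand for new sites, split the budget.
new_contexts = [2.0, 5.0, 3.0]
preds = [b * c for c in new_contexts]
print(allocate(100.0, preds))  # proportional allocation across three sites
```

The separation of the two steps is exactly where institutional and ethical adaptation enters: the learning step can incorporate contextual and group-aware features, and the optimization step can carry fairness constraints, whereas integrated learning and optimization (ILO) trains the model directly against the downstream decision objective.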

The results of the study emphasize the need for robust oversight and accountability mechanisms to ensure that ADS do not inadvertently perpetuate social inequalities. Public institutions must be vigilant and proactive in addressing the negative consequences of ADS, and the integration of professional discretion and ethical frameworks is crucial for achieving both effectiveness and fairness. The combination of Consequence, Duty, and Virtue ethical frameworks provides a comprehensive basis for guiding the development and use of ADS, reflecting the values and norms of the community.

Future research should focus on several key areas to further advance the understanding and ethical implementation of ADS in resource allocation contexts. First, more empirical studies are needed to explore the specific ways in which ADS interact with institutional logics and professional discretion in different sectors. Second, the development of robust ethical frameworks that can be applied consistently across various contexts is essential. Third, the involvement of diverse stakeholders, including managers, data scientists, and policymakers, should be encouraged to ensure that ADS are designed and used responsibly. Finally, the creation of interdisciplinary ethics teams and AI ethics committees can help establish an ethical ecosystem for AI, leading to more transparent and fair outcomes in resource distribution [1].

By addressing these areas, future research can contribute to the development of ADS that not only optimize resource allocation but also promote social equality and ethical integrity.

References

  1. Teresa Heyder, Nina Passlack, Oliver Posegga. (2023). Ethical management of human-AI interaction: Theory development review. Journal of Strategic Information Systems.
  2. Yrjänä Hynninen, Eeva Vilkkumaa, Ahti Salo. (2021). Operationalization of Utilitarian and Egalitarian Objectives for Optimal Allocation of Health Care Resources. Decision Sciences.
  3. Albert Boonstra, U. Yeliz Eseryel, Marjolein A. G. van Offenbeek. (2018). Stakeholders’ enactment of competing logics in IT governance: polarization, compromise or synthesis? European Journal of Information Systems.
  4. Nasser Lotfivand, Brian Dillon, Laura Lynch, Ciara Heavin. (2024). Enhancing palliative care triage: decision support system for patient prioritisation. Journal of Decision Systems.
  5. Koen W. De Bock, Kristof Coussement, Arno De Caigny, Roman Słowiński, Bart Baesens, Robert N. Boute, Tsan-Ming Choi, Dursun Delen, Mathias Kraus, Stefan Lessmann, Sebastián Maldonado, David Martens, María Óskarsdóttir, Carla Vairetti, Wouter Verbeke, Richard Weber. (2024). Explainable AI for Operational Research: A defining framework, methods, applications, and a research agenda. European Journal of Operational Research.
  6. Jiaqi Yang, Alireza Amrollahi, Mauricio Marrone. (2024). Harnessing the Potential of Artificial Intelligence: Affordances, Constraints, and Strategic Implications for Professional Services. Journal of Strategic Information Systems.
  7. Michele Samorani, Shannon L. Harris, Linda Goler Blount, Haibing Lu, Michael A. Santoro. (2022). Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling. Manufacturing & Service Operations Management.
  8. Charlotta Kronblad, Anna Essén, Magnus Mähring. (2024). When Justice is Blind to Algorithms: Multilayered Blackboxing of Algorithmic Decision Making in the Public Sector.
  9. Daniel J. Power, Ciara Heavin, Yvonne O’Connor. (2021). Balancing privacy rights and surveillance analytics: a decision process guide. Journal of Business Analytics.
  10. Utsav Sadana, Abhilash Chenreddy, Erick Delage, Alexandre Forel, Emma Frejinger, Thibaut Vidal. (2025). A survey of contextual optimization methods for decision-making under uncertainty. European Journal of Operational Research.