Ethical use of DSS

Human rights, equality & diversity

What are human rights, equality & diversity?

Human rights set out the basic rights and freedoms that belong to all human beings, regardless of race, sex, nationality, ethnicity, language, religion, or any other status. In the UK, these rights are protected by the Human Rights Act 1998. Human rights include the right to life and liberty, freedom from slavery and torture, freedom of opinion and expression, the right to work and education, and many more.

All public bodies, including local governments and health services, must do all they can to protect people's human rights, and must not infringe upon or interfere with those rights without justifiable cause.

The Equality Act 2010 legally protects people from unfair treatment because of any of the following ‘protected characteristics’:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion or belief
  • Sex
  • Sexual orientation

The Equality Act states that it is unlawful to treat an individual less favourably, or unfavourably, because of their protected characteristic. Examples could include:

  • Being excluded from something
  • Being refused a service
  • Being deprived of a choice
  • Being put at a disadvantage compared with other service users
  • Receiving a poorer quality of service than others.

Decision support can impact on human rights, equality and diversity in several ways. For example:

  • People with no access to technology, or with low digital and health literacy, will need extra support to get the benefit of digital decision support tools.
  • The original research or datasets on which a decision support tool is based may not be representative of all groups with protected characteristics, meaning the tool may make recommendations that are unsuitable for those groups (one way to check for such gaps is sketched below).
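
One simple way to check for such gaps is to compare the demographic mix of a tool's underlying dataset with that of the population the tool will serve. The sketch below is illustrative only: the group names, counts and population shares are hypothetical, and a real audit would use the relevant protected characteristics together with census or service data.

    # Minimal sketch: compare each group's share of a dataset with its share
    # of the reference population. All figures below are hypothetical.
    def representation_gaps(dataset_counts: dict, population_share: dict) -> dict:
        total = sum(dataset_counts.values())
        return {group: dataset_counts.get(group, 0) / total - share
                for group, share in population_share.items()}

    counts = {"group_a": 900, "group_b": 100}        # who is in the dataset
    population = {"group_a": 0.6, "group_b": 0.4}    # who the tool will serve
    print(representation_gaps(counts, population))
    # group_a over-represented by ~0.3; group_b under-represented by ~0.3

A markedly negative gap for any group is a signal that the tool's recommendations should be validated for that group before being relied upon.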

How do I provide evidence of competency in this area? 

Can you...

  • Explain to others in simple terms how human rights and equality may be impacted by the use of decision support systems?
  • Outline ways in which these risks can be mitigated?

Bloom's level 2: Understand

DDAT Framework roles: Data ethicist

Risk of bias in DSS - both knowledge-based systems and those based on computer-generated algorithms

What is the risk of bias in decision support systems?

Bias can enter decision support systems, particularly algorithmic decision support systems, in a number of ways, including:

Historical bias: The data on which the model is built, tested and operated can introduce bias. This may be because of previously biased human decision-making or because of societal or historical inequalities. For example, if a workforce dataset is made up predominantly of white men, an algorithm trained on it may reinforce that imbalance.

Data selection bias: How the data is collected and selected could mean it is not representative. For example, over- or under-recording of particular groups could mean the algorithm is less accurate for some people, or gives a skewed picture of particular groups.

Algorithmic design bias: Algorithmic bias can also be caused by programming errors, such as a developer unfairly weighting factors in the algorithm's decision-making based on their own conscious or unconscious biases. For example, indicators like income or vocabulary might be used by the algorithm to unintentionally discriminate against people of a certain race or gender.

Cognitive bias: When people process information and make judgements, they are inevitably influenced by their experiences and preferences. As a result, people may build these biases into AI systems through the selection of data or how the data is weighted. For example, cognitive bias could lead to favouring datasets gathered from predominantly white populations rather than sampling from a range of populations. Cognitive bias can also influence the way that health and care professionals interpret the results delivered by decision support systems.
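
One practical way to surface several of these biases is to audit a tool's performance separately for each demographic group it serves. The following is a minimal sketch, not a definitive method: it assumes a pandas DataFrame with illustrative column names ('group', 'y_true', 'y_pred'), and the 0.05 accuracy-gap threshold is a hypothetical choice, not a recognised standard.

    import pandas as pd

    def subgroup_accuracy(df: pd.DataFrame) -> pd.Series:
        # Accuracy of the tool's predictions within each demographic group.
        correct = (df["y_true"] == df["y_pred"]).rename("accuracy")
        return correct.groupby(df["group"]).mean()

    def has_performance_gap(df: pd.DataFrame, max_gap: float = 0.05) -> bool:
        # True if accuracy for the best- and worst-served groups
        # differs by more than max_gap.
        acc = subgroup_accuracy(df)
        return bool(acc.max() - acc.min() > max_gap)

    # Illustrative audit data: the tool is accurate for group A but not group B.
    audit = pd.DataFrame({
        "group":  ["A", "A", "B", "B"],
        "y_true": [1, 0, 1, 0],
        "y_pred": [1, 0, 0, 0],
    })
    print(subgroup_accuracy(audit))    # A: 1.0, B: 0.5
    print(has_performance_gap(audit))  # True

A gap like this does not by itself prove unfairness, but it is a prompt to investigate the historical, selection or design biases described above.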

Reducing the risk of bias

There are best practices that healthcare data scientists and developers can incorporate to address the risk of bias in decision support systems. These include:

  • Have a more diverse body of people review the algorithms and supervise machine learning approaches.
  • Use methods or techniques to best manage situations where there is not enough information available, like using synthetic data (one such technique is sketched after this list).
  • Work with diverse communities to ensure the algorithms are helpful and don't cause harm.
  • Introduce the algorithms gradually and carefully instead of all at once.
  • Create ways for people to provide feedback and improve the algorithms over time.
  • Involve diverse members of the workforce in developing the algorithms and in validating them against patient data from various racial and ethnic backgrounds.
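
As a simple illustration of one such technique, under-represented groups can be given greater weight during model training so that they are not drowned out by the majority. This is a minimal sketch under stated assumptions: the group labels and counts are illustrative, and inverse-frequency weighting is just one possible mitigation, not a prescribed method.

    import numpy as np

    def inverse_frequency_weights(groups: np.ndarray) -> np.ndarray:
        # Weight each sample by the inverse of its group's frequency, so that
        # rare groups contribute as much to training as common ones.
        values, counts = np.unique(groups, return_counts=True)
        freq = dict(zip(values, counts / len(groups)))
        return np.array([1.0 / freq[g] for g in groups])

    # Illustrative data: group B makes up only 10% of the training set.
    groups = np.array(["A"] * 90 + ["B"] * 10)
    weights = inverse_frequency_weights(groups)  # ~1.11 for A, 10.0 for B

    # Most scikit-learn estimators accept these as sample_weight, e.g.
    # LogisticRegression().fit(X, y, sample_weight=weights).

Reweighting treats the symptom rather than the cause; it works best alongside the community engagement, diverse review and gradual rollout described above.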

Nazer, L.H. et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 22 June 2023. https://doi.org/10.1371/journal.pdig.0000278

How do I provide evidence of competency in this area? 

Can you...

  • Describe the main sources of bias in decision support systems?
  • Outline ways in which sources of bias can be mitigated?

Bloom's level 2: Understand

DDAT Framework roles: Data scientist, Data engineer, Data ethicist

Impact on human judgement - professional, social and cultural context of DSS

What is 'Impact on human judgement – professional, social and cultural context of decision support systems'?

Decision support systems can impact on human judgement, and on its professional and social context, in a number of ways. For example:

Risk of automation bias: this is where clinicians may trust the outputs or recommendations of AI systems not because there is evidence of their efficacy, but because automated decision support systems are perceived as objective, accurate, or better able to manage complexity.

Over-reliance on decision support systems can result in "de-skilling" - i.e. inhibiting the development of skills, professional communities, and norms of ‘good practice’ developed through the wisdom that comes with practical experience.

The impact of decision support systems on person-centred care, and on the relationship between professional and patient/client, is likely to depend on whether they assist or replace human practitioners. Assistive decision support may have a positive role to play in practitioner-patient/client relationships by providing a focus for improving openness and communication in shared decision-making. On the other hand, decision support tools that replace human practitioners can have a negative impact, because the relationship between practitioner and patient/client is key to the therapeutic process and to working out together the best approach to supporting the person's needs.

Decision support literacy and emotional intelligence are key skills for mitigating the potential risks that decision support tools present to professional judgement.

In terms of social impact, decision support tools can provide robust and trusted support to health and care practitioners battling complex workloads and increasing demands. They can minimise errors resulting from fatigue and challenges to concentration. They can be leveraged to reduce operational complexity and improve efficiency. For patients and clients, decision support tools can aid self-management and self-care, and can support them with information to become proactive participants in decisions about their care.

How do I provide evidence of competency in this area? 

Can you...

  • Explain to others how decision support systems can impact on professional judgement and how to ensure that these systems assist rather than replace professional judgement?
  • Explain how decision support systems can impact both negatively and positively on the practitioner-patient/client relationship?
  • Outline some of the ways in which critical thinking needs to be applied to decision support?
  • Weigh up the risks associated with over-reliance on decision support and with not using it?
  • Appreciate that decision support has the potential to lead to changes in roles, responsibilities and processes of care?
  • Apply this understanding in the way you use decision support tools in your practice?

Bloom's level 3: Apply

DDAT Framework roles: Data ethicist, Business analyst

Social good

What is 'social good' in relation to decision support?

Social good or common good describes an action that benefits the general public, or at least a significant number of people in a community. The United Nations Sustainable Development Goals provide an overview of key elements of social good. They include ending poverty and hunger, improving health and education, addressing climate change, and creating economic opportunity. Decision support systems can contribute to all these outcomes. 

Five key ethical questions to ask about a decision support tool, to help ensure a positive impact on social good, are:

  • Will the proposed use of the tool benefit the general public, or at least a significant number of people in the community?
  • Will it avoid harm?
  • Does it consider people fairly and impartially?
  • Have the people affected by the tool given informed consent to its use?
  • Can you explain how the decision support tool works - and who is accountable for its use?

How do I provide evidence of competency in this area? 

Can you...

  • Describe how decision support systems can contribute to social good and provide examples?
  • Cite key questions to ask to confirm whether a decision support system contributes to social good?

Bloom's level 2: Understand

DDAT Framework roles: Data scientist, Data engineer, Data ethicist, Business analyst