
White Papers

Why Most Fairness Metrics Don’t Work in Healthcare AI/ML

Selecting an appropriate definition of fairness is difficult for healthcare algorithms because they are applied to a wide range of problems. Read the paper to learn why different problems call for different definitions of fairness, and which fairness metric is best suited to population health AI/ML.

While most existing fairness metrics aren't suitable for assessing algorithms that inform population health decisions, healthcare organizations still have a responsibility to ensure their algorithms help distribute limited resources fairly. Learn about a new fairness metric developed to address the unique challenges of assessing fairness in a healthcare setting: Group Benefit Equality (GBE). With GBE, healthcare organizations now have a fairness metric to ensure that their predictive models reduce health disparities rather than exacerbate them.
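The paper gives the precise formulation of GBE; the short Python sketch below only illustrates the general idea under stated assumptions. It assumes GBE compares, for each group, the rate at which a model selects members for an intervention against the rate at which members of that group actually need it, and treats ratios within a tolerance band as fair. The function names, the example data, and the 0.8–1.25 band (echoing the familiar four-fifths convention) are illustrative assumptions, not definitions taken from the paper.

```python
import numpy as np

def group_benefit(y_true, y_pred, groups):
    """Per-group ratio of model-allocated benefit to actual need.

    Assumed formulation (illustrative, not the paper's exact definition):
    benefit ratio = P(flagged by model | group) / P(truly in need | group).
    A ratio near 1.0 means the group is selected for the intervention
    at roughly the rate its need would warrant.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    ratios = {}
    for g in np.unique(groups):
        mask = groups == g
        flagged_rate = y_pred[mask].mean()  # share of group flagged by model
        need_rate = y_true[mask].mean()     # share of group with actual need
        ratios[str(g)] = flagged_rate / need_rate if need_rate > 0 else float("nan")
    return ratios

def passes_gbe(ratios, lo=0.8, hi=1.25):
    """Flag groups whose benefit ratio falls outside a tolerance band.

    The 0.8-1.25 band is an assumed example threshold, not a value
    taken from the white paper.
    """
    return {g: lo <= r <= hi for g, r in ratios.items()}

# Toy example: a model that under-selects group "B" relative to its need.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratios = group_benefit(y_true, y_pred, groups)
print(ratios)              # {'A': 1.0, 'B': 0.25}
print(passes_gbe(ratios))  # {'A': True, 'B': False}
```

In this toy example, group B is flagged far less often than its need would warrant, so it fails the check even though overall model performance might look acceptable.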

Read the paper to learn: 

  • The existing problem with measuring fairness
  • What an appropriate fairness metric must address
  • About Group Benefit Equality (GBE), the fairness metric best suited to population health AI/ML

Read the white paper


Related Resources

Videos and Podcasts

Algorithmic Bias in Healthcare AI and How to Avoid It

In this AIMed webinar, ClosedLoop explains why assessing algorithmic bias is critical for ensuring healthcare resources are fairly allocated to members. Tune in as AIMed hosts an important conversation about avoiding hidden bias in healthcare AI.

Videos and Podcasts

How and Why You Should Assess Bias & Fairness in Healthcare AI Before Deploying to Clinical Workflows

Watch the on-demand session to learn why it's important to evaluate algorithms for bias before deployment and which metrics you can use to assess it. Plus, get a demo of new product features built precisely for this purpose.

Videos and Podcasts

DiME Webinar: Addressing Bias in the Evolving Landscape of AI in Health Care

This on-demand webinar explores the role of bias related to the use of data for AI in healthcare solutions and how clinicians, tech companies, and other stakeholders are learning mitigation approaches as the AI landscape evolves.

Make AI/ML a core element of your care strategy.

Get in touch today to see the ClosedLoop platform in action.