Nov 21, 2022 4:45:03 PM | AI & Bioethics

How the White House’s Recent AI Bill of Rights Holds Up to Bioethics

Blueprinting the Bioethics of AI in the White House


In early October, the White House published the Blueprint for an AI Bill of Rights, detailing “principles set up for the protection of the American public with regard to the design, use, and deployment of automated systems in the age of artificial intelligence.” The “sector-agnostic toolkit” was built “through extensive consultation with the American public” to “align with democratic values and protect civil rights, civil liberties, and privacy.” Equipped with a national values statement, it is meant to “inform building these protections into policy, practice, or technological design process where there might have previously been an oversight of requirements or lack of guidance for sector-specific privacy laws.”

How does this relate to AI in healthcare?

As is best practice with any new development, and to ensure that new systems are safe and effective, the Blueprint states that “automated systems should be developed with consultation from diverse communities.”

However, this language has a preventative framing, focusing on “protection from something… (a ‘negative right’)” rather than on a potential future in which patients “have a right to AI systems (a ‘positive right’)”. This bill of rights weighs the balance of AI’s potential: whether it will “be harmful and unsafe, or safe and effective.”

Further concerns addressed are discrimination and how to avoid it algorithmically through equitable design: “Dependent on the circumstance, algorithmic discrimination may violate legal protections.” One might expect that prevention would already have been addressed during development through the “consultation of diverse communities”; however, the Blueprint states that protection should be included in system design and built with “representative data and proactive equity assessments to ensure accessibility.”

There is a focus on developing protective measures against “‘unjustified different treatment’ by the algorithm,” which raises concerns about data input and output with regard to patient predictions. Based on race or age ranges, data input and output can “contribute to different treatments and impacts, disfavoring certain groups.” This raises the question: what qualifies as “unjustified disfavoring”?
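To make that question concrete, one form a “proactive equity assessment” might take is measuring whether a system’s outputs disfavor particular groups before asking whether the disparity is justified. The sketch below is a minimal, hypothetical illustration and is not drawn from the Blueprint itself: the group labels, toy data, and review threshold are all assumptions made for the example.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Compute the positive-prediction rate for each demographic group.

    predictions: list of 0/1 model outputs (1 = recommended for treatment)
    groups:      list of group labels, aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, max_gap=0.2):
    """Flag the system for human review if the gap between the highest and
    lowest group selection rates exceeds max_gap (an arbitrary threshold)."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > max_gap

if __name__ == "__main__":
    # Toy data: model recommendations and (hypothetical) patient age groups.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["under_65", "under_65", "under_65", "under_65", "under_65",
              "over_65", "over_65", "over_65", "over_65", "over_65"]

    rates = selection_rates(preds, groups)
    gap, needs_review = flag_disparity(rates)
    print(f"Selection rates: {rates}")
    print("Review for possible disfavoring" if needs_review else "Within threshold")
```

A check like this only surfaces a disparity; deciding whether the different treatment is “unjustified” still requires the kind of human and community judgment the Blueprint calls for.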

Building protections into these systems creates complex concerns. Beyond ensuring equity in the accessibility and usability of these systems, data privacy is also covered in the AI Bill of Rights. With the goal of “protection from abusive data practices via built-in protections,” consent is highlighted in “data collection and usage, privacy by design defaults where consent is not possible, and freedom from surveillance technologies.” While data protection and privacy are of the utmost importance, they stand in tension with the functionality, safety, and efficacy of AI and machine learning, which are data-centric.

System transparency and disclosure statements are also covered, to ensure that users and affected patients receive proper notice when automated systems are involved in their treatment and testing.

Human involvement is covered as well: not only for system training and data input, but also as standby availability to address potential system issues and as an alternative for those who wish to opt out of automated systems, on the grounds that users and patients have the right to choose, per their preference, between a human and an AI system.

Looking at the future of life with AI technology, the AI Bill of Rights is a key step toward balancing political and moral duties “while mitigating potential negatives and protecting human rights.”

For more, read the work cited in Jennifer Blumenthal-Barby’s article, “An AI Bill of Rights,” at Bioethics Today, or read the White House’s Blueprint for an AI Bill of Rights in full.

Written By: Kaitlynn Clement