Prescribing Exploitation

Charlotte Tschider

Charlotte Tschider will present her paper, Prescribing Exploitation, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Patients increasingly rely on connected wearable medical devices that combine artificial intelligence infrastructures with physical housings that interact directly with the human body. Many people who have traditionally depended on compulsory medical wearables belong to groups specifically protected under anti-discrimination law, such as people with disabilities. As populations age and average lifespans lengthen, the field of medical wearables is about to encounter a patient population explosion that will force the medical industry, lawyers, and advocates to balance immensely larger volumes of patient health data against maintaining a focus on patient dignity.


Health data discrimination results from a combination of factors essential to effective medical device AI operation:

1. the existence, or approximation, of a fiduciary relationship;
2. a technology-user relationship independent of the expertise of the fiduciary;
3. a critical health event or status requiring use of a medical device;
4. ubiquitous sensitive data collection essential to AI functionality, coupled with the exceptional nature of health data;
5. a lack of reasonably similar analog technology alternatives; and
6. compulsory reliance on a medical device.

Each of these factors increases the probability of inherent discrimination: a deontological privacy risk resulting from healthcare AI use.

We conclude that health technologies introduce a unique combination of circumstances that creates a new conception of discrimination: discrimination created by technology reliance, rather than automated or exacerbated by it. Specific groups are protected under anti-discrimination laws because there is an inherent risk of injury due to an individual’s status. If individuals who are compulsorily dependent on AI-enabled healthcare technologies are uniquely vulnerable relative to their non-technology-dependent peers, they are owed additional duties.