The #WeRobot poster session will take place at 6:15pm on Friday, September 24. Short video previews of each poster will be available during the Lightning Poster Session at 12:30pm. The session will showcase some of the latest research developments and projects in robotics.
Privacy’s Algorithmic Turn
By Maria P. Angel
- The increasing relevance of algorithms has created a pivot in American legal scholars’ privacy discourse, broadening the scope of privacy’s values and rights and pushing scholars to rethink the very nature of privacy.
- My research aims to trace how American legal scholars’ conception of the right to privacy has changed in the last 30 years.
- I intend to conduct a document analysis of materials from the Privacy Law Scholars Conferences (PLSC), using the Science and Technology Studies (STS) concept of “sociotechnical imaginaries” as my theoretical framework to make sense of the changing nature of privacy.
Egalitarian Machine Learning
By Clinton Castro, David O’Brien and Ben Schwan
- The increased reliance on prediction-based decision making has been accompanied by growing concerns about the fairness of this technology’s use, concerns made harder to address by the lack of a consensus definition of “fairness” in this context.
- Fairness, as used in the fair machine learning community, is best understood as a placeholder term for a variety of normative egalitarian considerations, chiefly the requirement not to be wrongfully discriminatory.
- We are interested in exploring how to choose a fairness measure within a context. We present a general picture for thinking about the choice of a measure and talk about the choiceworthiness of three measures (“fairness through unawareness”, “equalised odds”, and “counterfactual fairness”).
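As a concrete illustration of one of these measures, here is a minimal sketch (not taken from the poster, and using invented data) of how the “equalised odds” criterion can be checked on binary predictions: it requires the true positive rate and the false positive rate to match across protected groups.

```python
# Illustrative sketch of the "equalised odds" fairness measure.
# Equalised odds asks that the true positive rate (TPR) and false positive
# rate (FPR) be equal across the protected groups.

def rates(y_true, y_pred):
    """Return (TPR, FPR) for one group's labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, fp / neg

def equalised_odds_gap(y_true, y_pred, group):
    """Largest TPR/FPR difference between two groups (0.0 = perfectly fair)."""
    a = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 0]
    b = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 1]
    tpr_a, fpr_a = rates([t for t, _ in a], [p for _, p in a])
    tpr_b, fpr_b = rates([t for t, _ in b], [p for _, p in b])
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))
```

A classifier with a nonzero gap favors one group on errors; part of the choice-of-measure question the poster raises is when such a gap, rather than, say, a counterfactual criterion, is the right thing to minimize.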
Exploring Robotic Technologies to Mediate and Reduce Risk in Domestic Violence
By Mark Juszczak
- I am researching applications of robotic technologies to reduce domestic violence using two different perspectives: a problem-based perspective and a robotics-platform based perspective.
- The problem-based perspective seeks to classify the spatiotemporal conditions under which a quantifiable threat or hazard of domestic violence occurs for women.
- The robotics-platform based perspective seeks to determine the functional limits of embodied AI in providing an enhanced security function to mediate and reduce domestic violence.
Examining Correlations between Human Empathy and Vicbots
By Catherine McWhorter
- This project focuses on robots capable of fulfilling victim roles – “vicbots” – and defines them as anthropomorphic bots with advanced AI that plead for the cessation of harm.
- Whether or not vicbots negatively affect their human agent’s capacity for human-to-human empathy and compassion has implications for the health of the human agent, as well as the overall safety and well-being of communities.
- Federal regulation is difficult due to a lack of consensus in research and discourse, so it is important to first categorize these bots and understand their impacts before moving on to appropriate regulation.
Reported Ethical Concerns Over Use of Robots for COVID-19 and Recommendations for Responsible Innovation for Future Pandemics
By Robin Murphy, Paula Dewitte, Jason Moats, Angela Clendenin, Vignesh Gandudi, Henry Arendes and Shawn Abraham
- The coronavirus pandemic has led to new robots for healthcare, public safety, continuity of work and education, and social interactions.
- As with any new application of technology, this may pose new ethical challenges for civil and administrative law, policy, and professional ethics.
- While responsible innovation typically takes a lengthy engagement of direct and indirect stakeholders, disasters require immediate action, so we propose a short-term framework for stakeholders and roboticists to perform a proactive demand analysis.
Artificial Intelligence: The Challenges For Criminal Law In Facing The Passage From Technological Automation To Artificial Autonomy
By Beatrice Panattoni
- The project aims to analyze the possible and future criminal policies regarding the regulation of harms related to the use and functioning of AI systems.
- A possible technically oriented classification of “AI crimes” will be suggested, organized into three groups:
- (1) Cases where the AI system is used by a criminal agent as the means to realize the crime;
- (2) Cases where the AI system is the “object” against which the crime is committed; and
- (3) Cases where the realization of a crime is caused by the emergent behavior of an AI system.
- The main issue is whether there is still space for criminal law when it comes to harms arising from the emergent behavior of an AI system and, if so, what kind of criminal policies are best suited to this context; we outline possible scenarios in this presentation.
Roboethics to Design & Development Competition: Translating Moral Values Into Practice
By Jimin Rhim, Cheng Lin and Ajung Moon
- As robots enter our everyday spaces, human-robot interactions with ethically sensitive situations are bound to occur. For instance, designing a robot to evaluate whether to obey a teenager’s request to fetch alcohol remains a socio-technical challenge.
- Our proposed project addresses this by hosting a first-of-its-kind global robotics design competition to explore new ways of considering human values and translating this information for robots.
- In addition to illuminating the translation process, the accumulated competition results will form the basis for an in-depth ethics audit framework to evaluate interactive robotic systems.
How Do AI Systems Fail Socially? Social Failure Mode and Effect Analysis (FMEA) for Artificial Intelligence Systems
By Shalaleh Rismani and Ajung Moon
- Developers of Artificial Intelligence Systems (AIS) have unearthed various sociotechnical failures in many applications, including inappropriate use of language in chatbots and discriminatory automated decision support systems.
- Our open-ended research question is: how can AIS developers use FMEAs as one of the tools for creating accountability and improving design for sociotechnical failures?
- In this work, we build on Raji et al.’s end-to-end auditability framework and develop a novel FMEA process that allows developers to effectively discover AIS’s social and ethical failures.
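The poster does not spell out its social FMEA mechanics, but the bookkeeping of a classic FMEA can be sketched briefly: each failure mode is scored for severity, occurrence, and detectability, and the product (the Risk Priority Number, RPN) ranks what to address first. The failure modes below are invented for illustration only.

```python
# Minimal sketch of classic FMEA bookkeeping applied to hypothetical social
# failure modes of an AI system. The 1-10 scales and the Risk Priority Number
# (RPN = severity x occurrence x detection) follow traditional FMEA; the
# specific failure modes are illustrative assumptions, not from the poster.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily caught) .. 10 (hard to detect)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

def prioritize(modes):
    """Rank failure modes by RPN, highest risk first."""
    return sorted(modes, key=lambda m: m.rpn, reverse=True)

modes = [
    FailureMode("Chatbot emits offensive language", 7, 4, 3),
    FailureMode("Loan model disadvantages a protected group", 9, 3, 8),
]
for m in prioritize(modes):
    print(m.rpn, m.description)
```

The interesting research question is upstream of this arithmetic: how developers identify and score *social* failure modes at all, which is what the proposed process addresses.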
Nudging Robot Engineers To Do Good: Developing Standards for Ethical AI and Robot Nudges
By John Sullins, Sean Dougherty, Vivek Nallur and Ken Bell
- HyperNudging, or A/IS (Autonomous/Intelligent Systems) Nudging, allows programmers to engage in changing the behavior of users, such as encouraging exercise when a user has been sedentary, rather than simply predicting the value of some variable.
- Soon, we will see more robot systems designed to do similar things on a larger scale, such as promoting safe behavior in public spaces, helping officials monitor public health or enforce quarantines, or encouraging people to stay longer in shops, museums and malls.
- In this poster, we describe two use case scenarios of A/IS Nudging and show how new standards are being designed to help engineers build systems that are attuned to producing more ethical outcomes.
Machine Learning Algorithms in the Administrative State: The New Frontier for Democratic Experimentalism
By Amit Haim
- Administrative agencies are utilizing machine learning (ML) algorithms to ameliorate inaccuracies, inconsistencies, and inefficiencies. Due to the leeway these agencies have, especially at the local level, there is significant variation in agencies’ procedures, which may lead to reduced transparency and accountability.
- Nevertheless, prescriptive approaches fail to recognize that flexible schemes are important for enhancing the values the administrative state often lacks; rigid schemes are likely to stifle innovation and push agencies toward the status quo.
- I argue that internal governance processes (e.g., partnerships, independent evaluations) can promote transparency while addressing problems in algorithms such as disparities and opacity.