If you have not yet registered, the Registration Page awaits you.
Michelle Johnson is the moderator for the Health Robotics panel at 4:30pm on Saturday, September 25th at #werobot. The panel will feature the following papers:
Her research is centered on robot-mediated rehabilitation. She focuses on the investigation and rehabilitation of dysfunction due to aging, neural disease, and neural injury. In particular, she is interested in 1) exploring the relationships between brain plasticity and behavioral/motor control changes after robot-assisted interventions; 2) quantifying motor impairment and motor control of the upper limb in real-world tasks such as drinking; and 3) defining methods to maintain therapeutic effectiveness while administering local and remote, robot-mediated interventions.
She directs the Rehabilitation Robotics Lab at the University of Pennsylvania Perelman School of Medicine. This is a new lab within the Department of Physical Medicine and Rehabilitation in the School of Medicine. The Rehabilitation Robotics Lab’s mission is to use robotics, rehabilitation, and neuroscience techniques to translate research findings into the development of assistive and therapeutic rehabilitation robots capable of functioning in real-world rehabilitation environments. Michelle and the Lab aim to improve the quality of life and function in activities of daily living (ADLs) of their target population in supervised or under-supervised settings.
Cynthia Khoo will discuss Anti-Discrimination Law’s Cybernetic Black Hole at 3:00pm on Saturday, September 25th at #werobot.
Cynthia Khoo is a digital rights lawyer and founder of Tekhnos Law. She is also a full-time Associate at the Center on Privacy & Technology at Georgetown Law Center, a Research Fellow at the Citizen Lab (Munk School of Global Affairs & Public Policy, University of Toronto), and a member of the Board of Directors of Open Privacy Research Society.
She has extensive experience representing clients in proceedings before the Canadian Radio-television and Telecommunications Commission (CRTC), and has represented clients as interveners before the Supreme Court of Canada. She regularly researches and writes policy submissions to government consultations and advises on legal, policy, advocacy, and campaign strategies.
In April 2021, she completed a research grant from the Women’s Legal Education and Action Fund (LEAF), resulting in the publication of the landmark report, Deplatforming Misogyny: Report on Platform Liability for Technology-Facilitated Gender-Based Violence. The report provides recommendations for legislative and other reforms, and will inform LEAF’s future litigation and legal reform strategy concerning technology-facilitated gender-based violence, abuse, and harassment (TFGBV).
Cynthia Khoo earned her J.D. from the University of Victoria and B.A. (Honours English) from the University of British Columbia. This included exchange semesters at Université Jean-Moulin Lyon III and the National University of Singapore, Faculty of Law (NUS Law). She also holds an LL.M. (Concentration in Law and Technology) from the University of Ottawa, where she specialized in online platform regulation and platform liability for harms to marginalized communities. Her paper based on this work was delivered at We Robot 2020, where she received the inaugural Ian R. Kerr Robotnik Memorial Award for the Best Paper by an Emerging Scholar.
Meg Mitchell Will Lead Discussion on Understanding Consumer Contracts with Computational Language Models
Meg Mitchell will discuss Predicting Consumer Contracts at 1:30pm on Saturday, September 25th at #werobot.
Meg Mitchell’s research primarily involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. This includes research on helping computers to communicate based on what they can process, as well as projects to create assistive and clinical technology from the state of the art in AI. Her work combines computer vision, natural language processing, social media, many statistical methods, and insights from cognitive science.
Before founding Ethical AI and co-founding ML Fairness at Google Research, she was a founding member of Microsoft Research’s “Cognition” group, focused on advancing artificial intelligence, and a researcher in Microsoft Research’s Natural Language Processing group. She was a postdoctoral researcher at The Johns Hopkins University Center of Excellence, where she focused on structured prediction, semantic role labeling, and sentiment analysis, working under Benjamin Van Durme. Before that, she was a postgraduate (PhD) student in the natural language generation (NLG) group at the University of Aberdeen, where she focused on how to naturally refer to visible, everyday objects. She primarily worked with Kees van Deemter and Ehud Reiter.
In 2008, she received a Master’s in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia. From 2005 to 2012, she worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon. She worked on technology that leverages syntactic and phonetic characteristics to aid those with neurological disorders.
Madeleine Clare Elish will discuss Autonomous Vehicle Fleets as Public Infrastructure at 11:30am on Saturday, September 25th at #werobot.
Madeleine Clare Elish previously led the AI on the Ground Initiative at Data & Society, where she and her team investigated the promises and risks of integrating AI technologies into society. Through human-centered and ethnographic research, AI on the Ground sheds light on the consequences of deploying AI systems beyond the research lab, examining who benefits, who is harmed, and who is accountable. The initiative’s work has focused on how organizations grapple with the challenges and opportunities of AI, from changing work practices and responsibilities to new ethics practices and forms of AI governance.
As a researcher and anthropologist, Madeleine has worked to reframe debates about the ethical design, use, and governance of AI systems. She has conducted field work across varied industries and communities, ranging from the Air Force, the driverless car industry, and commercial aviation to precision agriculture and emergency healthcare. Her research has been published and cited in scholarly journals as well as publications including The New York Times, Slate, The Guardian, Vice, and USA Today. She holds a PhD in Anthropology from Columbia University and an S.M. in Comparative Media Studies from MIT.
Deb Raji will discuss Debunking Robot Rights: Metaphysically, Ethically and Legally at 10:00am on Saturday, September 25th at #werobot.
Deb Raji is a computer scientist and activist whose work centers on algorithmic bias, AI accountability, and algorithmic auditing. She received her degree in Engineering Science from the University of Toronto in 2019. In 2015, she founded Project Include, a nonprofit providing increased student access to engineering education, mentorship, and resources in low income and immigrant communities in the Greater Toronto Area.
The #WeRobot poster session will take place at 6:15pm on Friday, September 24. Short video previews of each poster will be available during the Lightning Poster Session at 12:30pm. The session will showcase late-breaking research developments and projects in robotics.
Privacy’s Algorithmic Turn
By Maria P. Angel
- The increasing relevance of algorithms has created a pivot in American legal scholars’ privacy discourse, broadening the scope of privacy’s values and rights and pushing scholars to rethink the very nature of privacy.
- My research aims to trace how American legal scholars’ conception of the right to privacy has changed in the last 30 years.
- I intend to conduct document analysis of materials from the Privacy Law Scholars Conference (PLSC), and to use the Science and Technology Studies (STS) theory of “sociotechnical imaginaries” as my theoretical framework for making sense of the changing nature of privacy.
Egalitarian Machine Learning
By Clinton Castro, David O’Brien and Ben Schwan
- The increased reliance on prediction-based decision making has been accompanied by growing concerns about the fairness of the use of this technology, made more difficult by the lack of a consensus definition of “fairness” in this context.
- Fairness, as used in the fair machine learning community, is best understood as a placeholder term for a variety of normative egalitarian considerations, namely to not be wrongfully discriminatory.
- We are interested in exploring how to choose a fairness measure within a context. We present a general picture for thinking about the choice of a measure and talk about the choiceworthiness of three measures (“fairness through unawareness”, “equalised odds”, and “counterfactual fairness”).
Exploring Robotic Technologies to Mediate and Reduce Risk in Domestic Violence
By Mark Juszczak
- I am researching applications of robotic technologies to reduce domestic violence using two different perspectives: a problem-based perspective and a robotics-platform based perspective.
- The problem-based perspective seeks to classify the spatial-temporal conditions under which a quantifiable threat or hazard of domestic violence occurs for women.
- The robotics-platform based perspective seeks to determine the functional limits of embodied AI in providing an enhanced security function to mediate and reduce domestic violence.
Examining Correlations between Human Empathy and Vicbots
By Catherine McWhorter
- This project focuses on robots capable of fulfilling victim roles – “vicbots” – and defines them as anthropomorphic bots with advanced AI that plead for the cessation of harm.
- Whether or not vicbots negatively affect their human agent’s capacity for human-to-human empathy and compassion has implications for the health of the human agent, as well as the overall safety and well-being of communities.
- Federal regulation is difficult due to a lack of consensus in research and discourse, so it is important to first categorize these bots and understand their impacts before moving on to appropriate regulation.
Reported Ethical Concerns Over Use of Robots for COVID-19 and Recommendations for Responsible Innovation for Future Pandemics
By Robin Murphy, Paula Dewitte, Jason Moats, Angela Clendenin, Vignesh Gandudi, Henry Arendes and Shawn Abraham
- The coronavirus pandemic has led to new robots for healthcare, public safety, continuity of work and education, and social interactions.
- As with any new application of technology, this may pose new ethical challenges for civil and administrative law, policy, and professional ethics.
- While responsible innovation typically takes a lengthy engagement of direct and indirect stakeholders, disasters require immediate action, so we propose a short-term framework for stakeholders and roboticists to perform a proactive demand analysis.
Artificial Intelligence: The Challenges For Criminal Law In Facing The Passage From Technological Automation To Artificial Autonomy
By Beatrice Panattoni
- The project aims to analyze the possible and future criminal policies regarding the regulation of harms related to the use and functioning of AI systems.
- A possible technically oriented classification of “AI crimes” will be suggested, organized into three groups:
- (1) Cases where the AI system is used by a criminal agent as the means to realize the crime;
- (2) Cases where the AI system is the “object” against which the crime is committed; and
- (3) Cases where the realization of a crime is caused by the emergent behavior of an AI system.
- The main issue is whether there is still space for criminal law when it comes to harms related to emergent behavior of an AI system, and, if so, what kind of criminal policies are better suited in this context; we outline possible scenarios in this presentation.
Roboethics to Design & Development Competition: Translating Moral Values Into Practice
By Jimin Rhim, Cheng Lin and Ajung Moon
- As robots enter our everyday spaces, human-robot interactions with ethically sensitive situations are bound to occur. For instance, designing a robot to evaluate whether to obey a teenager’s request to fetch alcohol remains a socio-technical challenge.
- Our proposed project addresses this by hosting a first-of-its-kind global robotics design competition to explore new ways of considering human values and translating this information for robots.
- In addition to illuminating the translation process, the accumulated competition results will form the basis for an in-depth ethics audit framework to evaluate interactive robotic systems.
How Do AI Systems Fail Socially? Social Failure Mode and Effect Analysis (FMEA) for Artificial Intelligence Systems
By Shalaleh Rismani and Ajung Moon
- Developers of Artificial Intelligence Systems (AIS) have unearthed various sociotechnical failures in many applications, including inappropriate use of language in chatbots and discriminatory automated decision support systems.
- Our open-ended research question is: how can AIS developers use FMEAs as one of the tools for creating accountability and improving design for sociotechnical failures?
- In this work, we build on Raji et al.’s end-to-end auditability framework and develop a novel FMEA process that allows developers to effectively discover AIS’s social and ethical failures.
Nudging Robot Engineers To Do Good: Developing Standards for Ethical AI and Robot Nudges
By John Sullins, Sean Dougherty, Vivek Nallur and Ken Bell
- HyperNudging, or A/IS (Autonomous Intelligent Systems) Nudging, allows programmers to engage in changing the behavior of users, such as encouraging exercise when a user has been sedentary, as opposed to simply predicting the value of some variable.
- Soon, we will see more robot systems designed to do similar things on a larger scale, such as promoting safe behavior in public spaces, helping officials monitor public health or enforce quarantines, or encouraging people to stay longer in shops, museums and malls.
- In this poster, we describe two use case scenarios of A/IS Nudging and show how new standards are being designed to help engineers build systems that are attuned to producing more ethical outcomes.
Machine Learning Algorithms in the Administrative State: The New Frontier for Democratic Experimentalism
By Amit Haim
- Administrative agencies are utilizing machine learning (ML) algorithms to ameliorate inaccuracies, inconsistencies, and inefficiencies. Due to the leeway these agencies have, especially at the local level, there is significant variation in agencies’ procedures, which may lead to reduced transparency and accountability.
- Nevertheless, prescriptive approaches fail to recognize that flexible schemes are important for enhancing the values the administrative state often lacks; rigid schemes are likely to stifle innovation and push agencies toward the status quo.
- I argue that internal governance processes (e.g., partnerships, independent evaluations) can promote transparency while addressing problems in algorithms such as disparities and opacity.
Meg Leta Jones will discuss Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems at 5:15pm on Friday, September 24th at #werobot.
Meg Leta Jones is an Associate Professor in the Communication, Culture & Technology program at Georgetown University where she researches rules and technological change with a focus on privacy, memory, innovation, and automation in digital information and computing technologies. She is also a core faculty member of the Science, Technology, and International Affairs program in Georgetown’s School of Foreign Service, a faculty affiliate with the Institute for Technology Law & Policy at Georgetown Law Center, a faculty fellow at the Georgetown Ethics Lab, and visiting faculty at the Brussels Privacy Hub at Vrije Universiteit Brussel.
Meg Leta Jones’s research covers comparative information and communication technology law, critical information and data studies, governance of emerging technologies, and the legal history of technology. Ctrl+Z: The Right to be Forgotten, Meg’s first book, is about the social, legal, and technical issues surrounding digital oblivion. Her second book project, The Character of Consent: The History of Cookies and Future of Technology Policy, tells the transatlantic history of digital consent through the lens of a familiar technical object. She is also editing a volume with Amanda Levendowski called Feminist Cyberlaw that explores how gender, race, sexuality and disability shape cyberspace and the laws that govern it. More details about her work can be found at MegLeta.com and iSPYlab.net.
Veronica Ahumada-Newhart Will Lead Discussion on How Child-Robot Interactions Can Affect Social Development
Veronica Ahumada-Newhart will discuss Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Deployment at 3:45pm on Friday, September 24th at #werobot.
Dr. Newhart received her M.A. and Ph.D. in Education from the University of California, Irvine. She completed her M.Ed. in Adult Education from the University of Georgia and her B.A. in English Language and Literature from Loma Linda University. Prior to beginning her doctoral work, Dr. Newhart was a public health leader in her role as Director of Oral Health programs for the state of Montana. Her work in oral health supported key measures of Montana’s Title V Maternal and Child Health block grant and developed strong collaborations with the Centers for Disease Control and Prevention (CDC) as well as the Health Resources and Services Administration (HRSA).
She is an NIH funded postdoctoral fellow in UC Irvine’s Institute for Clinical Translational Science. Her research is focused on the use of interactive technologies (e.g., telepresence robots) to establish or augment social connectedness for improved health, academic, and social outcomes. Her research encompasses strong interdisciplinary efforts between UCI’s School of Medicine, School of Education, Department of Informatics, and Department of Cognitive Sciences. Her research interests include child health and human development, virtual inclusion, human-computer interaction, human-robot interaction, and emerging technologies that facilitate learning, human development, and social connectedness.
Edward Tunstel is the moderator for the #WeRobot Field Robotics panel at 1:45pm on Friday, September 24th. The panel will feature the following papers:
Edward Tunstel received his B.S. and M.E. degrees in Mechanical Engineering, with a concentration in robotics, from Howard University. His thesis addressed the use of AI-based symbolic computation for automated modeling of robotic manipulators/arms. In 1989 he joined the Robotic Intelligence Group at the NASA Jet Propulsion Laboratory (JPL), supporting research and development activities on NASA planetary rover projects. As a JPL Fellow he received his Ph.D. in Electrical Engineering from the University of New Mexico. His dissertation addressed distributed fuzzy logic and knowledge-based control of adaptive hierarchical behavior-based systems, with application to mobile robot navigation.
After 18 years at JPL, Dr. Tunstel joined the Space Department of the Johns Hopkins Applied Physics Laboratory (APL) in 2007 as its Space Robotics and Autonomous Control Lead and later served as Senior Roboticist in its Research & Exploratory Development Department and Intelligent Systems Center. After a decade with APL, Dr. Tunstel joined Motiv Space Systems, Inc., where he is currently the CTO. He is a Fellow of IEEE and Jr. Past President of the IEEE SMC Society, having previously served as its President, in several of its VP roles, and as General Chair of the 2011 IEEE SMC conference. He is an active member of the IEEE SMC Technical Committees on Robotics & Intelligent Sensing, on Brain-Inspired Cognitive Systems, and on Model-Based Systems Engineering, IEEE RAS Technical Committee on Space Robotics, and the AIAA Space Automation and Robotics Technical Committee. He is an Associate Editor or Editorial Board Member of five international engineering journals. He previously served as Chief Technologist of NSBE Space, a special interest group of NSBE Professionals, and held memberships in the Sigma Xi Scientific Research Society, the New York Academy of Sciences, and ASME.
In academia, he is an adjunct faculty member of Deakin University in Australia, holds the distinction of Honorary Professor at Obuda University in Hungary, chairs an advisory board for an autonomy center of excellence (TECHLAV) at N.C. A&T State University, and has also served as NASA Technical Monitor for undergraduate student research programs and for NASA Faculty Awards for Research as well as co-advisor and committee member for graduate thesis and dissertation research at several universities. He has authored over 170 journal, book chapter and conference publications, and has edited or co-authored 5 books in his areas of expertise.
About We Robot 2021
We Robot 2021 will be in Coral Gables, FL, hosted by the University of Miami School of Law.
We Robot is the most exciting interdisciplinary conference on the legal and policy questions relating to robots. The increasing sophistication of robots and their widespread deployment everywhere—from the home, to hospitals, to public spaces, and even to the battlefield—disrupts existing legal regimes and requires new thinking on policy issues.
If you are on the front lines of robot theory, design, or development, we hope to see you in either live or virtual formats. Come join the conversations between the people designing, building, and deploying robots, and the people who design or influence the legal and social structures in which robots will operate.
We would also love to have you as a sponsor. If you are interested in discussing sponsorship opportunities, please get in touch.
- December 14, 2020. Submission portal for papers, demos, and posters opened.
- January 8, 2021. Early Bird Registration begins.
- February 8, 2021. [EXTENDED] Call for papers closes. All paper proposals submitted by this date will be treated equally. Demo & Poster proposals remain open.
- March 8, 2021. We aim to have responses to paper and demo proposals by this date.
- July 15, 2021. Call for posters closes, but acceptances may be offered on a rolling basis (i.e. it may be beneficial to submit earlier).
- July 31, 2021. Last day for Early Bird Registration.
- August 16, 2021. Full papers due. They will be posted online at the conference web site unless otherwise agreed.
- September 23, 2021. We Robot Workshops, University of Miami, Coral Gables, FL, USA.
- September 24-25, 2021. We Robot Conference, University of Miami, Coral Gables, FL, USA. (Note – poster session on Sept 24, 2021.)
Register for We Robot 2021
Join Our Mailing List
We plan to offer a livestream of proceedings, and have an active Twitter presence.
See what you missed on our archived Livestream
This is our tenth annual We Robot conference. Past conferences were great. Read more about them:
- Michelle Johnson Will Moderate the Health Robotics Panel
- Cynthia Khoo Will Lead Discussion of the Problems With Liability in Anti-Discrimination Systems
- Meg Mitchell Will Lead Discussion on Understanding Consumer Contracts with Computational Language Models
- Madeleine Clare Elish Will Lead Discussion on the Political Implications of Autonomous Vehicles
- Deb Raji Will Lead Discussion on “Debunking Robot Rights: Metaphysically, Ethically and Legally”