Author Archive | We-Robot-2021

Predicting Consumer Contracts

Noam Kolt will present his paper, Predicting Consumer Contracts, on Saturday, September 25th at 1:30pm at #werobot 2021. Meg Mitchell will lead the discussion.

This paper empirically examines whether a computational language model can read and understand consumer contracts. Language models are able to perform a wide range of complex tasks by predicting the next word in a sequence. In the legal domain, language models can summarize laws, translate legalese into plain English, and, as this paper will explore, inform consumers of their contractual rights and obligations.

To showcase the opportunities and challenges of using language models to read consumer contracts, this paper studies the performance of GPT-3, a powerful language model released in June 2020. The case study employs a novel dataset composed of questions relating to the terms of service of popular U.S. websites. Although the results are not definitive, they offer several important insights. First, owing to its immense training data, the model can exploit subtle informational cues embedded in questions. Second, the model performed poorly on contractual provisions that favor the rights and interests of consumers, suggesting that it may contain an anti-consumer bias. Third, the model is brittle in unexpected ways. Performance was highly sensitive to the wording of questions, but surprisingly indifferent to variations in contractual language.
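To make the shape of such a case study concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: `lm_answer` is a hypothetical stand-in for a call to a language model API (the paper used GPT-3), and the contract text and questions are not drawn from the paper's dataset.

```python
# Minimal sketch of a contract question-answering probe.
# `lm_answer` is a hypothetical stand-in for a language model call;
# the contract and questions are invented, not the paper's data.

def lm_answer(contract: str, question: str) -> str:
    """Ask a language model a yes/no question about a contract excerpt."""
    prompt = (
        f"Terms of service:\n{contract}\n\n"
        f"Question: {question}\n"
        "Answer (Yes or No):"
    )
    # Placeholder: a real study would send `prompt` to the model's API and
    # read back the completion. A canned answer keeps the sketch runnable.
    return "Yes"

contract = "We may terminate your account at any time, without notice."

# Probing brittleness: the same legal question, worded two ways.
# A robust reader should answer both identically.
paraphrases = [
    "Can the company close my account without telling me first?",
    "Is the company allowed to terminate my account without notice?",
]
for question in paraphrases:
    print(question, "->", lm_answer(contract, question))
```

Comparing answers across paraphrases mirrors the sensitivity finding above; varying the contract text while holding the question fixed would probe the complementary (and, per the paper, surprisingly absent) sensitivity to contractual language.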

While language models could potentially empower consumers, they could also provide misleading legal advice and entrench harmful biases. Leveraging the benefits of language models in reading consumer contracts and confronting the challenges they pose requires a combination of engineering and governance. Policymakers, together with developers and users of language models, should begin exploring technical and institutional safeguards to ensure that language models are used responsibly and align with broader social values.

Debunking Robot Rights: Metaphysically, Ethically and Legally

Abeba Birhane, Jelle van Dijk, and Frank Pasquale will present their paper, Debunking Robot Rights: Metaphysically, Ethically and Legally, on Saturday, September 25th at 10:00am at #werobot 2021. Deb Raji will lead the discussion.

In this work we challenge the argument for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that could be denied or granted rights. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of current AI ethics debates. From a legal perspective, the best analogy for robot rights is not human rights but corporate rights: rights that have undermined the US electoral process as well as workers’ and consumers’ rights. The idea of robot rights, we conclude, acts as a smokescreen, allowing theorists to fantasize about benevolently sentient machines while so much of current AI and robotics fuels surveillance capitalism, accelerates environmental destruction, and entrenches injustice and human suffering.

Building on theories of phenomenology, post-Cartesian approaches to cognitive science, and critical race studies, we ground our position in the lived reality of actual humans in an increasingly ubiquitously connected, controlled, and surveilled society. What we find is the seamless integration of machinic systems into daily life in the name of convenience and efficiency. The last thing these systems need is legally enforceable “rights” to ensure that persons defer to them. Moreover, the ‘autonomous intelligent machine’ is a sci-fi fantasy, a meme that masks the environmental costs and human labour that are the backbone of contemporary AI. The robot rights debate further mystifies and obscures these problems, and it could easily provide a normative rationale for absolving the powerful entities that develop and sell AI of accountability and responsibility, given the general association of rights with responsibility.

Existing robotic systems (from chatbots to humanoid robots) are often portrayed as fully autonomous, and that portrayal is part of the appeal of granting them rights. However, these systems are never fully autonomous: they are always human-machine systems that run on human labour and environmental resources and are necessarily embedded in social systems from conception through development, deployment, and beyond. Yet the “rights” debate proceeds from the assumption that the entity in question is somewhat autonomous, or worse, that it is devoid of exploited human labour. Approaching ethics responsibly requires reimagining it from the perspective, needs, and rights of the most marginalized and underserved. This means that any robot rights discussion that overlooks the underpaid and exploited populations who serve as the backbone of “robots,” as well as the environmental cost of creating AI, risks being disingenuous. The question should not be whether robotic systems deserve rights, but rather: if we grant or deny rights to a robotic system, what consequences and implications arise for the people who own, use, develop, and are affected by actual robots?

The time has come to change the narrative from “robot rights” to the duties of the corporations and powerful persons now profiting from sociotechnical systems (including, but not limited to, robots). Damage, harm, and suffering resulting from the creation and integration of AI systems into the social world have been repeatedly documented. Rather than speculating about the moral desert of hypothetical machines, the far more urgent conversation concerns robots and AI as concrete artifacts built by powerful corporations, further invading our private, public, and political spaces and perpetuating injustice. A purely intellectual and theoretical debate risks obscuring the real threat: that many of the actual robots corporations are building are harming people both directly and indirectly, and that a premature and speculative robot rights discourse risks further unravelling our frail systems of accountability for technological harms.

Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems

Katie Szilagyi, Jason Millar, Ajung Moon, and Shalaleh Rismani will present their paper, Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems, on Friday, September 24th at 5:15pm at #werobot 2021. Meg Leta Jones will lead the discussion.

Advanced Driver Assistance Systems (ADAS) are transforming the modern driving experience with technologies such as emergency braking, blind spot monitoring, and warning sirens that ward off driver error.

The assumption appears straightforward: automation will improve driver safety because it reduces the occurrence of human driving errors. But is this a safe assumption? Our current regulatory reality, within which this assumption operates, demands that drivers be able to effectively monitor and operate ADAS without any formal training or licensing to do so. This is premised on the outdated notion that drivers remain in full control of critical driving tasks. Meanwhile, ADAS now ask drivers to drive the vehicle while simultaneously monitoring these complex systems. This significant shift is not yet reflected in driver licensing regimes or transportation liability regulations.

Conversations about liability and automated vehicles often jump straight to the pesky problems posed by hardcoding ethical decisions into machines. By focusing on tomorrow’s automated driving technologies, such investigations overlook today’s mundane, yet still instructive, driving automation: blind spot monitoring, adaptive cruise control, and automated parking. These are the robotic co-pilots truly transforming human-robot interaction in the cabin, with potentially serious legal consequences for transportation statutes, regulations, and liability schemes. Robotics scholarship has effectively teased out a distinction between humans in-the-loop and on-the-loop in automation. ADAS blur these lines by automating some tasks but not others, varying system behaviours between vehicle manufacturers, and expecting the driver to serve as an advanced-systems monitor without ever receiving appropriate training.

In Part I, we explain the technical aspects of today’s most common assistive driving technologies and show how they are best situated in the litany of automation concerns documented by the current SAE framework. In Part II, we offer theoretical framing through automation bias, explaining some of the key concerns that arise when untrained people take control of dangerous automated machinery. In Part III, we provide an overview of driver licensing regimes, demonstrating the paucity of regulations designed to ensure the effective use of ADAS on public roads. In Part IV, we compare the automation challenges generated by ADAS to legal accounts of robotics, asking how liability accrues for operators of untrained robotic systems in other social spheres. Finally, in Part V, we offer clear policy advice on how to better integrate assistive driving technologies with today’s schemes for driver regulation.

Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Policy

Vicky Charisi, Urs Gasser, Randy Gomez, and Selma Šabanović will present their paper, Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Policy, on Friday, September 24th at 3:45pm at #werobot 2021. Veronica Ahumada-Newhart will lead the discussion.

Social robots now reach vulnerable populations, notably children, who are in a critical period of their development. These systems, algorithmically mediated by Artificial Intelligence (AI), can successfully supplement children’s learning and entertainment because children can engage with them effectively and cognitively. Demand for social robots that interact with children is likely to increase in the coming years. As a result, there are growing concerns that need to be addressed, given the profound impact this technology can have on children.

To date, the majority of AI policies, strategies and guidelines make only a cursory mention of children. To help fill this gap, UNICEF is currently exploring approaches to uphold children’s rights in the context of AI and to create opportunities for children’s participation. Robots bring unique opportunities but also robot-specific considerations for children. This combination calls into question how existing protection regimes might be applied; at present it remains unclear what the rules for children’s protection relating to their interaction with robots should look like.

Children develop dynamically and rapidly through social interactions, and evidence shows that they can perceive robots as part of their social groups, which means that robots can affect their development. However, the AI literacy and related skills that would support children’s critical reflection on robotic technology are missing from the majority of formal educational systems. This paper elaborates a dynamic framework that identifies the key risks and opportunities of robot adoption for children, the key actors involved, and a set of actionable measures that might shape the future implications of robots for children.

On the Practicalities of Robots in Public Spaces

Cindy Grimm and Kristen Thomasen will present their paper, On the Practicalities of Robots in Public Spaces, on Friday, September 24th at #werobot 2021. Edward Tunstel will lead the 1:45pm – 3:15pm panel on Field Robotics.

There has been recent debate over how to regulate autonomous robots that enter into spaces where they must interact with members of the public. Sidewalk delivery robots, drones, and autonomous vehicles, among other examples, are pushing this conversation forward. Those involved in the law are often not well-versed in the intricacies of the latest sensor or AI technologies, and robot builders often do not have a deep understanding of the law. How can we bridge these two sides so that we can form appropriate law and policy around autonomous robots that provide robust protections for people without placing an undue burden on robot developers?

This paper proposes a framework for thinking about how law and policy interact with the practicalities of autonomous mobile robotics. We discuss how it is possible to start from the two extremes, regulation (without regard to implementation) and robot design (without regard to regulation), and iteratively bring these viewpoints together to form a holistic understanding of the most robust set of regulations that still results in a viable product. We also focus particular attention on the case where this is not possible because of a gap between the minimal set of realistic regulations and a robot’s ability to autonomously comply with them. In this case, we show how that gap can drive scholarship in the legal and policy world and innovation in technology. By shining a light on these gaps, we can focus our collective attention on them and close them faster.

As a concrete example of how to apply our framework, we consider the case of sidewalk delivery robots on public sidewalks. This specific example has the additional benefit of allowing the framework’s outcomes to be compared against emerging regulations. Starting with the “ideal” regulation and the most elegant robot design, we look at what it would take to implement or enforce the ideal rules and dig down into the technologies involved, discussing their practicality, cost, and the risks involved when they do not work perfectly.

Do imperfect sensors cause the robot to stop working out of an abundance of caution, or do they cause it to violate the law or policy? If there is a violation, how sure can we be that it was a faulty piece of technology rather than a purposeful act by the designer? Does implementing the law or policy require technology so expensive that the robot is no longer a viable product? In the end, the laws and policies that will govern autonomous robots as they work in our public spaces need to be designed with the technology in mind. We must strive for fair, equitable laws that are practically and realistically enforceable with current or near-future technologies.
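The sensor questions can be made concrete with a toy simulation. The sketch below is our own illustration, not the authors’ analysis: a hypothetical sidewalk robot must yield to pedestrians but sees only a noisy detector confidence, and moving one threshold trades halts made from an abundance of caution directly against violations of the rule. All numbers are invented.

```python
# Toy model (invented numbers): a sidewalk robot must yield to pedestrians,
# but it only observes a noisy detector confidence. One threshold trades
# over-cautious halts against outright rule violations.
import random

random.seed(0)

NOISE_STD = 0.3   # assumed sensor noise
THRESHOLD = 0.5   # yield whenever confidence meets or exceeds this

def detector(pedestrian_present: bool) -> float:
    """Return a noisy confidence that a pedestrian is present, clamped to [0, 1]."""
    signal = 1.0 if pedestrian_present else 0.0
    return min(1.0, max(0.0, signal + random.gauss(0.0, NOISE_STD)))

trials = 100_000
needless_halts = violations = 0
for _ in range(trials):
    pedestrian = random.random() < 0.1          # pedestrians at 10% of crossings
    yields = detector(pedestrian) >= THRESHOLD
    if yields and not pedestrian:
        needless_halts += 1                     # over-caution: stops for nobody
    elif not yields and pedestrian:
        violations += 1                         # under-caution: breaks the rule

print(f"needless halts: {needless_halts / trials:.2%}")
print(f"violations:     {violations / trials:.2%}")
```

Lowering THRESHOLD pushes violations toward zero at the price of more needless halts, and raising it does the reverse; neither failure mode disappears without a better (and costlier) sensor, which is precisely the gap between ideal rules and viable products that the framework is meant to surface.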

Smart Farming and Governing AI in East Africa: Taking Gendered Relations and Vegetal Beings into Account

Jeremy de Beer, Laura Foster, Chidi Oguamanam, Katie Szilagyi, and Angeline Wairegi will present their paper, Smart Farming and Governing AI in East Africa: Taking Gendered Relations and Vegetal Beings into Account, on Friday, September 24th at #werobot 2021. Edward Tunstel will moderate the 1:45pm – 3:15pm panel on Field Robotics.

Robots are on their way to East African farms. Deploying robotic farm workers and corresponding smart systems is lauded as the solution for improving crop yields, strengthening food security, generating GDP growth, and combating poverty. These optimistic narratives mimic those of the Green Revolution and activate memories of its underperformance in Africa. Expanding upon previous contributions on smart farming and the colonial aspects of AI technology, this paper investigates how AI-related technologies are deployed across East Africa.

AI algorithms and datasets are created through processes driven by human judgements; the resultant technology is shaped by society. Cognizant of this, the paper provides an overview of emerging smart farming technologies across the East African region, situated within contemporary agricultural industries and the colonial legacies that inform women’s lives in the region.

After establishing the gendered implications of smart farming as a central concern, this paper provides rich analysis of the state-of-the-art scholarly and policy literature on smart farming in the region, as well as key intergovernmental agricultural AI initiatives being led by national governments, the United Nations, the African Union, and other agencies. This enables an understanding of how smart farming is being articulated across multiple material-discursive sites—media, government, civil society, and industry.

What becomes apparent is that smart farming technologies are being articulated through four key assumptions: not only techno-optimism, as above, but also ahistoricism, ownership, and human exceptionalism. These assumptions, and the multiple tensions they reveal, limit the possibilities for governing smart farming in ways that benefit small-scale women farmers. Using these four frames, our interdisciplinary author team identifies the key ethical implications of adopting AI technologies for East African women farmers.

Robots in the Ocean

Annie Brett will present her paper, Robots in the Ocean, on Friday, September 24th at #werobot 2021. Edward Tunstel will lead the 1:45pm – 3:15pm panel on Field Robotics.

Academics (and particularly legal academics) have not paid much attention to robots in the ocean. The small amount of existing work is focused on relatively narrow questions, from whether robots qualify as vessels under the Law of the Sea to whether robotic telepresence can be used to establish a salvage claim on shipwrecks.

This paper looks at how two major robotic advances are creating fundamental challenges for current ocean governance frameworks. The first is a proliferation of robots actively altering ocean conditions, through both exploitative alteration, such as deep sea mining, and alteration with conservation goals, such as waste removal. This is best illustrated by The Ocean Cleanup, which defied warnings from scientists in deploying an ocean waste capture prototype that was irreparably damaged a mere six months into its voyage. The second is observational robots, used primarily by scientific and defense entities to further understanding of ocean ecosystems and human activities within them.

Annie Brett focuses on the regulatory grey area of international law implicated by robots with the capacity to actively alter ocean conditions. Drawing on analogues in terrestrial environmental law and the climate geoengineering literature, she proposes a mechanism for regulating robotic interventions in the ocean. Specifically, she argues for a modified form of environmental impact review that strikes a balance between allowing innovation in ocean robotics and providing a measure of oversight for interventions that could permanently alter ocean ecosystems.

Being “Seen” vs. “Mis-seen”: Tensions Between Privacy and Fairness in Computer Vision

Alice Xiang will present her paper, Being “Seen” vs. “Mis-seen”: Tensions Between Privacy and Fairness in Computer Vision, on Friday, September 24th at 11:30am at #werobot 2021. Daniel Susser will lead the discussion.

The rise of AI technologies has caused growing anxiety that AI may create mass surveillance systems and entrench societal biases. Major facial recognition systems are less accurate for women and individuals with darker skin tones because of a lack of diversity in their training datasets. Yet efforts to diversify datasets can raise privacy issues: plaintiffs can argue that they did not consent to having their images used in facial recognition training datasets.

This highlights the tension AI technologies create between representation and surveillance: we want AI to “see” and “recognize” us, but we are uncomfortable with the idea of AI having access to personal data about us. This tension is further amplified by the need for sensitive attribute data to detect or mitigate bias. Existing privacy law addresses this area primarily by erring on the side of hiding people’s sensitive attributes unless there is explicit informed consent. While some have argued that not being “seen” by AI is preferable—that being under-represented in training data might allow one to evade mass surveillance—incomplete datasets may result in detrimental false-positive identifications. Thus, not being “seen” by AI does not protect against being “mis-seen.”
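The point that bias cannot even be measured without sensitive data is easy to see in code. In the minimal sketch below (the records and group names are invented, not data from the article), computing per-group false-positive rates requires a group label on every record.

```python
# Minimal sketch: auditing a face-matching system for disparate
# false-positive rates requires a sensitive group label per record.
# Records and group names are invented for illustration.
from collections import defaultdict

# (predicted_match, true_match, group) for each probe image
records = [
    (True,  True,  "group_a"), (False, False, "group_a"),
    (False, False, "group_a"), (True,  False, "group_b"),
    (True,  False, "group_b"), (False, True,  "group_b"),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for predicted, actual, group in records:
    if not actual:                  # only true non-matches can be false positives
        negatives[group] += 1
        if predicted:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate = {rate:.0%}")
```

Drop the group column in the name of privacy and the aggregate error rate survives, but the disparity between groups becomes invisible: that is the tension in miniature.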

The first contribution of this article is to characterize the tension between privacy and fairness in the context of algorithmic bias mitigation. In particular, the article argues that the irreducible paradox underlying current efforts to design less biased algorithms is the simultaneous desire to be both “seen” and “unseen” by AI. Second, the article reviews the viability of strategies proposed for addressing the tension between privacy and fairness and evaluates whether they adequately address the associated technical, operational, legal, and ethical challenges. Finally, the article argues that resolving the tension between representation and surveillance requires considering the importance of not being “mis-seen” by AI, rather than simply being “unseen.” Untethering these concepts (being seen, unseen, and mis-seen) can bring greater clarity about what rights the relevant laws and policies should seek to protect. Given that privacy and fairness are both critical objectives for ethical AI, it is vital to address this tension head-on. Approaches that rely purely on visibility or invisibility will likely fail to achieve either objective.

The Legal Construction of Black Boxes

Elizabeth Kumar, Andrew Selbst, and Suresh Venkatasubramanian will present their paper, The Legal Construction of Black Boxes, on Saturday, September 25th at 10:00am at #werobot 2021. Ryan Calo will lead the discussion.

Abstraction is a fundamental technique in computer science. Formal abstraction treats a system as defined entirely by its inputs, outputs, and the relationship that transforms inputs to outputs. If a system’s user knows those details, they need not know anything else about how the system works; the internal elements can be hidden from them in a “black box.” Abstraction also entails abstraction choices: What are the relevant inputs and outputs? What properties should the transformation between them have? What constitutes the “abstraction boundary?” These choices are necessary, but they have implications for legal proceedings that involve the use of machine learning (ML).
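The idea maps directly onto code. In the hypothetical sketch below (our illustration; the `risk_score` function and its inputs are invented), the type signature is the abstraction boundary: callers see inputs and outputs, while the internals, and the choice of which inputs count as relevant, stay hidden.

```python
# Sketch of formal abstraction: callers see only the input/output contract;
# everything behind the signature is hidden in the "black box."
# The function and its inputs are invented for illustration.

def risk_score(age: int, income: float) -> float:
    """Abstraction boundary: (age, income) in, a score in [0, 1] out.

    Declaring age and income to be "the relevant inputs" is itself one of
    the abstraction choices described above.
    """
    # Hidden internals: a caller cannot tell whether this is a hand-tuned
    # rule, a regression, or a large learned model.
    return min(1.0, max(0.0, 0.5 + 0.01 * (age - 40) - income / 1_000_000))

# A user interacts only with the contract, never the internals:
print(risk_score(35, 85_000.0))   # 0.365
```

The paper’s point, developed below, is that every element of this signature is a design choice with normative weight, not a neutral fact about the system.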

This paper makes two arguments about abstraction in ML and legal proceedings. The first is that abstraction choices can be treated as normative and epistemic claims made by developers, claims that compete with judgments properly belonging to courts. Abstraction constitutes a claim about the division of responsibility: what is inside the black box is the province of the developer; what is outside belongs to the user. Abstraction is also a factual definition, rendering the system an intelligible and interrogable object. Yet the abstraction that defines the boundary of a system is itself a design choice. When courts treat technology as a black box with a fixed outer boundary, they unwittingly abdicate their responsibility to make normative judgments about the division of responsibility for certain wrongs, and abdicate part of their factfinding role by taking the abstraction boundaries as given. We demonstrate these effects in discussions of foreseeability in tort law, liability in discrimination law, and evidentiary burdens more broadly.

Our second argument builds from that observation. By interpreting the abstraction as one of many possible design choices, rather than a simple fact, courts can surface those choices as evidence to draw new lines of responsibility without necessarily interrogating the interior of the black box itself. Courts may draw on evidence about the system as presented to support these alternative lines of responsibility, but by analyzing the construction of the implied abstraction boundary of a system, they can also consider the context around its development and deployment.

Courts can rely on experts to compare a designer’s choices with emerging standard practices in the field of ML, or assign a burden to a user to justify their use of off-the-shelf technology. After resurfacing the normative and epistemic contentions embedded in the technology, courts can use familiar lines of reasoning to assign liability as appropriate.

We Robot 2021: Ten Year Anniversary — New Dates!

We Robot 2021 is proud to celebrate its 10th anniversary at the University of Miami School of Law. We have changed the dates to Sept. 23-25, 2021. We hope you will join us live, but we are making plans for a virtual backup (or perhaps even a parallel virtual option, just in case).

More info …. soonish….
