Armitage Archive

Moral AI

by Jana Schaich Borg

This page contains highlights I saved while reading Moral AI by Jana Schaich Borg. These quotes were captured using Readwise and reflect the ideas or passages that stood out to me most.

Highlights

AI teams repeatedly report that their organizations' financial requirements, timelines, expectations, and compensation structures are not compatible with the investment necessary to grapple with the ethical problems their AI products pose.

Transparency enables trust, which is necessary for wide adoption of the AI system, along with all of its potential social benefits.

Waymo, one of Uber's competitors in the self-driving car race, decided that it was too dangerous to rely on humans to intervene with self-driving cars at all, so they made the strategic decision to make their AI-driven cars fully autonomous.

It is very difficult to figure out how to present and describe information about AI models in ways that stakeholders can make sense of.

Sometimes the problem will be engaging the human operators' attention fast enough. In the words of one general, 'How do you establish vigilance at the proper time? Three hours and 59 minutes of boredom followed by one minute of panic.' The human brain is poorly designed for these types of contexts. More frequently, it may simply be impossible for operating teams to get the information they need to evaluate the AI system's recommendation in time.

The fact that society is still figuring out who is responsible for harm in cases like these can lead to the result that nobody is held responsible in the meantime, even if we all agree that somebody is responsible. These 'responsibility gaps' remove important incentives for people to use AI with due care and make it difficult to compensate victims of AI harms. The more pronounced these types of responsibility gap are, the less likely it is that people take sufficient action to avoid AI harms and the less recourse victims of AI have.

Figuring out how to make an ethical AI model currently takes more time and resources than figuring out how to make any functioning AI model, so many teams will elect (or be forced) to postpone thinking about ethical issues until after they succeed in creating the most straightforward, sufficiently accurate AI model.

AI product teams often have to navigate a mismatch between the inherent uncertainty associated with AI projects and what organizational leaders have come to expect from traditional software products, like apps. This mismatch has profound implications for whether moral AI will be given the organizational support it needs.

In particular, the people who create AI algorithms themselves – especially cutting-edge algorithms that try to address ethical concerns, like the 'fair AI' algorithms or 'interpretable AI' algorithms we have mentioned – are usually not part of an AI product team or even part of a product's sponsoring organization at all.

we can all be misinformed, forgetful, confused, emotional, or biased when we make moral judgments. If AIs are trained on these judgments, the AIs will reflect and perpetuate the results of our misinformation, forgetfulness, confusion, emotion, and bias.

If a new company creates a medical advice chatbot based on OpenAI's GPT models, and this chatbot ends up giving misleading and harmful advice, is the new company responsible for this or is OpenAI?

when social media AI algorithms facilitate hate crimes, as discussed in previous chapters, who should be held liable?

When open-source chatbots give false medical advice that causes citizens to take poisons they believe are effective weight-loss treatments, who should be punished and who should help pay for the resulting medical care?

if hospitals use AI to detect cancer and the AI makes diagnostic mistakes that lead to some patients receiving unnecessary treatments or treatments too late, who is responsible for these misdiagnoses? Who, if anyone, should be forced to pay damages?

At the time of the accident, Uber did not have a safety division, a safety plan, a dedicated safety manager, or people in charge with experience of managing safety. These are glaring omissions for a company working in a safety-critical domain.

Nonetheless, according to an investigation by the National Transportation Safety Board (and confirmed by Uber itself), Uber's system was not designed to consider the possibility that pedestrians might jaywalk, i.e., walk across roads outside designated crossings, or push bicycles across the road while on foot. These oversights were why the car never identified Elaine Herzberg as a human pedestrian walking a bicycle across the road and, instead, got hung up trying to decide what kind of object was ahead of it rather than promptly slowing down once an object was detected.

The flow of information between AI creators and AI community stakeholders needs to be bidirectional, so it should be convenient both for AI creators to share their new AI ideas and prototypes and for stakeholders to give their feedback.

Bias in our societal structures and police procedures will lead to bias in data used to train AIs ('bias in'), which will in turn lead AI algorithms to predict too much risk of recidivism for members of certain disadvantaged communities ('bias out').
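
This 'bias in, bias out' dynamic can be made concrete with a small simulation. The sketch below is my own illustration, not the book's: it assumes a hypothetical scenario with two neighbourhoods that have the same true reoffence rate but unequal policing, so offences in the heavily policed neighbourhood are recorded, and therefore learned from, far more often. All names and numbers are invented for illustration.

    # Minimal sketch of 'bias in, bias out' under invented assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    neighbourhood = rng.integers(0, 2, n)      # 0 = A (heavily policed), 1 = B
    true_reoffend = rng.random(n) < 0.30       # identical 30% true rate everywhere

    # Reoffences are only *recorded* if detected; detection depends on policing.
    detection_rate = np.where(neighbourhood == 0, 0.9, 0.3)
    recorded_label = true_reoffend & (rng.random(n) < detection_rate)   # 'bias in'

    # The simplest possible 'model': predicted risk = recorded rearrest rate for
    # that neighbourhood (roughly what a well-fit classifier would converge to).
    for name, group in [("A", 0), ("B", 1)]:
        risk = recorded_label[neighbourhood == group].mean()
        print(f"Neighbourhood {name}: predicted risk ~ {risk:.2f} (true rate 0.30)")

    # Prints roughly: A ~ 0.27, B ~ 0.09 — the model scores A as about three
    # times riskier even though behaviour is identical ('bias out').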

In general, individual technology contributors are usually not held legally responsible for technology harms or failures unless they break their employment contracts, violate their company's ethical guidelines, or circumvent the steps a company put in place to avoid the harms. Instead, the company that employs the AI engineers and AI technology contributors is typically held legally liable.

We are all very susceptible to automation complacency: as soon as any part of a task is automated, human overseers trust the automated system to get the job done and therefore stop paying attention.

Technical tools are still not sufficient to address all of the ways in which AI can be unfair. Many of these tools are hard to use, or it's hard to figure out how to apply them to specific cases. There are still significant gaps between what is provided by the tools and what is needed for people to implement them successfully.

judicial uses of proprietary, unexplainable, opaque, or black-box AI without vetted accuracy records can contribute to unjust and unfair procedures in our legal system, even if the AI algorithms that are used distribute punishments and benefits fairly.

we should never just assume that an AI will be more accurate than humans, and even if an AI is more accurate in some contexts, that doesn't mean it will be more accurate than humans in other contexts or at other time points when new factors become important.

in the words of journalist Karl Bode, 'It's not clear how many studies like this we need before we stop using "anonymized" as some kind of magic word in privacy circles.' At best, data are very, very hard to anonymize and are usually not handled appropriately to ensure anonymity. At worst, there is so much data available in the world that it is no longer possible to truly anonymize a data set.

The privacy paradox could be AI's greatest threat to privacy. AI's promise has contributed to a cultural ecosystem where our privacy is violated so continuously that many in society no longer do much to try to stop it. This trend poses a grave danger. Privacy is worth protecting. It enables us to maintain our autonomy, individuality, creativity, and social relationships by preventing exploitation and affording critical psychological and functional benefits. Despite the way privacy violations have been normalized and naturalized, privacy violations are not an inevitable consequence of AI. We do not have to accept a culture that pits privacy against innovation and pursuit of knowledge. AI and privacy can co-exist in a society that fosters innovation as well as human dignity and autonomy. But we need to be willing to work hard to make sure that all of these values are sufficiently respected.

Imagine a data set with columns for customer ID, ZIP code, birth date, and gender. The customer ID is a random number, so there is no way to use it to figure out whom it represents. However, one of the customers lives in a ZIP code with a small population, and it turns out they are the only male in that ZIP code born on their birthday. As a result, the values in the ZIP code, birth date, and gender columns allow that customer to be 're-identified' in the data set, even if that customer's name is not explicitly included in the data set. This won't happen just for people in small towns. About 87 per cent of the population of the United States can be uniquely identified through a combination of only their birth date, gender, and five-digit ZIP code.
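
The re-identification scenario described above is straightforward to reproduce in code. The sketch below is my own minimal illustration, assuming a hypothetical pandas DataFrame with the columns named in the passage: it flags every row whose combination of ZIP code, birth date, and gender is unique in the data set, which is exactly what makes a record re-identifiable despite the random customer ID.

    # Minimal sketch: count how many 'anonymized' records are uniquely
    # re-identifiable from quasi-identifiers alone. All data are made up.
    import pandas as pd

    records = pd.DataFrame({
        "customer_id": [90210, 33417, 75901, 12344],   # random IDs, not identifying
        "zip_code":    ["59263", "59263", "10001", "10001"],
        "birth_date":  ["1987-03-14", "1990-07-02", "1987-03-14", "1987-03-14"],
        "gender":      ["M", "F", "M", "M"],
    })

    quasi_identifiers = ["zip_code", "birth_date", "gender"]

    # A group of size 1 means that combination of ZIP code, birth date, and
    # gender singles out exactly one person, so anyone who knows those three
    # facts (e.g. from a voter roll) can re-identify the row.
    group_sizes = records.groupby(quasi_identifiers)["customer_id"].transform("size")
    unique_rows = records[group_sizes == 1]

    print(f"{len(unique_rows)} of {len(records)} records are uniquely re-identifiable")
    print(unique_rows[["customer_id"] + quasi_identifiers])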

Dr John Hawley, a US Army researcher who was involved in the initial development of the Patriot system as well as subsequent investigations of its mistakes, concluded, 'One of the hard lessons of my 35 years of experience with Patriot is that an automated system in the hands of an inadequately trained crew is a de facto fully automated system.'

However, making some AI systems behave morally will require addressing an additional problem: AIs that make the same moral judgments as humans will also make the same moral mistakes as humans.

For example, a consequentialist AI that was programmed to 'minimize the total amount of suffering and death in the long run' might determine that it should kill all humans now, since that would prevent all human suffering in the future.

Many technology leaders, such as Google, developed AI products years ago that they refrained from introducing to the public due to ethical concerns, but many companies no longer feel they can afford to keep some of these products off the market. If these companies are not held responsible for harms their AIs cause, they will face increasing pressure to create lucrative AI products with questionable safety profiles, even if they originally had good intentions.

When companies use AI to gather private financial and medical information about their customers in order to target their advertisements and coupons more effectively, such as when Target's coupons revealed a 16-year-old's pregnancy to her father, who is responsible for this invasion of privacy?

What if military units use AI to guide missiles and drones, but an AI-driven weapon kills an innocent family instead of the targeted terrorist? Who is responsible for the civilian losses? If no human is held responsible because no human knew what the AI would do, will military leaders have sufficient incentives to avoid such accidents? Will they get away with murder?

there are a lot of incentives either to ignore the reasons to be pessimistic about AI or to tell ourselves that AI's possible negative impacts are a foregone conclusion that we can't do anything about. We must not accept that line of thinking. We can build AI that can morally self-regulate by avoiding actions it predicts we would see as immoral, but it will take patience and dedicated research. We can also design regulations, organizational practices, educational resources, and democratic technologies that will make it much less likely that AI will be used in harmful and immoral ways.

It is unrealistic to assume that pursuing short-term financial gain will reliably lead to optimal moral outcomes for society.

Leaders should instruct their AI product teams to include ethical and social concerns in the initial design requirements of all AI products. When leaders identify these concerns as practical constraints that are of equal priority to engineering or financial constraints, prototypes will be less likely to qualify as 'minimum viable products' if they have strong potential for harming society.

But the AIs of today cannot interact with or affect society without our help. Humans still have to build AI models, train AI models, power AI models, and make AI models accessible to others for AI to do much of anything. Even if an AI of the future does eventually take over the world, we will first have to create that AI.

Whether AI creators realize it or not, they cannot make AI systems without making some decisions or assumptions about what is morally right or wrong. Further, all AI systems will inevitably have morally relevant effects. The idea behind moral AI is that decisions about AI systems that have moral consequences should be made intentionally and thoughtfully, rather than by accident or default.

A moral AI strategy should, therefore, create resources to help leaders: (i) assess discrepancies in their organizations' structures, practices, and ethical goals; and (ii) learn the most effective methods for convincing AI contributors that they are supported in allocating resources to anticipating and solving moral problems related to their AI products.

Until the people who are creating, packaging, applying, scaling, and monitoring AI feel confident that the effort moral AI requires is consistent with what their organizations want them to prioritize, ethical issues are likely to go largely unaddressed, regardless of what technical tools are available or what guiding principles are advertised.

it is often not clear what level of accuracy, fairness, or transparency is necessary for a minimum viable AI product. It may be fine to deploy a prototype of a wine-recommendation AI which is 70 per cent accurate, but most people would reject a self-driving car that stops at only 99 per cent of stop signs, ignoring the other 1 per cent.

The field of AI product development has a dirty secret: most AI initiatives fail. According to one report, eight out of ten AI projects do not succeed or provide value, typically because they can't find enough appropriate data to train models, can't generate accurate enough models, are too expensive to train, or address a problem that users don't end up caring about.

Despite our imperfections, most of us want our own moral judgments and decisions to be less subject to such distorting influences. We also often want the morality built into an AI to reflect the moral judgments that we ourselves would make if we were more informed, rational, and unbiased, even if we will never actually be in an ideal state. In such cases, we want AIs to predict and reflect our idealized human moral judgments, not our actual moral judgments.
