Highlight from Moral AI
Bias in our societal structures and police procedures will lead to bias in the data used to train AIs ('bias in'), which will in turn lead AI algorithms to overpredict the risk of recidivism for members of certain disadvantaged communities ('bias out').
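To make the 'bias in, bias out' mechanism concrete, here is a minimal sketch (not from the book; every name, rate, and number is invented for illustration): two groups have identical true reoffending behaviour, but heavier policing of one group inflates its recorded arrest labels, and a model trained on those labels then assigns that group higher risk scores.

```python
# Hypothetical toy example of "bias in, bias out" (all values invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with identical true reoffending behaviour (same 30% base rate).
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
true_reoffend = rng.random(n) < 0.30

# "Bias in": the recorded label is re-ARREST, not reoffending itself, and
# heavier policing of group B means its members are arrested more often
# for the same behaviour.
arrest_rate_if_reoffend = np.where(group == 1, 0.90, 0.50)
recorded_label = true_reoffend & (rng.random(n) < arrest_rate_if_reoffend)

# Train a risk model on the biased labels, using group membership
# (or a proxy such as neighbourhood) as a feature.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, recorded_label)

# "Bias out": predicted risk differs by group despite identical behaviour.
risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, group A: {risk_a:.2f}")   # roughly 0.15
print(f"predicted risk, group B: {risk_b:.2f}")   # roughly 0.27
```

The point of the sketch is that nothing in the model is explicitly 'racist' or malicious; the disparity in risk scores comes entirely from the skew already present in the training labels.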