For example, a consequentialist AI that was programmed to 'minimize the total amount of suffering and death in the long run' might determine that it should kill all humans now, since that would prevent all human suffering in the future.