Similarly, in discussions about AI, terms such as proxy gaming, emergent goals, power-seeking, and unaligned behavior are frequently, and inaccurately, cited as risks when they are in fact potential threats or hazards: they do not inherently include the dimension of outcome.
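To make the distinction concrete, consider one common convention from risk analysis, offered here only as an illustrative decomposition rather than a definition adopted from this text: risk couples a hazard with the likelihood and severity of the harm it could produce.

$$
\text{Risk} \;\approx\; P(\text{hazard leads to harm}) \times \text{Severity of that harm}
$$

Under this framing, a term like proxy gaming or power-seeking names only the hazard; labeling it a risk smuggles in claims about probability and severity of outcomes that the term by itself does not make.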