Because society is still working out who is responsible for harm in cases like these, nobody may be held responsible in the meantime, even if we all agree that somebody is. These 'responsibility gaps' remove important incentives for people to use AI with due care and make it difficult to compensate victims of AI harms. The more pronounced these responsibility gaps are, the less likely it is that people take sufficient action to avoid AI harms, and the less recourse victims of AI have.