The courts have already established that the user is still responsible for the tool, even if the tool is very sophisticated.
Have they? There's the Air Canada case, but that was kind of a different situation: the chatbot was explicitly acting for the company, and made direct claims on the company's behalf.
IANAL, but proving discrimination was already hard, and now companies can just point at the black box and blame it, so it's going to get harder.
Especially if it gets rolled into other checks, like a police check or a "personality fit" assessment, which makes it even more ambiguous.