When an Air Canada chatbot provided bad information to a customer, the airline tried to deny responsibility and blamed the customer for failing to do further research. These arguments did not fly.
Air Canada, for its part, argued that it could not be held liable for information provided by the bot.
“In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website,” Rivers wrote.
…
The airline also argued that the chatbot’s response to Moffatt’s inquiry included a link to a section of its website that outlined the company’s policy and said that requests for a discounted fare are not allowed after someone has travelled.
Rivers rejected this argument as well.
“While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website,” he wrote.
The whole thing is remarkably clear, and a lovely example of justice at work. My only quibble is that Air Canada wasn’t punished beyond paying $125 in fees.