Jake Moffatt was booking a flight to Toronto and asked the bot about the airline’s bereavement rates – reduced fares provided in the event someone needs to travel due to the death of an immediate family member.

Moffatt said he was told that these fares could be claimed retroactively by completing a refund application within 90 days of the date the ticket was issued, and submitted a screenshot of his conversation with the bot as evidence supporting this claim.

The airline refused the refund because it said its policy was that bereavement fares could not, in fact, be claimed retroactively.

Air Canada argued that it could not be held liable for information provided by the bot.

  • Godort@lemm.ee · 10 months ago

    Air Canada probably spent more fighting this claim than it would have cost to just issue the payment when the chatbot logs were sent in

    • Evkob@lemmy.caOP · 10 months ago

      I wonder how anyone in their right mind would propose the defense “we can’t be held liable for what the chatbot we purposefully put on our website said”. Did Air Canada’s lawyers truly think this would fly?

      If you don’t want to be held to AI hallucinations, don’t put an AI chatbot on your website, seems easy enough.

      • Monument@lemmy.sdf.org · 10 months ago

        My organization won’t even allow auto-translation widgets on our site. Instead, we direct people to use web translation services on their own, with clear language saying we’re not liable for third-party mistranslations. (That disclaimer is itself offered in multiple languages, translated by a company that has signed an indemnity agreement with us in case their translation becomes an issue.)

        It’s a bit heavy-handed, but the lawyers hold more sway than the communications folks, and I don’t disagree with the approach: you don’t want users misunderstanding what your site says and then being able to blame you for it.

      • Truck_kun@beehaw.org · 10 months ago

        This article doesn’t actually say the chatbot was AI. Chatbots have been around for many years, so it’s possible this was just an ordinary, non-‘AI’ chatbot that someone had programmed with that information (potentially information that had long since changed without anyone updating it).

        Either way, they are liable for what it tells customers. If it is AI, well… no company should be using AI to make legally binding statements or advertisements to customers without human review.

        At the moment, companies deploying AI should be doing so with AI as the product, not integrated into selling non-AI products or services.

        • Evkob@lemmy.caOP · 10 months ago

          You know what, you’re completely right. Thanks for pointing that out, my brain just auto-completed that detail because of how prevalent “AI” is in the news these days.

          Honestly though, if it’s a more traditional chatbot that they had to program themselves, it’s all the more embarrassing that Air Canada tried to weasel their way out of this.

    • Drusas@kbin.social · 10 months ago

      I am completely certain that’s the case. For them, this is more about precedent.

    • TheHarpyEagle@lemmy.world · 10 months ago

      Surely they’re scared of more people realizing that saving these chats is important. How else will they get away with scummy practices?