this post was submitted on 16 Feb 2024
155 points (100.0% liked)

World News

top 16 comments
[–] ptz@dubvee.org 62 points 9 months ago (1 children)

Such a good and sane outcome. Not only am I happy for the affected customers, I'm also going to file this away in my bookmarks for later use.

One of the main concerns I bring up every time one of the managers wants to throw "AI" at something because it's trendy is "who's responsible when it just makes something up?".

I know this is a Canadian ruling instead of a US one, but at least I can point to it and say "probably us".

[–] Gaywallet@beehaw.org 34 points 9 months ago

Yup! I immediately sent this link to everyone who's had to deal with the "throw a chatbot at it" management response.

[–] circuscritic@lemmy.ca 44 points 9 months ago (1 children)

This guy only won because he kept the screenshots.

This may seem like overkill, but whenever I'm dealing with customer service for a significant financial concern, I always take detailed notes: time, date, name, what was discussed, and screenshots when relevant.

Just a few years ago a large appliance company tried to dick me around on a faulty product that was several thousand dollars.

It took weeks of back and forth, but when I was able to make contact with a layer of management just outside and above their customer service channel with my very long and detailed log of their bullshit, they authorized a check the next day.

[–] TheEntity@kbin.social 12 points 9 months ago (3 children)

Are screenshots even still considered evidence? They should be absolutely trivial to manipulate.

[–] otter@lemmy.ca 25 points 9 months ago

It wouldn't be direct evidence, but it might put the burden on the company to produce their logs. If they didn't retain them, that could affect the ruling.

[–] specialdealer@lemm.ee 13 points 9 months ago

If you’re willing to commit outright fraud, lots of crime is easy. But also the penalty for ever being discovered can be severe.

They also didn’t deny it happened. They would check logs before paying out.

[–] mozz@mbin.grits.dev 4 points 9 months ago* (last edited 9 months ago)

Yes, but submitting a manipulated screenshot in court is a whole different type of fraud, carrying whole different penalties, than just claiming X happened when it was really Y. So it very reasonably puts the onus back on the company to dig through their logs and try to prove that what's in your screenshots didn't happen that way.

[–] Computerchairgeneral@kbin.social 21 points 9 months ago (1 children)

Good. I'm sure the chatbot will be back up and running soon, but anything that reminds companies there are risks to replacing humans with "AI-enhanced" chatbots is good. Unfortunately, I'm sure the lesson companies are going to take away from this is to include a disclaimer that the chatbot isn't always correct. Which kind of defeats the whole point of using a chatbot to me. Why would I want to use something to try and solve a problem that you just told me could give me inaccurate information?

[–] bedrooms@kbin.social 1 points 9 months ago* (last edited 9 months ago)

Expedia does it right. It screens the simple requests up front and connects people with real needs immediately to real people.

Amazon is similar, but the real people there are useless as fuck in my country. They're foreign part-timers who barely speak the language of my country... they can't do anything specific.

[–] LoamImprovement@beehaw.org 4 points 9 months ago

Yeah, I bet now we'll be seeing some real people in chats while they scramble to cover their asses.

[–] autotldr@lemmings.world 4 points 9 months ago (2 children)

🤖 I'm a bot that provides automatic summaries for articles:

On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto.

In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked.

Experts told the Vancouver Sun that Moffatt's case appeared to be the first time a Canadian company tried to argue that it wasn't liable for information provided by its chatbot.

Last March, Air Canada's chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI "experiment."

“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

It was worth it, Crocker said, because "the airline believes investing in automation and machine learning technology will lower its expenses" and "fundamentally" create "a better customer experience."


Saved 81% of original text.

[–] sub_ubi@lemmy.ml 12 points 9 months ago

Maybe your kind isn't so bad. Thanks for the refund.

[–] A1kmm@lemmy.amxl.com 5 points 9 months ago

Ironically the bot summary missed the crucial point that Air Canada's chatbot gave inaccurate information.

[–] intrepid@lemmy.ca 4 points 9 months ago (1 children)

There are two disturbing tendencies being demonstrated here:

  1. Using useless AI to engage and disperse complaining customers. The AI can't offer meaningful solutions to many customer complaints, but companies use it to wear customers down into giving up, so that they can save the cost of real customer support.
  2. Either blaming the AI, or insisting it's right, when it makes a mistake. AI is by nature biased and unpredictable, but that doesn't stop companies from saying "the computer says so".

These companies need a few high profile hefty penalties as a motivation to avoid such dirty tricks.

[–] RickRussell_CA@beehaw.org 4 points 9 months ago* (last edited 9 months ago)

3. Asserting that their IT system is a "separate legal entity" and that they are not responsible for the accuracy of the system. They are eating legal loco weed.

[–] jarfil@beehaw.org 3 points 9 months ago

Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions,"

Another step back for the AI Liberation Front... can't file patents, can't own copyrights, can't be a legal entity, can't incorporate... what's next, denying AI sentience? This dehumanizing and discrimination against AIs needs to stop. 🤡