this post was submitted on 20 Aug 2024

Hacker News


A mirror of Hacker News' best submissions.

[–] Zos_Kia@lemmynsfw.com 2 points 2 months ago (1 children)

Eh, that one study was mostly about stupid products like "AI coffee machine" or "AI fridge". AI products that make sense sell pretty well.

[–] lvxferre@mander.xyz 1 points 2 months ago (1 children)

At least according to TechSpot, the negative sentiment is general. It's just more pronounced for some products (high-risk and/or high-price) than for others.

So even where plopping in an LLM or similar would make sense, there'll likely be strong market resistance.

The fact that plenty of actually sensible products are plagued with issues due to GAFAM disingenuousness/stupidity doesn't help either. (See: Windows Recall, Google abusing its search monopoly to feed its AI, etc.)

[–] Zos_Kia@lemmynsfw.com 1 points 2 months ago (1 children)

At least according to TechSpot, the negative sentiment is general. It's just more pronounced for some products (high-risk and/or high-price) than for others.

That's not what the study says. I'm no AI hater, but I would sure stay away from an AI car or AI medical diagnosis. Those products make absolutely no sense.

So even where plopping in an LLM or similar would make sense, there'll likely be strong market resistance.

I work tangentially to the industry (not making models, not making apps based on models, but making tools to help the people who do), and that is not what I have observed. Just like in every market, products that make sense make fucking bank. It's mostly boring B2B stuff that doesn't make headlines, but there is money being made right now, with some very satisfied customers.

The "market resistance" story is anecdotal clickbait.

[–] lvxferre@mander.xyz 1 points 2 months ago

At least according to TechSpot, the negative sentiment is general. It's just more pronounced for some products (high-risk and/or high-price) than for others.

That’s not what the study says.

Thanks for linking the study itself. I was having a hard time finding it, which is why I relied on press coverage, with all the small inaccuracies that come with it.

Look at Study 3b:

More specifically, the indirect effect through emotional trust was significant for the high-risk product (indirect effect = −0.656, SE = 0.163, 95% CI = [−0.979, −0.343]), but not significant for the low-risk products (indirect effect = −0.099, SE = 0.143, 95% CI = [−0.383, 0.181]), which provided further support for hypothesis-3.

With H₃ being "Perceived product or service risk moderates the indirect effect of inclusion of the AI term in the product or service description on purchase intention, mediated by emotional trust."

What they're saying here is neither the same as my statement (since what I said implies the effect would remain strong for low-risk products) nor the same as yours (since you implied the effect is not general; it is general, but modulated by another factor).
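For intuition on what that quoted statistic means, here's a rough sketch of how an indirect effect like that (label → emotional trust → purchase intention) is typically estimated and tested with a percentile bootstrap CI. The data below is synthetic and invented purely for illustration; it is not the study's data, and the effect sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, made-up data: "AI" in the product description lowers
# emotional trust (path a), and trust drives purchase intention (path b).
n = 200
ai_label = rng.integers(0, 2, n)                     # 0 = no "AI" term, 1 = "AI" term
trust = 4.0 - 1.0 * ai_label + rng.normal(0, 1, n)   # mediator: emotional trust
intent = 2.0 + 0.6 * trust + rng.normal(0, 1, n)     # outcome: purchase intention

def indirect_effect(x, m, y):
    """a*b indirect effect: slope of x->m times slope of m->y (controlling for x)."""
    a = np.polyfit(x, m, 1)[0]                       # path a: label -> trust
    X = np.column_stack([np.ones(len(x)), x, m])     # intercept, label, trust
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]      # path b: trust -> intention
    return a * b

# Percentile bootstrap: resample rows, recompute a*b, take the 2.5/97.5 percentiles.
boot = [indirect_effect(ai_label[i], trust[i], intent[i])
        for i in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(ai_label, trust, intent):.3f}, "
      f"95% CI = [{lo:.3f}, {hi:.3f}]")
```

With this made-up data the CI excludes zero, which is the same logic the study uses to call the high-risk effect "significant" and the low-risk one (whose CI straddles zero) not.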

It’s mostly boring B2B stuff

That's a fair point: if they're marketing it to businesses, the attitude is bound to be different from marketing it to end customers.