this post was submitted on 04 Sep 2024
263 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
RagnarokOnline@programming.dev 9 points 2 months ago

I had GPT-3.5 break down 6x 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names and it did that too. I did re-read the summaries to make sure there was no duplicate info or hallucinations, and it only needed a couple of corrections.

Beats manually summarizing that info myself.

Maybe their prompt sucks?

froztbyte@awful.systems 41 points 2 months ago

“Are you sure you’re holding it correctly?”

christ, every damn time

dgerard@awful.systems 29 points 2 months ago

I got AcausalRobotGPT to summarise your post and it said "I'm not saying it's always programming.dev, but"

pikesley@mastodon.me.uk 24 points 2 months ago

@RagnarokOnline @dgerard "They failed to say the magic spells correctly"

HootinNHollerin@lemmy.world 18 points 2 months ago

Did you conduct or read all the interviews in full in order to verify no hallucinations?

sxan@midwest.social 8 points 2 months ago

How did you make sure no hallucinations existed without reading the source material; and if you read the source material, what did using an LLM save you?