this post was submitted on 02 Jan 2024
202 points (95.9% liked)

Technology


Supreme Court chief justice warns of dangers of AI in judicial work, suggests it is "always a bad idea" to cite non-existent court cases

Mr Roberts suggested there may come a time when conducting legal research without the help of AI is 'unthinkable'

all 24 comments
[–] farcaster@lemmy.world 86 points 10 months ago (1 children)

Perhaps he ought to address the overt corruption in his own court before worrying about literally anything else

[–] Massada42@lemmy.world 32 points 10 months ago

They benefit from said corruption and have no incentive to address it.

[–] alienanimals@lemmy.world 38 points 10 months ago

It's a good thing current supreme court justices don't rule in favor of the highest bidder! Oh... wait.

[–] meeeeetch@lemmy.world 17 points 10 months ago

That's good advice. Shame he and his colleagues didn't follow it in 303 Creative

[–] SkybreakerEngineer@lemmy.world 14 points 10 months ago (1 children)

Citing non-existent facts in your judgment is just fine though

[–] TootSweet@lemmy.world 13 points 10 months ago (3 children)

My New Year's wish is for the AI bubble to pop as soon as possible.

[–] Eheran@lemmy.world 5 points 10 months ago (3 children)
[–] whatwhatwhatwhat@lemmy.world 24 points 10 months ago (1 children)

Not OC, but there’s definitely an AI bubble.

First of all, real “AI” doesn’t even exist yet. It’s all machine learning, which is a component of AI, but it’s not the same as AI. “AI” is really just a marketing buzzword at this point. Every company is claiming their app is “AI-powered” and most of them aren’t even close.

Secondly, “AI” seems to be where crypto was a few years ago. The bitcoin bubble popped (along with many other currencies), and so will the AI bubble. Crypto didn’t go away, nor will it, and AI isn’t going away either. However, it’s a fad right now that isn’t going to last in its current form. (This one is just my opinion.)

[–] wantd2B1ofthestrokes@discuss.tchncs.de 0 points 10 months ago* (last edited 10 months ago)

Crypto didn’t go anywhere? Sure, it’s not completely eradicated, but it’s way less mainstream now than it was at its height, and it basically has the rep of being a scam now.

AI is currently used in far more real world use cases than crypto ever was. Maybe it doesn’t take off into infinity but it’s definitely going to be a lot more prevalent

[–] TootSweet@lemmy.world 13 points 10 months ago

The... AI... bubble.

I said that. It's right there.

As for "why," because it's causing problems as people trust a technology that can just straight up give them false information. The sooner the bubble bursts, the fewer people will be harmed by AI hallucination.

[–] andros_rex@lemmy.world 6 points 10 months ago* (last edited 10 months ago)

This isn’t the first time there has been a ton of hype surrounding “AI” - folks back in the 60s were having conversations with “Eliza.” IIRC there was also a similar boom in the early 90s.

“AI” has been entirely misrepresented to investors and the public at large. The computing resources needed to produce the impressive results we saw a few months ago are not sustainable long term, and will not produce profit. We can already see ChatGPT being tuned down and giving worse results.

Like crypto/NFT/previous hypes, it’s also being shoved into places it doesn’t belong. The education system is collapsing, why not have kids learn from “AI” teachers? Facebook/every other social platform refuses to pay for effective content moderation - just get an “AI” to do it. It’s not effective at all, but it works enough for the c-suite who believe the hype.

“AI” has also essentially become a digital oil spill - the internet has always been lousy with garbage but “AI” makes it easy to pump out thousands of shitty comments to promote whatever agenda you’d like. You can already see this on Facebook - threads of hundreds of boomers admiring imaginary statues of Jesus or whatever.

[–] bruhduh@lemmy.world 3 points 10 months ago* (last edited 10 months ago)

Nvidia doesn't like this statement

[–] Ghyste@sh.itjust.works 2 points 10 months ago

My New Year's wish is for the conservative half of the supreme court to die in a fire.

[–] CADmonkey@lemmy.world 12 points 10 months ago (2 children)

"Counsel, can you cite precedent?"

"Why, yes I can your honor. It's a precedent I made up."

[–] bruhduh@lemmy.world 6 points 10 months ago* (last edited 10 months ago)

"Counsel, can you cite precedent?"

"Why, yes I can your honor. Trust me bro."

[–] wagoner@infosec.pub 5 points 10 months ago

Cool, just like the majority on SCOTUS has done in the last year. Made up doctrines, faked originalism. All without the aid of AI!

[–] Dkarma@lemmy.world 10 points 10 months ago

LIKE DOBBS?!?!?

[–] PrincessLeiasCat@sh.itjust.works 10 points 10 months ago* (last edited 10 months ago)

Oh no! What if it grants women reproductive rights!

[–] paddirn@lemmy.world 9 points 10 months ago

Yeah, or we could just hold lawyers to higher standards and expect them to do their due diligence like they should anytime they submit court documents. The one time I had to go through a lawyer for something involving a court case, they sent a PDF document of a court filing they were going to submit on my behalf for me to review and sign. I noticed multiple errors and made a detailed list of pg# and paragraph where each correction was needed, sent it back to them. A day or two later I got a "revised" copy of the document back that not only missed some of the errors I had called out, but introduced additional errors. At that point, given what I was paying per hour for their "services", I said fuck it, opened up the PDF and made the corrections myself, then signed it and sent it on.

I'm sure it was just being handled by a paralegal or an intern or whatever, but it was aggravating that I basically had to do the lawyer's job for them, since going through multiple rounds of corrections would've likely cost me more than just doing it myself.

[–] autotldr@lemmings.world 3 points 10 months ago

This is the best summary I could come up with:


Supreme Court Chief Justice John Roberts discussed AI and its possible impact on the judicial system in a year-end report published over the weekend.

Mr Roberts acknowledged that the emerging tech was likely to play an increased role in the work of attorneys and judges, but said he did not expect them to be fully replaced anytime soon.

In addition to those risks, popular LLM chatbots like ChatGPT and Google's Bard can produce false information — referred to as "hallucinations" rather than "mistakes" — which means users are rolling the dice anytime they trust the bots without checking their work first.

Michael Cohen, Donald Trump's former lawyer and fixer, admitted that he had used an AI to look up court case records, which he then gave as a list of citations to his legal team.

Due to the potential pitfalls of AI reliance, Mr Roberts urged legal workers to exercise "caution and humility" when relying on the chatbots for their work.

The court has proposed a rule that would require lawyers to either certify that they did not rely on AI software to draft briefs, or that a human fact-checked and edited any text generated by a chatbot.


The original article contains 421 words, the summary contains 197 words. Saved 53%. I'm a bot and I'm open source!

[–] 1984@lemmy.today 1 points 10 months ago* (last edited 10 months ago)

Well of course there will be a time when doing legal research without AI is unthinkable. It's the same as doing math today without a calculator, or washing clothes without a washing machine.

People only do these things manually when they have a reason to (they can't afford a washing machine, or they're learning math and need to work through it by hand).

I would be surprised if that time is more than 10 years away, too.

[–] DeadWorld@lemm.ee 1 points 10 months ago

Didn't they rule on 2 separate cases based on situations that didn't actually happen, but just "may in fact, someday" happen? Like yeah, don't use AI for rulings, but we have some deeper issues here

I warn of dangers of authentic stupidity in judicial work