Fuck AI

1332 readers
141 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 7 months ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI has been, I'm not that frightened by it. It's awful that it hallucinates and that it spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting plenty of articles on AI hype, because they're quite funny, and they give me a sense of ease: even though blatant lies are easy to tell, actual evidence is far harder to fake.

I also want to address people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I will call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. Maybe one of you could even become a Mod!

Boosters, or people who heavily use AI and see it as a force for good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists on Reddit and Twitter, and they constantly cheer when artists lose their jobs. They go against the very purpose of this community. If I see a comment here saying that AI is "making things good", or cheering on putting anyone out of a job, and the commenter does not retract it, said commenter will be permanently banned. FA&FO.


Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by the decision from u/spez to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, save it for people "on the fence". Remember, we don't know whether AI is unstoppable. AI consumes enormous amounts of energy and tons of circuitry. There may very well be an end to this cruelty, and it's up to us to begin that end.


Meta is “working with the public sector to adopt Llama across the US government,” according to CEO Mark Zuckerberg.

The comment, made during his opening remarks for Meta’s Q3 earnings call on Wednesday, raises a lot of important questions: Exactly which parts of the government will use Meta’s AI models? What will the AI be used for? Will there be any kind of military-specific applications of Llama? Is Meta getting paid for any of this?

When I asked Meta to elaborate, spokesperson Faith Eischen told me via email that “we’ve partnered with the US State Department to see how Llama could help address different challenges — from expanding access to safe water and reliable electricity, to helping support small businesses.” She also said the company has “been in touch with the Department of Education to learn how Llama could help make the financial aid process more user friendly for students and are in discussions with others about how Llama could be utilized to benefit the government.”

She added that there was “no payment involved” in these partnerships.

Yeah, fuck them. For now, anyway, until the government relies on their AI.

submitted 3 days ago* (last edited 3 days ago) by Dot@feddit.org to c/fuck_ai@lemmy.world
 
 

This article is about phishing websites made by scammers, with obvious signs that they were made by LLMs.

I thought it might be interesting here.

  • A new OpenAI study using their SimpleQA benchmark shows that even the most advanced AI language models fail more often than they succeed when answering factual questions, with OpenAI's best model achieving only a 42.7% success rate.
  • The SimpleQA test contains 4,326 questions across science, politics, and art, with each question designed to have one clear correct answer. Anthropic's Claude models performed worse than OpenAI's, but smaller Claude models more often declined to answer when uncertain (which is good!).
  • The study also shows that AI models significantly overestimate their capabilities, consistently giving inflated confidence scores. OpenAI has made SimpleQA publicly available to support the development of more reliable language models.
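The metrics in the bullets above (overall success rate, plus declining to answer when uncertain) can be sketched as a simple scoring function. This is a hedged illustration only, not OpenAI's actual grader (which the paper describes as model-based); the field names and the sample numbers below are made up for the example.

```python
# Sketch: scoring a SimpleQA-style factual-QA run, assuming each answer
# has already been graded as "correct", "incorrect", or "not_attempted".

def score(results):
    """results: list of dicts, each with a 'grade' key."""
    n = len(results)
    correct = sum(r["grade"] == "correct" for r in results)
    incorrect = sum(r["grade"] == "incorrect" for r in results)
    declined = sum(r["grade"] == "not_attempted" for r in results)
    attempted = correct + incorrect
    return {
        # Share of all questions answered correctly (the headline number).
        "accuracy": correct / n,
        # Accuracy counting only the questions the model attempted.
        "accuracy_when_attempted": correct / attempted if attempted else 0.0,
        # Declining can be safer than confidently guessing wrong.
        "declined_rate": declined / n,
    }

# Hypothetical run: 1000 questions, 427 correct, 473 wrong, 100 declined.
run = ([{"grade": "correct"}] * 427
       + [{"grade": "incorrect"}] * 473
       + [{"grade": "not_attempted"}] * 100)
print(score(run))
```

A model that declines more often will show a lower headline accuracy but a higher accuracy-when-attempted, which is the trade-off the article credits the smaller Claude models with making.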

(archived link)


Somehow it missed the massive forest fire this summer that destroyed much of the park and the town ... until it was reminded.

Remind me why anybody takes this tech seriously?


These are better than those weird videos.


Google could preview its own take on Rabbit’s large action model concept as soon as December, reports The Information. “Project Jarvis,” as it’s reportedly codenamed, would carry tasks out for users, including “gathering research, purchasing a product, or booking a flight,” according to three people the outlet spoke with who have direct knowledge of the project.

If a robot ever buys something on my behalf, I'm lawyering the fuck up.


cross-posted from: https://lemmy.world/post/21301373

Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”


Is OpenAI breaking U.S. copyright law? A former employee of the company says yes.

A former researcher at OpenAI has come out against the company’s business model, writing in a personal blog that he believes the company is not complying with U.S. copyright law. That makes him one of a growing chorus of voices that sees the tech giant’s data-hoovering business as based on shaky (if not plainly illegitimate) legal ground.


Guess it's time to cut off OpenAI's internet service. That's how it works, right, Copyright Cartel??


Imagine walking into your office to find that your company just hired thousands of new employees overnight – except they're not human. That's exactly what Microsoft has made possible with its groundbreaking announcement of autonomous AI agents, marking a fundamental shift in how businesses will operate in the coming years.

Unlike traditional AI assistants that simply respond to commands, these new autonomous agents can independently initiate and complete complex business tasks. Through Microsoft's Copilot Studio, organizations can create AI employees that handle everything from qualifying sales leads to managing supplier communications. These agents don't just follow predetermined scripts – they analyze situations, make decisions, and take action without human intervention.

Early adopters are seeing remarkable results.

McKinsey & Company implemented an AI agent that reduced client onboarding lead times by 90% and cut administrative work by 30%. At Pets at Home, the UK's leading pet care business, an AI agent handling profit protection cases is projected to deliver seven-figure annual savings. These aren't just incremental improvements – they represent a fundamental transformation in business operations.


M$ has invested a lot of money in AI in various countries. They will take contracts from the government, and as a result there will be fewer new hires for government jobs.

https://www.reuters.com/technology/microsoft-make-27-billion-cloud-ai-investments-brazil-2024-09-26/

https://www.apac-business.com/companies/market/microsoft-announces-ai-skilling-opportunities-for-2-5-million-people-in-the-asean-region-by-2025/


...and Graphene stands alone.


Widely shared on social media, the atmospheric black and white shots -- a mother and her child starving in the Great Depression; an exhausted soldier in the Vietnam war -- may look at first like real historic documents.

But they were created by artificial intelligence, and researchers fear they are muddying the waters of real history.

"AI has caused a tsunami of fake history, especially images," said Jo Hedwig Teeuwisse, a Dutch historian who debunks false claims online.

"In some cases, they even make an AI version of a real old photo. It is really weird, especially when the original is very famous."


A Massachusetts couple claims that their son's high school attempted to derail his future by giving him detention and a bad grade on an assignment he wrote using generative AI.

An old and powerful force has entered the fraught debate over generative AI in schools: litigious parents angry that their child may not be accepted into a prestigious university.

In what appears to be the first case of its kind, at least in Massachusetts, a couple has sued their local school district after it disciplined their son for using generative AI tools on a history project. Dale and Jennifer Harris allege that the Hingham High School student handbook did not explicitly prohibit the use of AI to complete assignments and that the punishment visited upon their son for using an AI tool—he received Saturday detention and a grade of 65 out of 100 on the assignment—has harmed his chances of getting into Stanford University and other elite schools.

Yeah, I'm 100% with the school on this one.


A European delivery company had to disable its AI chatbot after it started swearing at a customer and admitting it was the “worst delivery firm in the world.”

Dynamic Parcel Distribution (DPD) had to turn off its AI chatbot feature after disgruntled UK customer Ashley Beauchamp managed to get it to swear at him and write a disparaging poem.
