this post was submitted on 16 Aug 2023
223 points (87.8% liked)


Investors are barely breaking even as the venture is hardly making any profits due to a shortage of chips, divided interests, and more.

... OpenAI has already seen a $540 million loss since debuting ChatGPT.

... OpenAI spends approximately $700,000 per day to run the tool.


⚠️ First off, apologies as I didn't cross-check this. Take it w/ a grain of salt.


This piece of news, if true, would explain why OpenAI has been coming up w/ weird schemes for making $$$, like entering the content moderation space.

On a similar note, I wonder if this has been a key driver (behind the scenes) of the recent investment in open source AI initiatives (Haidra comes to mind?) Perhaps some corporations who haven't got enough $$$ to fund their own dedicated research groups are looking to benefit from an open source model?

[–] j4k3@lemmy.world 50 points 1 year ago (3 children)

OpenAI died the moment Meta's Llama model weights were replicated completely open source. The outcome is guaranteed. It does not matter how much better the enormous proprietary models can be; people will never be okay with the level of intrusive data mining required for OpenAI's or Google's business model. Personal AI tech must be open source and transparent, with offline execution. AI is the framework of a new digital economy, not the product.

[–] TheEntity@kbin.social 85 points 1 year ago (3 children)

people will never be okay with the level of intrusive data mining required for OpenAI or Google’s business model

Where do you meet these people? I need more of such people in my life.

[–] Riker_Maneuver@startrek.website 27 points 1 year ago

Yeah, I was about to say, 99% of people are either unaware or do not care. Don't mistake Lemmy's privacy opinions as representative of the general population.

[–] drlecompte@discuss.tchncs.de 11 points 1 year ago

'People' in this respect are also the owners of media sites.

[–] HumbertTetere@feddit.de 7 points 1 year ago

There's currently a meeting in Germany with about 4000 of them.

But it's not the prevailing mindset in the general population.

[–] griD@feddit.de 16 points 1 year ago

AI is the framework of a new digital economy, not the product.

That is one interesting sentence. Thanks.

[–] krellor@kbin.social 4 points 1 year ago (1 children)

I don't think it's so much that the Meta model was replicated as that they fully open-sourced it with a license permitting research and commercial use.

I actually think the market demand for fully offline AI will be fairly small. The largest potential customers might be governments that require fully offline hosting, and there is a small group of companies servicing that niche. But even government customers who require that their data be segmented are simply having enclaves set up by the big cloud platforms, which guarantee that input data isn't fed into the training process and doesn't leave the customer environment.

I fully support folks who sustain open source AI frameworks, but in terms of the commercial customers who will drive industry trends with dollars, I expect there will be demand for hosted solutions that use proprietary models.

[–] drlecompte@discuss.tchncs.de 3 points 1 year ago (1 children)

Yeah, but not models that are trained on data that raises copyright concerns, which is currently the case.

[–] AngrilyEatingMuffins@kbin.social 8 points 1 year ago* (last edited 1 year ago)

The courts aren't going to side with copyright holders. As much as the US loves its ridiculous copyright laws, it loves profit and being on the bleeding edge of tech more. There is absolutely ZERO chance that the United States will let China, which does not care about IP, carry the keys to the AI kingdom.