this post was submitted on 22 Nov 2024
654 points (98.2% liked)

submitted 17 hours ago* (last edited 17 hours ago) by Joker@sh.itjust.works to c/comicstrips@lemmy.world
[–] dan@upvote.au 31 points 15 hours ago* (last edited 14 hours ago) (2 children)

Creators on Facebook do get paid though, at least if they're big enough I guess 🤔 https://creators.facebook.com/earn-money

Also, the AI model Meta maintains (Llama) is the most powerful open-source model; anyone can use it and even build their own commercial products on top of it for free, so I'm not sure it's accurate that nobody wants it?

[–] Ephera@lemmy.ml 11 points 12 hours ago (2 children)

Only the inference code of LLaMA (which runs the model) is open source. The model itself is not, as you're given neither the training data nor the model weights.
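
(For context on what the split between inference code and weights looks like in practice: the library code below is open source, while the weights it loads are a separate artifact whose availability and license are exactly what's being debated here. A minimal sketch assuming the Hugging Face transformers library; the checkpoint name and prompt are placeholders.)

```python
# Minimal sketch: the open-source inference code does nothing useful until
# you point it at a weights checkpoint, which is a separate (license-gated)
# artifact. The model ID below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)     # fetches tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_id)  # fetches the multi-gigabyte weights

inputs = tokenizer("Is Llama open source?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```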

[–] dan@upvote.au 5 points 10 hours ago (2 children)

I don't know much about AI models, but that's still more than other vendors are giving away, right? Especially "Open"AI. A lot of people just care if they can use the model for free.

How useful would the training data be? The largest Llama model was trained on a cluster of over 100,000 Nvidia H100s, so I'm not sure how many people would want to repeat that.

Scientific institutions and governments could rent enough GPUs to train their own models, potentially with public funding and public accountability. It would also be nice to know whether the data Llama was trained on was literally just Facebook user data. I'm not really in the camp of "if user content is on my site then the content belongs to me".

[–] Martineski@lemmy.dbzer0.com 2 points 8 hours ago

Without the same training data you wouldn't be able to recreate the results even if you had the computing power, so it's not fully open source. The training data is part of the source needed to produce the end result, the LLM. It's like having to add your own lines of code to an open-source program to make it work because the company doesn't provide them.

[–] ryedaft@sh.itjust.works 1 point 8 hours ago

How on earth would you distribute the model for inference without the weights? The gradients are obviously gone, so you can't continue training the model from where Meta left off. Maybe you can still do some kind of LoRA?

I had no idea Llama was made by Facebook. This changes things.
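
(Regarding the LoRA question above: fine-tuning the released checkpoints with LoRA adapters is a common workflow, and it doesn't need anything from Meta's original training run, since fresh gradients are computed only for small adapter matrices while the base weights stay frozen. A rough sketch assuming the Hugging Face peft and transformers libraries; the model name is a placeholder.)

```python
# Rough sketch of LoRA fine-tuning on released weights (placeholder model name).
# No state from the original training run is needed: the base weights are frozen
# and gradients are only computed for the low-rank adapter matrices.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")  # placeholder

lora_cfg = LoraConfig(
    r=8,                                   # rank of the adapter matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the full model
# ...then train `model` on your own data with a standard training loop or Trainer.
```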