this post was submitted on 23 Oct 2024
74 points (100.0% liked)

Linux

top 10 comments
[–] WalnutLum@lemmy.ml 14 points 3 days ago (2 children)

Ironically, thanks in no small part to Facebook releasing Llama and kind of salting the earth for similar companies trying to create proprietary equivalents.

Nowadays you either have gigantic LLMs with hundreds of billions of parameters, like Claude and ChatGPT, or you have open models that are sub-200B.

[–] vintageballs@feddit.org 1 points 1 day ago (2 children)

Llama 3.1 405b has entered the chat

[–] WalnutLum@lemmy.ml 1 points 1 day ago* (last edited 20 hours ago)

Waiting for an 8x1B MoE

[–] bruhduh@lemmy.world 1 points 1 day ago

Waiting till Mixtral gets optimised enough to run on a home computer, and then till Dolphin uncensors it

[–] possiblylinux127@lemmy.zip 7 points 3 days ago (1 children)

I personally think the really large models are useless. What is very impressive is the small ones that somehow manage to be good. It blows my mind that so much information can fit in 8B parameters.

[–] bruhduh@lemmy.world 1 points 1 day ago

True that, LLMs could be the future of lossy compression
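(A minimal sketch of the idea in the comment above, with hypothetical names: a predictive model's probability for the next symbol fixes its ideal code length at -log2(p) bits under entropy coding, so a model that predicts text well — as an LLM does — compresses it well. The `bigram` toy model here stands in for the LLM; the classic formulation via arithmetic coding is actually lossless, with lossy variants layered on top.)

```python
import math

def code_length_bits(text, predict):
    """Total ideal code length when each symbol costs -log2(p) bits,
    where p is the model's probability for that symbol given the prefix."""
    total = 0.0
    for i, ch in enumerate(text):
        p = predict(text[:i], ch)
        total += -math.log2(p)
    return total

text = "abababababababab"

# Baseline: uniform model over a 2-symbol alphabet (predicts nothing).
uniform = lambda ctx, ch: 0.5

# "Smarter" model: after 'a', expect 'b' with 90% confidence, and vice versa.
def bigram(ctx, ch):
    if not ctx:
        return 0.5
    expected = "b" if ctx[-1] == "a" else "a"
    return 0.9 if ch == expected else 0.1

print(code_length_bits(text, uniform))  # 16.0 bits: 1 bit per symbol
print(code_length_bits(text, bigram))   # ~3.3 bits: better model, fewer bits
```

The gap between the two totals is exactly why a strong language model doubles as a strong compressor.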

[–] DavidGarcia@feddit.nl 18 points 3 days ago (1 children)

It is kind of interesting how open machine learning already is without much explicit advocacy for it.

It's the only field I can think of where the open version is just a few months behind SOTA in all of IT.

Open training pipelines and open data are the only aspects that could still use improvements in ML, but there are plenty of projects that are near-SOTA and fully open.

ML is extremely open compared to consumer mobile or desktop apps, which are always ~10 years behind SOTA.

[–] underscores@lemmy.dbzer0.com 8 points 3 days ago (2 children)

I feel like it's really far from being open. Besides the training data not being open, the more popular ones like Llama and Stable Diffusion have these weird source-available licenses with anti-competitive clauses, user-count limits, or arbitrary morality clauses.

[–] Danterious@lemmy.dbzer0.com 5 points 3 days ago

Yeah, there has also been an increase in the number of companies either making FLOSS work more closed off or just not caring about it if that benefits their bottom line.

Unrelated, but I like your new profile pic.

Anti Commercial-AI license (CC BY-NC-SA 4.0)

[–] brucethemoose@lemmy.world 4 points 3 days ago

Almost all of Qwen 2.5 is Apache 2.0, SOTA for the size, and frankly obsoletes many bigger API models.