this post was submitted on 24 Jan 2024
268 points (90.9% liked)

Open Source


First they restricted code search without logging in, so I've been using Sourcegraph. But now I can't even view discussions or the wiki without logging in.

It was a nice run

[–] Omega_Haxors@lemmy.ml 57 points 9 months ago (13 children)

The writing was on the wall when they built a generative AI trained on everyone's code, of course without asking anyone for permission.

[–] xilliah@beehaw.org 2 points 9 months ago (11 children)

It's an interesting debate, isn't it? Does AI transform something free into something that's not? Or does it simply study the code?

[–] Omega_Haxors@lemmy.ml 5 points 9 months ago* (last edited 9 months ago) (7 children)

There's no debate. LLMs are plagiarism with extra steps. They take data (usually illegally) wholesale and then launder it.

A lot of people have been doing research into the ethics of these systems and that's more or less what they found. The reason why they're black boxes is precisely the reason we all suspected; they were made that way because if they weren't we'd all see them for what they are.

[–] count_duckula@discuss.tchncs.de 3 points 9 months ago* (last edited 9 months ago)

The reason they are black boxes is that they are function approximators with billions of parameters, and theory has not caught up with practical results. This is why you tune hyperparameters (learning rate, number of layers, number of neurons in a layer, etc.) and run multiple iterations of training to get an approximation of the distribution of the inputs. Training is also sensitive to the order of inputs to the network: a network trained on the same training set, but in a different order, might converge to an entirely different function. This is why you train on the same inputs in random order over multiple epochs, to hopefully average out such variations. They are black boxes simply because you can't yet prove theoretically which function the network has approximated or converged to for a given input.
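The order-sensitivity described above can be shown with a toy sketch (my own illustration, not from this thread): even for a trivial one-parameter model, SGD with a fixed learning rate and a finite number of steps lands on a different weight depending on the order the training points are seen.

```python
# Toy illustration of order-sensitive training: fit y = w*x with plain SGD.
# With a fixed learning rate and finitely many updates, the final weight
# depends on the order the data points are presented.
def sgd_fit(points, lr=0.1, epochs=1):
    w = 0.0
    for _ in range(epochs):
        for x, y in points:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
            w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 3.0)]
w_forward = sgd_fit(data)
w_reversed = sgd_fit(list(reversed(data)))
print(w_forward, w_reversed)  # same data, same settings, different weights
```

With billions of parameters instead of one, and shuffled mini-batches instead of two points, this is why two training runs on the same dataset can converge to entirely different functions.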
