What the heck is this source? Excerpts:
(That second one is him quoting some researcher, but that transparently absurd statement simply slides past unnoticed, and is in fact cited as support for a similar claim of his own, which also seems dubious to me.)
I barely pay attention to this stuff, and even I noticed the CodeLlama 70B release, which I would describe as significant. Simply stacking up the number of papers and saying one side of the equation is making more progress because it releases more things (while specifically saying he "doesn't count" the two most prolific US sources of commonly used models) is very weird. If you're going to write an article about comparative output, you can look at benchmarks, try the models yourself, or at least check what results they claim in their papers.
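To make that concrete, here's a minimal sketch of what "try the models yourself" looks like with Hugging Face transformers. The checkpoint id codellama/CodeLlama-70b-Instruct-hf, the prompt, and the assumption that you have enough GPU memory are mine, not the article's; a smaller variant works the same way:

```python
# Minimal sketch: load a released code model and eyeball a completion.
# Assumes the checkpoint id "codellama/CodeLlama-70b-Instruct-hf";
# swap in a smaller checkpoint if you don't have the GPU memory for 70B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-70b-Instruct-hf"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to cut memory use
    device_map="auto",          # spread layers across available devices
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even eyeballing the completions from a handful of prompts like that tells you more about relative quality than tallying paper counts.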
There is a whole conversation to be had about AI research in China, and I'm 100% open to the idea that I and the rest of the West are missing something important, but it would have been nice to see this citation-less statement:
Backed up by something more than:
He also compares things to GPT-3.5 (in his mind, not by testing). Personally, I dislike using 3.5 as a baseline for anything, because something clearly better has been available to consumers for quite some time. GPT-4 is clearly the model to beat, and it's the model most US researchers compare their work to when they publish.
Etc etc. In short:
BOOOOOOOOOOOOOOO
BOOOOOOOOOOOOOOOO
Ty