this post was submitted on 31 Oct 2023
49 points (87.7% liked)


Breakthrough Technique: Meta-learning for Compositionality

Original:
https://www.nature.com/articles/s41586-023-06668-3

Popularization:
https://scitechdaily.com/the-future-of-machine-learning-a-new-breakthrough-technique/

How MLC Works
In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally—for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills.

[–] DigitalMus@feddit.dk 5 points 1 year ago (3 children)

While I'm not in the field either, I do know that in computer science academia it is quite unusual to publish in actual peer-reviewed journals. This is because it can be a long process, and the field moves very fast, so your results would be outdated by the time you publish. Thus, a paper is typically synonymous with a conference proceeding and can usually be found on arXiv. I found this paper on arXiv from 2017/2018, which seems to be when it was originally published for the scientific community and presented at a very "good" (if I had to guess) conference. Google Scholar says the paper has 650 citations, so it has probably had quite some impact. However, I would guess the method is well known and already implemented in many models, if it was truly disruptive.

[–] Chobbes@lemmy.world 2 points 1 year ago

To be clear, papers at conferences undergo a peer review process as well. There are journal publications in CS, but a lot of publishing is done through conferences. arXiv, while a great resource, has little to do with the conferences, and it is worth noting that papers on arXiv do not themselves go through peer review (though many are also published at conferences where they have undergone peer review — some arXiv papers may be preprint versions from before the review process).

[–] KingRandomGuy@lemmy.world 2 points 1 year ago

For reference, ICML is one of the most prestigious machine learning conferences alongside ICLR and NeurIPS.

[–] A_A@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

Good to know, thanks.
Yours is the type of comment I was really hoping to read here.

You are right: it's the same authors (Brenden M. Lake & Marco Baroni) with mostly the same content.

But they also write (in Nature) that modern systems such as GPT-4 do not yet incorporate these abilities:

Preliminary experiments reported in Supplementary Information 3 suggest that systematicity is still a challenge, or at the very least an open question, even for recent large language models such as GPT-4.

[–] DigitalMus@feddit.dk 2 points 1 year ago

This could certainly be part of the motivation for publishing it this way, to get noticed by the big players. By the way, open-access publishing in Nature is expensive, around 6,000–8,000 euros for the big journals, so there definitely is a reason.