itsybitesyspider

joined 1 year ago
itsybitesyspider@beehaw.org 3 points 1 year ago

I like library providers that give mechanical upgrade instructions. For example:

model.adjust(x,1,y) is now model.single(Adjustment.Foo, x).with_attribute(y)

Or whatever. Then people can go through your instructions, find-and-replacing the changes, or, even better, have an automated tool do it.
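
Here's a rough sketch of what the "automated tool" half could look like, applying the made-up rule above with a plain text rewrite. The file glob, API names, and regex are all invented for illustration; a real migration tool would work on the syntax tree rather than raw text:

# Hypothetical migration script: applies the upgrade rule
#   model.adjust(x, 1, y)  ->  model.single(Adjustment.Foo, x).with_attribute(y)
# to every source file under the current directory.
import pathlib
import re

# Capture the receiver and the first and third arguments of the old call.
OLD_CALL = re.compile(
    r"(?P<recv>\w+)\.adjust\(\s*(?P<x>[^,]+?)\s*,\s*1\s*,\s*(?P<y>[^)]+?)\s*\)"
)
NEW_CALL = r"\g<recv>.single(Adjustment.Foo, \g<x>).with_attribute(\g<y>)"

for path in pathlib.Path(".").rglob("*.py"):
    source = path.read_text()
    migrated = OLD_CALL.sub(NEW_CALL, source)
    if migrated != source:
        path.write_text(migrated)
        print(f"rewrote {path}")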

Also, you bear some of the maintenance burden by writing all this documentation, so you have some stake in keeping the changes minimal.

itsybitesyspider@beehaw.org 33 points 1 year ago

But in the case of Brendan, he had recently been exposed as a white supremacist and lost his job when he was enrolled in the study. He was full of regret about getting caught out.

I imagine that this person was already contemplating personal growth, and the drugs just kicked his not-fully-conscious or not-fully-acknowledged feelings into conscious, actionable thought.

itsybitesyspider@beehaw.org 6 points 1 year ago

The chess engine's training is anchored by the win/lose outcome of the game. LLM training is anchored by what humans like to read and write. This means that a human needs to somehow be in the loop.
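
A toy way to see the difference in anchors (none of this is a real training loop, and the names are made up):

# Chess: the reward is anchored by the game itself.
def chess_reward(game_result: str) -> float:
    # A win/draw/loss needs no human judgment to score.
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[game_result]

# LLM: the reward is anchored by human taste.
def llm_reward(response: str, preference_model) -> float:
    # preference_model is hypothetical; it stands in for a reward model
    # trained on human ratings, which is why humans stay in the loop.
    return preference_model.score(response)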