[-] sonori@beehaw.org 2 points 4 hours ago

According to the rules you can post anything related to the Chicago area, so it’s probably just that there aren’t enough people on Lemmy for a single-city community to have much other than news posts. Be the change you want to see in the community.

[-] sonori@beehaw.org 3 points 4 days ago

My mind went to otter or beaver so it’s definitely not that easy even for us organics.

[-] sonori@beehaw.org 1 points 4 days ago

They are definitely too simple to represent the entirety of a concept’s meaning on their own. Yep, I don’t believe it’s likely that such an incredibly intricate thing as a neuron, much less the idea of conceptual meaning, can be replicated by a high school math problem. Maybe they could be a part, but you’re off by about half a dozen orders of magnitude at least from where we are now, with love being a matrix with a few hundred numbers in it.

[-] sonori@beehaw.org 2 points 4 days ago

No part of a human or animal brain operates by subtracting tables of cleanly defined numbers from each other, so I think it’s pretty safe to say that matrix calculations on a handful of numbers aren’t part of, much less our sole means of, understanding concepts or objects.

I don’t know exactly how one could tell true understanding from mimicry; far smarter and more well-researched people than me have debated that for decades. I’m just pretty sure what we think kindness is boils down to something a bit more complex than a high school math problem describing a word cloud.

[-] sonori@beehaw.org 1 points 4 days ago

Generally the term Markov chain is used to describe a model with a few dozen weights, while the “large” in large language model refers to having millions or billions of weights, but the fundamental principle of operation is exactly the same; they just differ in scale.
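To make the comparison concrete, here’s a toy sketch of the word-level Markov chain idea: count which word follows which in a corpus, then pick the next word from those counts. The corpus and everything else here is invented for illustration, not taken from any real model.

```python
from collections import Counter, defaultdict

# Tiny invented corpus.
corpus = "the cat sat on the mat the cat ran".split()

# Tally how often each word follows each other word — these counts
# are the model's entire set of "weights".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Greedy prediction: the word that most often follows "the".
most_likely = follows["the"].most_common(1)[0][0]
print(most_likely)  # "cat" (follows "the" in 2 of 3 occurrences)
```

An LLM differs in that the next-word odds come from billions of learned weights conditioned on a long context rather than a raw bigram tally, but the output is still a distribution over next words.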

Word embeddings are when you associate a mathematical vector with a word as a way of grouping similar words together. I don’t think anyone would argue that the general public can even solve a mathematical matrix, much less that they can only comprehend a stool by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

Subtracting vectors from each other can give you a lot of things, but not the actual meaning of the concept represented by a word.
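For anyone unfamiliar, this is roughly what that similarity lookup amounts to, with made-up three-number vectors (real embeddings have hundreds of dimensions). The point is that the result is just a number computed from the vectors — it says nothing about what a stool actually is.

```python
import math

# Invented toy embeddings — not from any real model.
embeddings = {
    "stool": [0.9, 0.8, 0.1],
    "chair": [0.8, 0.9, 0.1],
    "cat":   [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: the standard way to compare embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "stool" scores as closer to "chair" than to "cat" — a similarity
# score, not a concept.
print(cosine(embeddings["stool"], embeddings["chair"]))
print(cosine(embeddings["stool"], embeddings["cat"]))
```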

[-] sonori@beehaw.org 2 points 4 days ago

To note the obvious, a large language model is by definition, at its core, a mathematical formula and a massive collection of values from zero to one which, when combined, give a weighted average of the odds that word B follows word A, crossed with another weighted average of the word cloud given as the input ‘context’.
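Stripped of the billions of weights, the output side of that process looks something like this sketch: a probability for each word in the vocabulary, from which generation simply picks. The probabilities are invented for illustration.

```python
# Hypothetical P(next word | context) — in a real LLM these come from
# running the context through the network; here they are made up.
probs = {"chair": 0.62, "table": 0.25, "cat": 0.13}

# Greedy decoding: emit the single most probable next word.
next_word = max(probs, key=probs.get)
print(next_word)  # "chair"
```

Sampling variants pick lower-probability words some fraction of the time, but either way the model is only ever choosing from a ranked word cloud.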

A neuron in machine learning terms is a matrix (i.e. a table) of numbers between zero and one. By contrast, a single human neuron is a biomechanical machine with literally hundreds of trillions of moving parts that dwarfs any machine humanity has ever built in terms of complexity. And that is just a single one of the 86 billion neurons in an average human brain.

LLMs and organic brains are completely different in design, complexity, and function, and to treat them as closely related, much less synonymous, betrays a complete lack of understanding of how one or both of them fundamentally works.

We do not teach a kindergartner how to write by having them read for thousands of years until they recognize the exact mathematical odds that string of letters B comes after string A and is followed by string C x percent of the time. Indeed, humans don’t naturally compose sentences one word at a time starting from the beginning, instead starting with the key concepts they wish to express and then filling in the phrasing and grammar.

We also would not expect that increasing from hundreds of years of reading to thousands would improve things, and the fact that this is the primary way we’ve seen progress in LLMs in the last half decade is yet another example of why animal learning and a word cloud are very different things.

For us a word actually correlates to a concept of what that word represents. We might make mistakes and misunderstand what concept a given word maps to in a given language, but we do generally expect it to correlate to something. To us a chair is an object made to sit down on, and not just the string of letters that comes after the word “the” in .0021798 percent of cases, weighted against the .0092814 percent of cases related to the collection of strings being used as the ‘context’.

Do I believe there is something intrinsically impossible for a mathematical program to replicate about human thought? Probably not. But this is not that, and is nowhere close to that on a fundamental level. It’s comparing apples to airplanes and saying that soon this apple will inevitably take anyone it touches to Paris because they’re both objects you can touch.

[-] sonori@beehaw.org 5 points 5 days ago* (last edited 5 days ago)

Like, say, treating a program that shows you the next most likely word to follow the previous one on the internet as if it were capable of understanding a sentence beyond “this is the most likely string of words to follow the given input on the internet.” Boy, it sure is a good thing no one would ever do something so brainless as that in the current wave of hype.

It’s also definitely because autocompletes have made massive progress recently, and not just because we’ve fed simpler and simpler transformers more and more data, to the point that we’ve run out of new text on the internet to feed them. We definitely shouldn’t expect that the field as a whole should be valued at what it was back in, say, 2018, when there were about the same number of practical uses and the focus was on better programs instead of just throwing more training data at it, and calling that progress that will continue to grow rapidly even though the amount of said data is very much finite.

[-] sonori@beehaw.org 12 points 5 days ago

Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word to appear in the sentence, based on the frequency with which said words appeared in the training data, is a fundamental limitation of the technology.

So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence. Instead, in order to get a coherent output, the system must be fed training data that closely mirrors the context; this is why groups like OpenAI have met with so much success by simplifying the algorithm while progressively scraping more and more of the internet into said systems.

I would argue that a similar inherent technological limitation also applies to image generation: until a generative model can both model a four-dimensional space and conceptually understand everything it has created in that space, a generated image can only be as meaningful as the parts it has regurgitated from the work of the tens of thousands of people who do those things effortlessly.

This is not required to create images that can pass as human-made, but it is required to create ones that are truly meaningful on their own merits and not just the merits of the material they were created from, and nothing I have seen said by experts in the field indicates that we have found even a theoretical pathway to get there from here, much less that we are inevitably progressing on that path.

Mathematical models will almost certainly get closer to mimicking the desired parts of the data they were trained on with further instruction, but it is important to understand that is not a pathway to any actual conceptual understanding of the subject.

[-] sonori@beehaw.org 9 points 6 days ago

I mean, regulating air pollution and managing air quality in cities was literally the reason Republican president Richard Nixon created the Environmental Protection Agency in the first place, and it has managed vehicle emissions standards for decades, so this very much feels like the agency doing exactly what it was created to do and has long done.

[-] sonori@beehaw.org 7 points 6 days ago

Isn’t Waymo famously reliant on having nearly as many remote operators as it has vehicles? It disclosed the number when first required to, and has outright refused to reveal it more recently.

[-] sonori@beehaw.org 24 points 1 week ago

At first glance I thought it was reusing the coal plant’s turbines, but looking through the article, the only connections I can find are that it’s located several miles away, that it plans to hire a hundred or so people from the coal plant it’s replacing, and that Wyoming’s Powder River Basin and its associated highly automated low-sulfur coal mines are in the vague area.

All this to say, yes it has practically nothing to do with coal.

[-] sonori@beehaw.org 4 points 1 week ago

Personally I tend to be hesitant about relaxing the dual-means-of-egress rule completely when I’ve seen buildings in Vancouver use two sets of stairs interwoven in the same stairwell to achieve the same effect with only a 30% or so increase in floor space. Even if it’s statistically not much help, knowing there are two ways out of the building in an emergency does have an advantage, and I’m not convinced that it’s actually as much of a factor in the proliferation of double-stacked corridors as them just being the cheapest way to build.

Otherwise I’m definitely a big fan of the suggestions, especially more interconnections between buildings.

63
submitted 1 week ago by sonori@beehaw.org to c/space@beehaw.org

Evidently the joints on the flaps still need a little work on not letting gases through, but there seemed to be enough actuation left to keep the spacecraft stable until the engines took over for the landing burn.

24
submitted 3 weeks ago by sonori@beehaw.org to c/space@beehaw.org

A detailed discussion of the Shuttle program, as well as some ethics in aerospace.

30
submitted 2 months ago* (last edited 2 months ago) by sonori@beehaw.org to c/usa@midwest.social

Party of personal freedom everybody.

5
submitted 2 months ago by sonori@beehaw.org to c/videos@lemmy.ml

Come for the two-hour review of Rings of Power by a guy who has Elvish on his wedding ring; stay for the Hbomberguy-style twist into a discussion of the way the far right uses the appearance of media criticism to radicalize vulnerable young men and draw them into the manosphere.

81
submitted 4 months ago by sonori@beehaw.org to c/technology@beehaw.org
A video about disposable vapes, and how addiction became the goal of every single company on the planet.

32
submitted 5 months ago* (last edited 5 months ago) by sonori@beehaw.org to c/space@beehaw.org

It’s their first ever attempt to launch a Vulcan, and they’re launching a lunar lander. The window opens at 1:53 AM EST. Here’s hoping for a successful launch.

Edit:

Liftoff at 47:40.

We saw a successful launch and translunar injection, and the Peregrine lander successfully powered on before detaching from the Centaur upper stage, which proceeded to relight its engines and complete a burn into solar orbit as part of its memorial mission.

The lunar landing attempt is expected to be on Feb 23, and it is expected to remain operational on the surface of the moon for at least ten days.

According to NASA, “Scientific instruments will study the lunar exosphere, thermal properties of the lunar regolith, hydrogen abundances in the soil at the landing site, magnetic fields, and conduct radiation environment monitoring.”

More on Vulcan and its history.

30
submitted 7 months ago by sonori@beehaw.org to c/space@beehaw.org

I don’t think this has been posted yet, but if not, here’s the summary.

https://youtu.be/O3F8aTBLLx0?si=GPVB2xtC5wwnSC6V

Just the highlights.


sonori

joined 1 year ago