SmoothIsFast

joined 1 year ago
[–] SmoothIsFast@citizensgaming.com 3 points 11 months ago

Lmao alright bud, go fire all your employees and see how you do. Then you'll understand who needs to be loyal to whom.

[–] SmoothIsFast@citizensgaming.com 13 points 11 months ago

That's fucked up.

[–] SmoothIsFast@citizensgaming.com 7 points 11 months ago (4 children)

Oh no, educated workers who don't want to be taken advantage of and know their worth. Maybe companies should value their employees if they want company loyalty.

[–] SmoothIsFast@citizensgaming.com 2 points 11 months ago

And OpenAI is not personal use?

[–] SmoothIsFast@citizensgaming.com 1 points 11 months ago (1 children)

> The trajectory was chosen by NASA because the Orion capsule on top of the SLS rocket do not have enough efficiency to be on a low regular lunar orbit while landing and bringing back astronauts. This trajectory has nothing to do with SpaceX.

Nor did I say it did. I said some brain-dead idiots sent the contract off to a company that designed a craft incapable of doing what we've done previously. Congrats, Lockheed, for fucking up our next moon program. It's you who equated that to SpaceX lmaoo

> When comparing the one rocket to land on the moon to the 15 launches (thank you for writing launches and not rockets, as Destin Sandlin wrongly did) is because the mass delivered to the surface is gigantic compared to Apollo. Why? Because we do not want to say "we did it!" We want to say "we live there!".

I mean, it really doesn't matter: are you going to have astronauts just chilling in orbit for like a year, racking up radiation, while waiting for those launches? Saying the reason we need 15 launches for Starship is specifically due to mass is such a cop-out. It's due to how limited the amount of fuel we can send up to refuel in orbit is, and it's fucking stupid at our current level of space infrastructure. We still haven't even tested it; what, we need another 4 decades for this terrible plan to come to fruition? Take note of what the Apollo engineers said about stepping stones in development: take too big a leap and you won't be able to adequately evaluate what went wrong if something does; take too small a step and you'll never reach the goal. We decided to take massive leaps with no forethought about efficiency.

> Can people stop saying SpaceX rockets explode? They do not.

No, that is precisely what occurred with Starship. You can see the shockwave from the explosion, which means the oxidizer mixed with the propellant before exploding during the flip phase; that's a major fucking failure. It was not a rupture like previous issues, nor was it terminated, it fucking exploded lmao. The worst part is that all that lovely telemetry that's supposedly gonna help them out gave zero indication of said catastrophic failure, so that's gonna be such great info for them, right? Just like the first test that failed when they knew the pad wouldn't be strong enough and caused damage to the rocket, meaning they got no actionable data?

> As of now, and evolving for Starship:
> $7B cost, 4 from NASA for the first 2 missions
> 11 years for the first tests, still no rocket
> Can bring 220,000 lb and 35,000 ft³ to the moon
> And they still end up with a rocket NASA can continue to use at a very low price (less than 25% of SLS per mission)

Starship is not a proven concept and is still actively in development; these numbers mean nothing right now. Massive issues are looming and 90% of what's needed hasn't even been tested, but go ahead and keep riding daddy musk as if he isn't killing good ideas with lofty moving goalposts and a complete lack of understanding of what's being developed.

[–] SmoothIsFast@citizensgaming.com 4 points 11 months ago (1 children)

> Your description is how pre-llm chatbots work

Not really; we just parallelized the computing and used other models to filter our training data and tokenize it. Sure, the loop looks more complex because of the parallelization and because the words used as inputs and selections are tokenized, but that doesn't change the underlying principles here.
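To make the tokenizing point concrete, here's a rough sketch (a toy word-to-id mapper I made up for illustration; real LLMs use subword schemes like BPE, not whole words):

```python
# Toy tokenizer: map each word to an integer id, the preprocessing step
# described above. This is just the idea, not a production scheme.
def tokenize(text, vocab):
    # setdefault assigns the next free id the first time a word is seen.
    return [vocab.setdefault(w, len(vocab)) for w in text.lower().split()]

vocab = {}
print(tokenize("the cat sat", vocab))  # [0, 1, 2]
print(tokenize("the cat ran", vocab))  # [0, 1, 3]
```

The model never sees words, only these ids; everything downstream is arithmetic over them.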

> Emergent properties don't require feedback. They just need components of the system to interact to produce properties that the individual components don't have.

Yes, they need proper interaction, or, you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it's creating a new system to follow the old one, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not gonna somehow make something sentient or aware. For that to happen it needs to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning there is zero feedback or interaction to create emergent properties in this system.

> Emergent properties are literally the only reason llms work at all.

No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which means they look more intelligible. That's it. Garbage in, garbage out still applies, and making it larger does not mean that garbage is gonna magically create new control loops in your code. It might increase precision, since you have more options to compare and weigh against, but it does not change the underlying system.
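The "scaled-up probability calculation" loop I mean can be sketched like this (probabilities are invented for illustration; a real model computes them from billions of weights, but the loop itself doesn't change):

```python
# Toy next-token predictor: the core loop is repeated probability lookup
# plus selection. Scaling the table up makes the output look smarter
# without changing what the loop does.
probs = {
    ("the",): {"cat": 0.5, "dog": 0.3, "rocket": 0.2},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 0.9, "up": 0.1},
}

def generate(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        dist = probs.get(tuple(tokens))
        if dist is None:
            break
        # Greedy selection: fully deterministic, and nothing here feeds
        # back into the "weights" (the probs table) during generation.
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate(("the",), 3))  # ['the', 'cat', 'sat', 'down']
```

Run it twice and you get the same output twice; that's the determinism point above.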

[–] SmoothIsFast@citizensgaming.com 1 points 11 months ago

I'm just gonna leave this here, since you want to buy into all the bullshit surrounding Starship lmao

https://www.youtube.com/watch?v=K5GevpAGDWE

[–] SmoothIsFast@citizensgaming.com 5 points 11 months ago (2 children)

No, the queue will now add popular playlists to whatever you were listening to when you restart the app, if your previous queue was a generated one. Not sure of the exact steps to cause it, but it seems like if you were listening to a daily playlist and close the app, the next day the playlist has updated, and instead of pointing to the new daily it decides to point to one of the popular playlists for the next songs in your queue. It doesn't stop the song you paused on; it just adds new shit to the queue after it once it loses track of where to point. Seems like they should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.

[–] SmoothIsFast@citizensgaming.com 18 points 11 months ago (7 children)

And I'd like to see that contract hold up in court lol

[–] SmoothIsFast@citizensgaming.com 2 points 11 months ago (1 children)

> You have no idea what you are talking about. When they train data they have two sets. One that fine tunes and another that evaluates it. You never have the training data in the evaluation set or vice versa.

That's not what I said at all. I said, as the paper states, that the model encodes truthfulness into its internal weights during training; this was then demonstrated to be more effective when training used datasets with a more equal distribution of true and false data points. If they used one-sided training data, the effect was significantly biased. That's all the paper is describing.

[–] SmoothIsFast@citizensgaming.com 2 points 11 months ago (3 children)

> If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there's more going on than simply "producing words."

It's not that more is going on; it's that it had such a large training set that these true vs. false statements are likely covered somewhere in it, and the probabilities say it should assign true or false to the statement.

And then, look at that, your next paragraph states exactly that: the models trained on true/false datasets performed extremely well at labeling true or false. It's saying the model is encoding, or setting weights for, the true and false values when that's the majority of its dataset. That's basically it; you are reading too much into the paper.
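The one-sided-data bias is easy to see with a deliberately dumb toy (a made-up majority-class "model", not the paper's actual probe): train it on 90% "true" labels and it calls everything true, which is exactly the kind of bias a balanced dataset removes.

```python
# Toy illustration of the balanced-data point: a trivial "classifier"
# that only learns the base rate of its training labels.
from collections import Counter

def majority_label(labels):
    # Predict whatever label was most common in training,
    # regardless of the actual input statement.
    return Counter(labels).most_common(1)[0][0]

biased = majority_label(["true"] * 9 + ["false"])
print(biased)  # 'true' -- a one-sided training set bakes in the bias
```

With a 5/5 split there is no majority to lean on, so the model is forced to encode something about the statements themselves.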

[–] SmoothIsFast@citizensgaming.com 1 points 11 months ago

AI has been a thing for decades. It means artificial intelligence; it does not mean large language model. A specially designed system that operates on predefined choices or operations is still AI, even if it's not a neural network and looks like classical programming. The computer enemies in games are AI; they artificially mimic an intelligent player. The computer opponent in Pong is also AI.
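For a sense of how simple that classic kind of game AI can be, here's a rough sketch of a Pong-style opponent (my own toy, not any particular game's code):

```python
# Minimal Pong "AI": one fixed rule (track the ball) with a capped speed.
# No learning, no model -- still AI in the classic sense, because it
# artificially mimics an intelligent opponent.
def paddle_move(paddle_y, ball_y, max_speed=4):
    """Return how far the paddle moves this frame."""
    delta = ball_y - paddle_y
    # Clamp to max_speed so the opponent stays beatable.
    return max(-max_speed, min(max_speed, delta))

print(paddle_move(100, 150))  # 4  (ball far below: full speed down)
print(paddle_move(100, 98))   # -2 (ball slightly above: small correction)
```

Predefined rules like this are a whole category of AI on their own, separate from LLMs.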

Now, if we want to talk about how stupid it is to use a predictive algorithm to run your markets when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades, we could be here for hours. Just know it's not an LLM; there are different categories of AI, and an LLM is its own category.

 

Is it just me, or has iRacing been feeling more like bumper cars recently? I haven't been playing iRacing for too long, so I'm curious about others' perspectives, but recently it's felt like the driving standard of those competing, even in higher splits, has dramatically dropped. I'm constantly competing with people using others as brakes for corners, or taking out half the pack in like the first or second lap when acting like a hero makes zero sense. Granted, I'm not the fastest out there, but it feels like people's common sense to back out of a maneuver has vanished, and the willingness to take a corner 3-wide without adjusting speed or braking zones has dramatically increased as of late. Just wanted to see what others' takes are. Maybe I'm just getting too sucked in as of late and letting little shit affect me, but it's killed a lot of fun these past couple weeks. Maybe the lower splits just had more people focused on learning the craft, vs. people in higher splits thinking they're Max Verstappen, idk.
