An engineer with the agency's Deep Space Network, which operates the radio antennas that communicate with both Voyagers and other spacecraft traveling to the Moon and beyond, was able to decode the new signal and found that it contains a readout of the entire FDS memory.
What's really cool is they wanted to inspect the FDS to see if any parts of it are corrupted, and it was sending a whole damned readout back to us the entire time. No one could figure that out until now, though.
Right! I wonder how the probe sent an entire memory dump back without them realizing. Was it programmed to do that when a system failed or something?
That person who enabled the debug flag on their last command is shitting their pants at the moment
Good question. Makes me wonder if it's part of a system debug programmed into it that was forgotten or something. The person who put it in could be long gone and may never have documented it.
It's very well documented, just 4-8 documentation systems ago and never migrated because no one thought it was important.
It's insane to me how many government agencies simply forget about things because nobody thought a certain file or document was important enough to update, and the only ways to access the information are to find the person who wrote it, find where it's being stored and dig through millions of unrelated files, or spend a ton of money to reverse engineer the thing you once made.
Just look at the US nuclear arsenal. Some of the warheads they began updating back in the day no longer had any documentation due to how many times the files changed hands. Things got lost. People moved to other projects or left the line of work altogether. There was no way to get the full process to make another one, so they threw money at it until they figured out how to make it.
How many files have accidentally fallen into a box that got shredded? How many times has something been lost to the entirety of Mankind because it fell behind a shelf (and who wants to spend the afternoon moving the entire shelf for a single file)?
to find the person who wrote it
Somebody who was twenty years old when Voyager 1 launched is now 67. Even the junior members of the team are retired now. The senior members are way beyond the average life expectancy.
It was a completely different world back then. I'm not justifying it, but it was the way it was.
I grew up in the 80s.
I used to draw a lot, make comics, recreate the covers of my favorite music albums, etc. I also liked to record whatever thing I thought was funny in cassette tapes.
Back then, I didn't have the mindset like "I should archive this. Who knows when I will need it!"
I saved my Commodore VIC-20 games and programs, and stuff I wrote, on cassette tapes too. I had a notebook detailing my projects, etc. Again, no "let's back this up or keep it around for years to come."
When it was time to dispose of things, you just..... did. Or reused the cassettes, or the notebooks or whatever.
Granted, my use case is way different from that of a government with nuclear warheads. But yup. Different time, different world, different mindsets.
Is that true, or are you making a joke? Because the documentation is probably a big binder of paper.
It was a joke, it's how I tend to find most documentation at work, no matter where I am working.
Could also be that the thing had a buffer-overflow kind of fault. Instead of sending just its intended buffer, the check for the end has broken and it's continuously sending the entire contents of its memory.
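To illustrate that failure mode, here's a minimal toy sketch (not the actual FDS code, whose layout and word sizes are unknown to us): a telemetry routine is supposed to send only its output buffer, but if the end-of-buffer check is broken, it keeps walking through the rest of memory. All names and sizes here are made up for illustration.

```python
MEMORY = list(range(100))      # stand-in for the FDS's full memory
BUF_START, BUF_LEN = 10, 4     # the intended telemetry buffer

def send_telemetry(end_check_ok=True):
    """Return the list of memory words that get downlinked."""
    sent = []
    addr = BUF_START
    while addr < len(MEMORY):
        sent.append(MEMORY[addr])
        # A working end check stops at the buffer boundary;
        # with the check broken, nothing stops the walk until
        # the end of memory itself.
        if end_check_ok and addr >= BUF_START + BUF_LEN - 1:
            break
        addr += 1
    return sent

print(len(send_telemetry(True)))    # 4  -- just the buffer
print(len(send_telemetry(False)))   # 90 -- everything to end of memory
```

The broken case ends up transmitting the whole memory image past the buffer, which is exactly why such a fault can look like a deliberate full readout from the ground.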
There's a little bit more context (although not a lot) at the NASA blog which seems to be the source for this article. Basically it looks like they instructed it to go to different memory addresses and run whatever code was there in order to try to bypass any corrupted sections. One result was this memory dump. The reason they didn't immediately identify it was that it wasn't properly formatted in the normal way.
oh, to be part of the core team of engineers for this - decoding ancient schematics and code, your entire focus on keeping this project alive. absolutely legendary stuff.
I've got an Apple II+ that was doing weird shit. Turns out after a lot of sleuthing that it was a single bad DRAM chip, which due to the way that system handles RAM would show up as single unpredictable bits in various locations.
NASA, seek me out if y'all get stuck.
It said that they're comparing the memory from this signal to a readout from when it was in a known-good state, and that it could (possibly) take months. I wondered why they couldn't just diff it. But as I'm typing, they probably need to account for measurements and data collected in between, as opposed to just resetting something to the previous state.
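The comparison described above could be sketched like this: diff the fresh readout against the known-good image, but mask out regions that are expected to differ (live data buffers, counters, etc.) so only genuine corruption shows up. This is purely illustrative; the real memory layout and which regions are volatile are assumptions.

```python
known_good = [0xA5] * 16           # toy known-good memory image
readout    = list(known_good)      # fresh dump from the spacecraft
readout[3]  = 0x5A                 # a genuinely corrupted word
readout[12] = 0x00                 # changed, but inside a volatile region

# Address ranges that legitimately change between dumps (made up here).
VOLATILE = [range(10, 14)]

def is_volatile(addr):
    return any(addr in r for r in VOLATILE)

def diff(good, dump):
    """Addresses that differ and are NOT expected to change."""
    return [addr for addr, (g, d) in enumerate(zip(good, dump))
            if g != d and not is_volatile(addr)]

print(diff(known_good, readout))   # [3] -- only the real corruption
```

A naive word-for-word diff would flag address 12 too, which may be part of why the analysis takes so long: deciding which differences are expected is the hard part.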
Flip those bits, NASA. I'm hopeful.
Sending a Raspberry Pi to them?