True, but that's why my current idea is the following:
As part of the workflow, GitHub will build the executable, compute a few different hashes (sha256sum, md5, etc.), and those hashes will be printed in the GitHub logs. In that same workflow, GitHub will upload the files directly to the release.
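To make this concrete, here is a minimal sketch of what such a workflow step could run (the build command, file names, and release tag are placeholders, not from any real project):

```bash
# Hypothetical commands for a `run:` step in the GitHub Actions job.

# 1. Build the executable (placeholder build command and output path).
make release    # assume this produces dist/myapp

# 2. Compute a few checksums and print them, so they appear in the
#    publicly visible Actions log and can also be attached to the release.
sha256sum dist/myapp | tee checksums-sha256.txt
md5sum    dist/myapp | tee checksums-md5.txt

# 3. Upload the binary and the checksum files to the release from within
#    the same workflow. The `gh` CLI is preinstalled on GitHub-hosted
#    runners; it needs GH_TOKEN set, and "v1.2.3" is a placeholder tag.
gh release upload v1.2.3 dist/myapp checksums-sha256.txt checksums-md5.txt
```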
So, if someone downloads the executable, they can compute the sha256sum and check that it matches the sha256 that was computed by GitHub during the action.
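On the user side, that check could look something like this (file names are placeholders):

```bash
# Compute the checksum of the downloaded asset and compare it by eye with
# the value printed in the Actions log / published in the release.
sha256sum myapp

# Or, if the checksum file from the release was downloaded as well, let
# sha256sum do the comparison (it prints "myapp: OK" on a match).
sha256sum -c checksums-sha256.txt
```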
Is this enough to prove that the executable they are downloading is the same executable that GitHub built during that workflow? Since a workflow run is associated with a specific push, it is possible to check the source code that was used for that workflow.
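For that last point, the workflow could also print which commit and which run produced the artifact, using environment variables that GitHub Actions sets by default (just an illustrative sketch):

```bash
# Record exactly which commit was built and which run produced the artifact,
# so a reader of the log can map the binary back to a specific source tree.
echo "Built from commit: ${GITHUB_SHA}"
echo "Workflow run: ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}"
```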
In this case, I think the only one with the authority to fake the logs or tamper with the source during the build process would be GitHub, and it would be really hard for them to do so because they would need to prepare in advance specifically for me. Once the workflow goes through, I can save the hashes too, and after that both GitHub and I would need to conspire to trick the users.
So, I am trying to understand whether my idea is flawed and there is a way to fake the hashes in the logs, or whether I am over-complicating things and there is already a mechanism in place to guarantee where a build came from.
As long as maintainers can upload arbitrary files to a release, this is not enough, I think. There is no distinction between release files coming from the build process, and release files just uploaded by the maintainer.
But if, during GitHub's build process, the sha256sum of the output binary is printed, and that hash matches what is in the release, isn't this enough to demonstrate that the binary in the release is the binary built during the workflow?
Well, kind of.
If the printed hash checksum matches the one published in the release, and it also matches the hash checksum of the release files, then it guarantees that the release files were produced by the GitHub build process. However, it's quite involved to verify that the published hash checksum is the same one that was printed by the build process. This could probably be solved by having GitHub sign all release builds with their own keys. Since signing key pairs rarely change, this could be an easier way to verify.
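As far as I know, GitHub does not sign release binaries like that today, so that part is hypothetical. The closest common practice is the maintainer signing the checksum file; verification would then look roughly like this (key and file names are placeholders):

```bash
# Import the signer's public key (obtained out-of-band, e.g. from the
# project website or a keyserver; placeholder file name).
gpg --import signer-public-key.asc

# Verify the detached signature over the checksum file...
gpg --verify checksums-sha256.txt.asc checksums-sha256.txt

# ...and only then check the downloaded binary against the verified checksums.
sha256sum -c checksums-sha256.txt
```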
This would verify that the binary was built during the GitHub Actions workflow, but only that. Unfortunately, there is much more to it.
First, in the build process, GitHub will use whatever build scripts and instructions the repo maintainer has specified in the GitHub Actions files. The purpose of one of the build scripts may be simply to throw away the checked-out sources and download a different set from somewhere else. Or to add just one extra dependency, or just one file, that compromises the software. However, if you have verified yourself that the build scripts only work with reputable sources of dependencies, the repository in question, and other repositories of the maintainer that you have also inspected, then it's probably not really a problem.
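As one concrete illustration of what "reputable sources" can mean, a build step could pin any downloaded dependency to a checksum that was reviewed and committed to the repo, instead of trusting whatever the URL happens to serve (URL and hash below are placeholders):

```bash
# Hypothetical build-script fragment: refuse to use a fetched dependency
# unless it matches the checksum stored in the repository.
DEP_URL="https://example.com/libfoo-1.0.tar.gz"       # placeholder URL
DEP_SHA256="<expected sha256 committed in the repo>"   # placeholder value

curl -L -o libfoo.tar.gz "$DEP_URL"
echo "${DEP_SHA256}  libfoo.tar.gz" | sha256sum -c - || exit 1
```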
But then there is also the question of whether you trust GitHub (and therefore Microsoft, and also the USA because of its laws) to always build from the sources and add nothing more.
Yesterday I would have said 'blah, they would not care about my particular small project'. But since then I read the paper recommended by a user in this post about building a compromised compiler that inserts a back-door into a certain type of login field. I now think it is not so crazy that intelligence agencies might collude with Microsoft to insert specific back-doors that somehow allow them to break privacy-related protocols or even recover private keys. Many of these protocols rely on a few fundamental primitives, so a compiler could recognize and exploit them. I came here for a practical answer to a simple practical situation, but I have learned a lot extra 😁
I think there is more to this. Maybe you are targeted because you (or your project) reach someone else (the actual target, whom you may not even know), but I could also imagine it happening like data mining in recent years: they are not after me or you, they are after everyone and anyone they can reach.