Include hash sum locations of AppImages - and popularize if good idea #2830
Why not use embedded digital signatures for this purpose?
Because embedded digital signatures are much more complicated to implement for all parties involved, and that would prevent fast and widespread adoption. Both can of course be used, but hashing is very simple and useful to implement everywhere, and the signature part with certificates is solved by SSL. Also, if I want to verify some random AppImage I found on a torrent site, I do not need to import any public keys; I just look up the hash on the official website of the software producer.

Example (screenshot omitted): hashing can also be included in a right-click context menu or Properties window, like some Windows apps do.

Fedora offers SHA256 hashsums of releases, and those hashsums are also signed by PGP. However, that PGP relies on the SSL certificate - so if the SSL connection is compromised, or I get phished, or I mistake the domain, PGP will not help me. https://getfedora.org/static/checksums/35/iso/Fedora-Workstation-35-1.2-aarch64-CHECKSUM

Also, what if small software producers are hacked and their signing keys are compromised? There should be some revoke-alert system, which adds process complexity for them and implementation complexity for the AppImage community. Signing is good, it is just too complex to start with on a mass scale, while hashes are simple and do the job of verification.
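As a rough sketch of that manual hash lookup (the URL is a placeholder; the real one would live on the producer's own site):

```sh
# Compute the checksum of the downloaded file and compare it with the published line
curl -fsSL https://example.org/releases/appimage-hashes.txt | grep someprogram.AppImage | sha256sum -c -

# Or simply compare by eye:
sha256sum someprogram.AppImage
curl -fsSL https://example.org/releases/appimage-hashes.txt
```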
Indeed, that is a valid reason. The screenshot above shows the HashTab software doing this. So this would need to be implemented by the file managers of desktop environments such as KDE, Xfce, helloDesktop, etc. Maybe @TheAssassin wants to share his view on this as well.
Yes, wide support from two parties is needed: software authors, who publish the hashes behind HTTPS, and the file managers or package-manager tools on the user side, which compute and compare them.
Then the user can verify it manually or semi-manually as described in the original post. The hard problem to solve is convincing many companies to publish hashes of their AppImages. Maybe such hashsum-and-SSL-based verification could become part of the official AppImage standard, as a description of how to verify the authenticity of any AppImage file. That could popularize it among software authors. Right now there are many AppImage files all over the internet with unknown authenticity, possibly malicious, and without a clear trace to any Git repository of the AppImage distributor.

As for an embedded hashsum - it is possible (there are at least two methods). However, it is not necessary to include the hash in the AppImage file itself, because anybody can alter it anyway. The hash must come from the software author directly and be compared to the computed (real) hash of the file. However, the URL of the hash could be embedded (ex. in hash_url.txt) and could even be part of the AppImage specification. That way, AppImage distributors do not need to provide any additional hash with the package. Maybe there should be two files, author_hash_url.txt and distributor_hash_url.txt, so that if the author does not create AppImages but the distributor builds them, the distributor can include its own URL.
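A minimal sketch of how a tool could read such an embedded URL, assuming the proposed hash_url.txt sits at the top level of the AppDir (the file itself is hypothetical; `--appimage-extract` is the existing type-2 runtime option):

```sh
# Unpack the AppImage payload (type-2 runtime option) and read the proposed hash_url.txt
./someprogram.AppImage --appimage-extract >/dev/null
HASH_URL=$(cat squashfs-root/hash_url.txt)

# Fetch the published hash over HTTPS; the domain of $HASH_URL is what the user verifies
curl -fsSL "$HASH_URL"
```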
We could make appimagetool generate such a hash automatically.
Yes, that would be nice - but it should create a hash that will be placed online. Anybody can alter hashes inside a package, so they should not be trusted. The real hash must be stored behind HTTPS; the verification process then hashes the file and compares the result with that real hash. appimagetool could produce a line to be appended to the hash file (the specification draft from the original post), for example:
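For illustration, assuming the plain coreutils sha512sum output format (the file names are placeholders):

```sh
# Append the standard sha512sum output line for the freshly built AppImage
sha512sum someprogram.AppImage >> appimage-hashes.txt

# appimage-hashes.txt then contains lines of the form:
#   <128 hex characters of SHA-512>  someprogram.AppImage
```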
Please stop using the term "cryptographic hashsum"... you're talking about checksums, not hashsums, not cryptographic hash algorithms.

I did have a much longer reply in the works, but I don't have the time to dissect every statement and explain why someone is wrong there. The following is just a summary of my findings.

The solution for authenticity checks is clearly to use signatures. Do not try to create some "authentication" scheme using plain checksums. You are most likely not going to get it right, but you can easily create a false sense of security by adding such snake oil.

This proposal is not convincing at all. The author appears to have misconceptions, because I see lots of contradictions and false assumptions. The proposed protocol is not supported anywhere at this point, whereas PGP is well supported. It is basically "home grown crypto" (always a bad sign), and it is not "not complicated" as the author claims either. You might be able to get it working security-wise by fixing the flaws, but the user ultimately has to validate the origin of the "authentication data", much like is required with the existing support for PGP signatures. There is no apparent benefit; instead the attack surface is increased by changing the bit the user has to verify from a PGP key (fingerprint) to some URL. So where's the gain of this protocol?

I see lots of open questions. For instance, "what's the substitute for CRLs in your protocol" (which you correctly recognized too, see below). Also, you're assuming some "educated user" is going to do all the manual work, but I miss a solution for "average Joe" (not talking about generating checksums, but about making sure that URL is trustworthy). Anyway, continuing this list is a waste of time, honestly.

Embedded PGP signatures are not complicated to use; what is missing is the right tooling to work with them, but you could say the same about this protocol proposal. In case you want to make it easy, feel free to also have non-embedded PGP signatures, which can be worked with using plain gpg.
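For the non-embedded variant, a minimal sketch of how a detached signature can be handled with plain gpg; the key file and AppImage names are placeholders:

```sh
# One-time: import the author's public key and verify its fingerprint out of band
gpg --import author-public-key.asc

# Per download: verify the detached signature shipped next to the AppImage
gpg --verify someprogram.AppImage.sig someprogram.AppImage
```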
I don't just want to claim someone has misconceptions without giving at least one example:

> However, that PGP relies on the SSL certificate - so if the SSL connection is compromised, or I get phished, or I mistake the domain, PGP will not help me.

How would your checksum help you there, though? It relies on that exact same TLS certificate. The key difference between using PGP and "checksums downloaded over TLS" is that nobody pins the key for TLS, but they do for PGP. I just need to make sure the PGP key is right once, then I can use it to authenticate all downloaded images in the future. Of course, pinning the TLS key is used a lot nowadays too, for instance in mobile apps (to protect against reverse engineering, mostly). But using an authentication scheme that does not rely on TLS has a few advantages. No matter where the files are served from, you can always authenticate them after downloading. That's why there can be dozens of mirrors which are not maintained by the same project.
> ...the signature part with certificates is solved by SSL.

No, it's not "solved by SSL". You use TLS as a cryptographic layer, sure, but what your scheme boils down to is "let's ask the user whether to trust a URL" - and how to do that properly? I'm sure most people can at most have a look at the domain name, perhaps open it in the browser to see what's going on. This is much weaker than using PGP keys, because "looking good in the browser" is basically "oh yeah, no warning about broken TLS" and maybe "there's their logo, must be their homepage". Why bring the TLS CA infrastructure into the mix? With PGP, one just authenticates the key. One way is to download it from a "trustworthy website", but it's not limited to this. Still, once that issue is solved, you have pinned the key, which is a great benefit over "pinning the URL".
> Also, if I want to verify some random AppImage I found on a torrent site, I do not need to import any public keys; I just look up the hash on the official website of the software producer.

That's nonsense. You don't have to search for the signature online. You'd look for the key instead. Why is this any harder than searching for a checksum?
> Also, what if small software producers are hacked and their signing keys are compromised? There should be some revoke-alert system [...]

That is a good point that has not really been solved for embedded signatures. I think AppImageUpdate should always check the keyservers to see whether a key has been revoked. Your protocol doesn't have a solution to this either. Would you just distribute a list of "known bad hashes"? Why should appimage.org even maintain such a list? And why should anyone trust appimage.org, which they may never have heard about before?
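For illustration, one way such a revocation check could look from the command line, assuming the key was originally obtained from a keyserver (the key ID is a placeholder):

```sh
# Re-fetch the pinned key from a keyserver; a published revocation certificate
# would be imported here, and gpg would then warn that the key is revoked
gpg --keyserver hkps://keys.openpgp.org --refresh-keys 0xDEADBEEF
```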
Also, I want to emphasize that this is completely the wrong forum for such a discussion.
Indeed, I write about cryptographic hash algorithms, which on Linux distributions are available as programs such as sha256sum and sha512sum. I recommend reading posts with understanding of the relevant points and responding with real arguments.
You listed zero.
Yes, that is why I propose it :) But it is not entirely true that it is not used anywhere - it is used every time software is downloaded from an HTTPS website; of course, that is just "manual mode". The proposal can be implemented semi-automatically, see the original post. Also, PGP is much more complicated to implement and use for all parties involved (including the average user coming from Windows), as was already communicated.
It is not "home grown crypto". It uses cryptographic hash functions (https://en.wikipedia.org/wiki/Cryptographic_hash_function), which are widely available on Linux distributions (ex. sha256sum, sha512sum, ...), and it uses SSL with the existing PKI. PGP is much more complicated for all parties involved (including the average user coming from Windows) than this proposal. The proposal contains end-to-end explanations of how this can work, including what each party in the system should provide. I recommend that you point out drawbacks in the proposed system rather than just arguing that "PGP is better". It does not matter that something is better when it is not used. Also, the proposal is not cryptographically weak.
See the post below; the discussion about validating the origin comes later in the text.
I do not argue that signing is better or worse than this scheme in principle. I argue that this scheme is much easier for all involved parties (producers, distributors, package manager makers, users), that it is sufficient to verify the integrity of a file with strong cryptographic hashing (even signing uses strong cryptographic hashing), and that the solution is no worse than browsing the HTTPS web and downloading packages from an HTTPS website. Signing and PGP verification can be done too - you can go and convince all parties to use it.
Yes, that was my point - PGP relies on SSL and domain verification in the general case, just as the proposed scheme relies on SSL and domain verification. If the user downloads a fake PGP public key from a fake domain, that is bad. But the point is that hashes, SSL and domains are much simpler for the average user than PGP. And the proposed scheme is much more secure than the current state, where AppImages are scattered all over the internet without any PGP, hash, or even a reference to the author/distributor.
Or you can just get phished once... and never find out that you are happily downloading new malware with every update. It is just about the risk/usability ratio.
It is solved by SSL the same way website authenticity is solved by SSL - the user just checks both the domain and the SSL certificate. Nowadays certificates themselves are of little help with generic Let's Encrypt certificates, but the domain check is good enough for an AppImage published on a website, so it is good enough for an AppImage verified locally with a hashsum and a domain check. Again, I do not argue that this is better security than signing. I argue that this is MUCH better than the current state of the AppImage world and MUCH more feasible for all parties involved (including the average user) than a PGP system. And it still offers a strong guarantee for verifying the authenticity of a file.

Also, pinning of the URL can be done by a trusted/audited package manager by adding the domain to a list of trusted domains. That is the equivalent of adding a PGP public key to the system in the PGP scheme. Another point is that AppImage distributors must somehow direct the user to download the PGP key. How can they do that? There is no other way than to show the domain! So again, PGP relies on the ability of the user to verify the domain.
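A hedged sketch of what such domain pinning could look like in a helper tool; the config path and list file are hypothetical and not part of any existing AppImage tooling:

```sh
# Hypothetical trusted-domain list kept by a package manager (analogous to an imported PGP key)
TRUSTED=~/.config/appimage-verify/trusted-domains

HASH_URL="https://example.org/releases/appimage-hashes.txt"
DOMAIN=$(printf '%s\n' "$HASH_URL" | awk -F/ '{print $3}')

if grep -qxF "$DOMAIN" "$TRUSTED" 2>/dev/null; then
    echo "domain $DOMAIN already trusted"
else
    echo "new domain $DOMAIN - ask the user once, then append it to $TRUSTED"
fi
```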
It is not nonsense. When the average user does not have the key from the authors of the software (the most probable case), they first have to find and verify that key somehow - which again comes down to verifying a domain. Or the average user can just find out the hash of the file (with a GUI tool or sha256sum), search for the hash on the web with a search engine, and verify the domain that the search engine found. Then, visually or with some tool, compare the two hashes. That is it. No keys, no nothing; a simple comparison of hashes for verification of integrity and authenticity.
A list of known bad hashes is not needed here, because the distributor distributes the AppImage plus the URL of the hash. A distributor would usually distribute the correct URL of the hash (the correct domain). If the distributor is malicious, the user will see that the URL is wrong. The difference is that in this case no such list is required, because the real verification is done by the user verifying the domain, not by trusting that the relevant signing private key has not leaked. But yes, the SSL certificate must not be compromised for this scheme to work. Then again, the SSL certificate must not be compromised for obtaining the correct PGP public key either. So in principle, as a qualitative judgement, both PGP and this proposal rely on SSL (excluding postal delivery of PGP keys and the like). To make any quantitative judgements, specific cases are needed.

As an additional feature, it could be beneficial to build a distributed, community-based blacklist of bad (malicious) hashes and optionally have package managers use it to warn users (package managers can be trusted or audited) - similarly to how antivirus companies share malware databases with antivirus clients. However, it is not needed for security reasons - contrary to the PGP system.
tl;dr: The proposed system is equivalent to downloading software with a web browser.

How downloading/installing software with a web browser works

- The user opens the producer's website over HTTPS and checks the domain (and SSL certificate).
- The user downloads the software package from that site and installs or runs it.

How downloading/installing software with the proposed system works

- The user downloads the AppImage from anywhere (the author's site, a community mirror, a torrent).
- The user (or a tool) computes the hash of the file, fetches the published hash from the author's HTTPS URL, checks the domain, and compares the two hashes.
Clearly it is equivalent to downloading software with a browser from sites with SSL certificates. The user is responsible for verification of the software package in both cases. The user is also responsible for verification of all future downloads, or the package manager can mark the domain as trusted for later use. The user story behind AppImage is just that: download a file, make it executable, and run it.
Hi. AppImages are great. There are many community-maintained sites which host AppImages. However, the user has to trust the distributor and the distribution infrastructure of such community sites. Serious security incidents can happen; see "How one man could have hacked every Mac developer (73% of them, anyway)".
A good state for the AppImage ecosystem would be one where every open-source and closed-source software author offers an AppImage on their own site behind a valid SSL certificate. However, the infrastructure for a high-bandwidth site or a CDN for software releases may not be feasible for smaller producers to maintain.
I propose an alternative system and convention which could be used as a base layer for integrity verification in the AppImage ecosystem. It is easy to implement, so the involved parties can adopt it without problems.
Software authors and community

- Publish the cryptographic hash (ex. SHA-512) of each released AppImage at a stable URL on the author's own website, behind a valid SSL certificate.

Software users

- Download the AppImage from anywhere (the author's site, a community mirror, even a torrent site).
Then users can verify the integrity and authenticity of any AppImage file on their machines:
Verification on the user side can be done manually or semi-automatically:
Manual verification

The user runs `sha512sum someprogram.appimage` and compares the output hash with the hash at the URL (ex. firefox.org/releases/appimage-hashes.txt).

Semi-automatic verification (just one user action required)
A package manager or helper tool reads the URL of the hash, shows its domain to the user, and the user confirms it (for example by answering y). The tool then downloads the published hashsum and checks that sha512sum(someprogram.appimage) == downloaded_hashsum_of_someprogram.
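A minimal sketch of this semi-automatic flow, assuming the hash URL comes from the proposed hash_url.txt or is supplied by the user; all names and URLs here are placeholders:

```sh
#!/bin/sh
APPIMAGE="someprogram.appimage"
HASH_URL="https://example.org/releases/appimage-hashes.txt"  # e.g. read from an embedded hash_url.txt

# The single user action: confirm that the domain of HASH_URL really belongs to the author
printf 'Trust hashes published at %s? [y/N] ' "$HASH_URL"
read -r answer
[ "$answer" = "y" ] || exit 1

published=$(curl -fsSL "$HASH_URL" | grep -m1 "$APPIMAGE" | awk '{print $1}')
local_hash=$(sha512sum "$APPIMAGE" | awk '{print $1}')

if [ -n "$published" ] && [ "$published" = "$local_hash" ]; then
    echo "OK: hashes match"
else
    echo "WARNING: hash mismatch or hash not found" >&2
    exit 1
fi
```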
Proposal for a standard AppImage hashsums file (draft)
A more precise specification of the format can be found here, where I wrote about the idea in a wider context: srevinsaju/zap#66
To make this practical, it needs support from both software authors (producers) and the AppImage community.