I support free and open source software (FOSS) like VLC, qBittorrent, LibreOffice, GIMP…
But why do people say that it’s as secure as or more secure than closed source software? From what I understand, closed source vendors don’t disclose their code.
If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.
But open source projects have their code available to the entire world on GitHub or GitLab.
Isn’t that actually also helping hackers?
You live in some Detroit-like hellscape where everyone everywhere 24/7 wants to kill and eat you and your family. You go shopping for a deadbolt for your front door, and encounter two locksmiths:
Locksmith #1 says “I have invented my own kind of lock. I haven’t told anyone how it works; the lock picking community doesn’t know shit about this lock. It is a carefully guarded secret, and only I am allowed to know the secret recipe of how this lock works.”
Locksmith #2 says “Okay, so the best lock we’ve got was designed in the 1980s. The design is well known, the blueprints are publicly available, the locksport and various bad-guy communities have had these locks for decades, and the few attacks they did get working were fixed by the manufacturer, so they don’t work anymore. Nobody has demonstrated a successful attack on the current revision of this lock in the last 16 years.”
Which lock are you going to buy?
It’s not “assumed to be secure.” The source code being publicly available means you (or anyone else) can audit that code for vulnerabilities. The publicly available issue tracking and change tracking mean you can look through bug reports and see if anyone else has found vulnerabilities, and you can, through the change history and the bug report history, see how the devs responded to issues in the past, how they fixed them, and whether or not they take security seriously.
Open source software is not assumed to be more secure, but its security (or lack thereof) is much easier to verify. You don’t have to take the dev’s word on whether or not it is secure, and (especially for the more popular projects like the ones you listed) you have thousands of people, with different backgrounds and varying specialties within programming, with no affiliation with the project and no reason to trust it, doing independent audits of the code.
Because “some nerd out there probably would have found any exploits in the X years it’s been released” is the general assumption about open source software.
If I can see the code, I can see if said code is doing something fucky. If I can’t see the code, I have to just have faith that it’s not doing something fucky.
You theoretically can see the code. You don’t actually look at it. Nor do you have the knowledge to understand the security implications of all the software you use.
In practice it makes little difference for security if you use open or closed source software.
No, you literally can see the code; that’s why it’s open source. YOU may not look at it, but people do. Random people, complete strangers, unpaid and with no stake in the project. The alternative is a company that pays people to say “Yeah, it’s totally safe.” That conflict of interest is problematic. Also, depending on what it’s written in, yes, I do sometimes take the time. Perhaps not for every single thing I run, but any time I run across niche projects, I read first. To claim that someone can’t understand it is wild. That’s a stranger on the internet; your knowledge of their expertise is 0.
In practice, 1,000 random people on the internet with no reason to “trust you, bro” being able to audit every change you make to your code is far more trustworthy than a handful of people paid by the company they represent. What’s worse is that if Microsoft were to have a breach, then maybe 10 people on the planet know about it. 10 people with jobs, mortgages, and families tied to that knowledge. They won’t say shit, because they can’t lose that paycheck. Compare that to, say, the XZ backdoor, where the source is available and the issue gets announced, so people know exactly who, what, and where, and can resolve it.
The code being public helps with spotting issues or backdoors.
In practice, “security by obscurity” doesn’t really work. The code’s security should hinge on the quality of the code itself, not on the number of people who know it.
It also provides some assurance that the service/project/company is doing what they say they are, instead of “trust us”.
Meta has deployed code so criminal that everyone who knew about it should be serving hard jail time (if we didn’t live in corporate dictatorships). If their code were public they couldn’t pull shit like this anywhere near as easily.
Yuup. “Security by obscurity” relies on the attacker not understanding how software works. Problem is, hackers usually know how software works, so that barrier is almost nonexistent.
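To make that concrete, here’s a minimal Python sketch (purely illustrative; nothing here is from a real project) of the difference between a scheme whose only “secret” is the source code and a design following Kerckhoffs’s principle, where the algorithm can be public because the only secret is the key:

```python
import hashlib
import hmac
import secrets

# --- "Security by obscurity": the source code IS the secret ---------------
HARDCODED_KEY = 0x42  # anyone who reads or reverse engineers this has everything

def obscure(data: bytes) -> bytes:
    # XOR with a constant byte: looks scrambled, but collapses the moment
    # someone inspects the code (or the binary).
    return bytes(b ^ HARDCODED_KEY for b in data)

# --- Kerckhoffs's principle: algorithm public, only the key is secret -----
def tag(data: bytes, key: bytes) -> bytes:
    # HMAC-SHA256: publishing this code tells an attacker nothing useful
    # without the key, which lives outside the source.
    return hmac.new(key, data, hashlib.sha256).digest()

if __name__ == "__main__":
    key = secrets.token_bytes(32)   # the only secret, generated per deployment
    msg = b"open the valve"
    print(obscure(msg))             # reversible by anyone who has seen the source
    print(tag(msg, key).hex())      # stays unforgeable even with the source public
```

The XOR version is “secure” only until someone reads the code; the HMAC version doesn’t care who reads it, which is the whole point.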
“The code being public helps with spotting issues or backdoors.”
A recent example of this is the lengths the TALOS group had to go to to reverse engineer Dell ControlVault, which affects hundreds of models of Dell laptops. This blog post goes through all of the steps they had to take to reverse engineer things, and they note that, fortunately, there was some Linux support with publicly available shared objects with debug symbols, which helped them reverse the ecosystem. Dell has all this source code and could have identified these issues much more easily themselves, but didn’t, and shipped an insecure product leaving customers vulnerable.
With open source code you get more eyes on it. Issues get fixed quicker.
With closed source, such as Photoshop, only Adobe can see the code. Maybe there are issues there that could be fixed. Most large companies have a financial interest in having “good enough” security.
One thing people tend to overlook is: Development costs money. Fixing bugs and exploits costs money.
In a closed source application, no one will see that your software is still working with arcane concepts that weren’t even state-of-the-art when they were written 25 years ago. The bug that could easily be used as an exploit? Sure, the developer responsible for it informed his manager around 50 times that he needs time and someone from the database team to fix it. And got turned down 50 times, because it costs time and “we have to keep deadlines! And no one has noticed this bug so far, so why would anyone notice now?”
Lots of open source software uses arcane concepts because lots of it is old. See Xorg as a prime example. That was outdated 20 years ago already.
Closed source software gets exploited and hacked all the time. They take security seriously as well.
Look at OpenSSL and Heartbleed and similar high-profile security failures for how even using high-profile open source software is not automatically more secure.
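For context on Heartbleed: it boiled down to trusting a length field supplied by the client, so the server echoed back more memory than the client had actually sent. Here’s a loose Python model of that pattern; the real bug was a C buffer over-read in OpenSSL’s TLS heartbeat code, and everything below (names, buffer contents) is invented for illustration:

```python
# A loose model of the Heartbleed pattern: trusting the length the client
# *claims* to have sent. This only mimics the logic; the real flaw was a
# memory over-read in C, not a Python slice.

SERVER_MEMORY = bytearray(64)
SERVER_MEMORY[8:31] = b"SECRET_SESSION_KEY_1234"   # data that should never leave the server

def heartbeat_buggy(payload: bytes, claimed_len: int) -> bytes:
    # Store the payload, then echo back `claimed_len` bytes from the same
    # region -- without ever checking claimed_len against len(payload).
    SERVER_MEMORY[:len(payload)] = payload
    return bytes(SERVER_MEMORY[:claimed_len])        # over-read leaks adjacent data

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    if claimed_len > len(payload):                   # the missing bounds check
        raise ValueError("claimed length exceeds actual payload")
    SERVER_MEMORY[:len(payload)] = payload
    return bytes(SERVER_MEMORY[:claimed_len])

if __name__ == "__main__":
    # The client sends 5 bytes but claims 40; the buggy handler happily
    # echoes back 40 bytes, including the "secret" sitting next to them.
    print(heartbeat_buggy(b"hello", 40))
    print(heartbeat_fixed(b"hello", 5))
```

And that bug sat in fully public code for about two years before anyone noticed, which is exactly the point: open source makes auditing possible, not automatic.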
You didn’t get my point: with open source, people know. People know that Xorg is using arcane concepts, and as a client you can pay someone to go through the code. Or a governmental institution can. (And yes, mine does, with public reports.)
This is not the case with closed source. You will only know when someone has exploited it. And while closed source applications like Windows, Office, etc. have enough public weight that a lot of people with good intentions see them as a “challenge” and test for exploits, this is already not the case for smaller, but often critical, applications. And no, most commercial closed source applications don’t give a fuck about security, even in critical infrastructure. I worked as a PM for these applications in the past, and my company now consults for critical infrastructure. The state of security in niche applications is abhorrent. The longest-running major exploit I stumbled upon was 22 years old. And it left around 65% of all water treatment plants of a smaller nation at risk. (It’s fixed now. Not because they wanted to, but because someone forced them to.)
Somewhat of a different take from what I’ve seen from the other comments. In my opinion, the main reason is this:
Companies have basically two reasons to do safety/security: Brand image and legal regulations.
And they have a reason to not do safety/security: cost pressure.

Now imagine a field where there’s hardly any regulation and you don’t really stand out when you do security badly. Then the cost pressure means you just won’t do much security.
That’s the software engineering field.
Now compare that to open-source. I’d argue a solid chunk of its good reputation is from hobby projects, where people have no cost pressure and can therefore take all the time to do security justice.
In particular, you need to remember that most security vulnerabilities are just regular bugs that happen to be exploitable. I have significantly fewer bugs in my hobby projects than in the commercial projects I work on, because there’s no pressure to meet deadlines.

And frankly, the brand image applies even to open source. I will write shitty code if you pay me to. But if my name is published along with it, you need to pay me significantly more. So, even if it is a commercial project that happens to be published under an open source license, I will not accept as many compromises to meet deadlines.
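To make the “regular bugs that happen to be exploitable” point concrete, here’s a tiny, made-up Python sketch: a file-serving helper where the bug is just a missing check, and the exploit is plain old path traversal. serve_file_buggy, serve_file_fixed, and BASE_DIR are invented for the example, not taken from any real project:

```python
from pathlib import Path

# Hypothetical web-app "public files" directory; created below just so the
# demo runs. In a real app this would be wherever static assets live.
BASE_DIR = Path("/tmp/app_public")

def serve_file_buggy(requested: str) -> bytes:
    # Looks like a routine helper. The "bug" is only a missing check,
    # but a request like "../../etc/passwd" walks right out of BASE_DIR.
    return (BASE_DIR / requested).read_bytes()

def serve_file_fixed(requested: str) -> bytes:
    # Resolve the final path and refuse anything outside BASE_DIR
    # (Path.is_relative_to needs Python 3.9+).
    target = (BASE_DIR / requested).resolve()
    if not target.is_relative_to(BASE_DIR.resolve()):
        raise PermissionError("path escapes the public directory")
    return target.read_bytes()

if __name__ == "__main__":
    BASE_DIR.mkdir(parents=True, exist_ok=True)
    # The ordinary bug becomes an exploit with nothing more than two "..":
    print(serve_file_buggy("../../etc/passwd")[:60])
    # The fixed version raises PermissionError for the same request.
```

The buggy version isn’t malicious or exotic; it’s exactly the kind of thing that slips through when a deadline is looming.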
One thing to keep in mind is that NO CODE is believed to be secure…regardless of open source or closed source. The difference is that a lot of folk can audit open source whereas we all have to take the word of private companies who are constantly reducing headcount and replacing devs with AI when it comes to closed source.
Assumed by who?
By your logic no one can break locks, because they can’t see inside them. There are going to be people trying to break into everything even though they don’t have the source code.
9/10 people looking into your code are people using it themselves, so fixing a bug for everyone is beneficial to them too.
Also, there are entire companies working on and sponsoring these projects and paying people to find bugs, because if someone finds out that curl has a problem, they’re going to have that problem too. So the only difference between something like VLC and Adobe is that you don’t have to suck their dick, really.
There’s also curl and others offering bug bounties, since those are way more cost-efficient than paying someone full time.
It’s because anyone can find and report vulnerabilities, while a closed source vendor could have some issue behind closed doors and not mention that data is at risk, even if they knew.
It’s not more secure or less secure, but it is easier to trust
Zero-day exploits, aka vulnerabilities that aren’t publicly known, offer hackers the ability to essentially rob people blind.
Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities. So while it’s not inherently more secure, it is in practice.
Exploiting four zero-day flaws in the systems,[8] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart.[3] Stuxnet’s design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in factory assembly lines or power plants), most of which are in Europe, Japan and the United States.[9] Stuxnet reportedly destroyed almost one-fifth of Iran’s nuclear centrifuges.[10] Targeting industrial control systems, the worm infected over 200,000 computers and caused 1,000 machines to physically degrade.
Stuxnet has three modules: a worm that executes all routines related to the main payload of the attack, a link file that automatically executes the propagated copies of the worm and a rootkit component responsible for hiding all malicious files and processes to prevent detection of Stuxnet.
“Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities.”
Heartbleed has entered the chat
The whole Stuxnet story is fascinating. A virus designed to spread to the whole Internet, and then activate inside a specific Iranian facility. Convinced me that we already live in a cyberpunk world.
The idea you’re getting at is ‘security by obscurity’, which in general is not well regarded. Having secret code does not imply you have secure code.
But I think you’re right on a broader level, that people get too comfortable assuming that something is open source, therefore it’s safe.
In theory you can go look at the code for the FOSS you use. In practice, most of us assume someone has, and we just click download or tell the package manager to install. The old adage is “given enough eyeballs, all bugs are shallow.” And I think that probably holds, but the problem is many of the eyes aren’t looking at anything. Having the right to view the source code doesn’t imply enough people are, or even meaningfully can. (And I’m as guilty of being lax and incapable as anyone, not looking down my nose here.)
In practice, when security flaws are found in oss, word travels pretty fast. But I’m sure more are out there than we realize.
It’s also easier to share vulnerability fixes between different projects.
“Y” was using similar memory management to “T”. T got hacked due to whatever, and people that use both Y and T report to Y that a similar vulnerability might be exploitable.
Edit:
In closed source, this might happen if both projects are under the same company.
But users will never have the ability to tell Y that T was hacked in a way that might affect Y