

The Open Source debate


oblivion:
I am still fuming over this.

Today, in a meeting at work, I mentioned that one of our senior doctors was looking at an open source product that might be a worthy replacement for the aging and soon-to-die (it won't run under Win7) clinical information system we use.

One of the IT attendees said straight away that he wouldn't allow anything open source running in our environment. Why? I asked. "Well, it's insecure. If the code's available to anyone, then anything could happen. A security nightmare."

Aghast as I was, I had no instant answer. I mumbled something incoherent about open source encryption tools that probably nobody there gave any credence to at all and the conversation moved on.

We all know that viewpoint's nonsense. But I could really use a short, understandable-by-idiots refutation of the "common sense" view that open source software is "obviously" a security disaster waiting to happen.

The fact that 80% of the internet is running on open source software probably won't cut it. The idiots all "know" that the internet is a dangerous place and clearly everything's held together by string, cobwebs, eggstains, a little glue and the determined efforts of the only software houses worth mentioning, Symantec and Microsoft, and trying to tell them otherwise needs something solid, instant and understandable.

So does anyone have anything helpful -- and preferably unarguable -- I can throw at them?

x16wda:
"Well, it's insecure. If the code's available to anyone, then anything could happen. A security nightmare."
-oblivion (November 12, 2013, 05:27 PM)
--- End quote ---

The most likely things that could happen are that the whole community could (a) test in a variety of situations and configurations, (b) find and test edge conditions in real life, (c) suggest improvements in workflow or efficiency -- and have a chance to contribute the code for them, (d) find bugs -- and have a chance to contribute the code to fix them, ...

A real security nightmare would be if you use a closed source program and depend on it, and the company goes out of business (or gets litigated out of existence, or destroyed by a meteorite, or inserts a back door for the NSA or the Russian Mafia, or...), and you have no choice but to convert to another system at great expense and pain. If that's even possible.

wraith808:
http://www.OpenSource.org is a good site in general on the whole topic of open source, but specifically:

http://opensource.org/docs/peru_and_ms.php

The government of Peru addresses a lot of the arguments, other than security, that are sure to come up -- and from a state-sponsored perspective.

http://flosscc.opensource.org/content/spread-the-word

https://www.schneier.com/crypto-gram-9909.html#OpenSourceandSecurity

40hz:
"Security through obscurity" is not an effective strategy. Backdoors are only workable in closed systems.

You're probably facing a sysadmin who has invested an entire career in the systems and OS you're currently using. These types will fight tooth and claw to keep anything they don't already know out of the place they're working rather than upgrade their skill set or think outside their box.

Good luck with that crowd...  :-\

mwb1100:
Clearly, the number of exploits against closed source software is evidence that source code is not required in order for software to be exploited. I believe that the majority of exploits are found not by source code review, but by finding bugs and using various debugging techniques to determine the exploit.

As Dr. Ian Levy, technical director with CESG -- a department of the UK's GCHQ intelligence agency that advises the UK government on IT security -- is quoted as saying in a ZDNet article:


* Bad people can look at the source code, so it's less secure

"Again that's nonsense. If I look at how people break software, they don't use the source code. If you look at all the bugs in closed source products, the people that find the bugs don't have the source, they have IDA Pro, it's out there and it's going to work on open and closed source binaries — get over it."
Source: http://www.zdnet.com/six-open-source-security-myths-debunked-and-eight-real-challenges-to-consider-7000014225/
--- End quote ---
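The black-box bug hunting Levy describes can be sketched in a few lines. This is a deliberately toy fuzzer, not a real tool like AFL, and the demo target is a harmless stand-in I made up for illustration; the point is only that nothing in the loop ever touches the target's source code.

```python
import random
import subprocess
import sys

def fuzz(target_cmd, rounds=100, max_len=64):
    """Throw random inputs at a program and record crashes.

    This is the source-free style of bug finding the quote refers to:
    the target is treated as an opaque binary throughout.
    """
    crashes = []
    for i in range(rounds):
        random.seed(i)  # reproducible inputs, so any crash can be replayed
        data = bytes(random.randrange(256)
                     for _ in range(random.randrange(1, max_len)))
        proc = subprocess.run(target_cmd, input=data, capture_output=True)
        if proc.returncode < 0:  # killed by a signal, e.g. SIGSEGV
            crashes.append((i, data))
    return crashes

# Demo against a stand-in target that just swallows stdin and exits cleanly;
# a real campaign would aim at a file parser or network daemon.
demo_target = [sys.executable, "-c", "import sys; sys.stdin.buffer.read()"]
print(len(fuzz(demo_target, rounds=5)))  # well-behaved target: 0 crashes
```

A real fuzzer mutates known-valid inputs and tracks code coverage, but even this naive version shows why "attackers can read the source" misses how most exploits are actually found.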

How insecure a piece of software is isn't generally a function of whether the source is available or not - it's a function of the quality and complexity of the software. There is insecure closed source software and there's insecure open source software. Similarly, there's secure software of both types. Projects and organizations that take security seriously will have secure software, whether or not they release the source. Also, less complex software will generally be more secure than complex software.

An example of a large open source project with complex software that is considered very secure is OpenBSD. See http://en.wikipedia.org/wiki/OpenBSD#Security_and_code_auditing for information about how security issues have been dealt with in the past, including claims -- which turned out to be bogus -- that the FBI inserted backdoor code into the system. I wonder whether MS or Symantec software has any backdoors. If your IT guy asks those companies about that, can he believe the answer? And if there are backdoors, crackers will likely find and exploit them eventually.

The popularity of software will be a factor in how much effort is put into exploiting it.  I'd guess that an open source clinical information system isn't high on exploiters' target lists (though given the sensitivity of health care information, I might be wrong about that. And certainly security should be taken seriously for such an application, regardless of how many people might be looking to exploit it).

And just because an organization is large and trusted doesn't mean it will necessarily always take proper care with security. The recent theft of a user database from Adobe is an example. Not only did Adobe screw up in letting the database get downloaded (I have no idea what happened to allow that), but it's clear from analysis of the file that Adobe didn't follow even the simplest of standard practices for storing passwords: http://nakedsecurity.sophos.com/2013/11/04/anatomy-of-a-password-disaster-adobes-giant-sized-cryptographic-blunder/  Adobe is a rather large vendor of closed source software -- are they as careless with security in those products?
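For anyone who wants to see what the "simplest of standard practices" actually looks like, here is a minimal sketch using only Python's standard library. The Sophos analysis found Adobe stored passwords with reversible encryption rather than hashing; the function names below are my own illustration, not Adobe's code.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salted, slow one-way hash (PBKDF2) -- the baseline Adobe skipped.

    A unique random salt per user means two users with the same password
    get different stored values; in the leaked Adobe file, identical
    passwords produced identical ciphertext blocks.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Recompute with the stored salt and compare in constant time.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The key property: the server never stores anything a thief can decrypt back to the password, so a stolen database forces attackers into slow per-user guessing instead of one bulk decryption.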

Finally, is your organization so locked down that Firefox, Chrome, Java, Linux, Android devices, and Apple computers are nowhere to be found? No use of scripting languages like Perl, Python, or Ruby? All of those are open source to at least some degree. Does your organization use ASP.NET? The source is openly available: http://aspnet.codeplex.com/
