Clearly, the number of exploits against closed source software is evidence that source code is not required for software to be exploited. I believe the majority of exploits are found not by source code review, but by finding bugs and using various debugging techniques to develop an exploit.
As Dr. Ian Levy, technical director with the CESG (a department of the UK's GCHQ intelligence agency that advises the UK government on IT security), is quoted as saying in a ZDNet article, in response to the claim:
- Bad people can look at the source code, so it's less secure
"Again that's nonsense. If I look at how people break software, they don't use the source code. If you look at all the bugs in closed source products, the people that find the bugs don't have the source, they have IDA Pro, it's out there and it's going to work on open and closed source binaries — get over it."
How secure a piece of software is generally isn't a function of whether the source is available; it's a function of the quality and complexity of the software. There is insecure closed source software and there is insecure open source software. Similarly, there's secure software of both types. Projects and organizations that take security seriously will have secure software, whether or not they release the source. And less complex software will generally be more secure than more complex software.
An example of a large, complex open source project that is considered very secure is OpenBSD. See http://en.wikipedia....ty_and_code_auditing
for information about how security issues have been dealt with in the past, including bogus claims that the FBI inserted backdoor code into the system. I wonder whether MS or Symantec software has any backdoors? If your IT guy asks those companies about that, can he believe the answer? If there are backdoors, I believe crackers will likely find and exploit them eventually.
The popularity of software will be a factor in how much effort is put into exploiting it. I'd guess that an open source clinical information system isn't high on exploiters' target lists (though given the sensitivity of health care information, I might be wrong about that; and security should certainly be taken seriously for such an application, regardless of how many people might be looking to exploit it).
And just because an organization is large and trusted doesn't mean it will always take proper care with security. The recent theft of a user database from Adobe is an example. Not only did Adobe screw up in letting the database get downloaded (I have no idea what happened to allow that), but it's clear from analysis of the file that Adobe didn't even follow the simplest of standard practices for storing passwords in a database: http://nakedsecurity...yptographic-blunder/
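For context, the standard practice Adobe skipped is to store only a salted, slowly-computed hash of each password rather than a reversibly encrypted copy. Here's a minimal sketch in Python of what that looks like (the function names are my own; the `hashlib.pbkdf2_hmac` and `hmac.compare_digest` calls are standard library):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to hinder brute-force attacks

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256."""
    # A unique random salt per user means two users with the same
    # password get different digests -- one of the things the Adobe
    # breach analysis showed their scheme lacked.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The point is that the database stores only salts and digests; there is no key anywhere that decrypts every password at once, which is exactly the failure mode the analysis of the Adobe file exposed.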
Adobe is a rather large vendor of closed source software - are they as careless with security in those products?
Finally, is your organization so locked down that software such as Firefox, Chrome, Java, Linux, Android devices, or Apple computers is nowhere to be found? No use of scripting languages like Perl, Python, or Ruby? All of those are open source to at least some degree. Does your organization use ASP.NET? The source is openly available: http://aspnet.codeplex.com/