Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - db90h

51
General Software Discussion / Re: Visual Basic or Visual C++
« on: July 07, 2012, 04:50 PM »
http://www.doubleclo...ogramming-languages/
Source: http://www.tiobe.com...info/tpci/index.html

20 most popular languages

NEWER CURRENT DATA: [attached chart: prog_lang.PNG]

original post -- sorry it was a couple years out of date

52
General Software Discussion / Re: Visual Basic or Visual C++
« on: July 07, 2012, 04:48 PM »
As for 'code completion' (Intellisense in VS), yes, that has improved for ALL LANGUAGES, including C/C++, and is WONDERFUL. It makes VS the best IDE I've ever tried, by far. I would disagree that VS2008 is superior to VS2010 because of its dynamic help. I haven't missed it much, but that's just me. I did use it, and it was helpful, so I can understand that view.

53
General Software Discussion / Re: Visual Basic or Visual C++
« on: July 07, 2012, 04:43 PM »
Something like this ohloh graph ?
It shows that Java is quite dominant in what they measured, but MS/VS doesn't have a Java-language any longer (though C# is close in some areas). It also shows that C# is quite a bit more used than VB or C++.

That is the number of commits for certain projects, likely many web 2.0 projects since Java is the #1 language there. It is a SMALL SUBSET of projects. A better source would be SourceForge.

Come on... C++ is still the most dominant language for real coding there is.

All UNMANAGED C/C++ here -- no MANAGED C++

FireFox - C++
Opera - C++
IE - C++
Windows Kernel - C/C++
Linux Kernel - C
Linux Base Packages - Mostly C or C++
Android Kernel - C/C++
iOS Kernel - C
OS/X Kernel - C
SysInternals Tools - C (mostly, some C++)
LZMA / 7z - C/C++
RAR / WinRAR - C/C++
Apache - C
IIS - C/C++
MySQL - C/C++


54
Therein lies the problem with the FOSS philosophy once a popular project reaches a certain level of maturity. Some key players suddenly decide to take the codebase - along with all the freely submitted contributions from unpaid volunteers - and sell it for large sums of money. MySQL is one example of that.

Yep. Seen it happen over and over again, and it's why I don't do that much F/OSS these days. Some ahole always comes along, exploits the work for profit, and all the contributors who made it possible aren't compensated, nor could they be, really... it would be hard to figure out how to fairly compensate people. Meanwhile, F/OSS users demand more from F/OSS developers than most other developers - perhaps because they are more accessible, I dunno. I still do release open source work, but only with the full realization that it's pure charity work. Even then, it's often stolen or abused.

55
Whatever happened to the "release when it's ready" philosophy that was the trademark of FOSS development?

When commercial software of the same genre is also free, they find that they suddenly have *real* competition. This has forced them to be much more aggressive in their development cycles, at least IMHO.

56
General Software Discussion / Re: Visual Basic or Visual C++
« on: July 06, 2012, 02:19 PM »
won't be able to create proper single-exe programs with VS without installing a runtime-part anyway, and most modern computers (should) have the .NET runtime installed anyway, so select the language that the majority of Visual Studio programmers is using: C#

Ever hear of statically linking to the CRT? This requires no CRT DLLs, because they are statically linked into the EXE. (speaking of unmanaged code, of course)

I also heavily question the statement that 'most VS programmers are using C#'. Would love to see that backed up with any real statistics...

57
General Software Discussion / Re: Unity Desktop (Ubuntu)
« on: July 05, 2012, 01:00 PM »
Few like Unity. I hate it, myself. It was a good *attempt* to unify a touch based interface with a traditional one, but it was a huge flop, IMHO. The ironic thing is that I've never even seen it deployed on touch screen devices, at least not yet.

58
Easiest way to turn off your computer is to press the power button!
-Carol Haynes (June 29, 2012, 07:50 AM)
Ever did that while the system was writing into critical files? Hint: Don't.

The previous poster is correct. This is fine. Modern systems recognize the press of the power button and issue a 'shutdown now' command, which starts the shutdown process. The RESET button is what you don't want to hit on a desktop, and you don't want to HOLD DOWN the power button on a laptop (or desktop) either.

59
Find And Run Robot / Re: Windows 8 and FARR
« on: July 03, 2012, 08:30 PM »
And to clarify ... in METRO you can 'just start typing', but in the TRADITIONAL interface, I don't believe that is available [ update: Metro flips over when you hit the Start button, BUT it takes the whole screen ]

60
Find And Run Robot / Re: Windows 8 and FARR
« on: July 03, 2012, 05:57 PM »
[redacted]

61
This reporting is terrible. In my opinion, the real rationale was that it didn't mesh well with Metro or touch screens. Keeping the Start menu around would have created two divergent paths for accessing the PC. While it is unlikely anyone would be confused, it would have let people stay in their comfort zone instead of moving them towards Microsoft's goal of widespread Metro adoption. Microsoft is going to shove tiles down our throats, whether we like them or not. And maybe they aren't so bad ;).

62
The entire question is about entropy. This also goes for compression, though in a different manner.

Indeed, Renegade is right, as always, but I wanted to comment on this when I got a chance, to elaborate on compression, since that is one field where I can claim expertise (being the author of more than one LZ77/LZSS derivative algorithm). Entropy in compression is different indeed, but similar too. In compression, it of course represents the minimum theoretical size you can squeeze the data into, with it remaining intact (reconstructible on decompression without loss).

In compression though, passing data through more than one compression algorithm does *not* improve entropy. In fact, it may decrease it.

Now, you can pass it through different pre-processing algorithms that re-arrange the data and THEN compress it, which improves entropy, but most compression algorithms have these pre-processing algorithms built in. And those are not compression algorithms, they are pre-processing/re-arranging algorithms. For example, with PECompact, by making tweaks to x86 code before compression, the compression ratio can be improved by 20% in many cases, depending on the code (could be more, could be less). LZMA now has this pre-processor (known as BCJ2) built in. There are MANY more that target different types of data. By making these tweaks, you improve the chances for a 'match' in dictionary-based compression (where it matches data it has already seen and emits a backwards reference to that data, thereby saving space).
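To make the dictionary-match idea concrete, here's a rough toy sketch in Python -- purely illustrative, and nothing like how PECompact or LZMA are actually implemented; the window size and minimum match length are arbitrary example values:

Code:
# Minimal LZ77-style matcher: emit ('ref', offset, length) back-references to
# data already seen in a sliding window, or ('lit', byte) literals otherwise.
def lz77_tokens(data: bytes, window: int = 4096, min_match: int = 3):
    tokens = []
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        # Search the window for the longest match starting at position i.
        for j in range(start, i):
            length = 0
            while (i + length < len(data)
                   and data[j + length] == data[i + length]
                   and length < 255):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            tokens.append(('ref', best_off, best_len))   # backwards reference
            i += best_len
        else:
            tokens.append(('lit', data[i]))               # literal byte
            i += 1
    return tokens

print(lz77_tokens(b"abcabcabcabc"))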

My POINT is to MAKE SURE that nobody misunderstands Renegade's accurate and wise comment as meaning they should pass their data through more than one compression algorithm. I *hate* seeing this: ZIPs inside of RARs inside of ZIPs, etc. Absurd. Don't anybody do that, please ;).

63
By layering on the same algorithm (or another one) you effectively increase the entropy each time you iterate the process.

That is what I thought ;). So as long as you don't throw a malfunctioning or insecure algorithm into the sequence, e.g. one that often hashes to 0 or something, you are good ;p. Myself, I have a policy of using *only* algorithms that produce at least a 512 bit digest. The exception is, of course, the first step in my hash chain, which is SHA1, only 160 bits.

Going on about my rant on hackers ... part of the problem is how the media treats them. Calling them brilliant, etc... No, it takes brilliance to keep a server secure.

Right now, my #1 problem, and maybe mouser can sympathize, is not having the TIME to dedicate myself to constantly securing and monitoring my server. I have 10 different jobs, at least, here at my one man show, and web server admin is *definitely* a job in and of itself.

64
I'm always a bit wary of layering stuff like this though.. all it takes is one bad hash algorithm that by accident maps all inputs into a small hash space and you are in trouble.

If all algorithms chosen are secure, it should be good .. real good. I am not a cryptologist or mathematician though. I think with each iteration it would grow in strength. Who knows, I may be wrong. Of course, the larger the digest size, the better.

You know what really pisses me off though, about hackers in general? It is *MUCH EASIER* to breach a site than it is to keep one secure. They think they are so smart for exploiting a site, etc... but they have the easier task in almost all cases. Of course, 99% of them are just using exploits discovered by other people, then think they are so brilliant for doing so.

Just like it is easier to DESTROY than it is to CREATE, true of everything ... same with security.

65
Bruce Schneier posted an article somewhere on his blog about double-layering of different algorithms on top of each other. Last I recall you added a couple of new corollaries.

I figured SOMEONE had done this before, as it is the ONLY way to INSTANTLY update an entire database. Still, sometimes the most simple things are overlooked. I don't know whether he was mentioning this as a method to improve security or update a database, but still.

After reading this thread, I'm actually going to go ahead and take it another step further and add one or two more algorithms on top.. and that's the beautiful thing, how far it can be extended (infinitely). So long as nobody gets access to the code, they'll also not know the algorithm.

66
Well, thanks. I dunno if someone has done it before or not, but it seemed the only way to do it without waiting for users to log in. Necessity is the mother of all invention and such. I'm sure others have done the same.

While reading those articles, the premise of what I'm doing is very similar to what they suggest with, say, PBKDF2 ... That algorithm apparently iterates the hash in a similar fashion, X times. Now, they are not rehashing the *plaintext representation of the hash*, and instead are rehashing the last iteration, but I think the result is similar if they increase the iterations of the hashing algorithm. Of course, they go through far more iterations, making it more secure... except that it is not clear if they allow for multiple algorithms to be used.

Of course, PBKDF2's intention isn't to allow instant updating of a database, but to provide strong initial security.
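For anyone curious, here is a minimal sketch of the iterated key-derivation idea in Python, using the built-in hashlib.pbkdf2_hmac; the salt, iteration count, and digest choice below are just example values, not recommendations:

Code:
import hashlib, os

# Derive a key from a password by iterating HMAC-SHA256 many times (PBKDF2).
# Salt and iteration count here are illustrative only.
password = b"correct horse battery staple"
salt = os.urandom(16)
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100000)
print(derived.hex())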

In my case, I actually did change the database field, just to be sure and certain of which password hashes were updated (since I didn't initially do it all at once, but later did). So, while not storing metadata, I did implicitly give an indication of which accounts had been updated to the new algorithm.

67
A meta issue, which is touched on by Tao above when he talks about costs of migration, is building in a mechanism by which you can migrate passwords to new approaches. So don't just store a password hash and salt; store extra info like when the password was last changed and the hashing algorithm/parameters used when it was stored. That way, if you decide to move from using 5000 rounds of blowfish as your hash algorithm to 10000 rounds of sha512, you can identify which algorithm was used to store each user's password, and you won't break people's logins as you migrate them (it looks like some of the modern password hashing algorithms are being clever and embedding this information in the hashed output to make it easier to keep track of). And have an automated system in place for forcing users to upgrade their passwords, etc.

*EXCLUDING SALTS FOR SIMPLICITY OF DISCUSSION*

As I posted elsewhere, my approach to updating my own personal database was to use double hashing. So, say I had the initial passwords stored as:

SHA1(password)

Now, to update them I could either use metadata like you say, and update when they log in, OR double hash ... That's right, hash the hash ;). The new algorithm would then become:

SHA2-512(SHA1(password))

...

or to be precise with salting,

SALT^SHA2-512(SALT^SHA1(password))

...

This is an easy way to update existing unbreached databases with new hashing algorithms; it also increases the computational complexity at the same time, and, as an added benefit, creates a 'unique' combination of algorithms that can serve to further protect you. Later, some day, if I need to upgrade the hash algorithm again, I can continue to add additional hash algorithms, using a third or fourth round of hashing the hash of the original password.
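A minimal sketch of that migration in Python, assuming the salt is simply prepended and that the inner hash is stored (and re-hashed) as its hex string -- the post above doesn't pin down exactly how 'SALT^' combines the salt, so treat those details as illustrative:

Code:
import hashlib

def legacy_hash(password: str, salt: str) -> str:
    # Original scheme: SHA1 over salt + password, stored as hex.
    return hashlib.sha1((salt + password).encode()).hexdigest()

def upgraded_hash(legacy_digest: str, salt: str) -> str:
    # Upgrade by hashing the stored hex digest again with SHA2-512.
    return hashlib.sha512((salt + legacy_digest).encode()).hexdigest()

salt = "example-salt"

stored_sha1 = legacy_hash("hunter2", salt)      # what the database already holds
stored = upgraded_hash(stored_sha1, salt)       # one-time migration, no user login needed

# At login, apply the same chain to the submitted password and compare.
candidate = upgraded_hash(legacy_hash("hunter2", salt), salt)
assert candidate == stored

The same pattern extends to a third or fourth wrapping later; each upgrade only needs the currently stored digests, never the original passwords.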

68
Developer's Corner / Re: Random Question (About Hash Keys)
« on: June 13, 2012, 04:47 AM »
Interesting thread. I actually am currently using something like (not precisely, as I don't want to be too precise):
SALT^SHA2-512(SALT^SHA1(password))

Why? Because I needed to upgrade the hash algorithm, and for an unbreached database, the easiest way is to double hash, as opposed to waiting for people to log in again and updating their stored password hashes at that time.

69
Developer's Corner / Re: Random Question (About Hash Keys)
« on: May 08, 2012, 10:15 PM »
I actually think I'm going to go with Whirlpool-T as my recommendation for the digest to use .. as I just don't know how far I trust the NSA ;p. It's got a huge bitspace of 512 bits, so it should be good. There are no known theoretical or practical breaks, though the original version of the algorithm did have a flaw that slightly weakened it, but nothing that was exploitable by any means.

70
Developer's Corner / Re: Random Question (About Hash Keys)
« on: May 08, 2012, 10:04 PM »
http://bitsum.com/md5.php updated to show all hash/digest algorithms supported by the current version of PHP I have installed on the web server.

71
I amended my statement above, as I wanted to clear one thing up .. although I quit caring much about older versions of IE, I do make sure my site *works* in them. It just is not as pretty ;).

72
Developer's Corner / Re: Random Question (About Hash Keys)
« on: May 08, 2012, 09:20 PM »
I'm glad you asked this question, as it has made me check into the latest on the security of common digests (or secure hashes -- see how much easier it is to say 'digest'? ;p).

MD5 == at least somewhat broken, with practical attacks due to the speed at which collisions can be found. Still, it suffices for MANY purposes just fine.
SHA1 == theoretically broken, feasibility of attack increasing, but still not feasible YET
SHA2 == current best thing to use if you trust the NSA, otherwise maybe ripemd320+ or whirlpool
SHA3 == under development

Remember, as I posted in my first rambling explanation, salting your digest will help further in cases where it can be done (e.g. on a web server storing password digests in a database, if only the database were compromised and the salt value remained unknown). After all, a simple XOR of any data set with a key of equal size that is not zero-filled is essentially an unbreakable encryption method (assuming the data set is reasonably large, the original value is not known or guessable, and the key is not known).
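As a toy illustration of that XOR point (essentially a one-time pad: it only holds up if the key is truly random, kept secret, as long as the data, and never reused):

Code:
import os

data = b"password digest goes here"
key = os.urandom(len(data))          # random key, same length as the data

ciphertext = bytes(d ^ k for d, k in zip(data, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == data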

So, intent, intent, intent... ;p.

Compare a digest, for instance, to a hash used to look up things in a hash map/table - two different worlds. For information on hash tables, which use short hashes that are *expected to produce collisions*, see: http://en.wikipedia.org/wiki/Hash_table
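A toy contrast in Python, just to show the difference in purpose (Python's built-in hash() is not cryptographic and only stands in here for a table's bucket function):

Code:
import hashlib

key = "username"

bucket = hash(key) % 16                             # hash table: 16 buckets, collisions expected
digest = hashlib.sha256(key.encode()).hexdigest()   # digest: collisions should be infeasible to find

print(bucket, digest)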

73
Developer's Corner / Re: Random Question (About Hash Keys)
« on: May 08, 2012, 08:52 PM »
Actually, reading a bit, SHA1 is now known to have theoretically producible collisions too, though producing one is not very practical at this point .. so it is still pretty darn secure.

74
Developer's Corner / Re: Random Question (About Hash Keys)
« on: May 08, 2012, 08:46 PM »
If you require a hash to be secure before you call it a digest, then why do you call MD5 a digest when there are known collisions? Doesn't that make MD5 an insecure hash?

Well, MD5 had the intent to be secure, and indeed mostly is - except for the known issues I linked to. It's like an encryption algorithm vs an obfuscation algorithm. An encryption algorithm seeks to be secure, while an obfuscation algorithm simply seeks to obfuscate.

Hence, I believe it is intent that matters most, as all digests will eventually be broken. Reducing a larger data set to a much, much smaller one means collisions must exist (by the pigeonhole principle), even if the probability of hitting one is minuscule, approaching zero. A mathematician can maybe validate that statement, but it seems reasonable to me.

You get my drift. My personal definitions are just that - my preference. Use them at your own risk ;p.

75
Fixed oopsie on my premature conclusion, although that may be accurate for *older* versions of IE. Also, this site seems to work fine with the latest version of IE, even with SSL. They may simply be saying they aren't going to try to work around the issues with all the older builds of IE. No ulterior motive... entirely possible. Maybe this policy hasn't gone into effect yet, I dunno.

Indeed, I myself quit caring about older IE versions too. Too much of a pain to create multiple CSS files, etc.. That said, my site *works* under those older versions, I did make sure of that.
