

The Danger of Naïveté


Deozaan:
I recently thought it would be fun to program a simple card game and was wondering how to simulate a deck of cards being shuffled. I asked about it in the DC Chatroom and got a few helpful hints from people. Hamradio found this particular one from Coding Horror:

In my previous post on shuffling, I glossed over something very important. The very first thing that came to mind for a shuffle algorithm is this:


--- Code: C# ---
for (int i = 0; i < cards.Length; i++)
{
  int n = rand.Next(cards.Length);
  Swap(ref cards[i], ref cards[n]);
}
--- End code ---
It's a nice, simple solution to the shuffling problem:

1. Loop through each card in the deck.
2. Swap the current card with another randomly chosen card.

At first blush, this seems like a perfectly reasonable way to shuffle. It's simple, it's straightforward, and the output looks correct. It's the very definition of a naïve algorithm:

A naïve algorithm is a very simple solution to a problem. It is meant to describe a suboptimal algorithm compared to a "clever" (but less simple) algorithm. Naïve algorithms usually consume larger amounts of resources (time, space, memory accesses, ...), but are simple to devise and implement.

An example of a naïve algorithm is bubble sort, which is only a few lines long and easy to understand, but has an O(n²) time complexity. A more "clever" algorithm is quicksort, which, although considerably more complicated than bubble sort, has an O(n log n) average complexity.
--- End quote ---

But there's a deep, dark problem with this naïve shuffling algorithm, a problem that most programmers won't see. Do you see it? Heck, I had the problem explained to me and I still didn't see it.
-http://www.codinghorror.com/blog/2007/12/the-danger-of-naivete.html
--- End quote ---
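Spoiler: the fix Jeff works up to is the classic Fisher–Yates (Knuth) shuffle, which only ever swaps the current card with a card it hasn't passed yet. A minimal sketch, assuming the same cards array, rand instance, and Swap helper as the quoted snippet:

--- Code: C# ---
// Fisher–Yates / Knuth shuffle: walk the array from the end and swap
// each position with a randomly chosen position at or before it. Every
// one of the n! orderings comes out equally likely, unlike the naive
// loop, which spreads n^n equally likely swap sequences over n!
// orderings -- and n^n isn't divisible by n!, so some orderings must
// come up more often than others.
for (int i = cards.Length - 1; i > 0; i--)
{
  int n = rand.Next(i + 1);        // 0 <= n <= i
  Swap(ref cards[i], ref cards[n]);
}
--- End code ---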

He then goes on to explain the problem, using charts and graphs and things. Pretty interesting stuff.
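If you want to reproduce the kind of numbers behind those charts, a little throwaway console program (my own sketch, not from the article) can tally how often each ordering of a 3-card deck comes out of the naive shuffle. With 3 cards there are 3^3 = 27 equally likely swap sequences but only 3! = 6 orderings, and 27 doesn't divide evenly by 6, so the counts can't all match a fair shuffle:

--- Code: C# ---
using System;
using System.Collections.Generic;

class NaiveShuffleBias
{
  static Random rand = new Random();

  static void Swap(ref char a, ref char b) { char t = a; a = b; b = t; }

  // The naive shuffle from the quoted post.
  static void NaiveShuffle(char[] cards)
  {
    for (int i = 0; i < cards.Length; i++)
    {
      int n = rand.Next(cards.Length);
      Swap(ref cards[i], ref cards[n]);
    }
  }

  static void Main()
  {
    var counts = new Dictionary<string, int>();
    for (int trial = 0; trial < 600000; trial++)
    {
      char[] deck = { '1', '2', '3' };
      NaiveShuffle(deck);
      string key = new string(deck);
      counts.TryGetValue(key, out int seen);
      counts[key] = seen + 1;
    }
    // A fair shuffle would put each of the 6 orderings near 100,000;
    // the naive shuffle visibly favors some orderings over others.
    foreach (var pair in counts)
      Console.WriteLine(pair.Key + ": " + pair.Value);
  }
}
--- End code ---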

I had found Jeff's original post on shuffling and thought it was useful, but Hamradio's link to The Danger of Naïveté was particularly insightful to a noob like myself.

mouser:
Thanks for posting this -- the Atwood post about why the naive shuffle fails was a great read -- and surprised me.

40hz:
Lost an hour plus yesterday going over the math on that one. Pretty amazing stuff. I love nonintuitive truths. 8)

Sure goes a long way towards explaining why some amateur-coded solitaire games might yield significantly higher win rates than others. It also demonstrates just how difficult it is to get anything close to genuine randomization on a non-specialized computer.

TaoPhoenix:
In the spirit of all this, though the algorithmic issues are different, a few "beginner's magic" tricks abuse "naive" shuffling. Using whatever method, you learn the location of at least one card (sometimes a few, if you're farther along in your studies). Then you migrate them to either the top or the bottom of the deck. Then you "shuffle" the cards but exploit the following:

The "riffle" intersperses "1 and 25, 2 & 26" (with clumping). However, if the card you want is on top, then you make sure that half *finishes last*. So the top card remains the top card, with only the stuff in the middle traveling.

Then you "convert" the location knowledge into a sneak peek, and then the usual mumbo jumbo obfuscation for the reveal.

Renegade:
Meh... Dunno. I suppose that I'm kind of lazy. And a heretic. :P

For the random number stuff... well, can't argue with that. But there are times when pseudo-random is good enough. You just need to figure out if it's important for the number to be truly random, which isn't particularly hard.
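For the cases where it does matter (say, a shuffle with real money riding on it), swapping the source of randomness is usually the easy part. A rough sketch, assuming a .NET runtime new enough to have RandomNumberGenerator.GetInt32 (older ones would need RNGCryptoServiceProvider and some byte juggling; the class name here is just for the example):

--- Code: C# ---
using System.Security.Cryptography;

static class SecureShuffler
{
  // Fisher–Yates again, but the swap index comes from the OS-backed
  // cryptographic generator instead of System.Random. Slower, and
  // unpredictable to an outside observer, which is the point.
  public static void Shuffle<T>(T[] cards)
  {
    for (int i = cards.Length - 1; i > 0; i--)
    {
      int n = RandomNumberGenerator.GetInt32(i + 1);   // 0 <= n <= i
      (cards[i], cards[n]) = (cards[n], cards[i]);     // tuple swap
    }
  }
}
--- End code ---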

For the O stuff... It all depends. If you're coding for a desktop and it's not a long operation, then who cares? Whether it takes 5 ns or 50 ms doesn't really matter. If you're on a mobile device, well... now you have far more limited computing power (at the moment) and you also have battery power to contend with. When you have large data sets, then you really need to be careful. I remember hacking out some quick garbage code to do some work for me and it took hours. But it was faster & easier to simply code fast & dirty than to figure out some elegant code that I'd never use again. So it took me a few minutes to type up, and while it was running I did other stuff. It all just depends.

Still - Coding Horror is always fun to read! :D
