
### Author Topic: The Danger of Naïveté  (Read 4612 times)

#### Deozaan

• Charter Member
• Joined in 2006
• Points: 1
• Posts: 9,429
##### The Danger of Naïveté
« on: July 11, 2013, 02:07 AM »
I recently thought it would be fun to program a simple card game and was wondering how to simulate a deck of cards being shuffled. I asked about it in the DC Chatroom and got a few helpful hints from people. Hamradio found this particular one from Coding Horror:

In my previous post on shuffling, I glossed over something very important. The very first thing that came to mind for a shuffle algorithm is this:

Code: C#
for (int i = 0; i < cards.Length; i++)
{
    int n = rand.Next(cards.Length);
    Swap(ref cards[i], ref cards[n]);
}

It's a nice, simple solution to the shuffling problem:

1. Loop through each card in the deck.
2. Swap the current card with another randomly chosen card.

At first blush, this seems like a perfectly reasonable way to shuffle. It's simple, it's straightforward, and the output looks correct. It's the very definition of a naïve algorithm:

A naïve algorithm is a very simple solution to a problem. It is meant to describe a suboptimal algorithm compared to a "clever" (but less simple) algorithm. Naïve algorithms usually consume larger amounts of resources (time, space, memory accesses, ...), but are simple to devise and implement.

An example of a naïve algorithm is bubble sort, which is only a few lines long and easy to understand, but has O(n²) time complexity. A more "clever" algorithm is quicksort, which, although considerably more complicated than bubble sort, has O(n log n) average complexity.

But there's a deep, dark problem with this naïve shuffling algorithm, a problem that most programmers won't see. Do you see it? Heck, I had the problem explained to me and I still didn't see it.

He then goes on to explain the problem, using charts and graphs and things. Pretty interesting stuff.

I had found Jeff's original post on shuffling and thought it was useful, but Hamradio's link to The Danger of Naïveté was particularly insightful to a noob like myself.
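For anyone who doesn't want to click through: the fix Atwood's post arrives at is the Fisher–Yates shuffle, where each position is only ever swapped with a position at or after it, not with any position in the deck. A minimal sketch in Python (hand-rolled here just to show the loop; in practice `random.shuffle` already implements exactly this):

```python
import random

def fisher_yates(cards):
    """Shuffle in place. Position i swaps only with a random
    position j in [i, len(cards)) -- never an earlier one."""
    for i in range(len(cards) - 1):
        j = random.randrange(i, len(cards))  # i..n-1, NOT 0..n-1
        cards[i], cards[j] = cards[j], cards[i]
    return cards

deck = fisher_yates(list(range(52)))
```

The only difference from the naive version is the range the random index is drawn from, but that one change is what makes all n! orderings equally likely instead of biased.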

#### mouser

• First Author
• Administrator
• Joined in 2005
• Posts: 40,681
##### Re: The Danger of Naïveté
« Reply #1 on: July 11, 2013, 07:13 AM »
Thanks for posting this -- the Atwood post about why the naive shuffling fails was a great read -- and surprised me.

#### 40hz

• Supporting Member
• Joined in 2007
• Posts: 11,838
##### Re: The Danger of Naïveté
« Reply #2 on: July 11, 2013, 07:52 AM »
Lost an hour plus yesterday going over the math on that one. Pretty amazing stuff. I love nonintuitive truths.

Sure goes a long way towards explaining why some amateur-coded solitaire games yield significantly higher win rates than others. And it also demonstrates just how difficult it is to get anything close to genuine randomization on a non-specialized computer.
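The math here can be checked by brute force. For a 3-card deck the naive loop makes 3³ = 27 equally likely sequences of random picks, but 27 isn't divisible by 3! = 6, so the six permutations cannot come out equally often. A quick enumeration (Python, just to illustrate the counting argument):

```python
from itertools import product
from collections import Counter

def naive_shuffle(cards, choices):
    """Replay the naive loop with a fixed sequence of 'random' picks:
    at step i, swap position i with position choices[i]."""
    cards = list(cards)
    for i, j in enumerate(choices):
        cards[i], cards[j] = cards[j], cards[i]
    return tuple(cards)

n = 3
# every possible sequence of picks is equally likely, so just count
counts = Counter(naive_shuffle(range(n), c)
                 for c in product(range(n), repeat=n))
print(sorted(counts.values()))  # [4, 4, 4, 5, 5, 5] -- not uniform
```

Three permutations show up 5/27 of the time and three show up 4/27, which is exactly the lopsidedness the charts in Atwood's post visualize.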

#### TaoPhoenix

• Supporting Member
• Joined in 2011
• Posts: 4,642
##### Re: The Danger of Naïveté
« Reply #3 on: July 11, 2013, 11:57 AM »
In the spirit of all this, though the algorithmic issues are different, a few "beginner's magic" tricks abuse "naive" shuffling. Using whatever method, you learn the location of at least one card (and sometimes a few, if you're farther along in your studies). Then you migrate them to either the top or the bottom of the deck. Then you "shuffle" the cards but abuse the following:

The "riffle" intersperses "1 and 25, 2 & 26" (with clumping). However, if the card you want is on top, then you make sure that half *finishes last*. So the top card remains the top card, with only the stuff in the middle traveling.

Then you "convert" the location knowledge into a sneak peek, and then the usual mumbo jumbo obfuscation for the reveal.
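That controlled riffle is easy to model. Split the deck, interleave the halves, and make sure the half holding your known card drops last, so the top card stays on top. A toy simulation (Python; the function name is mine, and the physical clumping is simplified to a clean interleave):

```python
def riffle_keep_top(deck):
    """Model of the controlled riffle: interleave two halves, but
    always release the top half's card above its partner, so the
    original top card lands back on top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    out = []
    for t, b in zip(top, bottom):
        out += [t, b]  # top-half card stays above the bottom-half card
    out += top[len(bottom):] + bottom[len(top):]  # any leftover clump
    return out

deck = list(range(52))
shuffled = riffle_keep_top(deck)
# position 0 is untouched; everything in the middle has moved
```

The deck looks thoroughly mixed, but the one position the magician cares about never changes.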

#### Renegade

• Charter Member
• Joined in 2005
• Posts: 13,276
• Tell me something you don't know...
##### Re: The Danger of Naïveté
« Reply #4 on: July 12, 2013, 03:25 AM »
Meh... Dunno. I suppose that I'm kind of lazy. And a heretic.

For the random number stuff... well, can't argue with that. But there are times when pseudo-random is good enough. You just need to figure out if it's important for the number to be truly random, which isn't particularly hard.
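On the "good enough" point: in Python, for instance (my example, not anything Renegade prescribes), the usual split is the `random` module for games and simulations, and the `secrets` module when unpredictability actually matters:

```python
import random
import secrets

# Fine for a card game: fast, seedable, reproducible for testing
rng = random.Random(42)
game_deck = list(range(52))
rng.shuffle(game_deck)

# For anything security-sensitive, use the OS's CSPRNG instead
token = secrets.token_hex(16)  # e.g. a session token
pick = secrets.randbelow(52)   # an unpredictable card index
```

A seeded PRNG is a feature in a game (replayable deals, testable bugs) and a disaster in, say, online poker, which is exactly the "figure out which one you need" step.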

For the O stuff... It all depends. If you're coding for a desktop and it's not a long operation, then who cares? Whether it takes 5 ns or 50 ms doesn't really matter. If you're on a mobile device, well... now you have far more limited computing power (at the moment) and you also have battery power to contend with. When you have large data sets, then you really need to be careful. I remember hacking off some quick garbage code to do some work for me and it took hours. But it was faster & easier to simply code fast & dirty rather than figure out some elegant code that I'd never use again. So it took me a few minutes to type up, and while it was running I did other stuff. It all just depends.

Still - coding horror is always fun to read!
Slow Down Music - Where I commit thought crimes...

Freedom is the right to be wrong, not the right to do wrong. - John Diefenbaker

#### x16wda

• Supporting Member
• Joined in 2007
• Posts: 860
• what am I doing in this handbasket?
##### Re: The Danger of Naïveté
« Reply #5 on: July 12, 2013, 05:35 AM »
For the O stuff... It all depends.
-Renegade

Yeah, exactly - assuming there are no other applicable constraints, if there's nothing better for those millions of cycles to do then what the heck... it's like nanoaerobic exercise...
vi vi vi - editor of the beast