This is part of my AHK tutorial here,
https://www.donation...ex.php?topic=34948.0
but since GK (Gorgeous Karnaugh) will be on bits in a few days, I searched for a review of that tool, not finding any though. Even here, the search term "karnaugh" just brings up "You may have meant to search for Karna." So... but please, do NOT read on if you think it's a crime to treat two related subjects in one post.
Now, have a look into the wikipedia article on Karnaugh Map:
http://en.wikipedia....rg/wiki/Karnaugh_map
Then, perhaps, follow some of its links: to the overview, of course:
http://en.wikipedia....olean_algebra_topics
But especially to the Venn diagram, since that's the really useful thing here, in many cases:
http://en.wikipedia....rg/wiki/Venn_diagram
And you can do all this (and also 0/1 = yes/no tables, the so-called "truth tables"; variants of them are also very useful for numerical variables, see below) on squared paper...
But following the link to the Quine-McCluskey algorithm will be instructive too,
http://en.wikipedia....3McCluskey_algorithm
since that's the more specific thing for programmers/scripters, but we'll come back to the K-map's more general (or, more specifically: rather deviant) scope in a moment.
First, have a look here,
http://gorgeous-karn...ugh-for-programmers/
and try to understand the examples the developer gives there. (It is understood that professional programmers will look upon all my posts with deep repulsion, but we poor non-professionals have to find ways to get by, too, and that's why I think my posts can be helpful; so my invitation to "try to understand" addresses people like myself.)
You will see that the K-map enormously facilitates combinations of conditions... but of conditions, only, and that's the prob, for programmers/scripters.
So have a look at the "Gorgeous" developer's site in general, and you will quickly see that not only was the K-map invented for electrical engineers (circuitry), but that department is also where it clearly excels.
Now, the prob is: K-maps greatly refine the input, but what about the output? It's not by accident that the "Gorgeous" developer made up his examples (on the linked page above) with just a true/false = if/else structure, and here we are back at my second subject (I've written some 70,000 or more lines of code, so I had ample occasion to make architectural and construction mistakes, and then ample occasion to amend them).
In many "longer" routines (i.e. spanning over 1 or 2 pages; remind yourself: you should do a sensible max of subroutines in order to not repeat code, but see below), the first part is the "gathering" part, the second part being the "executive" one. In reality, that's not entirely true, but in so many cases, that first part has a very evident penchant to gathering data and to making decisions, whilst the second part more or less "DOES DO things", and that's why you should not totally mix up these more or less "natural" parts of a routine.
Of course, when there is a "check" result that will discard the routine in two lines of code or so, do it at once, no prob, e.g.
else if blahblah ; some condition that makes the rest of the routine pointless
{
    aCertainGlobalVariable = 0 ; e.g. you reset a variable to its default value
    return ; you leave the routine right here
}
else if blahblahblah ; etc., etc.
But if a certain condition will trigger 5, 10 or many more lines of code, perhaps with subroutine calls, returns from there, etc., you should do a GOTO: goto x, goto y, goto z..., i.e. you first combine the elements of the trigger part, and then you combine the elements of the execute part.
"But my prog language does not allow goto's!" (whining, whining)
No prob! That's what variables are for, among other things. (Have a look at the truth table above again, and bear in mind I said it's also great for numerical variables.) So, without goto, have it exactly as stated, but instead of goto x, goto y, goto z, have a variable jumpto (or whatever you name it, but keep it local!), and then write
if blahblah
    jumpto = 1
else if blablabla
    jumpto = 2
etc., etc.
And then, for the execute part, have a similar conditional structure
if jumpto = 1
{
    ; ...your 20 lines of code...
}
else if jumpto = 2
{
    ; ...another 5 lines of code...
}
etc., etc.
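To make that concrete, here's a minimal AHK sketch of such a two-part routine; every name and condition in it (CopyOrMoveFile, sourceFile, targetDir, userWantsMove and so on) is a mere placeholder, invented just to show the pattern:

CopyOrMoveFile:                             ; hypothetical label subroutine
    ; ---- part 1: gathering / deciding ----
    jumpto := 0                             ; default: nothing to do
    if (!FileExist(sourceFile))
        return                              ; trivial check: discard the routine at once
    else if (userWantsMove and targetDirOk)
        jumpto := 1
    else if (userWantsCopy and targetDirOk)
        jumpto := 2
    else
        jumpto := 3                         ; problem case: report only

    ; ---- part 2: executing ----
    if (jumpto = 1)
    {
        FileMove, %sourceFile%, %targetDir%
        ; ...the other 20 lines of the "move" case...
    }
    else if (jumpto = 2)
    {
        FileCopy, %sourceFile%, %targetDir%
        ; ...the 5 lines of the "copy" case...
    }
    else if (jumpto = 3)
        MsgBox, Nothing to be done with %sourceFile%.
return

The point being: part 1 only ever sets jumpto, and part 2 only ever acts on it, so you can extend or rearrange either part without touching the other.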
And bear in mind: if the code of such a part runs to too many lines, making your routine flow over more than 2 pages or so, call subroutines on other "pages" (= when printed out, or other screen "pages" / outliner items).
But now for the "see below" above, re "use subroutines". Well, there could be some subroutines, for code you will use again and again, on many occasions, but which ask for much specific data: If you can use global variables for that data, and/or if that data is within such variables anyway, very good: use the subroutines. But I have had many cases where I would have had to write many lines of data/variables, just for the subroutine to get that data, whilst the subroutine itself, without that part, would only have been 2 or 3 lines of code. In such cases, it's not useful to multiply these subroutines, and this is only logical: Whenever you get many lines out of your routine, by calling a subroutine, do it; but if such a call does not straigthen out your code, don't use a subroutine, and hold your code - repeated or not - together. (Of course, a very good solution to such a prob would be a function, and indeed, you should use functions (instead of subroutines) whenever "applicable", i.e. whenever that's possible.)
A related remark: even when you need code that will NOT be re-used by other routines, write a subroutine nonetheless whenever your routine "becomes too long"; why not replace 10 or 20 lines of code by just two lines, the subroutine call and a comment line? And even if that call needs 5 or 6 lines (because of data to be transferred, i.e. because you now need variables that you would not have needed without breaking up your code): if you can put 20 lines into a subroutine, that will still have "got" you 14 lines less (in our example).
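Just to illustrate, a little AHK sketch of what those two remaining lines (call plus comment) might look like; BuildReportHeader, reportTitle and headerText are made-up names, nothing standard:

; ...inside the long routine, the 20 former inline lines shrink to:
    Gosub, BuildReportHeader                ; fills headerText from the global reportTitle
    ; (the details now live on another "page", far below)

; ...and far below, after the routine:
BuildReportHeader:
    FormatTime, timeStamp,, yyyy-MM-dd HH:mm
    headerText := reportTitle . " - " . timeStamp
    ; ...the remaining former inline lines...
return

(And of course, if the needed data can simply be handed over as a parameter, a function like BuildReportHeader(title) would be the cleaner construction, as said above.)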
Also, don't make the "advanced beginner's" mistake of thinking that a minimum of lines of code "will be best": of course, there are some high-brow algorithms "no one" understands except real professionals, but you see, these have been created by more-or-less geniuses, and then they are used again and again, on multiple occasions, by many programmers, but within a strict application scope, i.e. it is known (from high-brow books) which data to put into them, and what then to expect where as the outcome... those algorithms function as black boxes: no need to try to do the same, and thereby create algorithms that look elegant but then give faulty results... ;-)
Now back to truth tables, with their above-mentioned numeric variants (technically they are not truth tables anymore then, but they are really helpful indeed, whatever you name them, and all you need is one leaf of squared paper from your girl's or boy's exercise book).
In the example I gave above, i.e. several conditions "in", then several distinct procedures "out" in the second "half", well: in real life it's quite a bit more complicated, and that's why I can't see the utility of K-maps here, not even for the "input", i.e. for the "first half" of the task... and in the second part it's the same: programming is all about variants.
Which means: you will not have, as in the linked example, dozens of factors and then just a yes or a no. With many of the main factors/elements there will be secondary, subordinate factors which will NOT influence a true/false outcome as in the linked example, and which will not determine which one of several main "outcomes" in the "execute" part will be triggered; instead they will determine variants WITHIN these main "outcomes". And whilst some of these factors will apply to just one main outcome and trigger a switch within it, other such factors will trigger similar variants within, or FOR (i.e. for execution afterwards, by another "goto" FROM there), SEVERAL such main outcomes, or even for all or most of them.
Now, how to manage such complexity? Very simple: by just "encoding" those variants, within both the first and the second part, in numeric variables, INSTEAD OF CODING the processes right away. First do the right construction; then, in a copy of your code, write the real coding lines - but don't mix up the thinking about structure and the real coding whenever it gets a little bit complicated.
Now, how many such variables? There are, let's say, 4 main outcomes, so var_a will get the value 1, 2, 3 or 4. Then, you will have a certain variant in some cases, wherever that might be, and you do var_b with its values 0,1 (if it's no/yes), or 1, 2, 3... for several possibilities (and by defaulting the value to 0 beforehand), and again with var_c, etc.
So your point of departure, in your code structure, is simply building up that logical structure. Then, "on arrival" (in the execute part), you will not replicate the building-up structure from above, but you will build a logical structure of its own:
if var_a = 1
    ; ...
else if var_a = 2
    ; ...
else if var_a = 3
    ; ...
etc.
and for each if var_a = xyz you think about possible variants, and then you either include them there, or you just "call" them from there; i.e. some of these var_b = xyz checks should not be integrated within the main if's, but should be processed afterwards, meaning you will not leave the else if var_a = ... part with a "leave routine" command (in AHK: return), but with a goto xyz command (or with "nothing" at all if that follow-up code is positioned immediately beneath the main var_a structure), where var_b will then be checked.
And so on, i.e. you will have to understand (and then construct accordingly) that var_b is NOT (necessarily) "subordinated" to the var_a structure (though it may depend on it, i.e. without any "hit" within the var_a structure, var_b may become irrelevant... but not necessarily so); it's just another logical category, different from the "var_a range" (with its respective values). And then, perhaps, var_c is clearly subordinated, logically, to the var_a range, whilst var_d is perhaps subordinated too, but will only apply to 3 of the 5 values in the var_a range, and var_e will only apply in one special var_d case, etc., etc.
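Put into a minimal AHK sketch (the conditions and the "shared variant" are pure placeholders; the point is only to show where var_a and var_b get set and where they get checked):

    ; ---- part 1: gathering ----
    var_a := 0, var_b := 0                  ; defaults: 0 = "not set"
    if (mainCondition1)
        var_a := 1
    else if (mainCondition2)
        var_a := 2
    else if (mainCondition3)
        var_a := 3
    if (someSecondaryCondition)             ; independent of the var_a chain
        var_b := 1

    ; ---- part 2: executing ----
    if (var_a = 1)
    {
        ; ...code for main outcome 1...
    }
    else if (var_a = 2)
    {
        ; ...code for main outcome 2 (no return here: fall through to the var_b check)...
    }
    else if (var_a = 3)
    {
        ; ...code for main outcome 3...
    }

    ; var_b applies to SEVERAL main outcomes, so it is checked just once, down here
    if (var_b = 1)
    {
        ; ...variant code shared by, say, outcomes 2 and 3...
    }
return

Note that the var_b check sits AFTER the whole var_a block, so a variant that applies to several main outcomes is coded just once.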
As with the input structure, write this output structure down on squared paper, in order not to overlook possible combinations... but then, as said, not every possible permutation will make sense, so do your thinking "over" your squared paper, over your "adapted truth tables" (and yes, use several colors here). And then, when you write the code outline (see above), do it strictly from your paper design, and whenever you have doubts about the structure, don't mess up your code, but refer to your paper again: be sure before rearranging code lines.
You might call my style of coding the "variable-driven" style, meaning variable values act as pointers; you multiply such values instead of "doing things", and then you check for these values again... but by this you'll be able to structure your programs' actions in perfect logic, which greatly reduces construction probs. Professionals might have other programming styles, but then, they might even understand Quine-McCluskey: do you? I don't. But so what: we've got a right to write elaborate code, too, don't we? (And yes, doctors hate the web, and no lawyer's happy when you have read some relevant pages before consulting him - it's all about "expertise" and laymen challenging it.)
And finally, in languages like AHK you can then even replace some of the guide variables with goto's again (in order to save the if/else if lines "on arrival"); and no, don't call execute-part subroutines from the first, the "gathering" part: prefer to write some unnecessary lines, and then put those calls deep into the second part, precisely at the position there, be it deep down, where that call logically belongs:
Program in perfect, understandable, VISUAL LOGIC, even if that means lots of "unnecessary" lines.
And yes, there might even be 1,000 programmers worldwide who really need Gorgeous Karnaugh (plus legions of electrical engineers), but for the rest of us it's the same problem as with the Warnier system: it cannot guide us all the way. (And yes, I know you can apply the K-map to conditional structures, to multiple else-if's, etc., but that doesn't resolve the inherent prob: it's confined to a minute part of the structural prob; as said, similar to Warnier: it's a step beyond the chaos it left behind, but then you'll become aware of its limitations.) And yes, try Venn diagrams if you like them visually; I prefer squared paper, and then the combination of a "checklist" with an outline, and then "manual thinking work upon that". (And yes, you should keep and reference your paperwork.)
Even professionals who laugh about such structural devices should consider the possibility that their customer, in some years, will have to put lesser people on the code they will have left behind for them to understand and cope with. It is evident that a more "primitive" style, but one that is highly recognizable in its actions, will then be preferred both by the customer and by his poor coder-for-rent. And yes, I know I have explained my style of procedural scripting here; object orientation is something else yet.
EDIT:
I forgot, above: you can further "simplify" your variable encoding (and shorten your code), whenever some var_x is really and unequivocally subordinated to another variable, or to just one or some value(s) of a specific other, by inserting intermediate values into the regular values of the "priority variable"; in some cases this is even useful - but in most, it's not...
Example: var_a can have the values 1, 2, 3, normally. But in its value-2 case, there is a var_k with values 0 and 1, or a var_m with values 1, 2 and 3. Now you can change the values of var_a to 1, 2/3, 4 instead: 2 being original-2 with var_k = 0, and 3 being original-2 with var_k = 1, 4 being original-3 of course; or a range of 1-5 in the var_m case, with 2/3/4 standing for original-2 with var_m = 1, 2, 3.
As you can see, this structure flattens out your if / else if (in your routine, you would ask for if var_a = 1, else if var_a = 2 or 3 (and then, within that code, if var_a = 2 / else)), but it also complicates the logical structure, so in most cases (where originally I happily used it again and again) I would never touch this anymore.
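In AHK, the flattened version of the var_k example would then look roughly like this (placeholder comments instead of real code, of course):

    if (var_a = 1)
    {
        ; ...old case 1...
    }
    else if (var_a = 2 or var_a = 3)
    {
        ; ...code common to the old case 2...
        if (var_a = 2)
        {
            ; ...old case 2 with var_k = 0...
        }
        else
        {
            ; ...old case 2 with var_k = 1...
        }
    }
    else if (var_a = 4)
    {
        ; ...old case 3...
    }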
On the other hand, there are specific cases where I really love this encoding, and where I use it even today, to best effect, and without it getting my structural thinking mixed up: it's where two factors are deeply "interwoven", to the effect that neither of them is "superior" to the other, and where, in an outline, I would have a hard time deciding whether elements of kind 1 should be the (multiple) parents with elements of kind 2 as their children, or the other way round.
Here, I systematically do just ONE var_a, and then the odd numbers (1, 3, 5...) are aspect 1 without aspect 2, and the even numbers are the same aspect-1 value but with aspect 2 present, too; but this is a valid construction, I think, only when aspect 2 is a mere toggle: to have ranges 1,2,3, then 4,5,6, then 7,8,9 would again be chaotic.
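A tiny AHK sketch of how such an odd/even var_a can then be "decoded" in the execute part (again, all placeholders; Mod and Ceil are AHK built-ins):

    hasAspect2 := !Mod(var_a, 2)            ; even value -> the aspect-2 toggle is on
    mainCase := Ceil(var_a / 2)             ; 1,2 -> 1 ; 3,4 -> 2 ; 5,6 -> 3 ...

    if (mainCase = 1)
    {
        ; ...code for aspect-1 case 1...
        if (hasAspect2)
        {
            ; ...the toggle variant inside this case...
        }
    }
    else if (mainCase = 2)
    {
        ; ...code for aspect-1 case 2, again with an optional hasAspect2 branch...
    }

So the even value never gets a branch structure of its own; it just switches the toggle variant on inside the corresponding odd case.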
But as said, even today I love such structures of 1,2, then 3,4, then 5,6..., whilst I would not redo those above-mentioned "truth tables" with entries of 8, 9 or higher, generated by my previous, too excessive combining of several such variables into just one and then counting their values higher and higher.
As said, "more elegant" style can be less readable, and "multiplication" of such "pointer variables" might not be high-brow, but assures perfect readability, so today I do it in a more "primitive" way than earlier, and since this is another one of those multiple, counterproductive "I'll do it in a more sophistaced way" traps, it's worth mentioning that I today, from experience, refrain from it, except in those even-odd cases, and even those are debatable.
Oh, and I forgot some explanation: in the course of my programming and scripting, I discovered that (always in non-evident, i.e. somewhat longer routines) the "gathering" stage ("what is the "is" state? if, else, multiple else if's") usually has a "logic" that is totally different from the "natural execute logic" thereafter ("do this if, do that if..."), and my above-described system completely CUTS those two logical structures apart; in other words, it enables you to build the "ToDo" structure as naturally as possible, without undue consideration for the status quo.
So my system - other systems might do the same, so we're not speaking about superiority over other ways of coding, just about superiority over spontaneous coding - puts a level of ABSTRACTION between the "real conditions" and the execute part, with its "procedure conditions", which in most cases do not have that much (if anything) to do with the former. Whilst constructs like "if x, do y" (= "all in one") mix it all up.
And perhaps I should indicate more clearly what the "status quo", the "gathering", is. In fact, it comprises the "what do we want to do?" part, the DECISIONAL part, but that - and that's the "news" here if I dare say so, "news" for beginners in programming/scripting at least - is NOT identical with the "how is it to be done" part, and that second part will then have a logical structure all of its own; hence the "necessity" for abstraction between the two, and a real necessity, without any quotes, whenever you aim at producing highly readable code... whatever means you apply to that aim; mine, described here, is just one of several ways to realize that necessary abstraction.
And also, I forgot to specify that in fact the "check list" is about "what is, what should be done" = part 1, and the logical structure (the more or less "truth table") which is checked against that checklist is, in most cases, part 2 = "which way do we do it"; in complicated cases, we need that, in a formalized manner, for both parts. In real life you'll not do it that formally in most instances, and even when it is necessary, you'll do it just for the parts that really need observation and a thorough check of whether every possibility has been dealt with. The "message" in my description is: keep the "task" (together with the analysis of what the task will be) separated from the execution of the task, and in most cases this means multiple crossings of the "lines" from a certain element in part 1 to the "treatment", the "addressing", of that very same element in part 2 (where in most cases it has become something very different anyway). In other words: the logical grouping in part 1 is very different from the "steps to be done" grouping in part 2, or at least it should be: hence the necessity to build up TWO logical structures for just one (compound) task, and to coordinate them, without making logical errors and without leaving "blanks" = dead ends = cases not treated. And in this coordination work, squared paper helps enormously, whilst "input simplification tools" are not really helpful in most cases, since they counteract your need to derive the variants in part 2 from the variants in part 1: you will need those variants there again (but in other constellations), instead of having them straightened out in between. In this respect, it's of interest to know that K-maps are the tool of choice for processing signals in alarm systems and the like, since there, multiple combinations of different signals trigger standardized reactions = the transmission of other, "higher-level" signals, but only in combinations; and this is a task quite different from traditional programming, where we don't have "complicated input, then standardized output", but "complicated input, and even more complicated output".