Author Topic: Gorgeous Karnaugh Review; How To Write Code As I See It  (Read 8952 times)

peter.s

  • Participant
  • Joined in 2013
  • Posts: 116
Gorgeous Karnaugh Review; How To Write Code As I See It
« on: March 14, 2014, 03:50 PM »
This is part of my AHK tutorial here,

https://www.donation...ex.php?topic=34948.0

but since Gorgeous Karnaugh (GK) will be on Bits du Jour in a few days, I searched for a review of the tool and found none. Even here, the search term "karnaugh" only brings up "You may have meant to search for Karna." So here it is - but please do NOT read on if you think it is a crime to treat two related subjects in one post.

Now, have a look at the Wikipedia article on the Karnaugh map: http://en.wikipedia....rg/wiki/Karnaugh_map

Then follow some of its links, starting with the overview, of course:

http://en.wikipedia....olean_algebra_topics

But especially the one on Venn diagrams, since in many cases that is the really useful thing here:

http://en.wikipedia....rg/wiki/Venn_diagram

You can draw these (and also 0/1 = yes/no tables, the so-called "truth tables"; variants of them are also very useful for numerical variables, see below) on squared paper...

Following the link to the Quine-McCluskey algorithm is instructive too,

http://en.wikipedia....3McCluskey_algorithm

since that is the more specific thing for programmers/scripters, but we will come back to the K-map's more general (or, more precisely, rather deviant) scope in a moment.

First, have a look here, http://gorgeous-karn...ugh-for-programmers/

and try to understand the examples the developer gives there. (It is understood that professional programmers will look upon all my posts with deep repulsion, but we poor non-professionals have to find ways to get by too, and that is why I think my posts can be helpful; so my invitation to "try to understand" addresses people like myself.)

You will see that the K-map enormously facilitates combining conditions - but conditions only, and that is the problem for programmers/scripters.

So have a look at the "Gorgeous" developer's site in general and you will quickly see that the K-map was not only invented for electrical engineers (circuitry), but that this is also the department where it clearly excels.

Now, the problem is that K-maps greatly refine the input side - but what about the output? It is not by accident that the "Gorgeous" developer built his examples (on the page linked above) around a simple true/false = if/else structure, and here we are back at my second subject. (I have written some 70,000 or more lines of code, so I have had ample occasion to make architectural and construction mistakes, and then ample occasion to amend them.)

In many "longer" routines (i.e. spanning over 1 or 2 pages; remind yourself: you should do a sensible max of subroutines in order to not repeat code, but see below), the first part is the "gathering" part, the second part being the "executive" one. In reality, that's not entirely true, but in so many cases, that first part has a very evident penchant to gathering data and to making decisions, whilst the second part more or less "DOES DO things", and that's why you should not totally mix up these more or less "natural" parts of a routine.

Of course, when there is a "check" result that will discard the routine, more or less, in two lines of code, do it at once, no prob: e.g.

else if (blahblah)
{
    aCertainGlobalVariable := 0   ; e.g. reset a global variable to its default value
    return                        ; leave the routine
}
else if (blahblahblah)            ; etc., etc.

But if a certain condition will trigger 5, 10 or many more lines of code, perhaps with subroutine calls, returns from there, etc., you should use a GOTO: goto x, goto y, goto z..., i.e. you group the elements of the trigger part together, and then you group the elements of the execute part together.
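For instance, a minimal AHK sketch (my own example; the label names, the fileCount variable and the MsgBox contents are placeholders): the "gathering" part decides once and jumps, and each "execute" block ends with return so it cannot fall through into the next one.

fileCount := 2                  ; placeholder input
Gosub, GatherAndExecute
return

GatherAndExecute:
    if (fileCount = 0)
        goto, HandleEmpty
    if (fileCount = 1)
        goto, HandleSingle
    goto, HandleMany

HandleEmpty:
    MsgBox, Nothing to do.
return

HandleSingle:
    MsgBox, Processing one file...
return

HandleMany:
    MsgBox, Processing %fileCount% files...
return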

"But my prog language does not allow goto's!" (whining, whining)

No prob! That's what are variables for, among other things. (Have a look at the truth table above again, and bear in mind I said it's also great for numerical variables.) So, without goto, have it exactly as stated, but instead of goto x, goto y, goto z, have a variable jumpto (or whatever you name it, but have it local!), and then write

if blahblah
   jumpto = 1
else if blablabla
   jumpto = 2
etc., etc.

Then, for the execute part, have a similar conditional structure

if jumpto = 1
   your 20 lines of code
else if jumpto = 2
   another 5 lines of code
etc., etc.
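Putting both halves together, a minimal runnable AHK sketch (my own example; the condition names and messages are placeholders). Inside a function the pointer variable really is local, as recommended above:

ProcessSelection(0, "txt")                ; usage example: "nothing selected"
ProcessSelection(3, "jpg")                ; usage example: "an image"
return

ProcessSelection(selectionCount, fileType) {
    jumpto := 0                           ; default: nothing matched
    ; part 1: gathering - only set the pointer variable, do no work yet
    if (selectionCount = 0)
        jumpto := 1
    else if (fileType = "txt")
        jumpto := 2
    else if (fileType = "jpg")
        jumpto := 3

    ; part 2: execute - one block per pointer value
    if (jumpto = 1)
        MsgBox, Nothing selected.
    else if (jumpto = 2)
        MsgBox, Here would go your 20 lines for text files.
    else if (jumpto = 3)
        MsgBox, Here would go your 5 lines for images.
}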

And bear in mind: if the code of such a part runs to too many lines, making your routine flow over more than two pages or so, call subroutines that live on other "pages" (pages when printed out, or other screen "pages" / outliner items).

But now for the "see below" above, regarding "use subroutines". Well, there could be some subroutines for code you will use again and again, on many occasions, but which need a lot of specific data. If you can use global variables for that data, and/or if that data is in such variables anyway, very good: use the subroutines. But I have had many cases where I would have had to write many lines of data/variables just for the subroutine to get that data, whilst the subroutine itself, without that part, would only have been 2 or 3 lines of code. In such cases it is not useful to multiply these subroutines, and this is only logical: whenever calling a subroutine takes many lines out of your routine, do it; but if such a call does not straighten out your code, don't use a subroutine, and keep your code - repeated or not - together. (Of course, a very good solution to such a problem is a function, and indeed you should use functions instead of subroutines whenever that is possible.)

A related remark: even when you need code that will NOT be reused by other routines, write a subroutine anyway whenever your routine becomes too long. Why not replace 10 or 20 lines of code by just two lines, the subroutine call and a comment line? Even if that call needs 5 or 6 lines (because of data to be transferred, i.e. because you now need variables you would not have needed without breaking up your code): if you can move 20 lines into a subroutine, you will still have "gained" 14 lines (in our example).

Also, don't make the "advanced beginner's" mistake of thinking that a minimum number of lines of code "will be best". Of course there are some high-brow algorithms "no one" understands except real professionals, but those were created by more-or-less geniuses and are then used again and again, on multiple occasions, by many programmers, within a strict application scope, i.e. it is known (from high-brow books) how to put which data into them and what to expect where as the outcome - they function as black boxes. There is no need to try to do the same, and thereby to create algorithms that look elegant but then give faulty results... ;-)

Now back to truth tables, with their above-mentioned numeric variants (technically they are not truth tables anymore, but they are really helpful whatever you name them, and all you need is one sheet of squared paper from your kid's exercise book).

In the example I gave above - several conditions "in", then several distinct procedures "out" in the second "half" - real life is quite a bit more complicated, and that is why I cannot see the utility of K-maps here, not even for the "input", i.e. the "first half" of the task; and in the second part it is the same: programming is all about variants.

Which means: you will not have, as in the link above, dozens of factors and then a single yes or no. Instead, many of the main factors/elements will come with secondary, subordinate factors. These will NOT influence a true/false outcome as in the linked example, and they will not determine which of the several main "outcomes" in the "execute" part gets triggered; they will determine variants WITHIN those main outcomes. Some of these factors apply to just one main outcome and trigger a switch in there, whilst others trigger similar variants within - or FOR (i.e. executed afterwards, via another "goto" FROM there) - SEVERAL such main outcomes, or even most or all of them.

Now, how to manage such complexity? Very simply: by "encoding" those variants, in both the first and the second part, as numeric variables INSTEAD OF CODING the processes right away. First get the construction right; then, in a copy of your code, write the real code lines - but don't mix up thinking about structure with the actual coding whenever it gets a little bit complicated.

Now, how many such variables? Say there are 4 main outcomes, so var_a gets the value 1, 2, 3 or 4. Then you will have a certain variant in some cases, wherever that may be, so you add a var_b with values 0/1 (if it is a no/yes toggle) or 1, 2, 3... for several possibilities (defaulting the value to 0 beforehand), and again with var_c, etc.

So your point of departure, in the code, is simply building up that logical structure. Then, "on arrival", you do not replicate the gathering structure from above; instead you build a second logical structure

if var_a = a
else if var_a = b
else if var_a = c
etc.

and for each if var_a = xyz you think about possible variants, and then you either include them there or just "call" them there. That is, some of those var_b checks should not be integrated within the main ifs but should be processed afterwards: you do not leave the else if var_a = c branch with a "leave routine" command (in AHK: return) but with a goto xyz command (or with nothing at all, if the goto target sits immediately beneath the main var_a structure), and there var_b is then checked.
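A sketch of that "arrival" structure in AHK (my own example; variable names and values are placeholders): var_a selects the main outcome, while var_b encodes a variant that is checked after the main structure rather than inside each branch:

HandleRequest:
    ; part 2 (gathering) has already set var_a (1..3) and var_b (0 = default)
    if (var_a = 1)
    {
        ; main outcome 1 ...
    }
    else if (var_a = 2)
    {
        ; main outcome 2 ...
        return                      ; this outcome never uses the var_b variant
    }
    else if (var_a = 3)
    {
        ; main outcome 3; no return, so the var_b check below still runs
    }

    if (var_b = 1)
    {
        ; variant shared by several of the main outcomes ...
    }
return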

And so on: you have to understand (and then construct accordingly) that var_b is NOT necessarily "subordinate to" the var_a structure - it may depend on it, i.e. without any "hit" within the var_a structure var_b may become irrelevant, but not necessarily so. It is just another logical category, different from the "var_a range" (with its respective values). And then, perhaps, var_c is clearly subordinate, logically, to the var_a range, whilst var_d is subordinate too but only applies to 3 of the 5 values in the var_a range, and var_e only applies in one special var_d case, and so on.

As with the input structure, write this output structure down on squared paper, in order not to overlook possible combinations. But, as said, not every possible permutation will make sense, so do your thinking "over" your squared paper, over your "adapted truth tables" (and yes, use several colours here). Then, when you write the code outline (see above), do it strictly from your paper design, and whenever you have doubts about the structure, don't mess up your code, but go back to the paper: be sure before rearranging code lines.

You might call my style of coding the "variable-driven" style, meaning that variable values act as pointers: you multiply such values instead of "doing things", and later you check for those values again. But by this you will be able to structure your program's actions in perfect logic, which greatly reduces construction problems. Professionals may have other programming styles, but then they might even understand Quine-McCluskey - do you? I don't. But so what: we have a right to write elaborate code too, don't we? (And yes, doctors hate the web, and no lawyer is happy when you have read some relevant pages before consulting him - it is all about "expertise" and laymen challenging it.)

And finally, in languages like AHK you can even replace some of the guide variables with gotos again before writing (in order to save the if/else-if lines "on arrival"). And no, don't call execute-part subroutines from the first, "gathering" part: prefer to write some unnecessary lines, and put those calls deep into the second part, precisely at the position there - however deep down - where the call logically belongs:

Program in perfect, understandable, VISUAL LOGIC, even if that means lots of "unnecessary" lines.

And yes, there might even be 1,000 programmers worldwide who really need Gorgeous Karnaugh (plus legions of electrical engineers), but for the rest of us it is the same problem as with the Warnier system: it cannot guide us all the way. (And yes, I know you can apply the K-map to conditional structures, to multiple else-ifs, etc., but that does not resolve the inherent problem: it is confined to a minute part of the structural problem. As said, similar to Warnier: it is a step beyond the chaos it leaves behind, but then you become aware of its limitations.) And yes, try Venn diagrams if you like them visually; I prefer squared paper, and then the combination of a "checklist" with an outline, and then "manual thinking work upon that". (And yes, you should keep and reference your paperwork.)

Even professionals who laugh at such structural devices should consider the possibility that their customer, in a few years, will have to put lesser people onto the code they leave behind, people who then have to understand it and cope with it. It is evident that a more "primitive" style whose actions are highly recognizable will be preferred both by the customer and by his poor coder-for-rent. And yes, I know I have explained my style of procedural scripting here; object orientation is something else again.


EDIT:

I forgot, above: you can further "simplify" your variable encoding (and shorten your code) whenever var_x is really and unequivocally subordinate to some other variable, or to just one or a few values of a specific other variable, by folding it into intermediate values of that "priority variable". In some cases this is even useful - but in most it is not...

Example: var_a normally has the values 1, 2 and 3. But in its value-2 case there is a var_k with values 0 and 1, or a var_m with values 1, 2 and 3. Now you can give var_a the values 1, 2, 3, 4 instead: 2 being the original 2 with var_k = 0, 3 being the original 2 with var_k = 1, and 4 being the original 3, of course; or 1 to 5 if you fold in var_m's values 1, 2, 3 instead.

As you can see, this structure flattens out your if / else if (in your routine you would ask for if var_a = 1, else if var_a = 2 or 3, and then, within that code, if var_a = 2 / else), but it also complicates the logical structure, so for most cases (where originally I happily used it again and again) I would never touch it anymore.
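A small AHK sketch of that packing (my own numbers, matching the example above): var_a was 1..3 and var_k a 0/1 toggle that only matters when var_a = 2; var_k is folded into var_a, and the old 3 becomes 4:

; encode (done once, in the gathering part)
if (var_a = 2)
    var_a := 2 + var_k              ; 2 stays 2 (var_k = 0) or becomes 3 (var_k = 1)
else if (var_a = 3)
    var_a := 4                      ; shift the old top value up

; decode (in the execute part) - the if / else if is "flattened"
if (var_a = 1)
{
    ; old case 1 ...
}
else if (var_a = 2 or var_a = 3)
{
    ; old case 2 ...
    if (var_a = 3)
    {
        ; ... plus the var_k variant
    }
}
else if (var_a = 4)
{
    ; old case 3 ...
}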

On the other hand, there are specific cases where I really love this encoding and use it even today, to best effect and without it mixing up my structural thinking: namely where two factors are so deeply "interwoven" that neither is "superior" to the other, and where, in an outline, I would have a hard time deciding whether element-of-kind-1 should be the multiple parents with element-of-kind-2 as their child, or the other way round.

Here I systematically use just ONE var_a, where the odd numbers (1, 3, 5...) are aspect 1 without aspect 2, and the even numbers are aspect 1 (with some value) with aspect 2 present too. But this is only a valid construction, I think, when aspect 2 is a mere toggle: ranges of 1,2,3, then 4,5,6, then 7,8,9 would again be chaotic.
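In AHK, that odd/even packing could look like this (my own formula; aspect1 is 1, 2, 3... and aspect2 is a 0/1 toggle):

aspect1 := 2
aspect2 := 1
var_a := 2 * aspect1 - 1 + aspect2    ; encode: odd = without aspect 2, even = with it

aspect1 := Ceil(var_a / 2)            ; decode the main aspect
aspect2 := !Mod(var_a, 2)             ; 1 if var_a is even, i.e. aspect 2 present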

But as said, even today I love such structures of 1,2, then 3,4, then 5,6..., whilst I would not redo those above-mentioned "truth tables" with entries of 8, 9 or higher, generated by my earlier, too excessive combining of several such variables into just one and then counting their values higher and higher.

As said, a "more elegant" style can be less readable, and "multiplying" such "pointer variables" might not be high-brow, but it assures perfect readability. So today I do it in a more "primitive" way than before, and since this is another one of those multiple, counterproductive "I'll do it in a more sophisticated way" traps, it is worth mentioning that today, from experience, I refrain from it, except in those even/odd cases - and even those are debatable.


Oh, and I forgot some explanation: in the course of my programming and scripting I discovered that (always in non-trivial, i.e. somewhat longer routines) the "gathering" stage ("what is the current state? if, else, multiple else-ifs") usually has a "logic" that is totally different from the "natural execute logic" that follows ("do this if..., do that if..."), and my above-described system completely SEPARATES those two logical structures; in other words, it lets you build the "to do" structure as naturally as possible, without undue consideration for the status quo.

So my system - other systems might do the same, so we are not speaking about superiority over other ways of coding, just over spontaneous coding - puts a level of ABSTRACTION between the "real conditions" and the execute part with its "procedure conditions", which in most cases have little (if anything) to do with the former. Constructs like "if x, do y" (all in one) mix it all up.


And perhaps I should indicate more clearly what the "status quo" and the "gathering" are. In fact, they comprise the "what do we want to do?" part, the DECISIONAL part, but - and that is the "news" here, if I dare say so, news for beginners in programming/scripting at least - that is NOT identical to the "how is it to be done?" part, which has a logical structure all of its own. Hence the "necessity" of abstraction between the two - a real necessity, without any quotes, whenever you aim at producing highly readable code, whatever means you apply to that aim; mine, described here, is just one of several ways to realize that necessary abstraction.


I also forgot to specify that the "check list" is in fact about part 1, "what is the case, what should be done", while the logical structure (the more or less formal "truth table") that is checked against the checklist is, in most cases, about part 2, "which way do we do it"; in complicated cases you need it, in a formalized manner, for both parts. In real life you will not formalize it that much in most instances, and even where it is necessary, you will do it only for the parts that really need observation and a thorough check that every possibility has been dealt with.

The "message" in my description is: keep the "task" (together with the analysis of what the task will be) separated from the execution of the task. In most cases this means multiple crossings of the "lines" from a certain element in part 1 to the "treatment", the "addressing", of that same element in part 2 (where in most cases it has become something quite different anyway). In other words, the logical grouping in part 1 is very different from the "steps to be done" grouping in part 2, or at least it should be: hence the need to build up TWO logical structures for just one (compound) task, and to coordinate them without making logical errors and without leaving "blanks" = dead ends = cases not handled.

In this coordination work squared paper helps enormously, whilst "input simplification tools" are not really helpful in most cases, since they counteract your need to derive the variants in part 2 from the variants in part 1: you will need those variants there again (but in other constellations), instead of having them straightened out in between. In this respect it is of interest that K-maps are the tool of choice for processing signals in alarm systems and the like, where multiple combinations of different signals trigger standardized reactions, i.e. the transmission of other, "higher-level" signals, but only in combinations - a task quite different from traditional programming, where we do not have "complicated input, then standardized output" but "complicated input, and even more complicated output".
When the wise points to the moon, the moron just looks at his pointer. China.
« Last Edit: March 14, 2014, 06:10 PM by peter.s »

Curt

  • Supporting Member
  • Joined in 2006
  • Posts: 7,566
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #1 on: March 15, 2014, 06:28 AM »
I guess peter.s's post was inspired by Gorgeous Karnaugh being 70% off on Bits du Jour today. Standard: $21 ($70), or Professional: $45 ($150). http://www.bitsdujou...in=todays-deals-home. I read about it, but didn't understand a word of it.


peter.s

  • Participant
  • Joined in 2013
  • Posts: 116
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #2 on: May 20, 2014, 09:47 AM »
It is understood that this thread, as always, addresses non-programmers and non-professionals...

Elsewhere I mentioned Warnier (for details, just search for "warnier" either in this forum or on outlinersoftware.com), who in the Seventies did something revolutionary for mainframe programming, which at the time was done more or less in spaghetti-code style - a style that kept code reusability an unknown concept and hampered code adjustments and code maintenance to the point of unbearability, or even impossibility.

Hence his very strict (and very much centred on mainframes and their then-typical output), graphically horizontal tree structure for process/logical flow. Until some time ago there was even one piece of software left for doing this on screen, "b-liner 6" (there had never been versions 2 to 5, at least; b-liner.com), for $90, but when I wanted to check the current price today, I got a "Currently Not Available" instead. Well, it was buggy (and development had stalled long before), but it was graphically very pleasant...

Now, even in the Seventies, programming quickly became much more sophisticated than the Warnier paradigm could reasonably handle. As indicated above, in an elaborate application there are several such flows that will not make their voyage together; at the very least you get a logic flow and an information flow that differ. Hence the need, in non-professional coding environments, to do some heavy manual work here, such as extensive, manually maintained lists of triggers, triggered elements (in, out) and variables (ditto: "gets var1 from system, updated by trigger routine xyz", "updates var2 for routine abc, updates var3 which is then possibly checked by routines c and d", etc., etc.).

Now, manual maintenance of such lists is a lot of (error-prone) work, but you can at least simplify it by using an editor whose search function shows results in a hit table (and which offers code folding, of course), or, much much better, by doing your programming/scripting within an outliner that offers such a hit table for search results too (and where the hit table indicates the name of the item = routine of the occurrence). Fortunately there are some such outliners, among which RightNote stands out because even its FREE version offers this feature and allows rtf formatting of text - of which you should make very ample use, at least during the construction of your code body. So people who are not (yet) into outliners for almost all their work can start programming/scripting in this free outliner, independently of possibly switching to some outliner for general work later on.

You can also do your manual logic- and info-flow checks on paper, with printouts and coloured pencils, either on the original data or (and especially) on those hit lists. Many programs will not let you export or print their search hit lists: just take a screenshot then, and work with coloured pencils on the printed screenshots.

Of course, before doing this "macro comparison", i.e. inter-routine ("are my indications correct, checking alleged sources and targets?"), you have to do your "micro comparison", i.e. intra-routine: the (full) routine versus your manually created "header" lists there. Here you will preferably work with printouts and coloured pencils, but if you insist on using a hit table instead, you can do even this in any outliner offering "search... just in the selected item and its children" (with a hit table, to begin with, naturally). For such a setup you simply divide your item into the header and then the real code, and in the tree it would be:
; H some routine
   ; C some routine
; H next routine
   ; C next routine
H meaning "header", and "C" meaning "code", or whatever you choose for differentiating them:

Then you select "; H some routine", you search (e.g. for variables, but also for GOSUBs, etc.), and voilà: you get your comparison between your initialisations etc. and your real use of things.

It goes without saying that in programming languages where declaration/initialisation of variables is mandatory, and where there is a clear distinction between local and global variables (some of today's ace programmers will even tell you that global variables are to be avoided, but that is a whole other story/world, so don't let that fool you...), both comparisons, intra and inter, are greatly facilitated. But even there, DO THOSE COMPARISONS, as thoroughly as is needed, i.e. down to the last detail (or buy some really professional coding environment - but unfortunately I don't know any that lets me work in such an outlined structure and does all the above-described tasks for me on request, so input from fellow posters would be more than welcome here, and yes, it is understood that such software would be in the 4-digit price range).

As you see, this is quite an overwhelming task (in a good outliner it is at least doable, whilst in b-liner, for example, it is technically doable too, but accessing all the elements that have to be compared would be a strenuous nightmare - and yes, there are people who get killed in their sleep, precisely by their nightmares!), so most of you will try to avoid it: DO NOT!

AND HAVE PATIENCE IN WRITING. What does that mean? Do not write multiple tiny pieces of code and then be eager to see whether they work. Instead, try to build up the architecture of all those interlocking elements, i.e. build a tree structure and write lots of pseudo-code, interwoven with real code wherever real code is at your fingertips, i.e. where you both know how to write it and can write it down quite quickly - perhaps leaving out the necessary variables and such for now...

But whenever you write such "primal code" (i.e. a chaotic mixture of pseudocode, real code and, especially, as many (provisional!) notes as you will possibly need later on in order to "complete" your code), and it occurs to you that your "code" is not complete "here" yet, make a note of what is missing, what else to consider, what might be included, whatever:

BE AS COMPLETE AS POSSIBLE in your considerations and in your coding. That is, refrain from following what "they" say: DON'T be concise, but "put it all in". Take as your example those novelists who write 1,500 pages or more before condensing it all into the 350 pages that then, if they are very lucky (or have already established their renown), become the bestseller you (or your spouse) will rave about.

In other words, THINK your architecture/code by writing it! (But if you try that in an editor instead of an outliner, you will probably get lost - or your editor had better have really good outlining functionality!) That is, build a simili-Warnier construct, but vertically instead of horizontally, and with immediate access to any "code pieces and other explanations to yourself" in the right pane... and without considering the very first node in your tree as the logical source of every tree you might create downwards (and, of course, without cutting your code into TOO many parts: as said above, whatever code is less than one printout page - provided it is a cohesive logical structure - does not need to be artificially cut up into a dozen or so Warnier elements).

Also, and especially in light of the above - big header structures in "comment" format are so important for being sure that every "flow" in your application is correct (and yes, I should have said it above, so I say it now: all that intra/micro comparing of code vs. comments, and then inter/macro, header comments vs. header comments elsewhere, will INCREDIBLY reduce your debug time, not to speak of your children maintaining papa's code some day) -

there is no need to artificially cut code up into subroutines (e.g. in AHK: GOSUB, nameofsubroutine), together with "endless" header comments there again, when you don't need to access that code from elsewhere (though you should ask yourself whether doing so wouldn't be an alternative for further optimising your application's functionality, among other considerations!):

Instead of cutting code that is "too big for one page" into several routines, cut it up into several "simili-self-contained" pages, e.g. within your tree:
; routine (header including comments)
   ; if abc (code belonging to the if's in the next item (why not?!), but then with some 10 or 30 lines of its own)
   ; if d (further on in that "broken-up-between-pages" if structure)
      ; if a (why not? even a sub-structure here!:)
      ; else (and the code for the else branch)
   ; if ... (= end of above if structure, with perhaps 3 or 5 more)
   ; and some more
; here only, some other routine

So sometimes you will get "really big packages" which are quite homogeneous - not in structure, but in what they are deemed to "be" (not even necessarily in their elements' respective outcome/output) - and whose "output" elements are never "accessed from the outside"; in other words, you get code structures that are, in a way, quite "final".

In such instances it would be quite ridiculous to artificially observe some "one page, one routine" rule; you are well advised to let such a routine flow over 3, 5 or 10 pages sometimes, without endlessly and partly replicating header structures (which would create inconsistencies if these were independent routines!), on condition, of course, that within such a several-page code structure you can cut your code into logically clearly distinct parts (even with "continuation sections", as in the outer if structure in the example above). Whatever is visually easy to understand is perfectly acceptable as code structure, as long as it is "self-contained" (no access from the outside).

So please take this part of my post as a correction of my post above: I write "from memory" here, and in the earlier post, observance of the "one page, one routine" rule made me ignore my own experience with those perfectly acceptable (and perfectly maintainable), sometimes much more extensive routines. Atomize - but where it makes sense, not where it doesn't!

To resume what I have said above: have patience, i.e. let your new code grow (be it several routines or one big routine) for days (or more, if you only work on it 2 hours a day) without trying to run the code or parts of it. If there are several locations where you expect the same or similar difficulties, just say so in your comments, but in the meantime "fill up" and complete your whole structure as best you can: write 5, 10 or 15 pages of code, trying to keep the "big picture" in scope, and iteratively switch back and forth between construction questions and the details within parts of the construction.

This way your construction will grow to the point of being "acceptable" (further optimization, or even later rearrangements, may be necessary AFTER "trying it out", i.e. after debugging starts, but most rearrangements will already have been done here, before "starting to try"), and at the same time every part of it will grow to the point where it BECOMES reasonable to complete the real code everywhere, i.e. to replace pseudocode and comments whenever you feel a part is quite "final". Writing too many lines of real code too early in this process means both many lines of code for the bin AND an annoying retardation of the structure...

whilst any attempt to perfect the structure without code will (except perhaps for some ace programmers) result in a more or less faulty structure, since it is also while writing code (or pseudocode quite near to real code) that it occurs to you that an alternative structure would give easier / better / more accessible / or even just possible-instead-of-impossible code. In many instances, alternative ways of structuring the code and alternative ways of writing its details are so interdependent that you should find a way of doing both, architecture AND the start of the finishing work, concurrently.

(Instruments like UML try to facilitate this, but especially for UML (for which there are several free software offerings) I personally find the graphic representation of most of its diagram types the worst, most non-intuitive and least immediately comprehensible I have ever encountered anywhere - UML is a nightmare imo; and then it is not even smart enough at combining different overlaid structures, doing them instead in (much too) separate views. It is just that (paid) UML software obviously comes with some very welcome automation (see above)...)

Hence: Start from a tree, or from several trees one below the other(s).
Fill the content fields with some content (comments, the bigger part of which will be deleted as you work).
Create new branches / subbranches (name the routines you will create, add new comments).
Do some code here and there, enough to discover the structure.
Make ample use of rtf formatting for everything you do (one of the BIG advantages over editors!).
Create new branches etc. whenever you deem it "necessary for perfect clarity" to separate (future) code from other code parts.
Rearrange branches / subbranches / code pieces within content fields.
Fill up the content fields as much as necessary in order to get it "complete".

Then: revise your pages one by one, checking whether the CODE is complete there (and whether by now you have formatted any comment that will, for the time being at least, stay there as a comment), and replace pseudocode in several such pages in one go if up to then you had written similar code parts as pseudocode/comments instead of real code (e.g. because of "obscure" / difficult commands you first had to look up).

Then: Do the above comparison work: headers vs. bodies, then headers vs. headers.
Check logical flow.
Check info flow. Check variable names and variable contents. Check (AHK!) whether here and there you have mixed up a variable's name and its content.

THEN try to compile, even if it is a month later. NO! First (ok, you could have done this earlier, but don't do it before your code is "semi-final"), put multiple numbered MsgBox lines into your code, in the form

MsgBox, 1: variablename _%variablename%_`n_%anothervariablename%_

(the backtick plus n is a newline; the underscores make sure you notice possible leading/trailing spaces; and since you will see the variable names in your printouts, you can also leave them out of the message boxes):

MsgBox, 54: _%var1%_`n_%var2%_

Multiple such MsgBoxes, I said, often with braces and a second line, return, and most often with the real line - the one to be executed - commented out. Then print all your pages out and, in the next step, check the results one message box at a time: is the variable shown in MsgBox 28 the one you expected there? (If you don't number the message boxes as described, you will not know WHERE in your code you got the right or wrong variable values.)
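For instance (a sketch; the variable names and the FileCopy line are my own, hypothetical example), a numbered trace box with the real command still commented out:

sourceFile := "C:\Temp\in.txt"
targetDir  := "D:\Backup"
MsgBox, 28: _%sourceFile%_`n_%targetDir%_     ; trace box no. 28 in the printout
; FileCopy, %sourceFile%, %targetDir%         ; the real line, re-enabled later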

NOW try to compile, and run trials, perhaps only for the main parts (cf. replacing the "executive" subs with boxes that just show the respective variable values, as above). On paper, check the variable values, marking them "OK" (or with whatever you get). Then revise the code, then "open up" such "subs" (whenever the higher structures give correct MsgBox values) in order to check that the "inner parts" run smoothly too. Step by step, replace MsgBoxes (which could also contain passages like "In reality, this would trigger sub xyz here." - but just look at your page instead!) with real code execution, up to the point of obtaining perfect code (or of seeing where you had better rearrange your construct).

Or in other words, iterative coding is not a synonym for "coding by chaos". But Warnier structures et al. were too stringent, and that harms creativity even when building up the "macro" structure, i.e. the distribution of intermediate, not only lower, branches. And - you know this already, but just for the beauty of the picture - grow a wood, not just one tree. ;-)

Start with RightNote FREE if you don't already have a similarly good outliner.
Don't say "But my outliner doesn't do code completion (for AHK or whatever), whilst my editor does!" as long as your outliner comes with the above-described core functionality for programming/scripting and your editor does not.

Ok, this wasn't only "how to write code as I see it", but also "how to debug" (considering that the real difficulty in coding is not the logic structure in the strictest sense of the term, but the info flow, i.e. the process flow created by info in variables and such), and even "how to use the original language for the prototype". But again, I am addressing fellow non-professionals here, so I seriously think my advice could be helpful to some of them.

As for a certain Mr. Orr, well, you wouldn't name a train after people who only jump onto it once it has started to run, even if it then feeds them quite well for a lifetime, would you?

And here's the inevitable "I would also like to share this..." part; please allow me, for once, to share the very best piece of comedy produced since Groucho Marx, which is unfortunately available in German only and, worse, in a German intonation that requires more-or-less-native German speakers to "get it all"; I would not have dared mention it had the quality of that tour de force not been world-class. It also comprises some outstanding sax performances by Simone Sonnenschein ( simonesonnenschein.de ), which are perfectly accessible to non-German speakers too, and of which the one right after the break, "Free Jazz", is without any doubt the most hilarious piece of (very good, at that!) music in the whole world of music since ancient history.

Well, this rave performance is from 1999, is called "Hip Hop für Angestellte", and is by and with Piet Klocke:

http://www.youtube.c.../watch?v=ekvxHsVEPq0

Enjoy (the music at least): listen at a very low level (and without the picture: without understanding his speech, you would mistake Mr. Klocke for a dangerous lunatic!), then raise the volume whenever the sax plays, and remember, that grandiose lady starts piano-piano (in a very subdued mood), so asking yourself "so what?!" too early would be a BIG mistake. (And bear in mind that her singing at the end is for comic effect, not to compete with Frederica von Stade!) ;-)

mouser

  • First Author
  • Administrator
  • Joined in 2005
  • Posts: 40,914
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #3 on: May 20, 2014, 04:47 PM »
That was a long read, peter.

I just have a few comments, not exactly in response to your posts but perhaps related.

Often we find ourselves *optimizing* for some metrics -- without stopping to ask whether we are optimizing for the right thing.

The Karnaugh maps seem to be an effort to optimize for the minimum number of boolean logic evaluations.  If you are trying to minimize circuit size or computation time on something that has to perform huge numbers of these calculations then this would make sense.
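(To illustrate the kind of reduction a K-map formalizes - a made-up example, not from the tool or the posts above: (A AND B) OR (A AND NOT B) has the same truth table as plain A.)

a := true
b := false
if ((a && b) || (a && !b))      ; original, two-term expression
    MsgBox, long form fired
if (a)                          ; minimized form - identical truth table
    MsgBox, minimized form fired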

As a modern programmer, though, it is almost always the case that the most sensible thing to optimize for is ease of code comprehension and maintenance. There may be rare occasions where a small piece of code will need to run as fast as possible. But most of the time the "best" thing you can do is make your code as easy as possible to understand, fix, change, and test. Because those are the properties of your code that are important (most of the time).

peter.s

  • Participant
  • Joined in 2013
  • Posts: 116
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #4 on: May 21, 2014, 03:33 PM »
I

mouser, I think you make a very valid point here: my first post shows that I understood Karnaugh as a means to straighten out code, whilst in fact such "beautified" code will also, and foremost, dramatically reduce computation time in such cases.

And for general purposes, what you sum up in a few key words is generally accepted today (and is what I tried to explain a little to noobs, trying to counter the implicit, possible counter-argument "but as long as I find my way through my code..." by saying that one day your children will have to maintain the code you write today; of course I know that today's fast-changing computer world will spare them that task: in most cases, much of today's code will be useless in a few years, and less and less code is being written for traditional devices for that very same reason).

II

There is one aspect to add to that maintainability need, one which I try to "observe" as much as I can (i.e. by pure imagination, without sufficient knowledge of alternative programming languages and multi-device setups):

More or less traditional applications written today should be "transportable", at least in the sense of facilitating - or at the very least not "deliberately hampering" ("deliberate" by unfortunate design, not by real intention, of course) - their transposition into other programming languages, including multi-device setups. That is, I muse, "how could this be realized again, later on, divided up between PC and cloud/handhelds/whatever?", and I try not to make it "too compact".

At both the "micro" and the "macro" levels mentioned above there should be enough valid "recoding info" to recode it all for more sophisticated setups, and if you blur "micro" and "macro" - and most noobs do exactly this; I am also speaking from my own former experience here - such "partial reusability", or rather the code's ability to serve as a "framework" for rewriting, is not helped.

It is not the same, but a very similar construction concept to the one applied by MS in their .NET framework plus programming languages, and then their WPF/XAML concept, where they try to separate "core code" from access to visual elements as far as possible. In a word, their aim is abstraction, of course in order to reduce complexity wherever and as far as possible (and we get a double effect here: it facilitates the original coding AND the later maintainability, reusability and adjustability/malleability of the code when new or replacement elements have to be integrated).

All this is about the utmost possible clarity today (in programming) and tomorrow (in revisions and even upheavals), and as said, performance considerations are disregarded to a point here.

Two applications come to mind here. One of the earlier CRM programs, Act!, very common in its time, got an overhaul in the early 2000s, and bingo: legions of former users left after sharing their disappointments, of which by far the most important was that every function had slowed to a crawl. I trialled it myself some years ago, and its (lack of) speed was so unbearable, even with just half a dozen entries, that I dismissed it very quickly. So here somebody's priorities obviously ran amok.

Many Ultra Recall users (.NET and SQLite), on the other hand, complain about it being "slow". I have used that program extensively and can report that even with BIG content, and on non-ace computers, its speed is totally acceptable - except for a few details where, from a psychological point of view, you expect immediate responsiveness and instead have to wait a few seconds, which gets on your nerves since every other program of its kind reacts immediately under the same conditions.

So it is certainly a good idea, as you imply in your post, mouser, to look at response times in typical situations and then do some special tweaking there if needed. And it is always interesting to see that even very modern PCs, with all their power and speed, do NOT overcome some special speed issues of some programs, even though we are not speaking of big routines here but of things a layman would think should be easy - and which ARE easy, by all evidence, in competing programs!

I will not digress here; just let me say that sorting algorithms can have tremendously different run times, easily by factors of 1:1,000 and more. Some of them are very good for just a few dozen items to be sorted whilst being extremely bad for higher numbers of items, or vice versa, which indicates that an ace program in which items are often sorted should COUNT those items before sorting, and then apply one of two "waiting" sort routines, with their respective algorithms, to the SAME body of items, depending on its length...

(And that is easy to program - the sort algorithms can be found in special textbooks - it is just a little more work for the coder... but it is one part of coding excellence as I see it...)

III

Thank you, mouser, for not contradicting me; so noobs should note that there is at least some sense in what I try to "teach" from my own experience.

But then, it also hits you (and me!) in the eye that my way of coding involves a myriad of (necessary but unpleasant and time-consuming) manual checking, re-checking and counter-checking, and for everybody who has outgrown scripting basics and is trying to do some real work, it would not be a bad idea to have available what I describe above, AND to be able to run a special routine that does all this checking-in-all-directions on their behalf, even if that implies spending 800 or 1,200 bucks.

That is why I kindly ask professional programmers to share their experiences with appropriate tools (which should NOT be entirely object-centred).

IV

See III. And since this problem becomes strictly unbearable in the end, I came up with an intermediate idea about it.

Why not rename all your current variables in a certain way, in order to strictly identify them as variables? Ditto for routines. (Trial special characters before using them, though.) And at the first occurrence on that "page", in that item, add a comment (from where, to where...), etc.

Then, as described above: one routine, one outliner item - and even one separate routine part, its own outliner item. Then an outliner offering a hit table (showing the respective lines), with an indication of the respective item.

Then print out the hit tables and compare them with coloured pencils. This will at least avoid both: any additional work to write/maintain those header section parts; and especially any (logically totally unnecessary) synch work between body and header, and ultimately any synch problem in that work. This body-header alignment is highly error-prone AND totally unnecessary from the moment you clearly identify variables (and routine calls, etc.) for yourself and for your outliner's hit-table function.

(Well, they call this "process management": cutting every unnecessary step out of a process and optimizing the remaining ones. And yes, it is different for languages that force variable declaration/typing and which hence do that checking for you, intra-item. In those languages, you would do the inter-item checking from the headers again.)

But then, COMPARE those hit table printouts, conscientiously!

No, not one musical/comedy share per post, just one per thread - all the more so since you will all KNOW "Cat Tara" by now, right? (If not, see that little heroine for yourself, on YT, where else! ;-) )

peter.s

  • Participant
  • Joined in 2013
  • Posts: 116
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #5 on: May 22, 2014, 07:48 AM »
(Immediately above:) "In those languages, you'd do the inter-item checking from the headers again."

Well, it was late in the evening...

In fact, for such languages to check the variables, you must run the compiler, and we are speaking of pre-compilation checking here. So some "partial compiling" of just one routine, to check intra-routine, would be a very good idea.

And some general observations:

You absolutely need "global replace" in order to program within an outliner; you might think that is ubiquitous, when in fact, in rare but notable cases (Ultra Recall), there is no such functionality (and "global" includes "this entry and its children"...).

In your outliner you need export of "this entry and its children / the whole tree" to a txt file; the compiler doesn't need all your formatting. Then you change the suffix and run the compiler on that file, and if necessary open it in an editor, in case you cannot identify the compiler's messages other than by line number. (I do all this by script.)

If you insist on using an editor to begin with, you need to mimic an outliner's natural division into heading and body, into tree and content pane, and that is why you need Boolean search in your editor (always with a hit table, i.e. a list view displaying all occurrences of your search expression together with their context):

For whatever would be a heading in an outliner, have a commented-out line with some special character.
For any variable, use another special character (you could even have several such characters/character combinations, like $a at the end, or another group with $eb at the end, etc., grouped by "greater context" and also by variable format, i.e. integers, strings and many more; it is also possible to tag one variable with several such tags, so that it appears in different such searches).
Do the same for routine calls and such.

Then your search expression would be, for example,
£ OR *$eb
and you would get a long hit table with lots of unnecessary entries/headings (the £ hits), but also with all variables of the group eb beneath their respective headings (which is the part you are after).

Yes, you could try to "optimize" this further by tagging your headings as well (or by cutting longer code up into several files, but that would be dangerous if you then don't search "over all"). But if you tag headings at the beginning (not at the end, as for variables and such), i.e. in the form
;£ Respective Heading
you will see at a glance where there are runs of headings with no "hits" under them, and where you should really look.

Of course, some "expanded Boolean" search would be more than welcome: a routine that only shows a "first OR element" (a £ heading) when the next entry in the list is of the "second OR element" variety, i.e. one that suppresses any £ find NOT followed by a $ find in our example. But at the moment I cannot think of any ready-made search routine (in an editor or elsewhere) that does this without your programming that more elaborate routine yourself.
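For what it's worth, here is a rough AHK sketch of such a filter (entirely my own, hypothetical example: the file name and the ";£" / "$eb" tags are the ones assumed above). It keeps a tagged heading only if at least one "$eb" line follows it before the next heading:

#NoEnv
FileRead, code, %A_ScriptDir%\mycode.txt      ; hypothetical export of the tree
out := ""
heading := ""
hits := ""
Loop, Parse, code, `n, `r
{
    line := A_LoopField
    if (SubStr(LTrim(line), 1, 2) = ";£")     ; a tagged heading line
    {
        if (hits != "")                       ; previous heading had real hits
            out .= heading . hits
        heading := line . "`n"
        hits := ""
    }
    else if (InStr(line, "$eb"))              ; a tagged variable line
        hits .= line . "`n"
}
if (hits != "")
    out .= heading . hits
MsgBox % out                                  ; only headings with $eb hits remain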

peter.s

  • Participant
  • Joined in 2013
  • Posts: 116
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #6 on: May 26, 2014, 12:37 PM »
I

After writing the above post, it occurred to me that my first post here was too abstract and not helpful enough.

Why do I muse about the first part of a routine, the second part, and then the interconnections between both? (Have a short look at the first post if you start reading down here, though.)

Because good programming style is to abstract, to combine, AND to stay easily readable (i.e. a little bit the contrary of what I do in my writing here).

Let's have some real-life examples.

Say you write a typical noob AHK script. There is a trap. You will probably use the construct

#IfWinActive, some program
; ... all your key bindings for that program ...
#IfWinActive, some other program
; ... all your key bindings for that other program ...

and so on.

WRONG!

In many cases you will have similar routines triggered from within different scopes. Ok, you could have the triggers point to routines, but even then, even for the trigger scriptlets, lots of similarities would be spread all over the place instead of being held together, and you would need to pass (sometimes multiple) attributes - in AHK, unfortunately, necessarily via variables. Instead, if you have ONE key assignment in the form

somekey(combi)::
if ( winactive("abc") or winactive("def") ... )
else if (winactive ... etc.
else if ...

and then trigger ONE routine, or just a few routines, for similar tasks, you get much neater code - both here, where in most cases most attributes will be identical (except for the variable indicating from which application the routine was triggered), and "on target", i.e. in those routines which then handle lots of similar functionality with just a little differentiation depending on the trigger source.
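A runnable sketch of "key first, scope inside" (my own example; the target programs, the F9 key and the keys sent are placeholder assumptions):

F9::
    if (WinActive("ahk_exe notepad.exe"))
        HandlePaste("notepad")
    else if (WinActive("ahk_exe winword.exe"))
        HandlePaste("word")
    else
        HandlePaste("other")
return

HandlePaste(source) {
    ; mostly identical handling, with one small per-source variation
    if (source = "word")
        Send, ^!v          ; assumption: open Word's Paste Special dialog
    else
        Send, ^v
}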

After this intro to "key first, then scope, instead of the other way round" for AHK - the real-life example for the trigger - let's have a second real-life example, this time for trigger and target together.

Imagine you write your own little file manager, for PM and such, with 6 or more panes (as found in some ready-made file managers out there). The trigger would be a selection followed by Return, or a click or double click, in any of those 6 panes; imagine that in some panes/list fields such a selection triggers external display, whilst in other fields it simply changes the content of neighbouring/subordinate list fields.

So what will you do? Behave like an AHK noob on his first day and write 100 scriptlets, all VERY similar to each other? I hope not!

Instead, you gather all those different trigger situations in part 2 of your routine (part 1 containing variable declarations and suchlike): for this kind of trigger in that pane number, your little program should do this or that suite of commands; you assign variables which are then checked in part 3 of your routine.

There (again, this is a real-life example of what I set out in post 1 above) you build a second if / else if... structure (or a condition/when/when... structure, not possible in AHK), and indeed, as explained above, this second (part 3) if structure is NOT identical, or even quasi-identical, to the similar conditional structure in part 2.

Since, as explained, those triggers are very similar but there can be several groups of commands, you will of course often not keep everything in a single part 3 of the routine but call external routines, "sub-routines", from there - either when those "executive" routines are rather long on their own (though, as explained in my previous post, why not write a 10- or 12-page routine, as long as the pages are clearly distinct?!), or when you trigger routines which must also be accessible from other triggers (keys or routines).

In that second case there is no choice, and of course you write them as separate routines. (There are also cases where you first write that "routine" as page 7 of 12 within such a bigger routine, and only after writing it does it occur to you that page 7 should be accessible from elsewhere too; then you simply cut that page out into an external subroutine. In this case you check, in the new external subroutine, for the variables that give it all the necessary information from the trigger routine of which it was once just one part. As said above, fractioning multiplies headers (which can become rather extensive), so if you don't need access to some routine part from the outside, doing it all in one big routine helps minimize unnecessary headers.)

Back to our three-part routine. In part 3 you check every variable you set up in part 2, and here the similarities between blocks can be totally different from the similarities in part 2 - just a few examples: different file formats, different target panes, and, as said before, even just listing files instead of showing them.

Here in part 3 you group again, according to such similarities, but the items in your conditional structure will probably be in a totally different order from the one they, or similar ones, had in part 2. And of course you spread your blocks over different pages here, and this may even determine the order in which you put the blocks, i.e. why not write if var1 = 3 or 4 or 8, else if var1 = 1 or 2 or 5, else if var1 = 6, else if var1 = 7, etc. -

just because cases 3, 4 and 8 are quite similar and easy and can be treated together on page 3, whilst variants 2 and 5 are treated together on page 4, and 6 and 7 each need a page of their own. If you see afterwards that 6 needs its own subroutine, do not leave the call for that subroutine alone on page 8 or so, but put the else if var1 = 6 on page 3, before the "longer" code blocks.

Sideline: it is always a good idea to discard simple cases as soon as possible. For example, I never write "if x ... then 10 lines, else return", but always "if x = 0 (or input is empty, or the like), return", and then, without else and without indentation, the main structure:

not:

if (a = 1)
{
    ; 10 lines here, all indented
}
else
    return

but:

if (a = 0)    ; even if it is very improbable
    return
; here the 10 lines of main code (no braces, no extra indentation)

Accordingly, I check as early as possible for values that would invalidate other structures, so that I do not even run parts of structures that would be aborted anyway.

Another sideline: you could do this tripartite 1-2-3 (heading, set-up, execution) with gotos instead of variables, or at least replace a lot of the variables with such gotos.

As said before, don't be afraid of gotos if their targets sit at the top of the above-described pages 5, 6, 7 - no problem whatsoever. Gotos do not make your code spaghetti code PER SE, and functionally there is no big difference between target-pointer variables and gotos - except for one thing: with my multiple-spreading variable-if structures I don't need further gotos in order to leap over the following blocks, whilst in a goto structure you must pay attention to getting OUT of each goto target; if you don't, execution simply continues into the next goto target, and so on, and in 99 per cent of cases that is presumably not what you intended (which an if / else if structure does not do). So pointer variables are both much more flexible (ok, that can become a trap if, out of laziness, you interweave several if structures...) and neater.

And of course, most programming languages have abolished gotos, which would become an obstacle when translating your code into some other language. Obviously, the same extremists who abolished gotos have NOT yet found a way to abolish pointer variables, i.e. they cannot stop you from (mis)using integer or yes/no/true/false variables as pointers that are even better gotos than the original gotos ever were.

Use such pointer variables to structure your code, and it becomes easy to write for any beginner, and perfectly readable, neat, maintainable, etc. It is a good way to code, and that is why it was worth explaining it better than in post 1 here.

II

What about my question at the end of my previous post? Is there any editor in which you could suppress search hit lines containing search term 1 that are NOT followed by hit lines containing search term 2? There are many occasions where such an editor would be more than helpful...

(As said before, rtf formatting of your code is so extremely useful that I would not switch from an outliner to an editor for writing code; but many people will not switch from editor to outliner - they would, however, switch to a really better editor than their current one, and this feature would make all the difference, as explained above.)

mouser

  • First Author
  • Administrator
  • Joined in 2005
  • Posts: 40,914
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #7 on: May 28, 2014, 08:39 AM »
I have found that one of the very best ways to help new coders become better is to tell them to focus like a laser on ELIMINATING DUPLICATE CODE.

This is especially true because it's very natural for beginning coders to repeat big chunks of code, and the simple act of forcing them to eliminate duplicate code -- either by changing the structure of conditionals and control structures, or by using functions -- does wonders to improve the quality of code.

It is also very helpful as a pedagogical tool (teaching aid) because it can otherwise be difficult to explain to a new coder when and where they should use functions -- whereas anyone can visually see when they have repeated big blocks of text.
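A tiny AHK illustration (my own example, not from the post above): the same two logging lines repeated in two hotkeys, then factored into one function.

; before (duplicated block in each hotkey):
;   ^j::
;       FormatTime, now,, yyyy-MM-dd HH:mm
;       FileAppend, %now% - pressed Ctrl+J`n, %A_ScriptDir%\log.txt
;   return
;   (and the same two lines again under ^k::)

; after: one function, two one-line hotkeys
^j::LogPress("Ctrl+J")
^k::LogPress("Ctrl+K")

LogPress(keyName) {
    FormatTime, now,, yyyy-MM-dd HH:mm
    FileAppend, %now% - pressed %keyName%`n, %A_ScriptDir%\log.txt
}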

phitsc

  • Honorary Member
  • Joined in 2008
  • Posts: 1,198
Re: Gorgeous Karnaugh Review; How To Write Code As I See It
« Reply #8 on: May 28, 2014, 11:19 AM »
There are also tools that detect and report code duplication for various programming languages. These are mostly commercial though.