Show Posts

This section allows you to view all posts made by this member.

Messages - peter.s

Greetings, donleone.

"by namely THERE automatically pulling out & showing together, not just the selected tag, but with it also automatically all the tags that are found WITH your tag, either as a tag unto the left, or as a tag unto the right - in all the items across your entire database.
(and in that sense "related tags", since these are found occurring WITH your selected tag together,
and even just in one single item used together, is enough to make them show up as "related" in that top right corner pane)"
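The co-occurrence mechanism donleone describes — one shared item is enough to make a tag "related" — can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not RN's actual implementation; all names are made up:

```python
def related_tags(items, selected):
    """For every item carrying `selected`, count each other tag found
    WITH it; a single shared item already makes a tag 'related'.
    `items` maps item id -> set of tags."""
    counts = {}
    for tags in items.values():
        if selected in tags:
            for t in tags:
                if t != selected:
                    counts[t] = counts.get(t, 0) + 1
    # most frequently co-occurring first; alphabetical among ties
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))

items = {
    1: {"law", "contract", "germany"},
    2: {"law", "tax"},
    3: {"recipes"},
}
print(related_tags(items, "law"))
# [('contract', 1), ('germany', 1), ('tax', 1)]
```

Note how "recipes" never shows up: it shares no item with "law", so it is not "related" in this sense.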

I think this is a tremendously powerful and helpful feature, and I'm also very interested in the current developments of Surfulater, since its developer will abolish traditional tree building altogether, replacing it with smart tag trees; of course, this is a quite different concept from RN's "related tags" (in a way, it's related to amazon's "customers also bought..."), but it's obvious that these two developers, at this moment, are the ones doing deep conceptual work on the tagging front, and I greatly appreciate this.

I have big reservations about (current/traditional!) tagging, and I explained my stance in depth here:

http://www.outliners...revgen-on-bits-again ("prevgen" being a mockery of "nextgen", of course),

and if you allow for a very brief summary of what I think about tree structures vs. tagging, it's this:

1) With traditional tagging, you miss both "context in the shape of indentation levels" and "context in order/formatting". It makes a big difference whether you just get "relevant" items as a flat list, or whether you have pre-arranged those items (or do that arrangement/grouping work now, as you work on items gathered in disorder beforehand) and have created micro-relationships, micro-hierarchies: by "indenting", by grouping (= by sub-parenting and putting items together as siblings), and also by creating "divider lines"

- I elaborated on this here (in point 2): http://www.outliners...messages/viewm/19697 , in the aforementioned "prevgen" thread, and especially here:


where I wrongly assumed there was no sw like RN which already had multiple trees in one db (as we see now, from your explanations), but in which I exposed my idea of "live trees" into which subsets of the whole body of material would be put together, creating a new concept of truly independent items, just gathered and ordered in various trees. Even RN's multi-tree concept, as we now know it, clings to the traditional idea of items "belonging" to some specific tree, and then PERHAPS being cloned to another one; or rather, in RN, tagging is created as an expedient, an overlaid structure, in order to replace that missing inter-tree cloning (this is not a criticism; it's just a second concept, overlaid onto the primary concept, the tree structure, in order to give it "complete" functionality of a sort, but not within the bounds of the original mainframe, which is the tree concept, in outliners) -

and even by sheer order within those micro-lists, and then - and THAT's why I insist so much upon tree formatting - by weighting: not only by putting some important item up one level, but also by bolding it, for instance.

All this "contextual info" (which, as explained, is NOT only about "which items are tagged identically or similarly", but about both relative order and relative importance/weight within that item cluster) is missing from traditional tagging.
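The contrast in point 1 can be made concrete: a tag index is a bare set of item ids, while an outline node carries sibling order, indentation (via nesting), and weight (e.g. bolding). A minimal sketch, with all names hypothetical:

```python
from dataclasses import dataclass, field

# A tag index can only answer "which items carry this tag" --
# an unordered set, with no grouping and no emphasis:
tag_index = {"case-x": {"item7", "item2", "item5"}}

# An outline node carries the "contextual info" described above:
# sibling order, depth (via nesting), and weight (e.g. bold).
@dataclass
class Node:
    title: str
    bold: bool = False                              # weighting by formatting
    children: list = field(default_factory=list)    # order is meaningful

case = Node("case-x", children=[
    Node("item5", bold=True),       # promoted: most important first
    Node("item2"),
    Node("---- secondary ----"),    # "divider line" as a grouping device
    Node("item7"),
])

# Same three items; only the tree preserves the man-made arrangement.
assert {n.title for n in case.children} - {"---- secondary ----"} \
       == tag_index["case-x"]
```

The assertion shows that both structures hold the same items — what the flat set irrecoverably loses is the order, the divider, and the bolding.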

2) As explained in one of those linked threads, I do not see this prob in the context of items which are traditionally worked upon without "matter context": for "subjects", tree systems (but with advanced cloning, as I have developed over there, and here, these last weeks/days) are best, whilst for customers, real estate objects, supplies, applications and such, (advanced) tagging (and yes, as in RN, which seems to be more than promising from what you say) should be the tool of choice. Here, you consider distinct records, even when comparing them with other distinct records; it's quite different from "compound files" where you work, over hours, days, or even months, within a frame of "disparate but interrelated case/matter info", and try to create new info from it. In a few words:

textbook writing is conceptually different from application appraisal: for the former, you'd need the perfect outliner, whilst for the latter task, some tagging system (that's why I so much touted the almost-defunct askSam over there) would be best.

In the light of these considerations, tagging concepts are of the highest interest, but let's bear in mind that my reservations should either be given the right "answers", so as to overcome tagging's shortcomings, or else the tree concept should be further developed, along the lines I have explained in depth - and to which I even made a significant contribution here today, since my today's add-on is one possible answer to the relevant question of how to realize "live variants" (i.e. cloned structures that then, partially and in a controlled, monitored way, come to a life of their own - well, at the end of the day, it's nothing less than the introduction of the "object model" into outlining, together with controlled inheritance) on the technical level.

I'm not trying to devalue RN's tagging mastership; I'm just saying that my concept tries to remain within the confines of the tree paradigm, whilst tagging in outliners (of which RN's seems to be the most sophisticated) is melted into outlining, instead of striving to bring the tree concept itself to perfection (which is possible, as my writings clearly show). Of course, we're discussing conceptual coherence, conceptual purity here; wherever a perfectly "working" hybrid solution to that age-old "items in multiple contexts" problem arises, it will be more than welcome, "working" also meaning "minimizing complexity in the user experience, while allowing for as much technical complexity as is needed" (now compare with Evernote...).

It is obvious that RN's "this tag comes with these other tags, which in their turn come with yet other tags" feature tries to address this "multiple contexts" problem, but I'm not sure yet that it succeeds in an ideal way. Just browse all the threads over there on "RightNote", and you'll see that most users praising RN do not praise it for this feature, so there are grounds for suspicion that they have not yet grasped this elaborate feature, both in its way of functioning and in its possible positive implications for their work: in a word, its use is not yet easy enough for it to be really useful.

"(and for this initial creation of ALL your "categorization possibilities" aka. all possible "contexts" or "keywords",
i especially recommend using Microsoft Excel" - no, really, I INSIST on the fact that I don't want to devalue/attack RN (anymore; in fact, I created this thread in order to dismiss that former attitude of mine), but we must agree that any tagging program should have some integrated "tagging taxonomy maintenance" gui, and that the hint to fall back on some spreadsheet tool, in parallel, will neither convince prospects nor be followed by that many current users. This being said, RN is under continuous development (even for non-EN-integration-related features, I hope!), and I'm just saying that relying upon external tools cannot be the definitive answer to the need for a tagging taxonomy; let's see what the developer will bring about re this part of his program!

Btw, you (duly) speak of "classification", and that's exactly my point (developed in the above links) in emphasizing perfect tree management over tagging for such "compound work" as forging strategies, finding new insight, writing books: tagging is perfect for (even multiple!) classification, but not for bringing perfect order into, and perfect relevance onto, your material (both raw input and created, possible output): traditional tagging creates unordered, unweighted lists, and for "compound work", that's not enough.

I read your description of RN's tagging with delight, and reading further (and remembering your former UR experience), I discover you're joyfully describing an almost-ideal realization of exactly that tagging feature numerous people had been begging for, for many years, over there in the UR forum - with the developer turning his deafest possible ear to them - and you know what?

I'm sure this thread, and all the good things you have to say about RN and its tagging feature (which appeals, as I'm perfectly aware, to lots of people, all of them rejecting my fundamental criticism of tagging), will motivate LOTS of prospects to buy RN - both this forum and the title "review" (which I chose on purpose) getting lots of coverage by google - and it's a very good thing that UR, with its blatant unwillingness to take even minimal advice from its (formerly) loyal customers, gets some more heavyweight competition.

(And of course, my theory being that so many outliner users long for good tagging because, day in, day out, they feel and suffer the shortcomings of today's tree realizations; they would not feel the need for tags anymore within some tree-wise perfected outliner...)

I also get your point in criticizing (even UR's otherwise brilliant) cloning, and you're entirely right in demanding quicker, handier target selection, btw from BOTH sides: "Put this item as a clone THERE", and "Put THAT item right here as a clone". Both ways of cloning should be "immediate", "instantaneous", at least when subject or target belongs to some history (!), to some "favorites", and especially to some "ToDo's" - not to speak of the fact that currently, only the first variant is possible, not the second one, which means there's some unnecessary switching back and forth involved, too.

But you see here what I said in the links above: much of our current criticism, of one paradigm or the other, just addresses current realizations of those competing concepts, and not necessarily ideal realizations of them. As for real life: when I asked for additional panes in UR, the developer told us that UR had many panes as it was - almost too many for his liking:

Unfortunately, we're speaking of developers without enough conceptual imagination (and that's why I wrote the Maple review: in order to acknowledge the things they do really well over there!). Of course, some "pane management" is to be recommended in all those cases, and it's evident that at the very moment you're choosing your subject-for-cloning, you will NOT need the tree pane for the entire tree, but for "intelligent proposals" to be made there; and the moment you have made your choice there, that very pane should show your (proposed) target position(s). It's certainly not about multiplying panes, but about multiplying the possible USES of the panes you've got anyway (and as said re Maple: why waste screen real estate on a search hit list - or worse, hide or even delete it whenever the user chooses some hit from there - when you can perfectly well toggle it with the tree?).

"And specifically because the search allows you to narrow down into & bring together "many similar items"" - again, my argument is that those search results will be "unordered" lists (i.e. ordered by tree position or alphabetically, but not in your "man-made context order", in any case). Btw, that's also my argument against the "just search" paradigm, where search is perfected to the point that the developers say you don't need any tree structure anymore (e.g. the original askSam (which then added some live tree structure, though), and advocates of the combo of a desktop search engine like X1 plus myriads of single files (instead of outlines)): all these concepts make you lose ordering/weighting within the clusters they deliver. "Relevance sorting" weights frequencies, or even combos of search terms, but cannot order in some "manual-processing-relevant" sequence; only outliners (with clones, and whether they show trees or cascading lists, so-called Miller Columns) can do that.

"a giant high quality relational database" - well, you ain't wrong altogether, but let's say that SQLite's big advantage resides in its embeddability, i.e. it becomes part of the application on your customer's pc, whilst a real high-quality db like PostgreSQL, which offers many more possibilities to the "interface developer", will appear as some distinct body on your customer's hdd; you can appreciate the difference when comparing UR or RN with TheBrain (which does not use PostgreSQL, but another db spreading its data all over the place).

Thus, whenever I miss some functionality in some SQLite-based outliner, I ask myself, is it due to SQLite, or could the interface developer have done better?! ( Ok, it's the latter alternative in almost every case... ;-) )
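The embeddability point is easy to demonstrate: Python ships SQLite in-process via the standard `sqlite3` module, so the whole database lives inside the application (here, even just in memory) with no server component. The table and data are of course invented for the example:

```python
import sqlite3

# SQLite runs inside the application's own process: no server, no
# separate "body" on the customer's disk beyond a single file
# (or, as here, nothing at all -- an in-memory database).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT, tags TEXT)")
con.executemany("INSERT INTO items (title, tags) VALUES (?, ?)", [
    ("note on cloning", "outliner clone"),
    ("RN review", "outliner tagging"),
])
rows = con.execute(
    "SELECT title FROM items WHERE tags LIKE ?", ("%tagging%",)
).fetchall()
print(rows)  # [('RN review',)]
```

Whether a given outliner exposes this kind of query power to the user is then entirely up to the "interface developer" — which is the point made above.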

"and to thus remove this manual step here out too" - again, I'm just "putting things into perspective", not denigrating RN: applying a common command to a bunch of search results is thanks to the db, and is also available in UR for some such commands; but of course, it's a VERY good thing that RN's developer has made them available. I said this before, re UR: the interface hides a lot of db functionality from the user, and even erects barriers between what the db could theoretically do for the user, and what finally gets through to him; I also said, over there, don't be too impressed by the presumed brilliance of an interface when it simply translates db functionality for you, instead of barring it from you... but I acknowledge that such a remark might appear a little bit nasty. ;-)

"to be tagged / contextualized / categorized" - it all reverts to the truth that (current) tagging doesn't allow for "fine-tuning" of your material within those "micro-compounds", within those lists of 15, 20, 60 items you will have to work on/with concurrently. I know (and infer from your kind explanations, which are truly "over-constructive" in the best sense I can imagine this term to have, that it would even be very easy to do so in RN) that from there, by "micro-tagging", you could create sub-groups, and sub-groups again, but this would be quite artificial, since you would have to fractionize again and again, by "taxonomy-going-bonkers". (To be more precise: from a certain level downwards, it's not sensible anymore to give "titles" to sub-groups, since within themselves they become both too disparate and too mixed-up content-wise - atomization-by-content only being sensible down to SOME level - hence tagging those would become "unnatural".) AND, even with multiple, artificially-created sub-groups (where I would simply do manually-sorted lists, with separator lines, cf. the above links), you would not get to YOUR order within those sub-groups:

You'll get some ranking, multiple tables of 12 within your reception hall, but you won't get a seating plan: those 12 persons at those tables will not sit together well, or only by rare chance.

This being said, I certainly would not like my observations on the limits of tagging in general to put off any RN prospect coming here from google from buying RN: that RN is among the heavyweights in outlining, and perhaps even within the top group of three (consider MyInfo and Zoot, too, before buying!), has undeniably been PROVEN by the very kind efforts of donleone, here and today!


- There is a difference between atomizing external content and self-created content. If you're really good, and if your subject lends itself to it, you can atomize your own FINAL writings to a very fine degree. (But for those, order is of even more importance than for external material, so you can do with an outliner, without tagging, but not with tagging alone... and even MS Word heavily outlines, e.g. by its paragraph formatting hierarchy.) Whilst for imported content, further atomization is only sensible up to a point, below which you have to live with "mixed content": writings ain't butterflies or beetles in boxes, hence outlining is more natural than tagging=categorizing; the multiplicity of categorization in tagging just mitigates this conceptual limitation.

- My RN copy is the free one, i.e. it reverted to the free version a long time ago, and during the presumed 4 weeks it was "Prof.", I didn't "get into" the intricacies of this application, so much of my "not getting from the given explanation to the program's features" may just be due to those features not being available anymore in my version. That's the risk of most trials: if you don't profit from your short trial period, you'll have to buy in order to trial further. That's why rare exceptions like Scapple and Beyond Compare - to name just two, in quite different sw categories - which count your 30 or so trial days not consecutively and without mercy, but only on the days you actually open the applic, are so much better suited to being trialled in depth. ;-)


Thank you so much, tomos, I, too, often overlook the obvious: The English softcover is sold by, too (and at an entirely acceptable price that is).

Often, you have to pay close attention with English titles on, since (and this is not the fault of amazon, but of the respective distributors) they sell numerous film DVDs with the original English title... and with JUST the German soundtrack, not even English subtitles - which is outrageous: the English title sells the film, instead of their giving it some idiotic German title, and then you get some idiotic German dub only! (Some distributors do this systematically, and in France it's similar, and worse, since French dubs are systematically, abysmally laughable.) And with novels and such, this is recurrent, too: they often keep the original English title for their translated books... (Not that I'd have time to read novels anymore, though...)

But in this instance, I was over-paranoid, and overlooked the native English offering, and it should be easy to find some other title to get to 20€ and have free delivery included, thank you so much again!

Radically OT:

Another totally OT hint, to people living near some (any) German border: anybody can get a "Gold Card" from the German postal service, on the condition of proving your real identity to them by passport, and you need a German (!) pre-paid mobile phone card (which is available to anybody, too). Then parcels from Germany, e.g. from, are no longer addressed to "John Smith, your home address abroad" (which could cost a fortune in some instances, or might not even be available), but to "John Smith, 12345678901234567890 (some long number), Packstation 123 (some short number, for an automatic delivery station), 12345 (postcode), town (the name of some town near the border, on the German side, of course)". Your problem will then consist in finding suppliers who deliver via the German post parcel system ("DHL") - like amazon and about 3/4 of sellers overall, but many fewer for technical goods! If they do, you'll get an sms on your (German!) mobile (which doesn't need to work 365/365, just when you're awaiting some parcel) with a 4-digit code with which, together with your "Gold Card", you can then fetch your parcel during the next 8 days (but not a day longer) from such a delivery station - most of them accessible 24 hours a day, incl. week-ends. The interest lies in German mail-order sellers being LOTS cheaper than, say, French ones, most of the time.

On the other hand - OT again - I would never buy a German pc anymore: just lately, my attempt to install some MS sw update ("Service Pack") failed miserably, even after trying all sorts of MS "registry checking/repairing" tools recommended on the web, for the (by some threads over there) presumed reason that MS simply isn't able to keep every single service pack and such compliant with every language world-wide. Updates to the (native) English versions work fine, whilst more often than not, even updates to English MS sw installed on a "foreign" Windows (here: German pc's with German Win) don't update properly, or don't update at all - which means that to be safe on this account at least, you'll need English Win, and English applications, wherever you live.

Thus, my future pc's will come from GB exclusively (hoping there won't be any problems caused by "Win U.S. vs. Win GB"), even if that complicates possible warranty issues (well, at the end of the day, either your hardware works for many years, or it falls flat 1 month after the warranty period expires, by "planned obsolescence", so this doesn't make too much difference). The price level in France, e.g., is exceptionally high, so that even with additional postage costs, buying in GB is, from a French point of view, almost always a very good idea, both for technical stuff and for books and "consumer media" (film DVDs even with French soundtrack, etc.).

Of course, we all know original U.S. prices are about half, but with transportation costs, and then severe European taxes, the original price is doubled, and if something goes wrong, exchange/repair costs (and customs probs) quickly skyrocket, not to speak of the strain on your patience... Unfortunately, I cannot give any hint on how to really smooth out delivery to the Continent, either from the U.S. or just from GB (where it's much easier, but of much less interest, comparatively); there are some remailers, but they are so expensive (and they don't let you avoid customs) that their only possible interest lies in getting goods from U.S. sellers who refuse deliveries abroad.

Oh, there are two really good ways indeed:

- Have an American pen pal; this might even be a way to avoid European customs and taxes, depending on various factors

- Get acquainted (or in love) with a U.S. soldier on the Continent: he'll get almost everything for almost free, and you will, too... and this brings us to this thread's shared find: Manuel Pradal's 1997 film "Marie Baie des Anges" (but beware, it's just for real hardcore Frenchies). Btw, did I ever speak to you of Édouard Niermans's 1987 movie "Poussière d'Ange", or of Christophe Ruggia's "Les Diables" (2002)? Well, that's very particular, as is Terrence Malick on the bright shore of the Ocean. ;-)

I bet you will! No, seriously, that's why I wrote this passage here: all day long, millions of writers use wrong wordings, both without thinking and, worse, encouraged by reading those same wordings from millions of fellow writers, and in the long run it all ends up with even those who have their say on language issues "accepting" those wordings, for their having been in general use for so long.

While on the other hand, I very much hope I make people stumble over their "As xyz, ..." use the next time they are tempted to use it, their memory saying: wait, that was wrong, ain't it? I'll reword it!

I've got another one, which unfortunately seems "unstoppable" as well: a subject in the singular, then another subject in the plural, and then the verb, in either form - but if in the plural, in a form that syntactically does NOT cover both subjects (or worse, first subject in the plural, second one in the singular) - when the only correct phrasing would have been to repeat the verb, or, even more elegantly, to use the singular, then the plural (or vice versa), but of a synonymous verb.

Whenever I read the next real-life misconstructed "both singular and plural, with common verb" sentence here, I'll jump on it, promised! ;-)

EDIT: Often, in this second example, it would be sufficient to pick up subject 1 and then subject 2 with a comma and a pronoun, in order to establish the correct connection to the common verb, but that short add-in is left out, whereby the whole sentence doesn't hold together anymore.

EDIT 2, the day after:

"Und um die Steuerprüfung mache ich mir keine Sorgen. Als Student mit Jahreseinkommen unter 8000€ ist die Steuererklärung doch sehr übersichtlich..." ( today on re the divine Alice Schwarzer who only pays taxes within the limits of what she thinks is sensible by HER standards ) - Translation: [Not: For me as..., not: As a ..., I think my ..., but:] "As a university student [sic] with an income of under 8,000€ p.a. the [in German, "the" instead of "my" here is ok though] annual tax return is/remains easy to grasp and manageable [for me], so I don't fear tax audits."

These "As (a) xyz, subject 2, verb in relation to the latter" misconstructions abound in German. Awful! ;-)

mouser, my point was that from the web alone, it's quite difficult to get valid, comprehensive AND comprehensible, whilst concise, info on these matters, short of reading books - and that without being an expert in the field to begin with.

But this book has got unanimous rave reviews on all Amazon sites, so I thank you very kindly for your recommendation of what seems to be the finest intro to the matter available.

Thus, I just tried to get it from the otherwise incredibly good German inter-university library lending system, and you know what? There are 3 hits for all of Germany, none of them lendable, and of course dozens of hits, with lending, for the German translation (and even one in French).

Of course, I could buy the (cheap) e-book (or could I, in Europe?), but I'm not into ebooks yet, even less into reading 400-page books on-screen; and of course, I could buy the English version in a Continental bookstore, for about 50€ (= 70$) instead of 12$ - at least that's been my experience from the old days.

Thus, there's always, where it's a mere 12$ plus postage, so I need to wait for other English books I'll need to complement my parcel.

Thus, my implied "bad availability of readily available whilst not-too-basic info" argument wasn't entirely wrong, even incl. books - except if you accept translations, which lately I don't do as easily anymore, since the recurring recognition of (too) literal translations harms my reading speed, really good translations being rare.

After all this OT whining, I'm sure your book recommendation is spot-on, and thank you so much again... and I'll bother fellow readers here again with the subject after having read the book. ;-)

Tuxman, yes, it's "do", not "to" - a classic typo even current spelling sw would not identify... whilst it SHOULD identify such false friends/near-homonyms causing clearly identifiable syntax errors.

But my point was different: it has nothing to do with transposing a German idiom into English, but with that German figure of speech being totally awful AND totally wrong, even in German - and then Germans use this bad structure even when they write in English, i.e. not even the literal translation makes them stumble over the ultimate wrongness of that wording. Far from wanting to attack you, I just wanted to call out this language mutilation (an indecency in ANY language, but born, it seems, on German ground?) once and for all.

No offence! But I so much long never to read this outrage again, from anybody, at least in this sophisticated forum. "(I) as xyz, change of subject, verb" - das geht gar nicht (that simply won't do)! ;-)

Greetings, donleone.

Wow! If ever an application had an ace ambassador, it's RN getting you as its spokesman! Of course, I'm musing about how you got together all this info, with no forum and a very low-key help file (to say the least); if by chance you're the developer himself, well, kudos, you defend your program brilliantly - and, very obviously, you are right to do so, since from your explanations, it's much, much better than it appears.

Thank you so much for your multiple (and obviously much-needed) clarifications, and I fully acknowledge, from what you say, that the tagging limitations do not appear to be of relevance anymore, since more than 500 identical tags would have to be split into several "sibling tags" anyway. I uphold my observation that the tag gui is catastrophic, though, not only visually, but from the "how to search for tag combos" pov, too; I deduce from what you explain that the tag feature does NOT have some specific search built in, but that, if several items are tagged "tag_a" and "tag_b", you simply do a regular quick search for "tag_a tag_b", as you would do for any search terms present in the text.
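That deduction — tag-combo search being just an ordinary AND text search over item content — can be mimicked in a few lines. This is purely my illustrative guess at the behaviour described, not RN's actual code; the items and function name are invented:

```python
def quick_search(items, query):
    """AND-match every whitespace-separated term against an item's full
    text -- tags are found only because they happen to be stored as text,
    not via any tag-specific index."""
    terms = query.lower().split()
    return [item_id for item_id, text in items.items()
            if all(t in text.lower() for t in terms)]

items = {
    1: "Meeting notes tag_a tag_b",
    2: "Draft chapter tag_a",
    3: "tag_b only, plus the word meeting",
}
print(quick_search(items, "tag_a tag_b"))  # [1]
```

The downside of this scheme is visible immediately: a plain word in the body text ("meeting") matches exactly like a tag does, so tags and content words are indistinguishable to the search.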

Thank you so much for your explanations of the "pages" concept. So "pages" is simply a synonym for "tree", several such trees, each with its specific tree format, being possible in ONE db file; in fact, that reminds me not only of my developments of "several trees in one db" over in the outliner sw forum, some months ago (and of the big difference between RN trees and the concept I explained there, see below), but especially of several UR and MI users asking for different sub-tree FORMATS within the same db (= one very big tree) in their respective outlining applic; doing several trees within the same db instead is obviously a viable and much more elegant solution to this apparently rather common prob.

Now I understand better why RN, which (as I see now) belongs to the "big shots" in its category, has no clones yet: in the db structure you describe, clones would be more difficult to implement than in the traditional "1 tree - 1 db" setup, and that seems to be the difference between RN today and my developments over there:

If I understand well what you say, one item, in RN of today, can be in one tree, but not in several trees (yet), no more than it can be placed in different sub-trees of the same tree (of course, since that would be cloning). Thus, whilst several trees are possible in RN within the same db, it is not possible (yet) to create "contextual variants" (or whatever you might call those structures), i.e. to have items in more than one context, and thereby to build independent data repositories accessing (and then SELECTING and REGROUPING) already-available "raw" data.

( cf. my thread https://www.donation...ex.php?topic=37993.0 here )

So what we have got, is:

- UR and MI (and others), which offer clones (i.e. multiple instances of - internally, references to - an identical item), which is a "big step in the right direction"

- RN which offers multiple trees within the same db,

and it hits everyone in the eye that the perfect outliner would combine these two features. It is evident that you need complete cloning first to realize this, i.e. any clone must "update" with any possible children/grandchildren which may be added anywhere; or, even more elaborate but perfect (= the ultimate solution), children which have been added in some "source", some "primal" tree would be propagated not by general option once and for all, but just by way of dialogs popping up, within trees that should be considered not as "partial descendants" of the original ones, but as "partial siblings". Here again, my example of legal dispositions in some "source", and then different "cases" where you will need different subsets of those dispositions only; technically, such subsets could be realized in the following way, even "cross-wise", replicating from one such "sibling tree" to another:

Any cloning does a "complete cloning", i.e. sets a value in the respective dataset for any new descendants being replicated "here" (i.e. any clone would have its own settings); but then those specific clone settings, individual for every single clone, would maintain additional "deletion" info, so that any "add-on", made elsewhere, of that replicated sub-structure to an item here in THIS instance of that sub-structure will NOT be added here IF the sub-parent item in question has been deleted here OR has been "marked as sterile" (while having been left in place to contain alternative descendants). Thus, the "updating" of any clone anywhere would be automatic, but would encounter individual barriers in each replication; and these "infertile" items would not only not be updated with new descendants added elsewhere, but their own descendants would not be replicated into the "sibling" replicating structures:

Thus, an ideal cloning concept would clone "over" different trees, AND allow for these clones becoming perfectly individualized over time (as would do primitive copies), AND get "allowed updates" (= context changes, item add-ons, item deletes), but just within the limits individually set: Just imagine "identical" twin siblings which are then heavily changed, "individualized" by their respective lives in different towns, even countries, and new family contexts, but which often attend the same family meetings (and get identical new info there, often replacing old info instead), and which one sibling wants to incorporate into his knowledge, whilst the other sibling refuses such "new info", be they add-ons, be they corrections, having "voids", "blank spaces" in his mind instead, or having new info of his own (instead, or additionally), but which he won't share with his sibling; and even if they grow very much apart, there is always the possibility that one day, they both get their respective share of the inheritance of their parents when those die - ok, in my IM concept they wouldn't die, in normal circumstances, but up to then, the above allegory is quite faithful, and even for the "death"/deletion of "original parents", some "taking over" proceedings should be implemented (and processed from within dialog windows, individually).
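The clone-with-individual-barriers concept above could be sketched in code. Here is a minimal toy model (all class and field names are my own invention, not any outliner's API): every clone shares the same underlying item graph, but each clone instance keeps its own "deleted" and "sterile" sets, so add-ons made elsewhere propagate except where that instance blocks them:

```python
class Item:
    """A node in the shared item graph; edits here reach every clone."""
    def __init__(self, title):
        self.title = title
        self.children = []

class CloneView:
    """One clone instance with its own deletion/sterility settings."""
    def __init__(self, root):
        self.deleted = set()    # items hidden in THIS clone only
        self.sterile = set()    # items frozen: later add-ons don't replicate here
        self.snapshot = {}      # child lists as they were at clone time
        self._snap(root)

    def _snap(self, item):
        self.snapshot[item] = list(item.children)
        for child in item.children:
            self._snap(child)

    def visible_children(self, item):
        # sterile items show only the children known at clone time
        base = self.snapshot.get(item, []) if item in self.sterile else item.children
        return [c for c in base if c not in self.deleted]

law = Item("Law X")
law.children.append(Item("Disposition 1"))

case_a = CloneView(law)     # clone of the "source" subtree into case A
case_b = CloneView(law)     # ...and into case B
case_b.sterile.add(law)     # case B refuses future add-ons to "Law X"

law.children.append(Item("Disposition 2"))   # added later in the source

print([c.title for c in case_a.visible_children(law)])  # both dispositions
print([c.title for c in case_b.visible_children(law)])  # only "Disposition 1"
```

The point of the sketch: "updating" is automatic (case A sees "Disposition 2" without any copying step), while the barrier is purely local to one clone instance.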

Sorry for diverting, but as we see, RN is just short of becoming something really great, at the condition that further development will be based upon its current strengths, so some guidance could not do any harm. ;-)

As for the multiple tagging features in RN, as said, I don't grasp them yet at this moment, and some better help file descriptions AND some remodeling of the tagging-related part of the gui would both be more than welcome. ;-)

Hi TaoPhoenix,

He knows about it, and you're right, if there is one very responsive outliner developer, it's Petko, so this will probably be amended. I should have added that the cloning command in MI had been implemented some years ago and was really abysmal at first (= MUCH work to clone something), but has since been improved to be about as good as in UR; creating a clone is much smoother now.

And of course, "singular" clones (without children, and which presumably will get no children even afterwards) in MI are perfectly viable even now, e.g. if you have got legal dispositions, one by one, or grouped into one item, within the context of the corresponding law, and then cloned into some "case" you're working on.

And yes, there are no clones in Maple; re Maple: As said, it's db-based, but it's lightweight, and as in any "text-based" outliner, it will feel natural to multiply outlines/files, whilst maintaining hundreds of such files in UR, e.g., would feel "inappropriate", would at least go a contrario the "spirit" of that program. And then, for maintenance of such multiple outlines in multiple files, the special trans-file search feature in Maple is something extraordinary, which I would like to see elsewhere, too (= in a program WITH tree formatting, and WITH Boolean search that is). Technically, you can do the same with MI, which HAS got, as said, both these features I miss in Maple, and that's another reason why I think MI is leader of the pack, currently (and in light of respective development paces, this position is unlikely to change).

I warmly recommend Paul's blog, for more than decent (and current) specific info on specific outliners:



Since the public viewing of the Snowden affair is 1 year old, there are some articles in the press, on Snowden, but also on Turing and encryption/decryption, and I stumbled on this one (in German):

I never understood the Enigma, especially since every "explanation" about it you can find on the web either is written by experts for experts, or by non-experts who don't understand the Enigma themselves, and the link above falls into the second category, but some comments there raise some interesting points.

It seems Turing began his decryption work on the Enigma after one functional Enigma machine fell into British hands.

It seems the "code for the day" was communicated using the previous code.

It seems some primary code needed for the real encryption, then to be made by the machine, with the help of the "code for the day", was a stable arrangement of the abc chars, and the British tried myriads of possible sequences for this, e.g. qwerty, and so on, but the Germans, "incredibly", just used "abcde", and it seems it was the mathematician Marian Rejewski who found this out, not Turing; the commenter in the above link who brings this info into the discussion muses that Rejewski had worked in Göttingen, Germany, beforehand, so had some first-hand info on German psychology / way of thinking, which enabled him to take into consideration that the Germans might do it in the most basic, primitive way, a possibility excluded by Brits just admiring the machine but without intimate knowledge of German "national character" - I very much like this observation.

(EDIT: And of course, this over-emphasizing/relying upon the over-obvious resp. the "really-too-easy" reminds us of that E.A. Poe short story...)

It seems the breakthrough was then made by Turing's reflection that by the way the machine obviously worked, on a physical level - direct current was sent thru the rolls in one direction, then in the other direction - no character (a...z, etc.) could be replaced by itself, this drastically reducing the machine's encryption possibilities / possible permutations; in fact, the cited article is primarily about this phenomenon of "Selbst-Bewichtelung", no English translation found, just this transcription, "Players who receive their own gift in 2/3 of all secret Santa games."
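The "secret Santa" figure quoted above can be sanity-checked: with a uniformly random assignment, the probability that at least one player draws their own gift tends to 1 - 1/e ≈ 0.632, i.e. roughly 2/3, almost regardless of the number of players. The same mathematics underlies the Enigma point: forbidding self-mapping leaves only derangements, about 1/e of all permutations. A quick Monte Carlo sketch (my own illustration, not from the cited article):

```python
import math
import random

def prob_at_least_one_fixed_point(n, trials=100_000, seed=1):
    """Monte Carlo estimate: chance that a random permutation of n
    elements maps at least one element to itself."""
    rng = random.Random(seed)
    items = list(range(n))
    hits = 0
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        if any(i == p for i, p in enumerate(perm)):
            hits += 1
    return hits / trials

print(prob_at_least_one_fixed_point(26))  # ~0.632 for 26 "players" (or letters)
print(1 - 1 / math.e)                     # limit value: 0.6321...
```

So "2/3 of all secret Santa games" is a rounded statement of 1 - 1/e, and conversely, an Enigma restricted to derangements has lost roughly 63% of the permutation space it would otherwise have had.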

Some commenter over there claims the biggest U.S. employer for mathematicians is the NSA - very funny and very convincing, even while no proof is given.


I, just some days ago, had mused about specific file formats countering effective encryption. Let's say you use MS Word files, or some other file formats where quite lengthy passages are more or less identical, or at the very least highly standardized, across every one of your encrypted files, AND the decryptor knows (or can safely assume, from the presence of these applications on your system, or simply by the ubiquity of some applications, like MS Word, and its rather few "replacements", ditto for spreadsheets, etc.) which (few) applications you will have used to produce the encrypted data:

Then, this might drastically reduce the theoretical power of your specific encryption, since the decryptor (assuming even he doesn't have a way (which might exist, without us being aware of such possibilities) to determine where one of your files ends, and the next one begins, which would further cut the possible permutations into a mere fraction of their theoretical potential-by-strong-password) would try to decrypt those "standard passages" first, and even allowing for your individual data within these "standard passages", intimately knowing the "format" of the latter, incl. possible lengths of different such individual data in-between, and once these "file headers" are decrypted, your key will be known.

This would mean that usage of any application not producing just naked ansi files, but putting "processing data", "meta data" into the file, too, should be prohibited if you really want your data safe (= necessary but not sufficient condition)...
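A toy illustration of that known-header worry, deliberately using a weak repeating-key XOR "cipher" (modern ciphers like AES are designed to resist known-plaintext attacks, so this mainly shows why standardized headers are the natural first target, and why the concern bites hardest against weak or homebrew encryption). All names and the key are invented:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR 'cipher' -- deliberately weak."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KNOWN_HEADER = b"PK\x03\x04"        # e.g. the ZIP magic every .docx starts with
secret_key = b"s3cr"                # hypothetical key, same length as the header
document = KNOWN_HEADER + b"confidential payload"
ciphertext = xor_crypt(document, secret_key)

# The attacker never sees the key, only the ciphertext -- but knowing the
# standardized header, XOR-ing it against the ciphertext start yields the key:
recovered = bytes(c ^ p for c, p in zip(ciphertext, KNOWN_HEADER))
print(recovered)                          # b's3cr'
print(xor_crypt(ciphertext, recovered))   # the full document, recovered
```

The "standard passage" hands the attacker a plaintext/ciphertext pair for free; with a cipher this weak, that pair IS the key.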

(As the title indicates, this does not treat the maths program.)

Maple Prof. is an interesting piece of sw, but my experience (checked with the developer) is somewhat mixed. Since there is no review yet, some info should help.

It's a regular 2-pane outliner, but db-based; this is worth mentioning, since from its regular behaviour, you would think it's text-based; for example, better db-based outliners like MI and UR store pics within the text content in a light format where jpg remains jpg (= even when you don't import the jpg as an item on its own), whilst "bad" outliners (like AO, Maple, Jot+ and others) blow those jpg's up into a format where, for a 30k jpg imported into the text, 1 million bytes may be added to your file.

Outliners (= tree-formed data repositories) often are very helpful on the road, with perhaps a tiny screen. Thus, I never understood why most outliners which have search hit lists, do these in an additional pane, since if you have 3 panes at the same time, tree, content and search results, on a 12- or 14-inch screen, you will not see much of any of them.

Now Maple uses, as search results pane, the tree pane, or more precisely, you can switch back and forth between hits and tree in that pane (different viewers), and this seems perfectly logical since either you browse the tree, or the search results, but you scarcely ever will need both at the same time.

Also, you can do regular search (over tree and content), or then, in tree only; this seems mandatory, but some outliners don't offer this way of searching.

Very unfortunately, though, Maple does not offer Boolean search, no AND, OR, or NOT, which means that any search will be primitive "phrase search" (which is a very good thing if it comes besides Boolean search, but which is unbearable when it's the only search you get); of course, presenting a hit table, but no Boolean search, is quite an incongruity, but the developer says Boolean search will be implemented in the future, but without giving a road map.

Sideline: From my memory, the corporation behind Maple (they do some other sw's, too, but for Maple, there is slow but steady development, whilst for AO, e.g., development is almost inexistent, and Jot+ is defunct or rather can be bought in its ancient state from multiple years ago; none of these sw's have a forum) seemed to be from Spain (which is quite exceptional), but the current staff seems to be from Russia (which is quite regular); I don't know if I'm mistaken, or if they have been bought, or whatever.

Also, very unfortunately, formatting of tree entries (bolding, italicising, underlining, coloring, background color, etc.) is NOT possible with Maple, and it will NOT be implemented, which seems to indicate they chose a bad tree component and are not willing to replace it; of course, you have the usual icons instead, but from my (very extensive) experience, icons could never ever replace tree formatting; in fact, for me that's the ultimate deal breaker.

Now the outstanding Maple feature, which is why I took the effort to write this review:

The only (?) big shot "search over several files" outliner is currently MI, but Maple Prof. has got a similar feature. Now, if you need trans-file search, for outliner files, your best bet is some tool like "File Locator" (The internal text search functions of both XY and SC file managers do it, too, but don't find all occurrences, as hopefully FL does, and, interestingly, they overlook the same occurrences, which are listed in FL (free).)

Whenever you search in outliner files with such a tool (most indexing search tools refuse to index outliner files to begin with), you will (hopefully, see before) at least see where to look further, but no exterior tool will get you to the right item there, of course, and this is what makes such an internal trans-file search tool so tremendously interesting: When you click on a hit there, the proper item will be shown.
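What such an internal trans-file search buys you, compared to an external tool, can be sketched in a few lines: hits come back as (file, item) pairs, so the program can jump straight to the matching item. File names, item ids and texts below are invented stand-ins for several outliner files:

```python
def search_across_files(databases, needle):
    """databases: {filename: {item_id: item_text}} -- a stand-in for
    several outliner files. Returns (filename, item_id) pairs so a
    program could open the right file AND select the right item."""
    needle = needle.lower()
    return [(fname, item_id)
            for fname, items in databases.items()
            for item_id, text in items.items()
            if needle in text.lower()]

dbs = {
    "projects.maple": {1: "Boolean search wish list", 2: "tree formatting notes"},
    "cases.maple":    {7: "Search the case files", 8: "dispositions"},
}
print(search_across_files(dbs, "search"))
# [('projects.maple', 1), ('cases.maple', 7)]
```

An external tool like File Locator can only return file names; the (file, item) pair is exactly the extra coordinate that lets Maple or MI show the proper item on click.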

Since Maple Prof. has got such an exceptional feature (as said, together with MI), it deserves its own review after all, even in the absence of tree formatting and Boolean search. Let me put this straight: Permanent absence of tree formatting makes me discard this program which otherwise has lots of potential, even if I would like it to implement Boolean search immediately, not some day in the future, and I would even be willing to live with the rather bad rtf editor which blows up imported/inserted pics.

Or in other words, the day Maple would replace their substandard tree component, I would switch over to Maple.

Now some brief words on that age-old competition UR-MI.

MI development is much more "developed" than UR's is, and the key misses are:

- UR has no global replace, and the developer is unwilling to implement it (this is the deal breaker for me, since trying to replace some term in 20 or 50 items, by external macro, is crazy and unreliable)
- UR has no trans-file search, but then, you use both programs more or less as "global data repositories", i.e. few people will create multiple files in any of them, so this is less important

- MI (of which the cloning feature development is much more recent than in UR) still has a missing detail in its cloning function which makes it almost unusable: Whenever you add child items (or grandchildren and such) to a cloned (parent) item, these are then absent from the descendants of the clone. Now there are very few instances where this absence would be welcome indeed, but in most practical uses, this is totally awful:

Not only for "one subject in different contexts in general", but especially in the "ToDo" part of your big tree: This absence makes it impossible to put some subject into the ToDo list, and then to work upon that subject from the ToDo list; instead, you will need to CROSS-REFERENCE the subject in the ToDo list, i.e. to jump to the "natural" (and unique) context, and to work from there then.

On the other hand, MI's developer could work on this, and then, his program would undoubtedly be the far better program, at least speaking of Prof. version in both instances, since only MI Prof. has the global replace feature (which everybody will need in the end, even if he thinks he can do without, that's blatant wishful thinking), and which UR presumably will never get.

(I'm repeating myself here: In UR, tree formatting is possible with a trick... whilst in MI, it's available straight on.)

Thus, with a little more development, MI will be number one (and if you're willing to cross-reference instead of cloning, it beats UR even today).

And yes, Maple competitors should adopt Maple's hit table, in the frame of the tree, at least by option.

Thanks, rjbull, for the link: some instructive content, even if it's been a LONG while since I've seen another site as ugly: straight from 1983, or so it seems to me.

He's got some page "Programmable keyboards" over there, too, with "Using AHK instead of a prog. kb" - well, you know, I think by now that it's "Using AHK ON a prog. kb", of course, i.e. just assigning weird key combis on the prog. kb, and then intercepting those impossible key combis by ahk, to do the real scripting.

Tuxman, your Germanese, "As a German, the centricism isn't too bad for me.", is awful; Germans do this ALL THE TIME. I'm no teacher, can't give the right terms to it, but let me explain: "As a xyz" is subject 1; then, "the centricism" (or whatever) is subject 2, and the verb, here "isn't", depends on subject 2, whilst the "As a" for subject 1 unsuccessfully tries to make it depend on subject 1. As they say, Goethe's spirit would rotate in his sepulchre, had it knowledge of such ways of speaking. Thus, every German out there/here, please say, "For me/him/whomever as a xyz" (by this, transposing subject 1 into the accusative, by this breaking up the link with the verb), if really you can't do without the "as a" structure, which is of utmost ugliness anyway.

As for SC: NO English-language forum, just in German; NO English-language help file, just in German; no help worth speaking of in the forum, even in German, just Germans musing about the absence of features, and the developer not deigning to intervene/clarify/inform, except on very rare occasions. Additional prob: In order to "find" something either in the help file or the forum, you must know the German term first, and be assured they just don't use simple translations but have their own, very special SC terminology for many common English terms... (And you should be able to read German to begin with, of course.)

Comes with several add-ins, e.g. a text search function, similar to the one in XY (and neither better nor worse, neither slower nor faster than there; I own both and compared extensively), or a synch tool (which is really bad), and, of course, bulk renaming as any paid file manager offers to some degree (see below).

"Every other file manager is just a sub-set of those three." Innuendo, this is simply not true. Besides, whenever I see a FAR screenshot (as in the linked softpanorama), I feel an urge to scream out loud (Wanna buy my NC, Dos or Win, anyone, btw? = rhetorical question).

Of course, TC is very powerful, but 2 things:

- My ways of doing an AHK tutorial may be debatable, and yes, it was a "work in the making", and I should have it revised, and edited; cut up into several posts, it's become a mess. But then, I tried at least (and am not too motivated to do the necessary work on it, for lack of feedback, i.e. no AHK noob to ask about details, so what!), and any TC "expert" would be free to do something similar (cf. the post above, about the need to search it all together, over numerous hours of hard work, from dozens of forum entries), in order to make TC more "accessible", both from a technical pov and from a gui pov (yes, the step from bold to regular font is known by now, but that's not enough, as we all know).

- But that is not done, and this brings me to a second observation: My tries with the forum were that the developer doesn't take part in it, except some possible exceptions, and if you explain another one of the innumerable weirdnesses of TC and ask for an option to have it another, more "normal" way, TC experts will explain, with lots of goodwill (cf. DO forum, where you will be attacked instead), "why" it is as it is, and only that way, and most of the time, these explanations are quite weird on their own, whilst the developer just has it his way, and no other. Thus, the form of discussion is much more pleasant in the TC forum than in the DO forum, but you quickly get the impression that nothing really will ever change though, and version history (8 now) proves that your impression is right. Also, explanations about "how to" are sparse, and (as said above by Innuendo) are both fractionized and aleatoric... and, my impression, TC experts on that forum like it a lot like that.

Thus, TC expertise has become their hobby, and just as I don't count my spare time spent with AHK, they do the same with TC - of course, people who preserve a minimum of objectivity would argue that time spent with a scripting (or programming) language might be time which, at least in perspective, is spent to some reasonable goal, whilst for file managers, it should be "want to do something? here it is, immediate availability, so that it won't make you lose time unnecessarily": A file manager should be a readily-available instrument for special tasks, not your new folly.

The same applies to other file managers, to a degree, and certainly to DO where "spending time with my preferred file manager DO" has become a hobby on its own for quite some people, whilst the intuitiveness is often absent; on the other hand, DO's got one of the very best help files out there, which helps a lot for lots (if not all) of things.

But at the end of the day, whilst most daily functionality is perfectly done by FC or such, and whilst XY (today on bits, 50% if you haven't got it yet) certainly has got the most pleasant photo viewer functionality of the immediate competition:

As soon as you get to some special needs, you try your 5 or 8 file commanders, 1 by 1, and then you risk having to do it by hand. Last weekend, e.g., I needed to rename JUST the FIRST term of a bunch of folders from upper- to lowercase (whilst the rest of those names would have to be left unchanged). So I spent more than 2 hours with my numerous file managers, and tried to apply, where it DID apply, my knowledge about several regex replace flavors, to no avail whatsoever, and finally, in XY, I did it manually, but renaming in the XY rename list, which at least spared me multiple F2, Return, F2...
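For what it's worth, that rename is a one-liner in a scripting language; a sketch (folder names invented) that lowercases JUST the first whitespace-delimited term of each name, via a regex substitution with a callback:

```python
import re

def lowercase_first_word(name: str) -> str:
    """Lowercase only the first whitespace-delimited term of a name."""
    return re.sub(r"^\S+", lambda m: m.group(0).lower(), name, count=1)

folders = ["PROJECT Alpha Files", "INVOICES Q3 2014", "Drafts"]  # invented names
print([lowercase_first_word(f) for f in folders])
# ['project Alpha Files', 'invoices Q3 2014', 'drafts']
```

To actually rename on disk, one would loop over the folders and call `os.rename(old, lowercase_first_word(old))`; the regex itself is the part the file managers' rename dialogs made so hard.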

Every file manager does it its own way, deep down to regex replace in file names, and the respective help files are far from being up to par, and you end up thinking that you search, and try, in vain, because the relevant special functionality simply isn't there.

A last word on SC (I'm repeating myself here, but it's important, and, ok, I didn't try the last versions of it): If you want quick access to a subfolder (beginning with "ac"), you enter "ac": So far, so ubiquitous. But then, in order to display that subfolder, you press enter TWO times, not once (or you opt for (the totally awful) "NC mode"), and quite frankly, that drives me crazy, since it forces you to always reflect on "in which file manager / program am I? 1 enter, or 2 enter?" So many people in their forum criticised this crazy behaviour, and the developer didn't lend them his ear, over many years (as said, perhaps it's set now, but I'm not sure at all about this).

I like XY, for photos. For everything else, I use FC. And to finish: I suspect SC people of not being able to write in English, and that would then of course hold them back over there, in that depressing but German-speaking forum.

I trialled several of those.
First try free Mouse without borders (MS Garage).
If it's not up to your expectations, try paid ShareMouse (Bartels; I can confirm it sets the standard here).

This being said, all such sw implies your pc's being connected, by network?
Or is there sw where there is no network connection, but some more "primitive" connection, perhaps even without the usual clipboard sharing (cs)?
Ok, cs is very helpful I agree, but...

These sw's (except for my asking if there might be alternative ones, doing it in another, more basic connection) prevent you from having a set-up in which NOT all your pc's  are connected to the web.

Hence my reminding you of the good old hardware kvm's, with physical relays; I suppose there is no way to access your private data on pc 2, from some web actor accessing pc 1?
And if you consider such a physical thing, let me tell you, before you buy, that those 30-40$ devices, often functionally identical to the 120$-and-up devices (speaking of 2-pc sets here, multi-pc sets can cost hundreds of dollars), have cheap relays which will quickly make you fear about the life time of the device in question, upon every switch.

Thus, there are several aspects to bear in mind, BEFORE deciding on your respective multi-pc set-up. Perhaps it's not that bad an idea to have 1 extra pc for the web, and then several pc's (if you really need them, concurrently), within a cable (not: air) network, with the (excellent) Bartels sw, not connected to other devices which might give web access.

Any multi-pc setup brings security considerations with it, and not even mentioning those doesn't seem that prof.

"Especially when somebody else is picking up your distribution expenses."

Ok, since this "old" thread has been revived anyway, let me give my 2c to this aspect. Most developers would be happy to have "distribution expenses" caused by heavy downloads from their own page; today's schemes are such that for most developers, 1,000 downloads or more per month would not only be included within their hosting, but would be considered peanuts by their respective hosters, which means such download traffic would not force them into another, more expensive contract.

Now, whenever you see some sw on a site like cnet (e.g. because the term "review" redirected you there, which is not a bad thing since as said, many reviews there are quite informative), what do you do then? Download from cnet, and have trouble, or go to Softonic and have as much trouble (alcool: bad), or to Softpedia, and presumably have no trouble? (wikipedia: good - that's how I make sure not to mix up bad and good.)

No, your natural way of doing things is to search for the homepage of the developer, and to try to download from there. Now you could say that with unknown developers, that's a much greater risk than downloading the same free or trial program from e.g. Soft(wiki)pedia, since they do some scanning, but at the end of the day, nobody except the developer knows what his program will do, behind the scenes, when you run it, so you either trust him or not.

But then, in light of the above, such direct download should be possible in most instances, and when not, for some financial reason behind it (really big download, really lots of downloads every month), it should be up to the (honest) developer, on his homepage, to redirect you to some honest download "provider" where NO crap, viruses, whatever will be added, i.e. as long as (e.g.) Softpedia is "safe", why should a developer not redirect you to that site, and to the download link over there?

Problems start from 2 bad ways of doing things:

- Your downloading from "anywhere", from "where it's available", without thinking

- Much worse even (and you should think twice then about the developer: is he really so naïve, or is he not entirely honest?): The download redirect link from the developer's page brings you to cnet, Softonic, et al.

This rule, download from the developer resp. following his redirect link to a trustworthy download site, should apply except for defunct sw (where there is not any developer's site anymore), and those are rare; most often, you get crapware by not following the above rule, by not thinking about it.

And yes, whenever some program is unknown to me, and its developer, too, I try to gather some "reviews", some shared experience with that program from different sources. Which means we're not 15 years old anymore, we should be beyond "sw collecting". When in doubt, be sure you'll be able to live without it.

My 2c.

Great db Foundation
Indexing of various file formats for linked files: Your list is quite impressive, and over at outlinersw, pdf linking/indexing was mentioned a lot as something RN is really good at.
So not the slightest criticism of mine here, just two reminders:
- This excellent linked-files M of RN should be another (good) reason to not import external files, thus blowing up the db
- Linking a max of external files into an outliner invariably produces de-synch very quickly, since you do part of your file M within the file manager (in which you have available all files and their structure "live"), and part in your outliner (where you have a subset of which you never really know how much it is in synch, or not in synch anymore, with the corresponding set in the file system, re renames, moves (and perhaps even copies), and then also and first of all, completeness of the sub-set); for me, that's the principal reason why my concepts have shifted to "integration of an outliner into file-system-based PM/file M", instead of desperately trying to do PM within any outliner, only to never cope with the numerous "file system replication faults" within the outliner. In other words, a combination of both db-based outliner and file manager is overdue, but any such combination should preserve the live character of its file system representation.
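That de-synch can at least be detected programmatically. A minimal sketch (function name my own invention): flag stored link targets that no longer exist on disk, which is exactly what happens after renames or moves done in the file manager behind the outliner's back:

```python
import os
import tempfile

def find_stale_links(paths):
    """Return stored link targets that no longer exist on disk -- the
    de-synch an outliner's linked-file list silently accumulates."""
    return [p for p in paths if not os.path.exists(p)]

# demo: one real file, and one link whose target was "renamed away"
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
links = [tmp.name, tmp.name + ".renamed-elsewhere"]
print(find_stale_links(links))   # only the second, stale link is reported
os.unlink(tmp.name)
```

Detection is the easy part, of course; a truly "live" integration would follow renames and moves instead of merely reporting the breakage afterwards.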

As said, I did not grasp how to search for tag combinations, but the special tagging features you list are quite impressive (tag icons, plus item icons? well, that's too much fuss for me, I like it neat, but then, automatic grouping according to their respective source tree/subtree(?), this should be of high value indeed; type-matches I didn't understand; tag list vs. tag-search, and by tree: if ever I understand how it's done, I'd probably very warmly recommend RN for this reason alone)

Folder tags
Above in weaknesses you describe that folder tags are NOT automatically updated when items are moved, here you seem to say the contrary, or then I simply misunderstood what you said above I suppose? Again, the help file is a nightmare, and especially for these (brilliant?) tag features, it would have helped for the developer to explain in a way that more people easily grasped how to apply them.

Multi-line entries in the tree
I have to admit I did not discover this feature before, and it's very rare, and should be extremely useful; I missed it a lot in ANY of the outliners I ever used, so it's a good thing to tell people it's there. (As above: Most tree components do not allow for multi-line entries, so when this feature isn't there, don't count on ever getting it; Rael seems to have made a good choice in choosing his components!)

Hoisting
It's there, as you say, and it's a very important feature for anybody who puts lots of items in one tree (which, as explained, I don't do anymore). Btw, most db-driven outliners have got hoisting nowadays, by peer pressure: it's more or less considered standard, i.e. mandatory. (Sideline: It's ironic that 2 panes (also available here, in Prof.), which are much easier to implement, are much rarer than hoisting, whilst asked for almost even more!)

ABC-sorting of tree?
Wait a moment here? If it really was non-destructive sorting, this would be sensational, but I didn't find that feature anywhere: In every which outliner you try (and many of them offer this feature), automatic sorting of the tree, or of subtrees (sibling sort), is destructive, and I just rechecked RN: sort is destructive, as usual, i.e. there is NO alternative view in which the items would be sorted, and from which then you could revert back to the original tree/subtree. (The "real thing" would be easy to implement, though, but no outliner developer ever did it, from what I know.)
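In code, the non-destructive sort asked for here is trivial: present a sorted VIEW while leaving the stored sibling order untouched (the destructive sort every outliner actually offers corresponds to an in-place `siblings.sort()`). Item titles below are invented:

```python
siblings = ["zeta", "alpha", "mu"]   # stored sibling order, as the user arranged it
sorted_view = sorted(siblings)       # what an alternative, sorted VIEW would display
print(sorted_view)                   # ['alpha', 'mu', 'zeta']
print(siblings)                      # ['zeta', 'alpha', 'mu'] -- untouched, revert is free
```

Which is why it is surprising that no outliner developer has done it: the hard part is the UI toggle, not the sorting.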

Coloring/formatting of items in the tree
In fact, that is standard (and it's very annoying to encounter an outliner that doesn't allow for it, e.g. Maple, or then Ultra Recall (which at least offers a trick to do it... of which I only became aware after having left UR, via Paul's blog)), but you say it's rare (and available in RN) that the title can be formatted whilst other columns in the tree are left out of formatting. In fact, I don't know; I should open MyInfo with several columns, then try... In fact, I don't believe in tree columns in outliners anymore, it makes your data (as a whole at least, more or less) un-exportable (same problem with extensive use of clones), and if you really need columns, you will quickly discover (I cannot speak for RN here, but for several competitors) that the outliner's functionality with regards to its tree columns is very sub-standard (compared to db's, to Excel...), so column use in outliners might become a very frustrating experience, and most of the time, you'll be much better served with tags, or then with simili-tags, i.e. £a:500, £e:1,3, etc., i.e. encoded, simulated "fields" in the text of your items. Of course, if those columns permit numeric fields, and then search for field xyz < 500, and field abc > 15, THAT indeed would be of real use. (As said, I didn't trial RN for such functionality.)
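The simulated-fields idea can be sketched with a regex (token syntax taken from the £a:500, £e:1,3 examples above, including the comma decimal; item texts are invented), answering a numeric query like "a < 500 AND e > 1" without any tree columns:

```python
import re

# matches tokens like £a:450 or £e:1,3 (comma or dot as decimal separator)
FIELD = re.compile(r"£([a-z]+):([0-9]+(?:[.,][0-9]+)?)")

def fields(text):
    """Parse the simulated £name:value fields out of an item's text."""
    return {name: float(val.replace(",", "."))
            for name, val in FIELD.findall(text)}

items = [                                    # invented item texts
    "Offer, vendor A ... £a:450 £e:1,3",
    "Offer, vendor B ... £a:620 £e:0,9",
]
# numeric query "a < 500 AND e > 1" over the simulated fields:
hits = [t for t in items
        if fields(t).get("a", 0) < 500 and fields(t).get("e", 0) > 1]
print(hits)   # only vendor A qualifies
```

The outliner's plain-text search can still find the raw tokens; the parsing layer above is only needed for range queries, which is exactly what tree columns rarely deliver well.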

Tree position not only to the left, and individual setting for any tree
Well, I have never been tempted to put the tree anywhere else, but "be able again, to set that option individually for any of the tab/page’s trees" intrigues me!
So a tab/file is a "page"; as said, such very particular vocabulary doesn't make any sense, but one file/tab/page with several trees? What are we speaking about here? Or do you simply mean hoisting, several hoisted sub-trees in different panes, anywhere on the screen? But that again would have been "tabs", which contradicts your "tab/file/page" vs. "several trees". (Or did you want to say "each tab/tree/page/file" (and not: "each tab/page's trees") can have its individual setting for where the tree pane is to be positioned?)
I'm not criticising a possible minute fuzziness in expression in a very long (and excellent) text, I just want to grasp a possible, sensational detail you seem to describe and which I might have overlooked?

"without having to actually open any one of them"
I'm perfectly d'accord with you that real info in the file name very much helps in browsing which item(s) you should finally look into (instead of having so many of them opened in vain).

As you say, it's a rare and useful feature, and locking the top row is even rarer and just as useful! And I have a correction to make: above, I had spoken of a spreadsheet feature within regular content panes, i.e. embedded in rtf text, like in text processors, whilst RN comes with a special, additional pane format which is especially for spreadsheets (and only for spreadsheets), and where your expectations re inbuilt functionality are of course much more developed than with the above described minimal table of just 10x10 cells.
Of course, the question remains if it's really useful to have such an alternative, more or less "full-grown" spreadsheet application, within your outliner, instead of Excel/standard spreadsheets integrated into your outliner; most outliners don't offer such integration at all, indeed, and as for Ultra Recall, many users are very unhappy with Excel integration, via the MS internet browser it seems?, in that competitor.
But that would just be another argument for my concept, described here and in the other thread, re live integration of external files into your projects, instead of doing some "half-baked" additional spreadsheet within an outliner, "half-baked" meaning "lesser than Excel, so that if Excel had been better integrated into your outliner, you would have preferred the latter".
Sideline: You mention even Memomaster here; very few people know that program, which unfortunately does not offer clones and some other important features, but which is, in the field of "network integration", or "as groupware", far more developed than any other outliner I have ever encountered, and which for this reason is currently the only outliner that has made its entry into the corporate market.

Some weeks ago, on bitsdujour, re The Journal, I said,
"Mike, +1!

Of course, the real question is if you rely on various encryptions built into different applications, or if you buy one encryption program, e.g. "Steganos Safe", and which will encrypt some folder/drive, and in which you then store various folders with multiple files from multiple sources, which is the approach I adopted.

Both approaches have their respective advantages: It's handy to be able to encrypt some parts of a bigger outliner structure, working in the outliner, but then, the problem with this approach is that anyway, you will need some encrypted container, too, for all those files that ain't handled by a specific encryption-able application, so the container is there, anyway, and then, why not use it for everything you want to be encrypted?

Once it's opened, everything works smoothly: you put your ordinary files into your regular folder, your files-to-be-encrypted into the container... and then .lnk files pointing to those... into your regular folder!

That way, your applications "think" you've got all your stuff together, and access all files in the regular way, without disturbing your "user experience", but in fact, sensitive files are hidden from your wife, or your boss, or perhaps even your Government.

This "replacing the actual files, in the regular folder (structure), by .lnk files pointing to some encrypted container", is a very good means of encrypting everything you want, from all programs, AND avoiding multiple encryption engines, multiple passwords, etc.

Of course, you need quick means to create (and correctly name) the .lnk file, but with a macro, you can even replace the original file, in the original (unencrypted) folder by the .lnk file, and move the original file into the encrypted container. All this works fine in everyday life once it's set up.

And then, the question remains what backdoors might be implemented in such containers, for the Government... ;-) (but neither for spouse nor for the boss)"

So much for "full encryption of the whole db", but I acknowledge that a manager "on the road" would very much like to have the full db encrypted, and without thinking about it.

But here again, I'm intrigued by your mentioning special RN terms, by saying "and finally, the integrated so called “page transfer” AND “floating tree” feature, that both allow one to transfer
in & out items on either a per tab/page basis (page transfer) or an individual per folder notes section basis (floating tree)
from & to another notebook (actually up to & from 3 others open simultaneously)" -
So now we've got "notebooks" here, different from files/db's, different from "pages"... - well, that's not a really neat concept I dare say, and of course, I don't understand what this "page transfer" here would mean:
Are we really speaking of several independent trees in one db, and a "notebook" would be a db? And how do "pages" and "trees" differ? Above, "pages" were different from items, though...
I'm lost! (Ok, you say it further down, "pages" are trees, as I had suspected, but then, this paragraph here gets even more impenetrable.)

"ability to merge multiple notes into a single output file"
No criticism here, just let me say that any outliner which does not permit this, would be totally unacceptable.
Sideline: As described in my coding thread, you need this feature to bring your code (i.e. split up into multiple items of various "indentation" (in fact hierarchical) levels) into compiler-readable format; hence the need (but no problem) to "outcomment" your tree entries (the entries, not the respective contents). Now, Ultra Recall offers a very special thing where all your item contents, i.e. without the titles, are shuffled into one target item (from which you can export, of course), so this would mean, in theory, an even more elegant tree, since you don't have to outcomment its entries in UR. Now the irony: as said elsewhere, UR is one of those rare outliners which are totally unsuited for programming, since it lacks "global replace" (and even "replace everywhere in this sub-tree"), with a developer who's unwilling to implement that almost ubiquitous feature. (Nothing to do with RN, hence "sideline", but it's noteworthy. As for RN, as you say here, it produces a single output file with which you can then feed your compiler!)

Custom shortcuts for every command you'll ever need
Very handy, and as we've seen above, thanks to Scott, Rael even went the "extra mile" here, thinking about tricks his competitors didn't think of (and which even I, snooping around for such "extras" though, didn't grasp).
Just let me repeat what I said above: It's by thoroughly checking the virtually endless key assignment table that you will discover multiple strengths and unknown features of RN, and which you will not become acquainted with just by browsing the menu.

"is yet so slow of being rid of some of its still existing usability-bugs"
Nothing to add here; it's WEIRD that a program that good in many respects does such ugly things like filling up its history with endless lists of unwanted transit items, and more, and is hampered by a help file which literally hides its multiple strengths (sophisticated tagging details; let's abstract from the obviously substandard internal tag management by which, with 1,000 tags, you can bring down this program).

As you see, I didn't want to lessen the importance of your fine review, I just wanted to put those features into a more general perspective, in order to value them by competition and what's possible in general. And it's evident Rael should do some long-overdue homework, and his offering could quickly become number one among current offerings.


After writing the above post, it occurs to me that my first post here was too abstract and not helpful enough.

Why do I muse about the first part of a routine, the second part, and then the interconnexions of both? (Have a short look into the first post here if you begin reading down here, though.)

Because good programming style is to abstract, to combine, AND to stay easily readable (i.e. a little bit the contrary of what I do in my writings here).

Let's have some real-life examples.

You do a typical noob AHK script. There's a trap. You'll probably use the construct

#IfWinActive, some program
then all your key bindings there
#IfWinActive, some other program
then all your key bindings for that other program

and so on.


In many cases, you'll have similar routines, triggered from within different scopes. Ok, you could do triggers pointing to routines, but even then, even for the trigger scriptlets, lots of similarities would be spread all over the place instead of being held together, i.e. you'll need to send (sometimes multiple) attributes (unfortunately, necessarily by variables, in AHK). This means that if you have ONE key assignment in the form

if ( winactive("abc") or winactive("def") ... )
else if (winactive ... etc.
else if ...

and then trigger ONE routine, or just a few routines, for similar tasks, you'll get much neater code, since in most cases most attributes will be identical (except for the variable indicating from which application the other routine was triggered), and also "on target", i.e. for those routines which then handle lots of similar functionality, with just some little differentiation depending on the trigger source.
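To make this concrete, here is a minimal sketch of that "one key assignment, scope checked inside" idea, written in Python rather than AHK for brevity; the window names and the paste routine are invented for illustration.

```python
def on_paste_special(active_window: str) -> str:
    """ONE binding; the scope check happens inside, not in many
    separate #IfWinActive blocks (window names are made up)."""
    if active_window in ("abc", "def"):
        return do_paste(source=active_window, strip_formatting=True)
    elif active_window == "ghi":
        return do_paste(source=active_window, strip_formatting=False)
    else:
        return "ignored"

def do_paste(source: str, strip_formatting: bool) -> str:
    # The shared routine: identical steps for every scope, with the
    # trigger source passed along for the few spots that differ.
    mode = "plain" if strip_formatting else "rich"
    return f"pasted ({mode}) into {source}"
```

The point is that the shared logic lives in one place, and only the scope test and the one differing attribute vary per trigger.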

After this intro into "Key, then scope, instead of the other way round, for AHK", i.e. the real-life example for the trigger, let's have a second real-life example, for the trigger-and-target this time.

Let's imagine you do your own little file manager, for PM and such, with 6 or more panes (as they are in some ready-made file managers available out there). Trigger would be selection and then Return, or Click or Double click, in those 6 panes or so; let's imagine such selection, in some panes/list fields, would trigger external display, whilst other such selections would simply change the content in neighbouring/subordinate list fields.

So what will you do? Do like an AHK noob on his first day, and do 100 scriptlets, all VERY similar to each other? I hope not!

Instead, you will gather all those different trigger situations in part 2 of your routine (part 1 containing variable declarations and such stuff); if this kind of trigger in that pane number, your little program should do this or that suite of commands; you assign variables which are then checked for in part 3 of your routine.

There (again, this is a real-life example for what I exposed in post 1 above), you'll do a second if, if else, if else... (or condition / when / when... (not possible in AHK) structure, and indeed, as explained above, this second (part 3) if structure is NOT identical or quasi-identical with the similar conditional structure in part 2 above.

Since, as explained, those triggers are very similar, but there could be several groups of commands, instead of having just one part 3 of the routine, you will of course call external routines, "sub-routines", from there, either if those "executive" routines are rather long on their own (but as explained in my previous post, why not do a 10- or 12-page routine, as long as those pages are clearly distinct?!), or if you trigger routines which must also be accessible from other triggers (= keys or routines).

In that second case, there is no choice, and you'll do them as separate routines of course (and there are cases where you first do that "routine" as page 7 of 12, within such a bigger routine, and it'll only be after writing it that it occurs to you that page 7 should be accessible from elsewhere, too; then you simply cut that page out into an external subroutine, in which case you'll check for the variables declared in that new external subroutine, in order to get all the necessary information from its trigger routine, of which it was once just one part; i.e., as said above, fractionizing multiplies headers (which can become rather extensive), and if you don't need access to some routine part from the outside, doing it all in one big routine helps with minimizing unnecessary headers).

Back to our tripartite routine: Now, in part 3, you check for every variable you will have created/set up in part 2, and here, the similarities between different blocks could be totally different from the similarities in part 2: just some examples: different file formats, different target panes, and as said before, even, instead of showing files, just listing files.

Here in part 3, you'll group again, according to such similarities, but the items in your conditional structure will probably be in a totally different order from the one they, or similar ones, had in part 2. And of course, you will spread your blocks over different pages here, and this could even determine the order in which you put your blocks, i.e. why not do if var1 = 3 or 4 or 8, else if var1 = 1 or 2 or 5, else if var1 = 6, else if var1 = 7, etc. -

just because those 3 and 4 and 8 are quite similar and easy and can be treated all together on page 3, whilst variants 1, 2 and 5 will be treated on page 4, together again, whilst 6 and 7 will each need, separately, a page of their own; if afterwards you see that 6 needs its own subroutine, you will not leave the call for that subroutine alone on page 8 or so, but you'll put the else if var1 = 6 on page 3, before the "longer" code blocks.
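The tripartite pattern just described can be sketched as follows, in Python for readability; pane numbers, triggers, and actions are all invented for illustration:

```python
def handle_selection(pane: int, trigger: str) -> str:
    # --- part 1: declarations ---
    action = 0  # the "pointer variable"

    # --- part 2: classify the trigger situation, set the pointer ---
    if pane in (1, 2) and trigger == "return":
        action = 1          # show the file externally
    elif pane in (3, 4, 5) and trigger in ("click", "return"):
        action = 2          # refresh the neighbouring list field
    elif pane == 6:
        action = 3          # just list files, don't display them

    # --- part 3: dispatch on the pointer, grouped by similarity ---
    # note the grouping (1 with 3) differs from part 2's grouping
    if action in (1, 3):
        return "display" if action == 1 else "list only"
    elif action == 2:
        return "refresh neighbour"
    return "no-op"
```

One handler, no hundred scriptlets; part 2 and part 3 can each group their branches by their own kind of similarity.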

Sideline: It's always a good idea to discard simple things as soon as possible. E.g., I never write, if x ... then 10 lines, else, return, but I always write if x = 0 (or input = zero or something), return, and then, without else, without indentation, the main structure:

if a = 1
   10 lines here, all indented

versus:

if a = 0 ; even if it's very improbable
   return
here 10 lines of main code (no braces, no indentation)

Accordingly, I check as early as possible for values that would invalidate other structures, so as to not even run parts of those structures, to then be aborted anyway.
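That early-discard rule looks like this in a Python sketch (the function and values are invented):

```python
def process(items):
    """Guard clause first: discard the trivial case immediately,
    so the main code needs no else branch and no extra indentation."""
    if not items:          # even if it's very improbable
        return []
    # main code follows, unindented relative to an else branch
    result = []
    for item in items:
        result.append(item * 2)
    return result
```

The invalidating value is checked first, so the bulk of the routine never runs just to be aborted later.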

Now a sideline: You could do this tripartite 1-2-3 for heading, set-up, execution, with goto's instead of variables, or at least you could replace lots of variables by such goto's.

As said before, don't be afraid of goto's: if their targets are on top of above-described pages 5, 6, 7, there is no problem whatsoever. Goto's don't make your code spaghetti code PER SE, and functionally, there is no big difference between target-pointer-variables and goto's - except that in my multiple-spreading variable-if-structures, I don't then need further goto's in order to leap over the following blocks, whilst in a goto structure, you will need to pay attention to do that in order to get OUT of your goto target: if you don't, execution will simply fall through into the following block, and so on, and in 99 p.c. of the cases, that's presumably not what you intended (and which an if / else if structure does not do)! So, pointer variables are both much more flexible (ok, that could become a trap, your, by laziness, interweaving several if structures...) and neater.

And, of course, most programming languages have abolished goto's, which would become an obstacle when translating to code in some other language. Obviously, those same extremists that abolished goto's, did NOT find a way yet to abolish pointer variables, i.e. can't stop you from (mis)using integer or yes/no/true/false variables as pointers being even better goto's than original goto's ever were.

Use such pointer variables to structure your code, and it will become easy to write for any beginner, and will be perfectly readable/neat, maintainable, etc. It's a good way to code, and that's why it was worth it to explain it to you better than in post 1 here.


What about my question at the end of my previous post? Is there any editor where you could suppress search hit lines containing search term 1, which are NOT followed by hit lines containing search term 2? There are many occasions where such an editor would become more than helpful...

(As said before, rtf formatting of your code is so extremely useful that I would not switch from outliner to editor, for writing code, but many people will not switch from editor to outliner, but would switch to a really better editor than their current one, and this feature would make all the difference, as explained above.)

On outlinersw, I was censored and threatened with being thrown out, for being too harsh with RN. As we all can see, and in the absence of an RN forum, its developer (who, as said, does not ignore this thread) does not see any necessity to express his views/intentions re his program, which e.g. by its more than amateurish history function is almost unusable.

Folder Tags
Without contradicting you (and without having trialled this feature), I think that additionally, the behavior you describe, or at least a similar one, could be of big benefit.
In fact, RN does not have clones, so when you move an item which automatically got the corresponding folder tag when it was created, that item, by this "ancient" folder tag, has got some info of its "origins", or of its original/alternative context; of course, this should be made evident in some way, e.g. by adding a "From " to that original tag.
So what you describe is just by sloppy programming, but something in that line, and better thought-out, would be welcome indeed.

Go previous in same file, not working trans-files
As said, the (intra-file) history is unusable, so I've got some doubts about "Previous/Next" here, too.
History-for-files is easily done by external macro, for RN files, and for every other file.
The intra-file history should work, since for external macros, it's often impossible to replace the missing/unusable internal function.
Of course, it would be even better if there were several histories: one overall, one for each file, for the items visited there... and, most important, some history which only a deliberate key press would enter an item into (and which would encompass every file, of course), in order to keep just those items which you'll need to visit again soon... and then, that list field should be done in 2 columns (or 3 if you like to indicate the corresponding source file, too), the very first column being for a number: you press 3, and the program shows item 3 in that particular list: THAT's user-centered gui creation...
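Such a deliberate, numbered history could be sketched like this, in Python; class, method, and file names are purely illustrative, not any existing program's API:

```python
class PinnedHistory:
    """A 'pinned' history: items enter only on a deliberate key press,
    across all files, and pressing a number jumps straight back."""

    def __init__(self):
        self._items = []                  # (item title, source file)

    def pin(self, title, source_file):
        """Deliberately add the current item; returns its number."""
        self._items.append((title, source_file))
        return len(self._items)           # 1-based, shown in column 1

    def jump(self, number):
        """Press a digit: return (title, source file), or None."""
        if 1 <= number <= len(self._items):
            return self._items[number - 1]
        return None
```

The number column is the whole trick: one key press instead of scanning an endless automatic history.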

Selective bolding of PARTS of table entries
The same problem persists with trees in outliners, why? Because almost all tree components simply do not allow for special formatting of just one word of an entry there; for any formatting, it's all or nothing. Similar for spreadsheets, Excel being an exception (and I know of one tree component that permits it, too). So I suppose that the table functionality of the rtf field component used in RN does not permit it, and with such limitations of the respective component, both developer and user will then have to live (until the developer throws out the component) - cf. Ultra Recall's abysmal MS rtf component...

Necessary scrolling in tables
As before, but I could imagine the developer could spice it up, i.e. add the functionality missing from the original component as it's delivered. This being said, I more and more believe that big tables should not be done in an outliner, but in Excel or similar, and that PM/project files gathering should be done from a higher level (see my thread on that, derived from this one), than from the ordinary outliner tree. So in my concept, tables in outliners are good for 10 rows x 10 columns, not for anything much more, idem with other add-ins.

Missing tagging scalability
Very interesting, thank you! This is a typical program fault you'll never get to know just by trialling, and slow-down and crashing even in the 3-digit range is more than a pity!
I'm not sure if I should follow your attempt to see it from the bright side: I always whined (even here) about the unmanageability of endless tag lists, but then, the user should have the freedom to apply several tags to any item, even if that means a 6-digit number of tags; that's why I complained about the weirdness of the tagging function in RN to begin with, in my outlinersw post on that matter and here, e.g. not grasping how I'm supposed to quickly and easily do a) tag combinations and b) manage multiple tag sub-categories. Of course, if even some hundred tags make this program unreliable, well, no need to discuss better tag management for thousands of tags...

From your "page" I see that RN, just like MyInfo, thinks it advisable to introduce weird non-standard denominations. Of course, a tree with its contents should be called a file if that's its technical organization (as it is, in both programs), and an item should be called an item; to call a tree/file a "page" is outright ridiculous (of course, I'm not criticising you who just wanted to be faithful to the particular vocabulary), but objectively, it's nuts to call it this. (And in South Africa, they speak English, which is not the case for Bulgaria, so here it's not a possible case of bad translation to which then the developer stuck for reminiscence reasons.)

So much for the "Still bad"; I'll look again into your incredibly rich and informative post in some days; if more posts were like yours here, this forum's usefulness would be multiplied.

(Immediately above:) "In those languages, you'd do the inter-item checking from the headers again."

Well, it was late in the evening...

In fact, for such languages to check variables, you must run the compiler, and we're speaking of pre-compilation checking here. So it seems some "partial compiling" just for one routine, to check intra-routine, would be a very good idea.

And some general observations:

You absolutely need "global replace", in order to do programming within an outliner; you might think that's ubiquitous, when in fact, in rare but notable cases (Ultra Recall), there is no such functionality (and "global" includes "this entry and its children"...).

In your outliner, you need exporting of "this entry and its children / whole tree" to a .txt file; the compiler doesn't need all your formatting. Then you change the suffix and run the compiler on that file, and if need be open it within an editor, in case you can't identify the compiler's messages otherwise than by line number. (I do all this by script.)
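The script step could look roughly like this Python sketch; "echo" stands in for whatever compiler you actually run, and all file and command names are placeholders:

```python
import shutil
import subprocess
from pathlib import Path

def compile_export(exported_txt: str, suffix: str = ".ahk") -> str:
    """Copy the outliner's exported .txt under the compiler's suffix,
    run the (placeholder) compiler, and return its messages."""
    src = Path(exported_txt).with_suffix(suffix)
    shutil.copy(exported_txt, src)        # "change the suffix" via a copy
    proc = subprocess.run(
        ["echo", "compiling", str(src)],  # stand-in for the real compiler
        capture_output=True, text=True,
    )
    return proc.stdout.strip()            # line-numbered messages go here
```

From here, the returned messages can be opened in an editor to chase line numbers, exactly as described above.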

If you insist on using an editor to begin with, you need to mimic an outliner's natural division into heading and body, into tree and content pane, and that's why you'll need Boolean search in your editor (always with hit table, i.e. with a list view displaying all occurrences of your search expression, together with their context):

For whatever would be, in an outliner, a heading, have an outcommented line with some special char.
For any variable, use another special char (you could even have several such special chars/char combinations, like $a at the end, or another group with $eb at the end, etc., by "greater context", and also grouping by variable format, i.e. integers, strings, many more; it's also possible to tag (!) one variable with several such tags, so that it appears in different such searches).
Similar for routine calls and such.

Then, your search expression would e.g. be:
£ OR *$eb
and you would get a long hit table with lots of unnecessary entries/headings (the £ hits), but also with all variables of the group eb, beneath their respective headings (which is the part you're after).

Yes, you could try to "optimize" this by also trying to tag your headings (or to cut up longer code into several files, but that would be dangerous if then you don't search "over all"), but if you code headings at the beginning (not the end as for variables and such), i.e. in the form
;£ Respective Heading
you will see at one glance where there are lists of headings with no "hits" in them, and where you should really look.

Of course, some "Expanded Boolean" would be more than welcome: a routine that would only show a "first-OR-element" hit when it is followed by a hit from the "second-OR-elements" variety, i.e. which would suppress any £ find NOT followed by a $ find in our example; but currently I cannot recall any ready-made search routine (in an editor or elsewhere) that would do that, without your first programming that more elaborate routine yourself.
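For what it's worth, such an "Expanded Boolean" filter is easy to script yourself; here is a Python sketch operating on an already-produced hit list, using the £/$ markers from the example above:

```python
def expanded_boolean(hits):
    """From a hit list, keep a heading hit (line containing '£') only
    if at least one '$' hit follows it before the next heading hit;
    all '$' hits are kept."""
    kept, pending = [], None
    for line in hits:
        if "£" in line:
            pending = line          # hold the heading until confirmed
        elif "$" in line:
            if pending is not None:
                kept.append(pending)
                pending = None
            kept.append(line)
    return kept
```

A heading with no variable hit under it is silently dropped, which is exactly the suppression asked for.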

donleone, in this RN thread, said,

"- RightNote can do internal Quick-Linking to another note using e.g. the shortcut CRTL+SHIFT+K,
but the quick-link only remembers as long as you refer to an item on the same page/tree.
For when namely a quick-link is made to a note, that then gets dragged over unto another page/tree,
it breaks the quick-link and says "This item has been deleted" (even though it's just on an other tab)
So the ability to sustain note-links across pages, is a missing ability yet or bug."

and I said,

"Correct me if I'm wrong, but "other tab" is other file (except for hoisting of course), other db, and you describe a problem that currently harms any one of those db-based outliners, whilst the text-based ones are even worse, do NOT allow even for intra-file cross-referencing. OMG, I see I develop this too much here, so I cut it out to a new thread!" - here it is:

That's why I, 22111, on outlinersoftware, some months ago, devised the concept of a better db-based outliner, in which there would not be 3 distinct db's for 3 tabs/trees, but where the trees/outlines would be stored distinctly, as lists, from which the trees would then be created at run-time, from a set of ALL items, which in that db would be totally independent from each other; i.e., there would be 2 db's, one for all items / single bricks, and another one for multiple architectures, for which all those bricks would be available in every which combination (order, hierarchy, cluster, whatever macro compounds).
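A minimal sketch of that separation, in Python: items stored once, trees stored as mere arrangements of item IDs and built at run-time (all names are illustrative, not an existing program's schema):

```python
items = {                      # db 1: every item exactly once
    1: "brick A", 2: "brick B", 3: "brick C",
}

trees = {                      # db 2: arrangements referencing item IDs
    "project": [               # (item_id, parent_item_id_or_None)
        (1, None), (2, 1), (3, 1),
    ],
    "alt view": [
        (3, None), (1, 3),     # same bricks, different architecture
    ],
}

def render(tree_name):
    """Build the visible outline for one tree at run-time."""
    children = {}
    for item_id, parent in trees[tree_name]:
        children.setdefault(parent, []).append(item_id)
    out = []
    def walk(parent, depth):
        for item_id in children.get(parent, []):
            out.append("  " * depth + items[item_id])
            walk(item_id, depth + 1)
    walk(None, 0)
    return out
```

Any item can appear in any number of trees, in any position, without being duplicated: the tree rows carry the arrangement, the item db carries the content.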

Of course, there are some conceptual difficulties with such a construct, since in that second db, the one containing the trusses from which the individual trees would be created, there should be some "combine" functionality; i.e., it would be devoid of sense to ask the user to build each tree up from zero, so multiple "partial trees" would have to be combined, in myriads of compilations and combinations.

And of course, there would be a third (distinct) db (part) from which you would have access to the compounds listed in part 2, and the interaction of all within 2, and/or of 1 accessing compounds in 2, or managing some of that combination work, is both conceptually demanding and especially difficult, since most prospects will immediately run if such a project isn't presented to them in a way that makes them feel very comfortable; i.e., the fear of being inadequate vis-à-vis such a "difficult" framework would make people not touch it to begin with. The French call this phenomenon "l'embarras de richesse": the completeness of such a system would also be its evident complexity, whilst you must hide complexity instead, in order for the prospective user to give your IMS a chance.

In other words, part 1 (part 3 above, let's rearrange it top-down instead: 1 = project level, 2 = compound level, 3 = innumerable, independent items) should give access to compounds (trees, lists, e.g. from search results), but should be as clear as possible, whilst in part 2, there should be all the possibilities waiting, but it's evident such a system should start piano-piano here, whilst in fact, here would lie the incredible force of such a system.

In fact (and I developed this at length over there), today's THREE-pane outliners just put an intermediate flat list between tree and content/single item, instead of shuffling the tree into the middle pane and creating a new, master tree within pane 1. And you see from the implications of imagining such a second tree, hierarchically below the first, that of course, at a strict minimum, there should be some floating pane with a THIRD tree FROM WHICH TO CHOOSE (i.e. single items, or whole trees/subtrees), and which (the pane) could then contain any subtree from tree 1, or also any search result, both, as said, to choose from, for tree 2, the single project tree you are going to populate. And of course that "target tree", the tree to be constructed, or then, afterwards, to be maintained from tree 1 (i.e. in tree 1, you select the tree to be displayed in pane 2, or in the special "source" tree pane), should be able to contain part-trees from other trees, both in their synched, original form (as currently maintained over there, i.e. in the original source) and in individualized forms, i.e. some items of the original sub-tree cut out here since not needed here, others changed here, and so on; i.e., I'm speaking of cloned parts, and of copied parts, or rather of cloned parts that later get into a "just-copied"/augmented state, and this individualized for sub-parts of those originally-cloned sub-trees...

Just imagine somebody in the legal profession who, for some trials/proceedings, needs some legal dispositions in their current state, and others in the version that was in effect at the time of the facts!

Thus, there should always be complete clarity of the respective state of deliberate de-synch (be it item contents, be it item versions, be it similar but partly different item groupings), in any which context, so part 2 will not only be basic lists of item IDs and the hierarchy info for the respective tree, but these tree-info-db's in db2 will contain lots of info...

And of course cross-referencing info: to sub-tree/heading, to item, to paragraph in an item... of whichever tree. You clearly see here the interest of separating item info from simili-item-info, which in fact is entirely dependent on the respective occurrence of a(n even perfectly identical) item in multiple trees:

Not only, in one tree, an item has got some position, and an entirely different position in some other tree, but then, in tree a that item is cross-referenced to item xyz in tree c, but the same item in tree b might be cross-referenced to heading mno in tree pqr, and so on, in countless possible combinations.

All this is suddenly possible with that overdue separation of items and trees, and as I developed over there, in a corporate environment, there could be multiple item db's, but again, there should be a "management layer" between all those item db's, and their tree use.

As it is, cross-referencing between items in different files, and then their maintenance beyond renames and moves, IS technically possible, but would take lots of necessary "overhead"; the above-described setup of independent items, plus a structure maintaining multiple trees, together with any linking info, in a separate "just-trees-and-their-info" db, is by far both more functional and more elegant;

again, there's a construction problem, and the problem of how to "sell" such a sophisticated structure to the user, "selling" meaning here: how to devise the gui in a way that the prospective user will start in it with confidence in his grasping it all, in time... ;-)

EDIT: Some more development of the "multiple trees, with live cloning, in just one outline db" concept in "Reply number 9" = post 10 in this RightNote thread here:



"As for the shortcut Alt+B, I agree it is a bit bothersome to remember a non-standard combination. But it has the advantage of toggling the bold state of the tree node while still in the note editor. That is, you don't have to move your focus from the editor to the tree to make the node bold. Now whether that is an advantage or not is another question, but I think it explains why the shortcut is not Ctrl+B."

Scott, my fault, I perfectly acknowledge the sense now since you explain it to me. In fact, I had often mused about unnecessary scope limitations in programs where it would have been perfectly possible and sensible to trigger a command from everywhere, and where unfortunately the developer simply had not thought of it.

Especially with 2-pane outliners, creating a sibling or a child item (or at least one of the two) is often only possible when the tree has focus beforehand, as is renaming an item (= a tree entry); and for bolding items, yes, I often had to switch to the tree first before I could apply Ctrl+B. So it's definitely a real good thing, and some other "weird" key assignments in RN might have similar reasons.


Hello, donleone.

Well, that's an elaborate post, wow, kudos!

"- RightNote can do internal Quick-Linking to another note using e.g. the shortcut CRTL+SHIFT+K,
but the quick-link only remembers as long as you refer to an item on the same page/tree.
For when namely a quick-link is made to a note, that then gets dragged over unto another page/tree,
it breaks the quick-link and says "This item has been deleted" (even though it's just on an other tab)
So the ability to sustain note-links across pages, is a missing ability yet or bug."

Correct me if I'm wrong, but "other tab" is another file (except for hoisting of course), another db, and you describe a problem that currently harms every one of those db-based outliners, whilst the text-based ones are even worse and do NOT allow even for intra-file cross-referencing. OMG, I see I'm developing this too much here, so I'll cut it out into a new thread! Hence:


There, I explain the limits of what we can reasonably expect from today's outliners, and why none of them overcomes those limits in their current state of code architecture.

As for your other info, I'm very thankful for it, and I will thoroughly look into it, and comment, in some days, promised! ;-)


mouser, I think you make a very valid point here, my first post showing that I understood Karnaugh as a means to straighten out code, whilst in fact, such "beautified" code will also and foremost dramatically reduce computation time, in such cases.

And for general purposes, what you put in some key words here is generally accepted today (and I tried to explain it to noobs a little bit, trying to counterweight the (implicit, possible) counterargument "but as long as I find my way thru my code..." by saying: one day your children will have to maintain the code you write today). Of course I know that today's fast-changing computer world will prevent them from such a task; in most cases, much of today's code will have become useless in some years (and less and less code is written for traditional devices, for that very same reason).


There is one aspect to be added to that maintainability need, and which I try to "observe" as much as I can (i.e. by pure imagination, i.e. without having sufficient knowledge of alternative programming languages and multi-device setups):

More-or-less-traditional-applics-written-today should be "transportable" at least in the sense of facilitating, or at the very least in the sense of not "deliberately hampering" ("deliberate" by unfortunate design, not by real intention, of course) transposition into other programming languages, incl. multiple-device setups, i.e. I muse, "how could this be realized again, later on, divided up between pc and cloud/handhelds/whatever?", and I try to not do it "too compact".

Both in the "micro" and in the "macro" levels mentioned above, there should be enough valid "recoding info" in order to recode it all, for more sophisticated setups, and if you blur "micro" and "macro" - and most noobs do exactly this, and I'm also speaking from my former, own experience here -, such "partial reusability" or rather, "code's lending itself to become "framework" for rewriting", would not be enhanced.

It's not the same, but a very similar construction concept to the one applied by MS in their .NET thing, plus programming languages, and then their WPF/XAML concept, where they try to separate, as far as possible, "core code", and then access to visual elements, in a word, their aim is abstraction, and of course, in order to reduce complexity wherever and as far as that's possible (and we have a double effect here of this both facilitating original coding, AND then maintainability, reusability, and adjustability/malleability even of code later on, to integrate new/replacing elements).

All this is about utmost-possible clarity today (in programming), and tomorrow (in revising and even upheavals), and as said, performance considerations are disregarded to a point here.

Two applications come to my mind here: One of the earlier CRM sw, Act!, very common in its time, got some overhaul in the early 2000s, and bingo, legions of former users left, after sharing their disappointments, of which by far the most important was that every function had come to a crawl. I myself trialled it some years ago, and its (missing) speed was so unbearable, even with just half a dozen entries, that I very quickly dismissed it. So here somebody's priorities obviously ran amok.

Many Ultra Recall users (.NET and SQLite), on the other hand, complain about it being "slow". I've used that program extensively and can report that even with BIG content, and on non-ace comps, its speed is totally acceptable, except for just some details where, from a psychological pov, you'd expect immediate responsiveness but instead have to wait, just some seconds, but seconds that get on your nerves since every other program of its kind does react immediately under the same conditions.

So it is certainly a good idea, as you imply in your post, mouser, to have a look at response times in typical situations, and then to do some special tweaking there if needed, and it's always of interest to see that even very modern pc's, with all their power and speed, do NOT overcome some special speed issues of some programs, in spite of "us" not speaking of big routines here but of things a layman would think should be easy... and which ARE easy, of all evidence, in competing progs!

I'll not divert here; just let me say that sorting algorithms can have tremendously differing run times, easily by factors of 1:1,000 and more, and some of them are very good for just some dozens of items-to-be-sorted whilst being extremely bad for higher numbers of items, or vice versa. This indicates that an ace program in which items often have to be sorted should COUNT those items before sorting, and then apply one of two sort routines standing by, with their respective algorithms, to the SAME body of items, depending on its length...

(And that's easy to program (and the sort algorithms are to be found in special textbooks), it's just a little bit more work for the coder... but it's one part of coding excellence as I see it...)
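As a minimal sketch of that "count first, then pick the algorithm" idea: the threshold of 32 below is an assumption to be tuned by measurement, and the two algorithms are textbook insertion sort and merge sort (production libraries such as Python's Timsort blend the two in a comparable way):

```python
def insertion_sort(a):
    """Fast for tiny inputs; quadratic for large ones."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """O(n log n) regardless of input size; more overhead per item."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

THRESHOLD = 32  # assumed cut-over point; tune by measurement

def adaptive_sort(a):
    """COUNT the items first, then dispatch to the fitting algorithm."""
    return insertion_sort(a) if len(a) < THRESHOLD else merge_sort(a)

print(adaptive_sort([5, 2, 9, 1]))                  # small list: insertion sort
print(adaptive_sort(list(range(100, 0, -1)))[:5])   # large list: merge sort
```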


Thank you mouser, for not contradicting me, so noobs should note that there is some sense at least in what I try to "teach", from my own experience.

But then, it also hits you (and me!) in the eye that in my way of coding there is a myriad of (necessary but unpleasant and time-consuming) manual checking, re-checking and counter-checking, and for everybody who has outgrown scripting basics and is trying to do some real work, it should not be that bad an idea to have available what I describe above, AND to be able to run a special routine that does all this checking-in-all-directions on their behalf, even if that implies spending 800 or 1,200 bucks.

That's why I kindly ask professional programmers to share their experiences with appropriate tools (i.e. tools that should NOT be entirely object-centred).


See III. And since this problem is strictly unbearable in the end, I came along with an intermediate idea about this.

Why not rename all your current variables in a certain way, in order to strictly identify them as variables? Ditto for routines. (Trial special chars before using them, though.) And on first occurrence on that "page", in that item, add some comment (from where, to where...), etc.

Then, as described above: one routine, one outliner item, and even one separate routine part gets its own outliner item. Then, an outliner offering a hit table (showing the respective lines), with indication of the respective item.

Then, print out the hit tables, and compare them with colored pencils. This will at least avoid both any additional work to write/maintain those header section parts, and especially any (from a logical pov totally unnecessary) synch work between body and header, and ultimately any synch problem in that work; i.e. this body-header alignment is highly error-prone AND totally unnecessary, from the moment you clearly identify variables (and routine calls, etc.) for yourself, and for the hit table function of your outliner!

(Well, they call this "process management", cutting off any unnecessary step out of it, by optimizing the remaining ones. And yes it's different for languages that force variable declaration/typing, and which hence do the checking for you, intra-item. In those languages, you'd do the inter-item checking from the headers again.)

But then, COMPARE those hit table printouts, conscientiously!
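The marker-prefix trick can be simulated in a few lines. A hypothetical sketch (the prefix "v_", the routine names and the code snippets are all invented for illustration): once every variable carries the prefix, a plain text search yields a reliable hit table per routine:

```python
import re

# Invented example routines, keyed by outliner item name:
code_by_routine = {
    "LoadConfig": "v_path := GetPath()\nv_data := Read(v_path)",
    "SaveConfig": "Write(v_path, v_data)\nv_dirty := false",
}

def hit_table(routines, prefix="v_"):
    """Map each prefixed variable to the (routine, line) pairs using it."""
    hits = {}
    for name, code in routines.items():
        for lineno, line in enumerate(code.splitlines(), 1):
            for var in re.findall(rf"\b{prefix}\w+", line):
                hits.setdefault(var, []).append((name, lineno))
    return hits

table = hit_table(code_by_routine)
# v_path is set in LoadConfig and read in SaveConfig -- exactly the
# inter-routine information the colored-pencil comparison needs:
print(table["v_path"])  # [('LoadConfig', 1), ('LoadConfig', 2), ('SaveConfig', 1)]
```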

No, not one musical/comical share per post, just per thread, all the more so since you all will KNOW "Cat Tara" already by now, right? (If not, see that little heroine for yourself, on YT, where else! ;-) )

Understood that this thread always addresses non-programmers, non-professionals...

Elsewhere, I mentioned Warnier (for details, just search for "warnier", either in this forum or in ), who, in the Seventies, did something revolutionary for mainframe programming, which at the time was done more or less in spaghetti-code style; that style made code reusability an unknown concept, and even hampered code adjustments and code maintenance to the point of unbearability, or even impossibility.

Hence his (very mainframe-and-their-then-typical-output-centred) very strict, graphically horizontal tree structure for process/logical flow. Up until some time ago, there even was one piece of sw left to do it on screen, "b-liner 6" (well, there had never been versions 2 to 5, at least), for 90$, but when I wanted to check the current price today, I got a "Currently Not Available" instead. Well, it was buggy (and development had been stalled a long time before), but it was graphically very pleasant...

Now, quite quickly in the Seventies, programming became much more sophisticated than the Warnier paradigm could reasonably handle: As indicated above, there are several such flows that, in an elaborate application, will not make their voyage together; at the very least you'll get a logic flow and an information flow that differ. Hence the need, in non-professional coding environments, to do some heavy manual work here, like extensive, manually-maintained lists for triggers, triggered elements (in, out), variables (ditto: "gets var1 from system, updated by trigger routine xyz", etc., and "updates var2 for routine abc, updates var3 which is then possibly checked by routines c and d", etc., etc.).

Now, manual maintenance of such lists is a lot of (error-prone) work, but you can at least simplify it by either using some editor whose search function will show results in a hit table (and which offers code folding, of course), or, much much better, by doing your programming/scripting within an outliner which has got such a hit table for search results, too (and where the hit table will indicate the name of the item = routine of the occurrence). Fortunately, there are some such outliners, among which RightNote stands out because even its FREE version offers this feature, and it allows for rtf formatting of text, of which you should make very ample use, at least during the construction of your code body. So even people who ain't (yet) into outliners for almost all their work will be able to start programming/scripting in this free outliner, independently of their possibly switching to some outliner for general work later on.

Also, you could do your manual logic and info flow checks on paper, with print-outs and colored pencils, either (even) on the original data, or (and especially) on those hit lists (many programs will not allow for exporting/printing of their search hit lists: Just make a screenshot then, and work, with colored pencils, on the multiple screenshot printouts).

Of course, before doing this "macro comparison", i.e. inter-routine (i.e. "are my indications correct, checking alleged sources and targets?"), you'll have to do your "micro comparison", i.e. intra-routine: (full) routine vs. your manually-created "header" lists there. Here, you will preferably work with printouts and colored pencils, but if you insist on using a hit table instead, even here you can do it in every outliner offering "search..." (with hit table, to begin with, naturally) "...just in selected item and its children". For such a setup, you simply divide your item into your header, and then the real code, and in the tree it would be:
; H some routine
   ; C some routine
; H next routine
   ; C next routine
H meaning "header", and "C" meaning "code", or whatever you choose for differentiating them:

Then, you select ; H some routine, you search (e.g. for variables, but also for GOSUBs, etc.), and voilà, you'll get your comparison between your initialisations, etc., and your real use of things.
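That intra-routine ("micro") check, header claims vs. actual code body, can also be mechanized. A hypothetical sketch (the header format, the "v_" prefix and the snippet are invented for illustration):

```python
import re

# Header comment claiming which variables the routine touches,
# and the code body that should match it:
header = "; H some routine  uses: v_path, v_data  sets: v_dirty"
body = """v_data := Read(v_path)
v_dirty := true
v_tmp := 0"""

declared = set(re.findall(r"\bv_\w+", header))
used = set(re.findall(r"\bv_\w+", body))

# The two differences are exactly what the colored pencils look for:
print("in body but not in header:", sorted(used - declared))  # ['v_tmp']
print("in header but not in body:", sorted(declared - used))  # []
```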

It goes without saying that in programming languages where declaration/initialisation of variables is mandatory, and where there is a clear distinction between local and global variables (some ace programmers of today will even tell you that global variables are to be avoided, but that's a whole other story/world, so don't let that fool you...), both comparisons, intra and inter, are greatly facilitated. But even there, DO THOSE COMPARISONS, as thoroughly as is needed, i.e. down to the last detail (or buy some really professional coding environment; unfortunately I don't know any that lets me work in such an outlined structure and does all the above-described tasks for me upon request, so external input from fellow posters would be more than welcome here, and yes, it's understood such sw would be in the 4-digit range).

As you see, this is quite overwhelming a task (but in a good outliner it's doable at least, whilst in b-liner, e.g., it's technically doable too, but accessing all those elements that will have to be compared would be a strenuous nightmare there (and yes, there are people who get killed in their sleep, precisely by their nightmares!)), so most of you will try to avoid it: DO NOT!

AND HAVE PATIENCE IN WRITING. What does that mean? Do not write multiple pieces of tiny code and then be eager to see if they work; instead, try to build up the architecture of all those interlocking elements, i.e. build some tree structure, and do lots of pseudo-code, interwoven with real code wherever real code is at your fingertips, i.e. where you both know how to write it and can write it down quite quickly, but perhaps leaving out the necessary variables and such...

But whenever you write such "primal code" (i.e. some chaotic mixture of pseudocode and real code and, especially, as many (just provisional!) notes as you'll possibly need later on in order to "complete" your code), whenever it occurs to you that your "code" is not complete "here" yet, make a note of what's missing, what to consider, too, what might be included, whatever:

BE AS COMPLETE AS POSSIBLE in your considerations, in your coding. I.e. refrain from following what "they" say, DON'T be concise, but "put it all in": Take those novelists as your example who write 1500 pages or more, before condensing it all into those 350 pages that then, if they are very lucky (or have established their renown already) will become the bestseller you (or your wife) will possibly rave about.

In other words, THINK YOUR architecture/code in writing it! (But if you try that in an editor instead of an outliner, you'll probably get lost, or your editor should have really good outlining functionality!) I.e. do some simili-Warnier construct, but vertically instead of horizontally, and with immediate access to any "code pieces and other explanations to yourself" within the right pane... and without considering the very first node in your tree as the logical source of any tree you might create downwards (and, of course, without cutting your code into TOO many code parts: as said above, whatever code is less than one printout page, even if it's a (but please: cohesive!) logical structure, needs no artificial cutting up into a dozen or so Warnier elements).

Also, and especially in light of the above - big header structures in "comment" format, but so important for your being sure every "flow" in your application will be correct (and yes, I should have said it above, so I say it now: All that intra/micro comparing of code vs. comments, and then inter/macro, header comments vs. header comments elsewhere, will INCREDIBLY minimize your debug time (not speaking of your children some day maintaining papa's code!)) -,

there is no need to artificially cut up code into subroutines (e.g. in ahk: GOSUB, nameofsubroutine), together with "endless" header comments there again, when you don't have to possibly access that code from elsewhere (but you should ask yourself if that wasn't an alternative in order to further optimise your application's functionality, among other considerations!):

Instead of cutting code "too big for one page" into several routines, cut it up in several, "simili-self-contained" pages, e.g. within your tree:
; routine (header including comments)
   ; if abc (code belonging to the if's in the next item (! why not?!) but then with some 10 or 30 lines of its own
   ; if d (further on in that "broken-up-between-pages" if structure)
      ; if a (why not? even a sub-structure here!:)
      ; else (and the code for the else branch)
   ; if ... (= end of above if structure, with perhaps 3 or 5 more)
   ; and some more
; here only, some other routine

So, sometimes you'll get "real big packages" which are quite homogeneous as such, not in structure but in what they are deemed to "be" (i.e. not even necessarily re their elements' respective outcome/output), but without these "output" elements then being "accessed from the outside"; in other words, you'll get code structures that are, in some way, quite "final".

In such instances, it would really be quite ridiculous to artificially observe some "one page, one routine" structure; you'll be well advised to let that routine flow over 3, 5 or 10 pages sometimes, and without endlessly and partly (! or you create inconsistencies, as with independent routines!) replicating header structures, on the condition of course that within such a several-pages code structure, you'll be able to cut your code into logically clearly distinct parts (but even with "continuation sections", as in the outer if structure in the example above): Whatever is visually easily understandable is perfectly acceptable as code structure, as long as it is "self-contained" (no access from the outside).

So, please take this part of my post as a correction of my post above: I write "from memory" here, and in my post above, the observation of the "one page one routine" rule made me ignore my own experience of those, perfectly acceptable (and perfectly "maintainable"), (sometimes even much) more extensive routines: Atomize, but wherever it makes sense, not where it doesn't!

To sum up what I've said above: have patience, i.e. let your new code grow (be it several routines, be it one big routine) for days (and more, if you just work on it 2 hours a day), without trying to run your code or parts of it, and if you have got several locations where you expect the same or similar difficulties, just say so in your comments; but in the meanwhile, "fill up" and complete your whole structure, as best as you can: Write 5, 10 or 15 pages of code, trying to "oversee", to hold in scope, the "big picture", and iteratively switch back and forth between construction questions and details within parts of the construction.

This way, your construction will grow up to the point of being "acceptable" (i.e. further optimization, or even later rearrangements, will perhaps be necessary AFTER "trying out", "running for tries", i.e. (starting) debugging, but most rearrangements will even be done here, before "starting to try"), and at the same time, every part in it will grow to the point where it BECOMES reasonable to THEN complete the real code everywhere, i.e. replace pseudocode and comments whenever you feel it will be quite "final"; writing too many lines of real code too early in this process will mean BOTH many lines of code for the bin AND annoying retardation of structure...

whilst any try to perfect structure without code will (except for some ace programmers perhaps) result in more or less faulty structure, since it's also in writing code (and pseudocode, but quite near to real code) that it will occur to you that alternative structuring will give "easier/better/more accessible/or even just: possible vs. impossible!" code; i.e. in many instances, (alternative ways of) code structure and (alternative ways of) code detail are so interdependent that you should find your way of doing both, architecture AND start-of-finishing, concurrently.

(Instruments like UML try to facilitate this, but especially for UML (for which there are several free sw offerings), I personally think that the graphic representation in most of its instruments is the worst, most non-intuitive and most non-immediately-catchy (= comprehensible) I ever encountered anywhere - UML is a nightmare imo, and then, it's not even smart enough when combining different overlaying structures, instead doing them in (much too) separate views (it's just that (paid) UML sw obviously comes with some very welcome automation (see above))...)

Hence: Start from tree, or from several trees one below the other(s).
Fill the content fields with some content (comments of which the bigger part will be deleted in your work).
Create new branches / subbranches (name the routines you will create, add new comments).
Do some code here and there, enough to discover structure.
Amply use rtf formatting for everything you do (= one of the BIG advantages over editors!).
Create new branches, etc. whenever you deem "necessary for perfect clarity" to separate (future) code from other code parts.
Rearrange branches / subbranches / code pieces within content fields.
Fill up the content fields as much as necessary in order to get it "complete".

Then: Revise your pages one by one, checking whether the CODE is complete there (and whether you have NOW formatted any comment there that will, for the time being at least, stay there as a comment), and replace pseudocode in several such pages in one go if up to then you had done similar code parts in pseudocode/comments instead of real code (e.g. because of "obscure"/difficult commands you first had to look up).

Then: Do the above comparison work: headers vs. bodies, then headers vs. headers.
Check logical flow.
Check info flow. Check variable names, and variable contents. Check (AHK!) if here and there, you will have mixed up name and content.

THEN try to compile, even if it's a month later. NO! First (ok, you could have done this above already, but don't do it before your code is "semi-final"), put multiple numbered MsgBox lines of the form MsgBox, 1/2/3 _%variablename%_`n_%anothervariablename%_ into your code (accent grave plus n is a new line; the underscores are for being sure to spot possible leading/trailing spaces; and since you will see the variable names in your printouts, you can leave them out of your message boxes):
MsgBox, 54 _%var1%_`n_%var2%_
, etc.: multiple such msgboxes, I said, often with braces and a second line with a return, and most often commenting out the real line, the one to be executed. Then print all your pages out, and in the next step check the results, message box by message box: Is the variable in msgbox 28 the variable you expected there? (If you don't number the message boxes, as described above, you will not know WHERE in your code you got the right or wrong variable values.)
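The same numbered-checkpoint trick, sketched in Python for readers who don't use AHK (the function name `dbg` and the checkpoint numbers are invented for illustration):

```python
def dbg(checkpoint, **values):
    """Print a numbered checkpoint with delimited variable values:
    the number tells you WHERE in the code the probe fired, and the
    _..._ delimiters expose any leading/trailing spaces."""
    msg = f"[{checkpoint}] " + " ".join(f"_{v}_" for v in values.values())
    print(msg)
    return msg

var1, var2 = "abc", "  padded  "
dbg(28, var1=var1, var2=var2)  # prints: [28] _abc_ _  padded  _
```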

Only NOW try to compile and test, perhaps only for the main parts of it (cf. your replacing (!) executive "subs" by "just showing the respective variable values" above). On paper, check the variable values, with "ok"s (or whatever you get). Then revise the code, then "open up" such "subs" (whenever the higher structures give correct msgbox values), in order to check smooth running of the "inner parts" too: Step by step, replace msgboxes (which could also contain passages like "In reality, it would trigger sub xyz here."; but just look at your page instead!) by real code execution, up to obtaining perfect code (or seeing where you'd better rearrange your construct).

Or in other words, iterative coding is not a synonym for "coding by chaos". But Warnier structures et al. had been too stringent, and that harms creativity even for building up the "macro" structure, i.e. the distribution of intermediate, not only lower, branches. And, you know this already, but just for the beauty of the pic: Build a whole wood, not just one tree. ;-)

Start with RightNote FREE if you don't have some (good) outliner anyway.
Don't say "But my outliner doesn't do code completion (for ahk or whatever), whilst my editor does!" whenever your outliner comes with the above-described core functionality for programming/scripting and your editor does not.

Ok, this wasn't only "how to write code as I see it", but also "how to debug (considering that the real difficulty in coding is not logic structure in the strictest sense of the term, but info flow, i.e. the process flow created by info in variables and such)", and even "how to use the original language to do the prototype"; but again, I'm addressing fellow non-professionals here, so I seriously think my advice could be helpful to some of them.

As for a certain Mr. Orr, well, you wouldn't name a train after people who just jump onto it when it has started to run, even when it then feeds them quite well a life long, would you?

And here's the inevitable "and I also would like to share this..." part; please allow me, for once, to share the very best piece of comedy ever produced after Groucho Marx, but which is unfortunately available in German only, and worse, in a German intonation that calls for more-or-less-native German speakers in order to "get it all"; I would not have dared mention it, had the quality of that tour de force not been world-class. And then, it also comprises some outstanding sax performances by Simone Sonnenschein which are perfectly accessible to non-perfect-German speakers, too, and of which the one right after the break, "Free Jazz", is without any doubt the most hilarious piece of (so good, what's more!) music in the whole world of music, since ancient history.

Well, this rave performance is from 1999, is called "Hip Hop für Angestellte", and is by and with Piet Klocke:

Enjoy (the music at least: listen at a very low level (and without the picture: without understanding his speech, you'd mistake Mr. Klocke for a dangerous lunatic!), then raise it whenever the sax plays, and remember, that grandiose lady starts piano-piano (= in a very subdued mood), so asking yourself "so what?!" early on would be a BIG mistake. (And bear in mind that her singing at the end is for comic effect, not to compete with Frederica von Stade!)) ;-)

Hello, Rael (= the developer), Hello, prospects (and "you" sometimes means the first, sometimes the latter, but that will be evident from respective context),

I'm currently extensively trialling RN. Some first observations:

a) There should be a forum to ask questions in. (It's not mandatory for such a forum to be integrated in your site, so there are solutions which would cost you nothing.)

b) Why have tree shortcuts that differ from general standards, i.e. "bold" is "Alt b" here instead of "control b"? The SCOPE of the shortcut determines its action, so there is no problem in changing it to the standard shortcut, and that I did; but there are many such idiosyncrasies in your shortcut assignments, i.e. there is a lot of manual tweaking work to do for a new user. (This Alt b instead of control b is just one example among many.)

b2) Tree: I miss "italic" and "underlined" (but there is a color and background color formatting).

c) Tree: If I change Shift+Enter to Ins (via "Special") for "Add child note", the new assignment does not work. Of course, the change itself is a matter of taste / personal appreciation, but the assignment should work if it's offered by the "Special" list of possible key assignments.

d) Tree: If you select an entry with the mouse, instead of by kb navigation, it will become underlined, even when you then change the selection to another entry/item. This is visually awful, and distracting. (And if there is any sense behind this, i.e. if "it's a feature, not a bug", please make it available by option only.)

e) "History" and "Recent" are totally awful, since those lists are populated by all those dozens of items you just touched for a fraction of a second, by kb tree navigation, so those lists become totally useless: for identifying the "real finds" to which you would like to navigate, using those lists as a "shortlist", you would have to read through dozens of irrelevant "false hits". The solution to this problem is very easy: Just have items entered into those lists ONLY when they appeared on screen for more than one second or so (which is not the case for items "touched" by navigation only), or perhaps better, for more than 2 seconds (= opening of parent items); and even better, make it an option for the user to determine the display length necessary for items to be included in those lists; I would probably choose 3 seconds then.
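The suggested dwell-time filter is cheap to implement. A hypothetical sketch (the class name, the threshold and the simulated navigation are invented; the clock is injectable so the example stays deterministic):

```python
import time

class History:
    """Record an item only after it stayed 'on screen' longer than
    min_dwell seconds, filtering out keyboard-navigation fly-bys."""
    def __init__(self, min_dwell=2.0, clock=time.monotonic):
        self.min_dwell = min_dwell
        self.clock = clock
        self.entries = []
        self._current = None  # (item, shown_at)

    def show(self, item):
        now = self.clock()
        if self._current is not None:
            prev, since = self._current
            if now - since >= self.min_dwell:
                self.entries.append(prev)  # a real visit, not a fly-by
        self._current = (item, now)

# Simulated clock: the user flies over A and B, lingers on C:
t = [0.0]
h = History(min_dwell=2.0, clock=lambda: t[0])
for item, dwell in [("A", 0.1), ("B", 0.2), ("C", 5.0), ("D", 0.1)]:
    h.show(item)
    t[0] += dwell
h.show(None)  # navigate away
print(h.entries)  # ['C']
```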

f) The looks: RN is currently one of the worst-looking outliners/PIMs out there (unless I'm missing possible adjustments?). I really beg you to have a quick look at some competitors, in order to do something about this. Tree, content (with title/tags), history/search panes: you imagined it "purist", but it's just ugly. As a first step (if you want it purist), please consider abolishing all those thin lines / thin frames, and the grey background. Especially the title and tags frames, above and beneath the content field, are almost unbearable, as are the "lines" made up from the grey background between them and the content frame, between tree and content frame, and between the latter and the search/history frame. (I'm speaking of its visual appearance in XP here.) Of course, such inferior appearance / visual appeal greatly harms the commercial outlook of any program, so it's important to work on it.

g) RN is MUCH, MUCH better than I had ever thought (except for the fact that a db-driven PIM should offer clones, of course), but the help file does not reflect the power of this program (neither do the menu entries), and I invite newcomers/prospects to have a thorough look into the virtually endless "Tools-Customize Shortcuts" list, by which the hidden power of this fine program can be fully appreciated: There are many hidden gems there to be discovered! (It is one thing to simply claim, "RN's rtf editor is much superior to Ultra Recall's", e.g., but it's quite another thing to discover the virtually endless possibilities in RN's editor(s), incl. not only tables, but extensive, powerful manipulation facilities for tables, too.)

h) The Boolean search problem (and I don't have to tell you how important such functionality, even without NOT or NEAR, is, for not getting endless hit tables). As said, there is a gulf between the power of this fine program and what the help file tells you about it, and I obviously did not try EVERY POSSIBLE variant in my extensive trials with the search function. In fact, from my understanding, and from the help file, it seemed "Fast Search" was something LESSER than "full-featured" search, but in fact, for the time being, it seems to be the only search flavor that correctly processes AND and OR search terms. (My fault being, I seem to have left out this "Fast Search" in my frenetic search tries, and I'm indebted to PIMlover, from , for very kindly mentioning this point to me.) So, yes, indeed, AND and OR WORK, for the time being, but just in this special variety of search which you might have had a tendency to overlook from reading the help file. (It goes without saying that I hope this functionality will be extended to all three search flavors.)

i) There is no distinction in search between "just tree titles" / "just in the tree" and "overall" / "tree and content". If you remember just some key word(s) from the title, such a distinction would be more than helpful, since it would spare you perhaps dozens of "false hits" in the hit table, through which you would otherwise have to browse unnecessarily in order to find the one item you want to work on, or need to access for reading.

j) Import and export both seem very limited at first sight, but both permit file hierarchies (even of rtf, and possibly html files, not trialled yet) to be built from the tree, and the RN tree to be built from them. Some other outliners/PIMs offer similar functionality, and those also offer special competing formats, which means that for many outliner file formats, you will be able to import your stuff into RN, and even to export your RN stuff into competing offerings, whenever the necessity arises. (Of course, I'll have to check the quality of html/docx export, which at the end of the day are the most important features in this respect, in order to further process the "product" you produce, from your stuff, in an outliner/PIM.)

k) I do not (yet) think the tagging function(s) is/are quite neat, but that's perhaps (partly) due to my possibly misunderstanding parts of it/them; but even then, the fact that I don't intuitively grasp that functionality, even with the help of the help file, should indicate that some work on this feature (group) could not do any harm ;-)

l) Tree: F2 currently opens a full-fledged item properties dialog, whilst in most cases, you would just like to adjust the item title a little bit, e.g. for eliminating a typo; so the regular F2 for "edit title in tree" function would be very welcome, and the current properties dialog could be opened by shift-F2 or whatever.

All this being said, I directed Rael, the developer, to this thread, so that he can comment here as long as he does not have a forum of his own; anyway, I'll post more findings about RN here, and I invite fellow users to do so, too, since from my experience, positive observations should be shared widely, and criticism should be made public, too, in order to sufficiently motivate the respective developer to amend sub-standard functionality.

As for the above, point e) needs immediate attention, since the current absence of a usable history function (or rather, the presence of a history function that forces you to navigate by mouse only!!!) makes this otherwise fine program almost unusable... and then point f), the looks, seems primordial to me. ;-)

General Software Discussion / Re: Micro-review: Scapple
« on: April 20, 2014, 02:14 PM »
"Um...I think I may have figured that much out already."

"There are also some free concept mappers out there, such as Visual Understanding Environment (VUE) (which I prefer) and CmapTools."

(Cf. my kind request for some details on both, left unanswered.) With all due respect to both cited posters, and I use that term on purpose: if I had said,

"There are also some brilliant if comparatively expensive text processors out there, such as Word Perfect (WP) (which I prefer) and Word.",

you would have called me an impolite egomaniac/fool, and you would have been right.

I think that whenever several similar sw categories are mixed up, one should have the right to point out some differentiating criterion, and whenever a poster brings in such similar sw offerings, especially when they ain't as universally known as MS Word, e.g., some details should be brought in, too, instead of just blabbing, and worse, forcing really interested parties into finding out for themselves, from start to finish.

Just compare the posts of some of us here with such no-content posts as the above, left unamended, and it will become clear as day to you that some posts are constructive, whilst others just steal your time, especially in light of the fact that their authors, even when kindly asked to improve them somewhat, remain silent, i.e. remain stuck in their blabbing-only position.

At the end of the day, it's a matter of style, and a matter of mutual respect, and yes, I feel entitled to speak openly on this matter, with regards to MY communication style, which is not based on "see what I know" but on "here's what I happily share". Some younger posters here should perhaps think about that for some minutes. And again, on purpose, I happily add a


Ok, I did my first steps with Syncovery now, and I would like to share my experience, the bottom line being: buy as soon as possible, even at full price; it's the very best sync tool out there when we speak of "consumer sw", i.e. of all those tools between 0 (free) and about 200$ or even some bucks more (i.e. I don't know the "prof. / corporate" programs in the higher 3-digit or in the 4-digit price range, but I intimately checked almost everything "below" that quite different sw category).

(And yes, for "GoodSync", I can only speak for the "2Go" version, but that one (and their "customer service") is so bad that I simply cannot imagine the "regular" version might be THAT much better, and "2Go" was like throwing my dollars into the loo.)

Now, let's start again from the beginning. There are some "prof." sync tools (the most "relevant" of them being ViceVersa Pro (as said, very beautiful imo) and BestSync (reputed "even more stable than VV", according to some)) that all have one point in common: paying users have been begging, for years, for "intelligent" processing of file AND folder renames AND moves, TO NO AVAIL. (Why this is an important feature, see above. And yes, I admit it: if I rename 1200 jpg pics (in groups, e.g. by FreeCommander, an almost-splendid file commander which I always prefer to most paid ones, which I own, too), then move them from some inboxes into (intermediate or final) target directories, then, technically, it's not THAT important whether they are recognized, by Syncovery, as renamed and/or moved, or whether some dumb competitor just copies 1200 "new" files (of which not a single one is new in fact) and then deletes 1200 "old" ones; but for serious work, let alone corporate settings, this can make a VERY BIG difference, of many hours, let alone the above-mentioned file consistency problems... and even for 1200 .jpg pics, it's a RELIEF to have them just renamed/moved within the target directory: IT'S NEAT, YOU KNOW?! It's work as it should be done, instead of crappy detours, and yes, I'm aware that not everybody will follow me here... ;-) )

Now for Syncovery, which is in a class of its own. My very first tries were disastrous, but I beg you, don't let yourself be put off by such a misleading, false first experience; please let me explain how to avoid such beginner's mistakes, in order to profit fully from Syncovery from day one.

First of all, the help file is not outstanding yet: you will have to search for "tracking" in order to get to the relevant help page, "Smart Tracking" (if you search for "smart tracking" instead, no hit will show up).

Then, I'm not fond of its profile set-up, divided into "Step 1...6", i.e. into 6 consecutive dialog frames: some "help" is right there, but not enough for a Syncovery beginner, there is no "Help" button, and pressing F1 will NOT give you any context help.

Now don't get me wrong: yes, for me, (functionally sub-standard; the standard for sync sw now being set by Syncovery!) ViceVersa has got some visual value I can only describe with difficulty: for me, it's one of the most beautiful Windows programs, just as I consider Storyist (Mac only, unfortunately) the most beautiful sw out there currently (considering that neither my own nor FreeHand 4 is available anymore). But then, the term "design" is twofold: visual AND functional, and on that latter aspect, VV (I'm speaking of the free version here) is brilliant, too: its preview pane offers "View: All, Matched, Unmatched, Newer and Older, Excluded", and then there is "Method: Align Source and Target, Augment Target, Refresh Target, Update Target, Update and Force Target, Prune Target".

I concede that for these "Method" options, you first will have to have a brief look into the help file, but from then on, everything becomes soooo easy! Whilst in Syncovery, that succession of 6 dialog frames, even with additional frames depending on your options, well... I'm not really fond of such a gui choice, and it clearly led me astray. Hence my begging for some better way to do this in Syncovery. And yes, I'm aware that in VV, there is a distinction between "general options", then "compare what with what", then "how to compare that", whilst in Syncovery, everything is done within the respective comparison profiles. Also, you should be aware of the help file recommendation that for rename/move monitoring to work, there should only be ONE profile accessing the folders in question (if I understand that correctly), so my initial, multiple profiles d: > s:, d:\1 > s:\1, d:\2 > s:\2 were perhaps not the very best way of doing things here - in fact, since with Syncovery, any unnecessary copy-plus-delete has now come to an end, integral synching of the whole hdd will be much less of a problem than with the "competitors" before, anyway...

Now let's assume you have some working directories on drive c: or d: (but even synching from usb stick i: to usb stick j: or whatever will work without fault), so in "Step 2", your sync direction will be "Left to Right". Then, in the "Step 3" dialog, there's a real problem. According to Tobias, the (very kind and helpful) developer, even "Standard Copying" will work, more or less, for correct detection of renames/moves (and that's why even for that radio button, the check box "Detect if files have been moved" is NOT greyed out), but I cannot back this up: in fact, for me (again, a Syncovery beginner, so I might have made other mistakes there), this setting was a horror experience (as bad as with GoodSync2Go: totally worthless).

No, forget "Standard Copying" for the time being; brashly opt for "Smart Tracking" (or "Exact Mirror", of course), and you will see that the button "Configure" is no longer greyed out in that very "Step 3"; of course, you will check the "Detect if files have been moved" check box (which includes renames, too; I had big doubts about this, but it works tremendously well). Also, you will check the radio button "Adjust location on right-hand side".

You will click on that (now activated) button "Configure", and for "Moved Files", you'll check "Right Side", and for "Deleted Files", you'll either check "Delete permanently" or "Move into folder for deleted files"; for "Conflicting Files", I currently have "Do nothing / label as conflict" set. And notwithstanding my comments on the visuals of VV, I can assure you that it's a delight to visually check the Syncovery preview; just compare with the total mess in GoodSync, and you'll immediately grasp how superior Syncovery's visuals are (and even from a functional pov they are ;-) ). For the last tab there, "Detect unchanged files on the right-hand side" seems to be the appropriate option.

And with those settings/options (subject to possible optimization hints from Tobias), you'll get a perfect result, and it's a joy to check for this fine and outstanding program's intentions in the preview pane. For the corporate world, Syncovery offers some special versions, and I'm perfectly aware now of the interest of such versions: You can safely assume that even for several hundred dollars/euros, there's nothing superior on the market out there.

Syncovery is one of my best software buys ever, and even at its original price, it would have been a steal.

My kudos to this brilliant, functionally totally outstanding program, even if both the gui and the help file stay unchanged, and even though that would be, marketing-wise, a shame. ;-) There are few sw categories in which one single offering makes the competition pale; thanks to Syncovery, synching is one of those select few.

And here for my unavoidable going-on-your-nerves P.S.: I did some programming again, this Easter Sunday, and again with listening, again and again, to one of the very best pieces of music available on YT, Lisa Stansfield at Ronnie Scott 2003 - Google's YT currently being Google's ultimate apology for serving us such crap as cnet downloads as first hit. ;-) (And yes, people interested in ace sax performances should also have a look into and especially into the title "Amerika" there, or, cut out from there, here: )

General Software Discussion / Re: Micro-review: Scapple
« on: April 05, 2014, 02:33 PM »
"And I don't find mind maps all that useful for the way I work."

Again, neither Scapple nor the two other applics mentioned by Andus above are mind map creators, and whilst I won't bother you by replicating my lengthy developments on the difference between MM / outlining / horizontal outlining (Warnier) and then just scribbling ideas on paper and putting them into various groupings/connections (I did that both in the UR and in the outlinersw fora), forgive me for contradicting you, decidedly, on both counts:

- Scapple et al. (or sheets of paper) are in another category than MM/etc. (MM being for presentation purposes, above all other possible use)

- NOT putting down good ideas (be it on paper, be it wherever it goes) when you have not finished some project is just throwing three quarters of those ideas into the bin, forever, since later on, they will NOT present themselves again (or only in part, without their respective core connections, which might have been the much more valuable part of the idea); from this second point, you might deduce: have some "writing down" device ready everywhere (incl. your nightstand, in your car*, etc.); use it!; for Scapple-or-similar applics: have the respective files in IMMEDIATE access** whenever your pc's on.

* = the car aspect would mean: use some traditional device (like I do: I've got SEVERAL such Sanyo full-metal devices (the effect is like with a beautiful pen) with traditional tape, but alternatively, there are tapeless devices), NO iPhone, since even when you don't make/receive a call with it but just use it, in most European countries you'll be in for a 3-digit euro fine if caught

** = by macroing, by loading with every Windows start... and that definitely makes a big part of Scapple's attraction, since it's lightweight (cf. Mind Manager, and cf. MM's hindrance of your thinking)

Lately, I even created an AHK macro (Alt-F8) that, from everywhere, will create a new, empty .txt file for me within a fraction of a second (except for my typing the file name) in my standard folder, beginning (by default) with "0" = "ToDo, unassigned yet" (or I change that into 1...9), to put down any idea I might get anywhere, without leaving my current applic's frames (and I only get an error message when my idea has NOT been stored behind the scenes).
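For those interested, such a quick-capture macro can be sketched in a few AHK (v1) lines; the folder path and the "0 " prefix convention below are placeholders, not my actual ones:

```autohotkey
; Alt-F8: create a new, empty .txt note without leaving the current applic.
; C:\Notes and the "0 " = "ToDo, unassigned yet" prefix are placeholders.
!F8::
    NoteDir := "C:\Notes"
    InputBox, NoteName, New note, File name (without extension):
    if ErrorLevel                        ; user pressed Cancel
        return
    NotePath := NoteDir . "\0 " . NoteName . ".txt"
    FileAppend,, %NotePath%              ; appending nothing creates an empty file
    if ErrorLevel                        ; only report when storing FAILED
        MsgBox, 16, Error, Could not create:`n%NotePath%
return
```

The point of the `if ErrorLevel` checks is exactly the "silent unless broken" behavior described above: the macro stays invisible in the success case and only interrupts you when the note could not be written.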

I understand your point: don't over-develop further projects instead of realizing your current ones. But then, don't discard any possible idea for those further projects. Btw, that's what differentiates a CEO (or his head of strategy) from his staff: he'll never have to wait for implementation chores before creating something new - and that's why maximized delegation possibilities, for creative people, are of utter importance.

For any better/additional idea, I would, that goes without saying, give full credit.

This being said, my musings certainly are of general interest, since even if you do just SOME macros, here and there, with any (free or paid) tool, the same probs will quickly arise, although to a lesser extent.


Now, as stated before, I've heavily been into AHK macroing lately.

Thus, I have to cope with numerous sw's internal key shortcuts / shortkeys, and also with their respective menu shortcuts (Alt-xyz).

And of course, I always try to assign identical / similar commands to the same shortkeys, in different programs, i.e. I either re-assign the original shortcut (if such re-assignment is available) to my "standard" shortkey for that function, or I intercept the (un-re-assignable) shortcut of a given applic, and then reassign it to my standard AHK shortcut for that function.
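To illustrate that second case, here's a sketch of how intercepting an applic's fixed native shortcut might look in AHK (v1); the window class, the keys and the menu path are illustrative assumptions, not taken from my actual setup:

```autohotkey
; Sketch: inside one particular applic, re-route my "standard" shortkey
; to that applic's own command (via its menu accelerators), and swallow
; a native shortcut that would clash with my macro system.
; ahk_class Notepad++, the keys and the Alt-f,a menu path are assumptions.
#IfWinActive, ahk_class Notepad++
^+s::Send, !fa          ; my standard shortkey -> this applic's File menu, "Save As"
F3::return              ; swallow a native shortcut so it cannot interfere
#IfWinActive            ; end of the applic-specific section, back to global scope
```

The `#IfWinActive` fence is what keeps such interceptions from leaking into other programs, which is the whole problem this post is about.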

Now, because of the various internal shortcuts in many progs, such a system will quickly create an almost incredible mess: in many progs, I've put hours and hours into re-assigning their multiple internal shortcuts, or into simply doing away with them (which is a pity, and which is in order to keep them from interfering with my AHK macros... ok, with AHK, this is technically not possible, thank God, i.e. AHK key (combination) assignments will prevail, but by my personality, I'm not able to "overwrite" internal shortkeys without bothering about such things, so I have to look them all up, one by one, notwithstanding any further development of my macro system: so you easily understand all this is a conceptual nightmare...).

As described here, trying to overlay your personal macro system onto your set of possibly 100 applics and tools of any kind is virtually impossible, so what can we do?

In this connection, I tried various ways of administering at least my own macros (let alone the internal shortkeys of every given prog): by sorting in spreadsheets, by putting comments into my macro lines and then filtering by editors/regex, etc. - chaos it remains, and for every change in my macro system (and changes occur daily), I would have to search for possible incompatibilities (with native shortkeys) in dozens of applics. As said before, it's a nightmare.


So, what's a viable STRATEGY to macroing?

I've adopted this one: For every applic, I delete/reassign/note any Alt-Fkey combination, i.e. I do NOT accept any alt-F-key shortkey in any of my applics: I "need" them for my own macros, and this comprises Alt combinations.

Then, for every applic that is NOT about "calculation", i.e. except for calculators, spreadsheets, statistical sw and so on, I "sacrifice" the numkey block, and whilst /, *, -, +, numenter and numcomma/dot have global assignments, the ten digit keys there are all available for individual shortkey assignments of the particular applic in focus; it goes without saying that wherever possible, I assign often-used commands to numkey keys, whilst lesser-used commands "go" to Alt-Fkey keys.
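As a sketch of what such per-applic numkey assignments could look like (again AHK v1; the window classes and the commands behind the keys are made-up examples, not my real ones):

```autohotkey
; Numpad digits get applic-specific meanings; the operator keys stay global.
; The window classes and the Send targets are illustrative assumptions.
#IfWinActive, ahk_class TTOTAL_CMD       ; e.g. a file commander
Numpad1::Send, {F5}                      ; Num1 = copy
Numpad2::Send, {F6}                      ; Num2 = move/rename
#IfWinActive, ahk_class Notepad++        ; e.g. a text editor
Numpad1::Send, ^s                        ; same physical key, other applic, other command
#IfWinActive                             ; back to global scope
NumpadMult::Run, calc.exe                ; a globally-assigned numpad operator key
```

The same physical key (Num1 here) thus carries the analogous "most-used" command in every applic, which is exactly the memorization relief this strategy aims at.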

Also, I've sacrificed the 4 keys F9 to F12 to "global scope", i.e. have reassigned any applic-specific key assignments from them to some other key (combination); the same is true for "special keys" like "PrintScreen", "Pause" and such.

This is to say that I deliberately refrain from "mnemonic" key combinations like "control-alt-p" and the like, instead having to memorize some "Num9" for some command, in order not to have to endure the above-described chaos, triggered by happily mixing up your own macros and "native" applic-bound shortkeys all over the "place" = all over your keyboard.

To tell you the truth, for the time being, I have also preserved (i.e. not yet reassigned) a dozen or so shift-control-xyz key combinations, all for global var(iable) toggles of my AHK system, and that obliges me to also check for such shift-control-combinations in any of my applications, but from my experience, in most such applics, these are very rare, so perhaps I'll even maintain those.

I also have got a Cherry 4700 keypad, since it's the only low-price keypad out there which is "programmable", i.e. to whose keys you can assign other key combinations which will then be interceptable by AHK: shift-control-alt-a... in my case, i.e. cramped combis never ever originally assigned by any applic to my knowledge.


From the above, you see that my point is: try to separate, as far as possible, your own macro system (which you will have to memorize, more or less, notwithstanding the fact that at the beginning, and for rarely-used commands, perhaps in rarely-used applics, you will need some sort of reference table system, be it on screen, be it on paper) from any possible shortcuts of your various applics - use (perhaps) shift-control keys, use Alt-Fkeys (and yes, for special commands not that similar to commands in other applics, why not assign them to "unused" Alt-combis in that particular applic? = not yet assigned to a (useful-to-you) command there, nor (and especially) to a menu shortcut over there)... and be brash* enough to sacrifice your numkeys in every applic where you would only occasionally enter digits anyway - and any macroing will become so much more straightforward for you!

* = How come some Continental knows such terms? Well, it's a remembrance from an Auden poem in Visconti's Conversation Piece (1974, and no, not on YT (yet))
