
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - peter.s

1
First of all, and as you know, my working system is XP, with "only" 2 GB of memory; I acknowledge that in 2015, this is decidedly sub-standard.

But then, as you also know, there have been several threads here in this forum, and many more elsewhere, which treat the incredible sub-standardness of Firefox, i.e. its non-existent memory management.

As said, I'm not into defamation, so I have to admit that in part, my problems could come from Avira free (formerly, I had used Avast free, which was even more intrusive than Avira free), and also, that my problems started with the very latest Adobe Flash update (16), which they offered in order to overcome (again, and, it feels, for the 1,000th time) "security" problems.

I had installed that Flash 16, and then, after opening just SOME tabs in FF, I not only quickly ran out of memory, but had my system stalled for good, up to killing the FF "process" via the Win Task Manager, thus losing every tab = all the work I had previously put into searching for URLs, links, etc. It should be obvious to any reader that by opening some 12 or 15 tabs from searches and links in previous "hits", you've got some "work done", which is quite awful to lose.

I've always said, "you get what you pay for", and I've always acknowledged there are just SOME exceptions to that rule, but ALL of my experience backs this up: in 99.5 (= not: 99.9) p.c. of all cases, this rule applies perfectly, and Firefox seems to be the perfect example of TOTAL CRAP, delivered by some "volunteers" who like the idea that they are "giving out something valid for free", when in fact they tell us: hey, dude, I know I cannot sell my shit, but ain't you willing to swallow it for free? Of course, I'm opening this thread not in order to defame FF, but in order to get new ideas about how to do things better, this whole forum being about that, right?

Thus, my very first reaction to FF being stalled* by that infamous Flash update was to deactivate Flash, and to observe what came of that, for a week or so. Here I've got news for you: Flash, except for YT, is totally unnecessary, AND it's omnipresent (= "ubiquitous"), i.e. on almost ANY web site, however poor in content or modest in scope it might be, there's virtually ALWAYS that line above my FF content window, "Do you allow FF to activate Flash for this site?" (or something like that; DC does NOT do this).

*= Of course, I've got plenty of room for "virtual memory management" by Windows, on c: (since my data, as said, is on some external hdd, and "virtual memory is managed by the system") - but notwithstanding, even if I allow a quarter of an hour (!!!) for any command to become effective, I always end up killing the FF "process", after HOURS of waiting. At the same time, all other applications function "quite normally", i.e. they respond to commands, but with that little delay you'd expect from my system's having replaced working memory by multiple hdd accesses, considering FF has eaten all the working memory. It's just FF that doesn't respond at all.

And the fact is, in more than a week, I NEVER had to tell FF to activate Flash in order to get ANY useful info from any of those several hundred pages all begging for Flash. (It's understood that for JavaScript, the situation is totally different: if you don't allow JS, almost any web page of today will not work anymore in any acceptable way. But again, don't mix up JS and Flash; JS has become a literally unavoidable "standard", whilst Flash is a simple nuisance, except for YT, and then for rare cases in which you want to see some embedded "film" - IS propaganda? No thanks, and all the rest, no thank you either; let alone indecently idiotic porn.)

Back to FF: my getting rid of Flash did NOT solve my problems. It's invariably "CPU 100 p.c." over hours, even with Flash de-activated, as soon as I've got more than just 10 or 12 FF tabs open; I assume these are JS scripts running, but then, even after MANY minutes, FF never tells me, "that JS is running, should we stop it?".

I have to say that I know about the existence of "NoScript for FF", but then, it's not obvious how to run that NS in some smooth way, just in order to intercept too-demanding scripts whenever they dare run, but leaving alone any menu "scripting" anywhere; do you have any experience with that?

I wish to confirm again that I'm NOT speaking of porn or other crap sites, but that I'm just "surfing" among the most innocuous web sites you could imagine.

As for Flash, before deactivating it for good, I had tried Chrome, and I had the very unpleasant experience that with Chrome and that incredible shit of Flash 16, all was as incredibly awful as with FF and that incredible shit of Flash 16 (sic), if not worse (!), so it's obvious that Flash 16 is even worse than FF 36 (or was it 35? it's the current version all the same). But then, Chrome will allow you to kill ONE running tab, whilst in FF, it's "all or nothing", i.e. if you decide to kill the FF "process", you will lose all your "search" work, too (since FF stalls your FF process (i.e. not your system as a whole, so it's obvious it's all a matter of FF's memory management), it's not even possible to switch from one tab to another in order to retrieve the respective URLs, even manually).
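(To illustrate the architectural difference: Chrome runs one OS process per tab, so a single hungry tab can be killed from outside without taking the whole browser down. Here's a minimal sketch of that idea in Python - psutil is assumed installed, and the "chrome.exe" name assumes Windows; it's an illustration only, not anything FF could do with its single process:)

    import psutil

    # collect Chrome's per-tab renderer processes with their memory use
    renderers = []
    for p in psutil.process_iter(['name', 'cmdline', 'memory_info']):
        if p.info['name'] == 'chrome.exe' and '--type=renderer' in (p.info['cmdline'] or []):
            renderers.append((p.info['memory_info'].rss, p))

    # biggest first; killing the top one loses ONE tab, not the session
    for rss, p in sorted(renderers, key=lambda t: t[0], reverse=True):
        print(rss // 2**20, 'MB', 'pid', p.pid)
    # sorted(renderers, key=lambda t: t[0], reverse=True)[0][1].kill()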

Btw., WITH that incredible Flash 16, simple Flash sites (which in fact would not even have needed Flash to begin with, see above) brought FF to 1,200 MB, then 1,500, then 2,000, 3,500 MB... in fact, Flash's memory demands are simply unlimited, and that's confirmed not by current reports (I admit), but by Flash users' experience back in August 2014, i.e. a few Flash versions ago, who say the Flash of summer 2014 asked for unlimited memory, 6 GB, 8 GB, 10 GB... they were on systems with 8 or 16 GB of working memory, and they thought it was unbearable...

The only reason I cling to FF is the fact that "YouTube Video and Audio Downloader" is available for FF only (i.e. not Chrome), and that it's the ONLY YT downloader I know of which lets you select best AUDIO quality, too (and not only best video quality, as its competitors do, at best) - but in the end, you can perfectly well use FF for this YT downloading whilst using Chrome for everything else, so that's "no reason".

Hence:

- Except for very limited usage (YT), Flash is totally useless and, short of viruses, the utmost nuisance on PC (or Mac) (and as usual, Jobs was first to identify this problem AND to resolve it, for most of the systems he's been marketing)
- ( Similar things could be said about the really ridiculous and useless Adobe pdf viewer, but that's another story. )
- FF is to be considered liquid, stinking, green, morbid shit: if software does not meet the most basic standards even in iteration 36, it will probably not meet them in iteration 100 either
- Chrome is "free", too, but we all know you pay with all your data... BUT: at least there, you KNOW WHAT price you pay for their "free" service, whilst FF "do it all benevolently" and obviously serve you perfect crap (whatever the reasons for FF being totally stuck with 2 GB of working memory and plenty of "virtual memory": that your only option is to kill FF outright if ever you want to get rid of some endless "CPU 100 p.c.", instead of killing JUST SOME tabs gone bonkers, is kindergarten)
- And yes, Avira free could be "in it" to some degree, too (= I had fewer problems, even with FF, when I "surfed" without any "protection") (but Avast free was really "unbearable", with its pop-ups (i.e. at least, I thought so, before my current problems with FF)... but perhaps, function-wise, it would still be preferable to Avira free, which is less intrusive re pop-ups, but doesn't work as well with FF, then?)
- Any insight into NoScript for FF? Is there a chance to get it to stop JS scripts running amok whilst letting any "regular" JS script run anywhere?

Your opinion/advice/experience is highly welcome.

EDIT:

Sorry, my mistake above, I just read:

"Allow www.donationcoder.com to run "Adobe Flash"?" - Should we not enter some overdue discussion re "Are site developers trying to do Flash even in pure-text pages utterly nuts?", right now?

2
Call me conservative; up to very recently I used two Nokia 9210i - why?

I

Two reasons, not at all related to each other, but equally important:

- I want a physical keyboard (ok, the Nokia kb is really bad, so this criterion is highly debatable), so the only other current alternatives would have been either other old smartphones (used ones), or that RIM stuff (they changed their name, but you know what I mean)

- I bought lots of expensive sw for those phones, and most readers will know that it's smartphone sw developers who very early on succeeded in forcing hardware-locked licensing (or whatever it's called) onto users: any mobile phone has got an IMEI number, and almost any (from my experience, 99 p.c. or more) sw for smartphones has traditionally been coupled to the IMEI in question: no (legal) way even to uninstall the sw from phone 1 and only THEN install it on another phone: when your phone breaks, your expensive sw is dead.

I suppose this is also true for iPhones and Android (in fact I don't know), but the big difference is, there's a plethora of (also quite professional) sw for both systems, costing between 2 and 15 bucks, whereas really useful smartphone-of-the-old-days sw came with much higher prices, even into the 3 figures.

This being said, for sw developers, smartphones of the old days were a dream come true; it's just MS who today insist upon your sw licence being broken, together with your hardware, whilst decent sw-for-pc developers all allow for re-install when you change your hardware.

II

Now for batteries. As you will have guessed, I cannot use my (virtually "unbreakable": good old quality from the ancient times) Nokia phones anymore, since I naïvely thought batteries would never become a problem, those "Communicators" having been sold by the "millions", or in very high numbers at the very least.

Well, I was wrong: currently, they sell USED "Communicator" batteries for 3 figures, and my own little stock had come to an end BEFORE I had figured out I should buy some additional supplies (and then, you cannot store batteries / cells (rechargeable or not) forever).

Ok, they now sell big batteries (with quintupled capacity), with various adapters, even for those "Communicators", but buyer beware: even if you're willing to use a smartphone connected by some crazy cable to some heavy battery in your pocket (well, in the old days a simple mobile phone was about 10 or 12 kg), this is not a solution, since all (???) of these (= judging from their respective advertising, not one will have the needed additional functionality) will only work if you have a healthy regular battery in your smartphone, too; in other words, the external battery can top up your internal one, not replace it. Why do I know, or think I know? (Perhaps I'm even mistaken???)

Now for the difference with many (all???) notebooks: I never had the slightest problem connecting my (over the years, multiple) notebooks to the mains and having them work fine, as long as the respective mains/power adapter was working correctly, long after the internal battery had stopped working and/or being available.

The same does not seem to be true with smartphones in general (???); at the very least, it's not true for my "Communicators":

It makes no difference whether I have a worn-out battery in the Nokia or leave it out: just connecting it to the power adapter (which in turn is connected to the mains, of course; I'm not that much of a lunatic) will NOT enable me to start the phone, it remains just dead, and the same is true if I put the phone into its (equally expensive) "desk stand" (which in turn is connected to the power adapter). And since I've got two Nokias, several (worn-out) batteries, several power adapters, several desk stands, and know about permutations, I'm positive that my problems don't come from some broken phone.

In other words, my Nokias need a working internal battery in order to be able to take advantage of any external power supply, and from their respective ads, I suppose those external batteries will not make any difference; my question is, is this behavior typical for smartphones, or is it just typical for the dumbness of Nokia staff? (As we all know, Nokia is gone.)

If it's typical for mobile phones and/or smartphones in general, beware of investing too much into (even a well-selling) smartphone: once you can't get any more batteries for it, all your investments in that phone will have been flushed.

III

So what do I do for the time being? I went back to a combination of a Nokia 6300 (har, har, batteries available as of now) and my old sub-notebook (with an internal UMTS card, reverting to "sleep state" in-between, and for as long as the third-party cell stays alive), which I hadn't really used anymore for a long time:

And those sub-notebooks are total crap: a regularly-sized notebook is difficult enough to type on (with 10 fingers, not just 2 or 3); when in the office, you do right to use some decent, regular keyboard, so it's obviously a very smart idea to buy some lightweight notebook for the road, but one which has got a KB OF REGULAR SIZE (if not shape) - and don't forget the oh-so-useful (both for digit entry and for macroing!) dedicated keypad, and trust me about that: any sub-notebook (incl. those immensely pretty Sony sub-sub-notebooks that weren't continued, though, and are now available, used, for quadruple their new price) will be a constant and real pain-in-the-you-know-where. It's weight, not size, that counts*, believe me, I'm judging from enough unpleasant first-hand experience.

IV

I just read, "Nikon kills third-party battery support", i.e. they probably put some additional electronics in their reflex camera preventing third-party battery makers from creating battery compatible cells: Another (for the consumer: very bad) "highly interesting" "development".


Your respective experiences / solutions would be very welcome.


*= this rule does not apply in inter-human intimacy, though

3
...are not that superior either?

This is a spin-off from https://www.donation...ex.php?topic=40074.0 discussing MR update from 5 to 6.

"I'm a big fan of macrium reflect.  Very fast, very stable, no bloat."

MR seems to be the premier backup-and-recovery sw on the market as far as the paid version is concerned (which is discussed above).

As for their free version, though, I can only encourage possible users to refrain from it, not because it is really bad (in fact, I never knew and don't know), but because it does not seem to offer any functionality going beyond what less-renowned competitors offer in their respective free versions; more precisely, it offers even less than they do.

In fact, I went back to Paragon Backup and Recovery Free, where I can start the reinstall of my backup from within running Windows (which for that purpose is then shut down, then Linux is loaded for the rewrite of c: (or whatever), and then Windows is loaded again) - why should I fiddle around with doing lots of things manually with MR (Free) if I can have this repeated OS swapping done automatically, by either Paragon or EaseUS (and perhaps by others)?

MR (Free), on the other hand, did the backup (onto my hdd), but when I tried to reinstall that backup (after some bad experiences, I do such tries immediately after the original backup now, not weeks or months afterwards, hoping for the best in-between), it told me I didn't have an external rescue device (or whatever they call it) from which to run the restore.

After this quite negative experience with MR (Free), I'm musing, of course, why MR (paid) is touted the way it is, since from the moment you're willing to pay, you'll get incremental/differential backup/restore from their competitors, too (Paragon, EaseUS and also Acronis; this latter I never touched, having read about very bad experiences from other users who allegedly lost data with Acronis, and with several versions at that).

Also, MR did not seem at all "fast" to me, not faster than Paragon or EaseUS anyway, and at least for Paragon, I can say it's perfectly stable (I once lost data with their partition tool, but that was my fault, triggered by quite awful, quite ambiguous visuals in the respective Paragon program: so today I use Paragon for backup and EaseUS for partitioning).

And as an aside, MR even has got its own Wikipedia entry, with which the Wikipedia staff is far from happy (and they say so), and which contains some direct links to the MR site where you would have expected links to less seller-specific info.

And to say it all, MR, on their homepage, currently advises you to update from 4 to 5, whilst above, it's said that 6 is imminent (?), and that updating from 5 to 6 is NOT free for v. 5 owners.

All this makes me think that perhaps MR do some very good PR and are able to create some hype, whilst at the end of the day, it's just a very regular, decent product which, by that hype, has succeeded in commanding higher prices than its competitors are able to.

If MR (paid) really has some USP(s), please name them; their free version, at least, is a lesser thing than their contenders' free products.

4
General Software Discussion / Scraper too expensive at 20 bucks
« on: January 16, 2015, 06:34 AM »
(Original post at bits and referred to here was "$19 + seems an awful lot of money for software you can get the same type of thing for nothing. (...)".)

The problem lies elsewhere. A price of 20 bucks is certainly not a deal breaker, neither would be 40 bucks (original price), and there are competitors that cost several hundred bucks, and which are not necessarily better or much better.

First,

if you search for "download manager", the web (and the people who constitute it by their respective contributions) mix up web scrapers (like A1) and tools for downloading files specified beforehand by the user, but the download of which will then be done within multiple threads, instead of just one, by this using your possible fast internet connection to its fullest; of course, most of the scrapers will include such accelerating functionality, too. Thus, the lacking discriminating effort in what commentators see as a "download manager" does not facilitate the discussion to begin with; you should perhaps use the terms "scrapers", and "download accelerators", for a start, but there is also some "middle thing", pseudo-scrapers who just download the current page, but without following its links.

Second,

the big problem for scrapers nowadays is Ajax and database techniques, i.e. many of today's web pages are not static anymore, but are built up from multiple elements coming from various sources, and you do not even see those scripts in full; the scripts you can read via "view page source" refer back to scripts on their servers, and almost anything done behind those scenes cannot be replicated by ANY scraper (i.e. not even by guessing parts of it and building up some alternative functionality from those guesses), so the remark that A1's copies of scraped Ajax pages do not "work" is meaningless.
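(A minimal demonstration of the point, in Python with the requests library and purely hypothetical URLs: the raw HTML of an Ajax page is an empty "shell"; the content arrives via script calls to server endpoints which never appear in the page source:)

    import requests

    # the "page" a scraper sees: typically a shell with no data rows in it
    shell = requests.get("https://example.com/catalog").text
    print("table rows in raw HTML:", shell.count("<tr"))   # typically 0

    # the data really comes from a separate endpoint the page's JS calls;
    # only if you can observe/guess that endpoint can you fetch it directly
    data = requests.get("https://example.com/api/catalog?page=1").json()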

The only other remark re A1 I found on the web was that you will get "the whole page" instead of just the photos, in case you would like to download just the photos of a web page; IF that is right, that is a weakness of A1 indeed, since these "choose selected content only" questions are the core functionality today's scrapers could and should have, within the above-described general framework in which "original web page functionality" cannot be replicated anymore for many pages (which often are the ones of most interest = with the most money behind them = with both the "best" content and lots of money for ace programming).

Thus, "taking up" with server-side programming has become almost impossible for developers anyway, so they should revert to optimization of choosing selected content, and of making that content available, at least in a static way, and it goes without saying that multiple different degrees of optimization of that functionality are imaginable: built-in "macros" could replicate at least some standard connections between screen/data elements "on your side", and of which the original triggers are lost, by downloading, but this would involve lots of user-sided decisions to be made, and hence lots of dialogs the scraper would offer the user to begin with ("click on an element you want as a trigger, then select data (in a table e.g.) that would be made available from that trigger", or then, big data tables, which then you would hierarchically "sort" in groups, in order to make that data meaningful again).

It's clear as day that the better the scraper's guesses in such scenarios, the easier such partial re-constitution of the original data would often become; and also, that programming such guesses-and-services-offered-from-them would both be very "expensive" in programming and be a never-ending task, all this because today's web technologies succeed in hiding what's done on the server side.

In other words, the step from yesterday's even very complicated but static, or pseudo-dynamic (i.e. everything out of databases, but in a stringent, easily replicated way), web pages to today's dynamic web pages has been a step beyond what scrapers could sensibly have been expected to handle.

But it's obvious also that scrapers should at least perfectly handle "what they've got", and the above-mentioned example (as said, found in the web) of "just downloading the pics of a page", whilst being totally realistic, is far from being sufficient as a feature request:

In so many instances, the pics on the current page are either just thumbs, or then just pics in some intermediate resolution, and the link to the full-resolution pic is not available except from the dedicated page of that middle-resolution pic; and the situation is further complicated by the fact that often, the first or second resolution is available but the third is not, and that within the same start page, i.e. for the same task at hand, for some pics the scraper/script would have to follow two or three links, whilst for other pics linked on the same page, it would have to follow just one or two.

This being said, of course, such "get the best available resolution for the pics on current page" should be standard functionality for a scraper.
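(A minimal sketch of that standard functionality, in Python with requests and beautifulsoup4 assumed installed; the URL and CSS selectors are hypothetical, since every gallery names things differently: from the start page, follow each thumbnail to its detail page and take the full-size link there, falling back to the intermediate image when no third level exists:)

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    START = "https://example.com/gallery"   # placeholder

    def soup_of(url):
        return BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

    best = []
    for thumb in soup_of(START).select("a.thumb"):       # hypothetical selector
        detail = soup_of(urljoin(START, thumb["href"]))
        full = detail.select_one("a.full-res")           # third level, if present
        img = detail.select_one("img.large")             # second level fallback
        if full:
            best.append(urljoin(START, full["href"]))
        elif img:
            best.append(urljoin(START, img["src"]))
    print(best)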

But, all this being said, it also appears quite evident to me that for tasks beyond such "elaborate standard tasks" (which could be made available by the scraper "guessing" possibly relevant links, then having the user choose from the intermediate results, and then the scraper building up the necessary "rule(s)" for the site in question), scraper programming comes with an additional problem: such "specific rule building" would be split into a) what the scraper makes available and b) what the user can make out of those pre-fetched instruments, whilst in fact the better, easier, and ultimately far more powerful solution (because the limitations of the intermediate step would be done away with, together with that intermediate step) would be to do scripting, but ideally with some library of standards at your disposal.

(Readers here in DC will remember my - unanswered - question here how to immediately get to "page x" (e.g. 50) of an "endless" Ajax page (of perhaps 300 such partial "pages" (or whatever you like to name those additions), instead of "endlessly" scrolling down to it.)

Anyway, precise selection of what the user wants to scrape, and of "what not", should be possible in detail, and not only for links to follow on the start page, but also for links further down, at the very least for links "on page 2", i.e. on several kinds (!) of pages which only have in common the fact that all of them are one level "down" from the respective "start page" (I assume there are multiple but similar such "start pages", all of them to be treated in a similar (but not identical, see above) way).

Third,

so many scrapers (and download accelerators, too) tout their respective accelerating power, but few, if any, mention the biggest problem of them all: more and more server programs quickly throw your IP(s!) and even your PC out of their access scheme, should you dare scrape big content and/or, repeatedly, updated content; and again, as above, the more elaborate the content and their server-side page-build-up programming, the higher the chances that they have sophisticated scraper detection, too.

What most people do not know, when they choose their tunnel provider, is the fact that in such "heavy-scraping" scenarios, it's quite "risky" to get a full-year contract (let alone something beyond a year), and that there are special tunnel providers where you rent multiple IPs at the same time instead - which comes at a price.

With these multiple addresses, many scraping guys think they are on the safe side - well, what good are multiple addresses "abroad" (from the server's pov) when, in country x, no such provider can get you any, or more than just a handful of, "national" IPs?

And it does not end there. How "visually good" is your script, from the server's pov again? Do you really think they cannot "put it all together again" when your scraping follows detectable rules? To begin with, your scraping across those IPs is probably mutually exclusive (no two of your IPs ever fetch the same page), which is obviously a big mistake: it facilitates your combining the parts on your side, but it is exactly the kind of pattern a server can spot, right? He, he...

And you're spacing your requests, of course, in order for the server not to detect it's a machine fetching the data? He, he, again, just spacing the requests in time does not mean the server will think it detects some real person, looking for the data in a way some bona fide prospect would look for that data.

Not to speak of bona fide prospects looking in certain standard ways, which are never quite the same though; they don't do just sequential downloading ("sequential" does not mean follow link 1, then 2, then 3, but link 35, 482, 47, whatever - but download, download, download!), they revert to some earlier page, press F5 here or there (but not systematically, of course), and so on, in endless ways. As soon as there is a possible script to be detected, those servers signal a real person on their side, who will then look into things, relying on their scripts-for-further-pattern-detection: time of day of such a "session", amount of data downloaded, number of elements downloaded, order in which (sub-)elements are downloaded (patterns too similar and/or not "real-life" enough).
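(To make the point concrete, here is what naive "spacing" looks like - Python, hypothetical URLs; the whole paragraph above says that even this is NOT enough, because the order and shape of the requests stay machine-like however random the delays are:)

    import random, time
    import requests

    pages = ["https://example.com/item/%d" % n
             for n in random.sample(range(1, 500), 30)]

    session = requests.Session()
    session.headers["User-Agent"] = "Mozilla/5.0 ..."  # one static UA: itself a tell

    for url in pages:
        session.get(url, timeout=30)
        if random.random() < 0.15:             # occasional "F5-style" re-visit
            session.get(url, timeout=30)
        time.sleep(random.uniform(2.0, 20.0))  # jittered, but still a pattern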

Then, even if you perfect all this, by having your machines replicate the real-life behavior of different real persons: even most real-life prospects will not remain interested in the same or similar data over the years - most of them not even over months in a row!

And all this with the concurrent problem of the geographic distribution of your IPs again: where almost all of their bona fide prospects would sit in some specific country, or even in some specific region of that country, all of the above problems, even if resolved in perfect ways (and this necessarily includes lots of overlaps if you want your global scheme to remain "realistic"), will be only partial solutions, and will not work for long if you cannot resolve the problem of how to fake IPs and their geography, instead of just renting some.

My 2 cents to put into perspective the somewhat naïve "$19 + seems an awful lot of money for software you can get the same type of thing for nothing.", and I certainly left out additional aspects I didn't think of on the fly.

5
General Software Discussion / Desktop search; NTFS file numbers
« on: January 11, 2015, 07:55 AM »
This is a spin-off of page 32 (!) of this thread https://www.donation...x.php?topic=2434.775 ,

since I don't think real info should be buried on page 32 or 33 of a thread that will someday be a gross of pages long, and of which readers will perhaps read page 1 and then the very last page(s) only; on the other hand, even buried on some page 32, wrong and/or incomplete "info" should not be left unattended.
____________________

Re searching:

Read my posts in http://www.outliners...om/topics/viewt/5593

(re searching, and re tagging, the latter of course coming with the 260-char limitation for path plus filename if you want to do it within the file name... another possibly good reason to "encode" tags, in some form like .oac (Organisation(al things) - Assurances - Cars), instead of "writing them out")

Among other things, I say over there that you are probably well advised to use different tools for different search situations, according to the specific strengths of those tools; this is in accordance with what users say over here in the above DC thread.

Also, note that searching within subsets of data is not only a very good idea for performance reasons (FileLocator et al.), but also for getting (much) fewer irrelevant results: if you get 700 "hits", in many instances it's not really a good idea to try to narrow down by adding further "AND" search terms, since that would probably exclude quite some relevant hits; narrowing down to specific directories would probably be the far better ("search in search") strategy (see the sketch below); btw, another argument for tagging, especially for additional, specific tagging of everything that sits in the subfolder into which it "naturally" belongs, but which belongs in alternative contexts, too (ultimately, a better file system should do this trick).
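(A minimal sketch of that "search in search" strategy, in Python stdlib only; the root path and subfolder names are placeholders: narrow by directory subset FIRST, then full-text match, instead of piling on more AND terms:)

    from pathlib import Path

    def hits(root, term, subdirs=("projects", "contracts")):  # hypothetical subset
        for sub in subdirs:
            for f in (Path(root) / sub).rglob("*.txt"):
                text = f.read_text(errors="ignore")
                if term.lower() in text.lower():
                    yield f

    for f in hits(r"D:\data", "assurance"):
        print(f)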

(Citations from the above page 32:)

Armando: "That said, I always find it weird when Everything is listed side by side with other software like X1, DTSearch or Archivarius. It's not the  same thing at all! Yes, most so called "Desktop search" software will be able to search file names (although not foldernames), but software like Everything won't be able to search file content." - Well said, I run into this irresponsible stew again and again; let's say that with "Everything" (and with Listary, which just integrates ET for this functionality), the file NAME search problem has definitely been resolved, but that does not resolve our full text search issues. Btw, I'm sure ET has been mentioned on pages 1 to 31 of that thread over and over again, and it's by nature such overlong threads will treat the same issues again and again, again and again giving the same "answers" to those identical problems, but of course, this will not stop posters who try to post just the maximum of post numbers, instead of trying to shut up whenever they can not add something new to the object of discussion. (I have said this before: Traditional forum sw is not the best solution for technical fora (or then, any forum), some tree-shaped sw (integrating a prominent subtree "new things", and other "favorites" sub-trees) would have been a thousand times better, and yes, such a system would obviously expose such overly-redundant, just-stealing-your-time posts. (At 40hz: Note I never said 100 p.c. of your posts are crap, I just say 95 or more p.c. of them are... well, sometimes they are quite funny at least, e.g. when a bachelor tries to tell fathers of 3 or 4 how to rise children: It's just that some people know-it-all, but really everything, for every thing in this life and this world, they are the ultimate expert - boys of 4 excel in this, too.)

Innuendo on Copernic: stupid bugs, leaves out hits that should be there. I can confirm both observations, so I discarded this crap years ago, and there is no sign things have evolved in the right direction over there in the meantime - all to the contrary (v3 > v4, OMG).

X1: see jity2's instructive link: http://forums.x1.com....php?f=68&t=9638 . My comment, though: X1's special option which then finds any (? did you try capitals, too, and "weird" non-German/French accented chars?) accented char by just entering the respective base char is quite ingenious (and new info for me, thank you!), and I think it can be of tremendous help IF it works "over" all possible file formats (but I so much doubt this!), and without fault; just compare with FileLocator's "handling" (i.e. in fact mis-treating) of accented chars even in simple .rtf files (explained in the outliner thread). Thus, if X1 found (sic, I don't dare say "finds") all these hits by simply entering "relevement" in order to find "relèvement" (which could, please note, have been wrongly written "rélèvement" in some third-party source text within your "database" / file-system-based data repository, which detail would mean you would not find it by entering the correct wording), this would be a very strong argument for using X1, and you clearly should not undervalue this feature, especially since you're a Continental and will thus probably have stored an enormous amount of text bodies containing accented chars, rather often with accent errors within those original texts.
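(For the curious, the underlying trick is simple enough to sketch in a few lines of Python stdlib: strip the diacritics from both haystack and needle before comparing, so "relevement" finds "relèvement" and even the misspelled "rélèvement":)

    import unicodedata

    def fold(s):
        # decompose accented chars, then drop the combining marks
        return "".join(c for c in unicodedata.normalize("NFKD", s)
                       if not unicodedata.combining(c))

    text = "le rélèvement des taux"           # source text with an accent error
    print(fold("relevement") in fold(text))   # True

(Whether X1 does it this way internally is an assumption; the point is only that the feature is implementable across formats once the text has been extracted.)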

X1 again - a traditional problem of X1 not treated here: what about its handling of OL (Outlook) data? Not only did ancient X1 versions not treat such data well, but far worse, X1 was deemed, by some commentators, to damage OL files, which of course would be perfectly unacceptable. What about this? I can't trial (nor buy, which I would have done otherwise) the current X1 version with my XP Win version, and it might be this obvious X1-vs.-OL problem has been resolved in the meantime (but even then, the question would remain which OL versions would possibly still be affected: X1-current vs. OL-current possibly ok, but X1-current vs. OL-ancient-versions =?!). I understand that few people would be sufficiently motivated to trial this upon their real data, but then, better to trial this with, let's say, a replication of your current data, put onto an alternative pc, instead of running the risk that even X1-current will damage OL data on your running system, don't you think so? (And then, thankfully, share your hopeful all-clear signal, or else your warnings, in case - which would of course be a step further, not necessarily included within your first step of verifying...)

Innuendo on X1 vs. the rest, and in particular dtSearch:

"X1 - Far from perfect, but the absolute best if you use the criteria above as a guideline. Sadly, it seems they are very aware of being the best and have priced their product accordingly. Very expensive...just expensive enough to put it over the line of insulting. If you want the best, you and your wallet will be oh so painfully aware that you are paying for the best."

"dtSearch - This is a solution geared towards corporations and the cold UI and barely there acceptable list of features make this an unappetizing choice for home users. I would wager they make their bones by providing lucrative support plans and willingness to accept company purchase orders. There are more capable, less expensive, more efficient options available."

This cannot stay uncommented, since it's obviously wrong in some respects, judging from my own trialling of both; of course, if X1 has got some advantages (beyond the GUI, which indeed is much better - but then, some macroing for dtSearch could probably prevent premature decisions like jity2's: "In fact after watching some videos about it, I won't try it because I don't use regex for searching keywords, and because the interface seems not very enough user friendly (I don't want to click many times just to do a keyword search !)."), please tell us!

First of all, I can confirm that both developers have (competent) staff (i.e. no comparison with the usual "either it's the developer himself, or some incompetent - since untrained, uninformed, not even halfway correctly paid - outsourced support") that is really and VERY helpful in giving information and in discussing features, or even the lack of features; both X1 and dtSearch people are professional and congenial, and if I say dtSearch staff is even "better" than X1 staff, this, while being true, is not to denigrate X1 staff: we're discussing just different degrees of excellence here. (Now compare with Copernic.)

This being said, X1 seems to be visually-brilliant sw for standard applics, whilst dtSearch FINDS IT ALL. In fact, when trialling, I did not encounter any exotic file format from which I wasn't able to get the relevant hits, whilst in X1, if it was not in their (quite standard) file format list, it was not indexed, and thus was not found: it's as simple as that. (Remember the forensic objectives of dtSearch; it's exactly this additional purpose that makes it capable of searching lots of even quite widespread file formats where most other (index-based) desktop search tools fail.)

Also, allow for a brief divagation into askSam country: the reason some people cling to it is the rarity of full-text "db's" able to find numerics. Okay, okay, any search tool can find "386", be it as part of a "string", or even as a "word" (i.e. as a number, or as part of a number), but what about "between 350 and 400"? Okay, okay, you can try (and even succeed, in part) with regex (= again, dtSearch instead of X1). But askSam does this, and similar, with "pseudo-fields", and normally, for such tasks, you need "real" db's; and as we all know, for most text-heavy data, people prefer text-based sw instead of putting it all into relational db's. As you also know, there are some SQLite/other-db-based 2-pane outliners / basic IMS's that have got additional "columns" to put numeric data into, but that's not the same (and even there, searching for numeric data RANGES is far from evident).

Now that's for numeric ranges in db's; now look into dtSearch's possibilities of identifying numeric ranges in pseudo-fields in "full text", similar to askSam, and you will see the incredible (and obviously, again, regex-driven) power of dtSearch.
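(A toy version of such a "numeric range in full text" search, in Python stdlib, to make the idea concrete - find all numbers between 350 and 400 in running text:)

    import re

    text = "item A: 386 units; item B: 1386 units; item C: 350 units"
    hits = [int(m.group()) for m in re.finditer(r"\b\d+\b", text)
            if 350 <= int(m.group()) <= 400]
    print(hits)   # [386, 350] - the 1386 is correctly skipped

(dtSearch expresses this in its own range/regex syntax; the sketch only shows why regex support matters for it.)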

Thus, dear Innuendo, your X1 being "the absolute best" is perfectly unsustainable; but it's in order to inform you better that I post this, and not at all in order to insinuate you knew better whilst writing the above.

____________________

Re ntfs file numbers:

jity2 in the above DC thread: "With CDS V3.6 size of the index was 85 Go with about 2,000,000 files indexed (Note: In one hdd drive I even hit the NTFS limit : too much files to handle !) . It took about 15 days to complete 24/24 7/7." Note: that last bit of info is good to know... ;-(

It's evident that 2 million (!) files cannot hit any "NTFS limit" unless you do lots of things completely wrong; even if you had persistently left out 3 zeros, it would have been 8.6 (or, with the XP number, 4.3) billion - nothing near 2.0:

eVista on

https://social.techn...forum=itprovistaapps :

"In short, the absolute limit on the number of files per NTFS volume seems to be 2 at the 32nd power minus 1*, but this would require 512 byte sectors and a maximum file size limit of one file per sector. Therefore, in practice, one has to calculate a realistic average file size and then apply these principles to that file size."

Note: that would be a little less than 4.3 billion files (i.e. 2^32 - 1; for Continentals: 4,3 Milliarden/milliards/etc.) for XP - that is the number you get everywhere; I've also read that from Vista on, it would be double that, i.e. slightly less than 8.6 billion files, which would be 2^33, not 2^64 (I cannot confirm the doubling, and I was obviously led astray by Win32/64, which probably is behind that doubling, though).
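(The arithmetic, for the record, in Python:)

    limit = 2**32 - 1            # NTFS files per volume, the number you get everywhere
    print(limit)                 # 4294967295, i.e. ~4.3 billion
    print(2_000_000 / limit)     # ~0.0005 -- 2 million files is about 0.05 p.c. of it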

No need to list all the Google finds; just let me say that with "ntfs file number" you'll get the results you need, incl. Wikipedia, MS...

But then, special mention to http://stackoverflow...iles-and-directories

with an absolutely brilliant "best answer", and then also lots of valuable details further down that page.

I think this last link will give you plenty of ideas on how to better organize your stuff; but anyway, no search tool whatsoever should choke on some "2,000,000 limit", NTFS or otherwise.

6
General Software Discussion / And IT Man of the Year 2014 Is...
« on: December 24, 2014, 08:05 AM »
Diego Garcia.

(who of course stands for a rare, high-brow collaborative programming effort). Here's why (in French, but a century ago, that was the lingua franca of the educated people of this world anyway, so some Google translation effort should not be out of your reach):

http://www.parismatc...vol-MH370-2-2-675084

(this is "part 2", but which includes part 1 - as you all know, the French do have a reputation of being a little unorganized).

You will learn that Boeing have their own, official patent for their way of remote-controlling their own aircraft, which comes in handy e.g. whenever they decide, for whatever reason, that such a machine should be brought down immediately.

It's no secret that ace technology can almost exclusively be found in weaponry, and for programming, it's similar - whilst e.g. if you want to hear the most elaborate lies there are, both government authorities and air carriers (and their paid or free yappies) are your prime addresses.

Of course, they're only bearable as long as you consider their output fun; if you don't take it all ironically, your intellectual prerequisite should be that you think logic is a new iApp (just one example: yes, in order to bring down an aircraft onto some military base, in order to destroy it, first they will let you do that instead of intercepting you, and second, it's a brilliant idea to drill traditional landings on their runway; but let's not forget most people will swallow anything that comes from their thinking delegates - if that reminds you of stories of spit in the North Korean camps).

And now for the reasons for all this: well, don't trust "science" and her lies either, just "trust" the bad connections within your own, poor brain (or don't even do that, if you don't want to fall into self-deceit every other second):

Here's a wonderful specimen of why, for example, even corporations like MS ain't able to output decent software (just two lesser-known examples: yes, Word has got cross-links, but have a look at the way they are implemented; yes, there is Active Directory, but look at its incredibly bad permissions management), and it also explains why even very smart people's output, in most cases, is abysmal: the smarter you are, the sooner in your life you will have internalized that total self-censorship is in your primal interest: they call this "survival instinct" (Darwin's "best fit"); it's only in the very few industries where "anything goes" that, ironically, you are entitled to set your thinking free (and the more perverted and/or strange, the better).

But back to the regular way of collaboration, which ensures that nothing outstanding will be created, even if the combined I.Q. of 3 people amounts to 500, and how they "sell" you their propaganda (note the lovely pic, a black man in perfect white-collar clothing hushing himself, which of course reminded me of Uncle Tom, and of the three wise monkeys - so please identify with him if you're white, too: if even their bogeyman can be hand-tame, you can be a "good dog!", too!):

http://www.ozy.com/a...&utm_campaign=pp

Well, if you wonder how a "normal person" can title

"How to Succeed at Work? Censor Yourself",

here's why,

"After a childhood of jumping from country to country, Nathan is used to feeling like a tourist everywhere he goes."

Yes, that's the fate of many a diplomat's child: lifelong deracination, to the point of believing in the salvific nature of any Ebola saliva they feed you, instead of just gulping it down in order to survive for some more days. (Of course, I don't even mention possible insurgency against your gaolers: that's as out of the question for the lifelong inmates of Western oligarchies as it is for Pyongyang's slaves.)

If up to now, you only felt that it was oh so queer that even very smart people a) "believe" and / or b) produce quite underwhelming output, even in big corporations where there's plenty of resources, well, face it:

The human brain's interconnections ain't done that brilliantly yet... which might end up quite soon in some new ai mainframes even queerer than man himself, and that could be another end (except for the mythical cockroaches, and then, in some more million years from now...).

In the meantime, "Merry Christmas!" and similar are sort of an obscenity, don't you think so?

The above "How to Succeed at Work? Censor Yourself" is one side of the coin, the findings of the Milgram experiment (1963) being the other side, the "coin" being Man's Perverted Nature.

Hence, no hope for any decent MS software ever! ;-) And sorry for possibly having impeded your Christmas illusion, but at least smart people should revert to thinking mode now and then, at the very least, and perhaps Christmas' contemplative mood could lower your traditional, human resistance to home truths.

(Notes: a: Man's worth in this society being determined by his standing, it's consequential that the smartest coders go to MS et al. instead of doing their own thing, since most own things in coding don't generate high 6- to low 7-digit incomes p.a., and that's why, even from "independent developers", you don't get real goodies in most occurrences either; b: don't blame me for not having read the original Cornell Univ. article from http://digitalcommon...ll.edu/articles/910/ , "Creativity from Constraint? How Political Correctness Influences Creativity in Mixed-Sex Work Groups" - I reminded you of the reasons for this just days ago; and don't blame me for not developing the bias between Nathan's article and the mixed-sex work group setting we're referring to. Believe me: mixed-sex group thinking has very little to do with mental auto-crippling, fascism just being another word for group dynamics, and vice-versa, and leafing through any newspaper of your choice shows the effects of this very unhealthy miswiring of most of our brains, page after page.)

7
General Software Discussion / Focus by view?
« on: June 20, 2014, 06:23 AM »
As you know, I'm into outliners, the (available) 2-pane kind, and doing it simili-3-pane, with an additional file manager pane being my "project pane", far to the left of my screens. As you also know (since I bothered you with the descriptions of this setup more than you would have asked for), I got a second screen for the respective "input", i.e. internet browser (FF), Excel, pdf's, and so on, and you know I do the interaction between all this with AHK.

Now where I've got real problems, which cannot be resolved by AHK means or any other such tool, is focus: I bought some additional keypads and such, in order to set focus, and/or to have DIFFERENT pgup/pgdn/etc. buttons for those several frames...

I've said this before: only tiny additional keypads/keyboards are realistic in any way (the Cherry 4700 being something "great" since really cheap, in comparison), and then, there's sacrificing the original keypad, outside Excel/TenKey/etc.

This being said, I never discovered a realistic / "ready-for-prime-time" way of switching between my "multiple" frames: the PM frame (be it by AHK - btw, I'll share such a thing in some time - or by a tiny-sized file manager window; frame 1), your "main applic" (with, speaking of traditional outliners, 2 main frames: tree (frame 2) and content (frame 3)), and your "source frame" (whatever its current content may be; frame 4).

I've tried them all: F keys for selecting focus, then the dedicated pgup/pgdn/etc. keys; additional keyboards with dedicated keys (Cherry 4700, original keypad, and also a Preh 128-key kb); also, lately, my "best solution" had been some toggles/set-ups by which AHK switched focus automatically, depending on seconds of inactivity, on which frame had focus previously, and on the setup/toggle in question.
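(For anyone curious what such an idle-based focus switch looks like outside AHK, here's a minimal sketch in Python via the Windows API (ctypes); the window title is a placeholder, and the per-frame thresholds I describe would replace the single constant - it's an illustration of the mechanism, not my actual AHK setup:)

    import ctypes
    import time

    user32 = ctypes.windll.user32
    kernel32 = ctypes.windll.kernel32

    class LASTINPUTINFO(ctypes.Structure):
        _fields_ = [("cbSize", ctypes.c_uint), ("dwTime", ctypes.c_uint)]

    def idle_seconds():
        # seconds since the last keyboard/mouse input, system-wide
        lii = LASTINPUTINFO(cbSize=ctypes.sizeof(LASTINPUTINFO))
        user32.GetLastInputInfo(ctypes.byref(lii))
        return (kernel32.GetTickCount() - lii.dwTime) / 1000.0

    TARGET_TITLE = "MyOutliner - tree"        # hypothetical window title

    while True:
        if idle_seconds() > 5.0:              # per-frame thresholds would go here
            hwnd = user32.FindWindowW(None, TARGET_TITLE)
            if hwnd:
                user32.SetForegroundWindow(hwnd)
        time.sleep(0.5)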

Ok, there are touch-screens now, but let's be realistic: you always need your kb on your table, and any screen beyond that primal set-up - there is no space to lay down a touch-screen in front of you. Then, you might place a touch-screen nearer to your body/eyes/hands than you would place a traditional screen, but is that a really good idea for your neck, after all? And anyway, reaching out with your finger to anything beyond your kb is not that realistic for 14-hour work days, right?

Thus, there is no doubt whatsoever that we need (sorry for the "we" here, but I assume we've got, more or less, similar problems, even if you've just got 2 regular frames, not 4 as I've got, and 1 big screen, not 2 minor ones like I do) something better: every "switching frames" from within your keyboard will get in your way, i.e. will constitute a hindrance to your "work flow"; it will have nothing to do with any "intuitive computering". (I even considered "foot pedals" for this... OMG!)

That's why ALL OF US need the very simplest of things there could be: a camera above our screen(s) which monitors our eye movements (and which is NOT connected to the net, btw), and based on whose perceptions (i.e. you look at some frame for more than x ms - and ideally, this time lag should be individual to every frame!) some sw will switch focus to that frame, in order for THAT FRAME to respond to your kb, incl. any navigation keys and all.

Fellow DC's, any ideas? ;-)

(I'd write it myself and share it happily with you, were it not for the camera's eye-movement-monitoring part of such sw...)



EDIT: Not only the time frame should be individual for each frame, but also, the camera sw should differentiate whether, within a frame, you look straight at a precise point (= then, the sw should switch focus quite quickly), or whether you're "searching"/"reading" something within that frame, in which case the focus switch should NOT be made as quickly, or not at all - except if you're looking at the end of the visible text/list there, in which case it should even scroll... You see, I'm delusionally dreaming...)


8
This is probably no news for most fellow posters here, but perhaps it's worthwhile to remember, before buying the wrong edition or at the wrong price.

I

Look at the attached pdf, please; you see a common phenomenon at amazon's, which is some crooks trying to sell books which are NOT out of print, to "idiots", i.e. customers not searching deeply enough, at two times the original price, be it for really used books, or for brand-new books declared as "used" in order to sidestep national fixed-book-price legislation.

I did the search on amazon with the full title, in order to make a concise screenshot of just these two offerings one after the other, so you will not "fall" for this scheme here.

But ordinarily, you search otherwise, and amazon will present you with lists of books, and then those two offerings - official price for the new book, and some deluded offer for the same book - will NOT necessarily follow each other; other books will often be in-between.

(The example is from the German amazon site, but I have seen this phenomenon on the USA/GB/F amazon sites, too.) (And see below point V.)

II

When the book really is out of print, there are two alternatives: too few buyers, or many buyers. In the latter case, why would you pay twice or thrice the price for the former edition when the new edition is highly probably imminent? In the former case, well, it might be cheap - then buy; but if some sellers think they can make a big profit, why not photocopy the book from your library (if you really need it in full and permanently), which is perfectly legal in such cases. (Of course, this doesn't apply to "photographic monographs" and other coffee table books.)

III

amazon itself is not really honest in its prices: they often say you will save x p.c. off the original price, when in fact that price is only slightly higher than theirs, or in other words, they invent some "original" price which even the original source does not ask for. This being said, in spite of their lying about the original price and your savings percentage, amazon.com often has got highly interesting prices, but not for European customers (and downloads "to" Europe are forbidden, i.e. not available without U.S. credit cards, street addresses, and so on), whilst amazon in Europe, most of the time, is not of real interest, since some dealer or another will ship the book for less, often for much less, than what amazon.de or .fr ask for it.

IV

This (III) is particularly true with offerings from both dealers and individuals on the amazon platform: the same out-of-print book will often be several times more expensive on amazon than on "competing" marketplaces, the quotation marks being the explanation for this phenomenon, or in simpler words, other marketplaces ain't quite real competitors for amazon anymore, and thus... This being said, never buy on amazon too quickly: first, try individual sellers, via a regular Google search, or, for books, alternative platforms.

V

Speaking of Google, there is another big disadvantage related to point I above: when you don't search from within amazon, but search for a book in Google and then are redirected to amazon (which you invariably will be, as if other booksellers didn't even exist anymore), in many instances this "amazon hit by Google" will be the alternative, totally overpriced offering, without any indication of, let alone a link to, the "real" one, the one for the new book at a regular price; in fact, my Google search for the book in the pdf went straight to the overpriced "used" book offering (alone), and this not having been the first time, I thought it was time I shared some advice on how not to fall into frequent amazon traps.

VI

Similar with old editions. In 99 p.c. of all cases where somebody wants to buy some book, he's after the current, most recent edition, some very rare exceptions proving this rule. Now, these Google links to amazon will NOT cater for this need; in many instances, they showed me the amazon page for some old edition of the book, and whilst in theory, such amazon pages would have a line near the top saying, "there is a more recent edition of this book available; would you like to go to the relevant page?" or something like that, I can confirm I've seen this line in some cases, but in the majority of such cases I did NOT see it, so once on amazon, you'll have to search again for that same book and look at the hits with a sharp eye, and you never know for sure:

When it says it's from 2012: for a monograph, that's quite recent, and in most cases, that's the (unique) edition you're looking for; but for a textbook, in 2014, that might even be TWO editions too old, not just one, an intermediary edition having been published in 2013 and the current one in April 2014. Similar for monographs: if what Google shows you on amazon is from 2004, there could very well be some "revised and enlarged" edition from 2009, so be careful on amazon, and especially when "coming" from Google.

VII

Similar to the previous problem: many alternative sellers on the amazon marketplace hide the fact that they are selling outdated editions. It's ok that they present previous or even ancient editions on the page of the current edition, but then, in the short description text you'll see when you click their offer, only some honest sellers indicate that their offer is for such an outdated edition, whilst the majority of them simply don't feel obliged to mention this very important fact. That's why, when buying books from the amazon marketplace, you'll get into lots of trouble on many occasions, especially since the two biggest booksellers on that marketplace on amazon.de both don't give a dime for informing their customers about these flaws of their offerings, and they both have been doing this, in this perfectly illegal way, systematically and for years now, without amazon (which is in perfect knowledge of these goings-on) doing anything about this abuse of its customer base - just as in ancient times ebay allowed crook sellers to administer retaliation evaluations upon buyers who dared give those crooks a bad evaluation for having been had by them.

VIII

ebay learned from the negative effects of this on turnover, and even on amazon, with their refund-no-questions-asked system, it's the buyers who today treat (especially individual) sellers badly: If ever you're sufficiently criminally-minded, it's up to you to "buy" some coffee table book on amazon, tear out the pages you're after, then send it back as "defective", and amazon will refund you, the seller being ripped off at 100 p.c. (and there are very expensive coffee table books on amazon...) Thus, my last point is about the risks of SELLING on amazon, and it's even worse for the seller: Many books are just between 10 and 20$, so registered mail is not justified, and what to do if the buyer just pretends he didn't get the book? And of course, amazon is much too expensive for the seller... but then, e.g. priceminister.com, in France, is even more expensive, but that's another matter.



Anyway, in one sentence: On amazon, don't ever be sure that price and edition are the correct ones before having checked and re-checked (especially elsewhere).


EDIT: As you can see in the screenshot, it's the same book, but with the title slightly changed/rearranged, and that's how many such "double entries" in the amazon db are created, notwithstanding the identical ISBN (!); but you've got such double entries even with identical title lines, and then with some irrelevant add-ons there, which seem to have been added on purpose, in order to create the "twin" to which google will then mislead buyers... (And yes, the ISBN thing should prevent all this...) (And of course, "neu" = "new" and "gebraucht" = "used".)

9
Yesterday, I said here that it's on purpose that I title some threads "Review" (even if my first post there isn't a full-grown "review" (yet)), since I (rightly) assumed that such titles get good coverage with google, all the more so with DC as the site they come from (and relative hit numbers, e.g. for the RN thread, vs. others, prove me right).

Today, I've been googling for "winsnap review" (soon on bits for 15$ = 50 p.c. off, regular price 2 months ago being 25$), and quickly gathering some hits from the first 15 (yes, I'm among those who systematically look into the second ten results, too, and further on in some instances), I then browsed those pages.

Hit number 14 (so at least it was not listed within the very first google page, but then, those first 10 links weren't all for "reviews" either...) was

"WinSnap Review - StrategyEye Digital Media
digitalmedia.strategyeye.com/.../04/.../winsnap-revie...
21 apr. 2014 - WinSnap enables users to effortlessly capture the screen in five methods, apply drawing tools to prepare them for online publishing (including ..."

and clicking on it was this:

http://digitalmedia....4/21/winsnap-review/

with this:

"WinSnap Review
21 Apr 14RecommendTweetShareEmail
WinSnap enables users to effortlessly capture the screen in five methods, apply drawing tools to prepare them for online publishing (including watermarks and filters), and export the new images to multiple types of formats. It features an appealing and...

Read full article [this line being a link of course]
Source: Softpedia News - Global [this line being in a very tiny font]
Related Companies [plus button linking to many unrelated things]
Related Categories [ditto]",

with lots of other things, and with a pop-up "Free Daily Dose of Headlines from Our Newsletter - Submit"

and the tab was "WinSnap Review".

Now, clicking on "Read full article", you'll get a full review indeed,

"April 21st, 2014, 15:01 GMT · By Elena Opris
WinSnap Review"
[full review]

but the url being,

http://www.softpedia...-Review-438309.shtml

which had also been direct google hit number 8 already.

Now that "portal" (or how would you call it?) has a search field, so I entered the name of the author there, "Elena Opris"), and I got a bunch of similar hits, i.e. some teaser on that strategyeye site, and links to external content.



Now this raises the question whether Ms. Opris is somehow connected to that strategyeye site, from where she (like fellow authors there) has them link to her own articles on various sites, which would be perfectly legitimate imo, OR whether strategyeye just collects material they are interested in, for commercial reasons = for touting their own site, by generating quite high-placed google hits, then delivering links to the "real stuff", which from the users' pov are worthless, since they will have clicked on the direct link (from google) anyway. Note that I'm not insinuating strategyeye does something "illegal" in that second alternative, since they don't "embed" that external content, but (except for the teaser) just provide correct, external links.

Now the first alternative would be perfectly legitimate, as said, since authors should be entirely free to do some "link gathering" for their disparate stuff spread over the web; whilst the second one would be considered a nuisance, since this "intermediate" site would be a bandwagon jumper who, for its own interest, just bothers the "googler";

in BOTH cases strategyeye does something really smart: they create lots of coverage for their own site, with external content, coverage that they would never get otherwise... and google's algorithms are astonishingly not "able"/willing to detect this "fraud" ("fraud" just from a philosophical pov: as said, nothing "legally illegal" here). "Astonishingly" because it would have been more than easy, had they been willing to do so, to detect (and eliminate, or push down, say below the 60th position or so) hits that just contain teasers from, and links to, real content that has already been listed anyway, further up (here, as said, 14th vs. 8th position).

Any insight, both re google (and why they don't cut this out) and re such sites appearing as unwanted intermediaries? For current google (algorithms), this seems to be a viable business model, even though, like numerous other business models, it represents a public nuisance, too?



EDIT: I should have added this link, too, perhaps, but I didn't want to blur the above question; on the other hand, google's brilliant coverage could be related to that link, in some way: http://www.strategye...talmedia.com/pricing

10
I

Since the public unveiling of the Snowden affair is 1 year old now, there are some articles in the press, on Snowden, but also on Turing and encryption/decryption, and I stumbled on this one (in German): http://blog.zeit.de/...a-zweiter-weltkrieg/

I never understood the Enigma, especially since every "explanation" of it you can find on the web is either written by experts for experts, or by non-experts who don't understand the Enigma themselves, and the link above falls into the second category, but some comments there raise some interesting points.

It seems Turing began his decryption work on the Enigma after one functional Enigma machine fell into British hands.

It seems the "code for the day" was communicated using the previous code.

It seems some primary code needed for the real encryption (then to be made by the machine, with the help of the "code for the day") was a stable arrangement of the alphabet chars, and the British tried myriads of possible sequences for this, e.g. qwerty, and so on, but the Germans, "incredibly", just used "abcde". It seems the mathematician Marian Rejewski found this out, not Turing, and the commenter in the above link who brings this info into the discussion muses that Rejewski had worked in Göttingen, Germany, beforehand, so had some first-hand info on German psychology / way of thinking, which enabled him to take into consideration that the Germans might do it in the utmost basic, primitive way, a possibility excluded by Brits who just admired the machine but lacked intimate knowledge of the German "national character" - I very much like this observation.

(EDIT: And of course, this over-emphasizing/relying upon the over-obvious resp. the "really-too-easy" reminds us of that E.A. Poe short story, "The Purloined Letter"...)

It seems the breakthrough was then made by Turing's reflection that by the way the machine obviously worked, on a physical level - direct current was sent through the rotors in one direction, then in the other direction - no character (a...z, etc.) could be enciphered as itself, this drastically reducing the machine's encryption possibilities / possible permutations; in fact, the cited article is primarily about the related phenomenon of "Selbst-Bewichtelung" ("Wichteln" being the German word for secret Santa), i.e. the fact that in about 2/3 of all secret Santa games, some player receives their own gift - the probability that a random permutation has at least one fixed point is 1 - 1/e, about 63 p.c.

Some commenter over there claims the biggest U.S. employer of mathematicians is the NSA - very funny and very convincing, even though no proof is given.

II

Just some days ago, I had mused about specific file formats countering effective encryption. Let's say you use MS Word files, or some other file format where quite lengthy passages are more or less identical, or highly standardized at the very least, across every one of your encrypted files, AND the decryptor knows (or can safely assume, from the presence of these applications on your system, or simply from the ubiquity of some applications, like MS Word and its rather few "replacements", ditto for spreadsheets, etc.) which (few) applications you will have used to produce the encrypted data:

Then, this might drastically reduce the theoretical power of your specific encryption. The decryptor would try to decrypt those "standard passages" first (even assuming he doesn't have a way - which might exist, without us being aware of such possibilities - to determine where one of your files ends and the next one begins, which would further cut the possible permutations to a mere fraction of their theoretical potential-by-strong-password); even allowing for your individual data within these "standard passages", he intimately knows the "format" of the latter, incl. the possible lengths of the different bits of individual data in-between, and once these "file headers" are decrypted, your key will be known.

This would mean that usage of any application not producing just naked plain-text (ANSI) files, but putting "processing data" / "meta data" into the file, too, should be prohibited if you really want your data safe (= a necessary but not sufficient condition)...
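
To make this concrete, here is a toy illustration of the principle, in Python (my own sketch, not from any cited source, and using a deliberately weak repeating-key XOR "cipher"; real ciphers like AES are designed to survive known plaintext, so the practical worry is rather about weak or home-grown schemes): a known, standardized file header hands the attacker the keystream, and thus the key, directly.

def xor_crypt(data, key):
    # Encrypt/decrypt by XORing with a repeating key (toy cipher, NOT real crypto).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"s3cret!"                 # the "password" to be recovered
KNOWN_HEADER = b"{\\rtf1\\ansi"  # every rtf file starts like this

document = KNOWN_HEADER + b" ... confidential body text ..."
ciphertext = xor_crypt(document, KEY)

# Attacker's side: XOR the ciphertext with the known header bytes,
# and out falls the repeating key, in plain sight.
keystream = bytes(c ^ p for c, p in zip(ciphertext, KNOWN_HEADER))
print(keystream)                                         # b's3cret!s3cr'
print(xor_crypt(ciphertext, keystream[:7]) == document)  # True

The real-world lesson is not that your key falls out this easily with a serious cipher, but that predictable, standardized passages give the attacker a verified foothold, which is why the musing above about "naked" files vs. metadata-stuffed ones is not entirely theoretical.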


11
(As the title indicates, this does not treat the maths program.)

Maple Prof. is an interesting piece of sw, but my experience (checked with the developer) is somewhat mixed. Since there is no review yet, some info should help.

It's a regular 2-pane outliner, but db-based; this is worth mentioning, since from its regular behaviour, you would think it's text-based. For example, better db-based outliners like MI and UR store pics within the text content in a light format where jpg remains jpg (= even when you don't import the jpg as an item of its own), whilst "bad" outliners (like AO, Maple, Jot+ and others) blow those jpg's up into a format where a 30k jpg imported into the text may add a million bytes to your file.

Outliners (= tree-formed data repositories) are often very helpful on the road, with perhaps a tiny screen. Thus, I never understood why most outliners which have search hit lists display them in an additional pane, since if you have 3 panes at the same time - tree, content and search results - on a 12- or 14-inch screen, you will not see much of any of them.

Now Maple uses the tree pane as the search results pane, or more precisely, you can switch back and forth between hits and tree in that pane (different viewers), and this seems perfectly logical since either you browse the tree, or the search results, but you will scarcely ever need both at the same time.

Also, you can do regular search (over tree and content), or then, in tree only; this seems mandatory, but some outliners don't offer this way of searching.

Very unfortunately, though, Maple does not offer Boolean search - no AND, OR, or NOT - which means that any search will be primitive "phrase search" (which is a very good thing if it comes alongside Boolean search, but which is unbearable when it's the only search you get); of course, presenting a hit table, but no Boolean search, is quite an incongruity, but the developer says Boolean search will be implemented in the future, albeit without giving a road map.

Sideline: From my memory, the corporation behind Maple seemed to be from Spain (which is quite exceptional), but the current stuff seems to be from Russia (which is quite regular); I don't know if I'm mistaken, or if they have been bought, or whatever. (They do some other sw's, too; for Maple, there is slow but steady development, whilst for AO, e.g., development is almost inexistent, and Jot+ is defunct, or rather can be bought in its ancient state from multiple years ago; none of these sw's has a forum.)

Also, very unfortunately, formatting of tree entries (bolding, italicising, underlining, coloring, background color, etc.) is NOT possible with Maple, and it will NOT be implemented, which seems to indicate they chose a bad tree component and are not willing to replace it; of course, you have the usual icons instead, but from my (very extensive) experience, icons could never ever replace tree formatting; in fact, for me that's the ultimate deal breaker.

Now the outstanding Maple feature, which is why I took the effort to write this review:

The only (?) big shot "search over several files" outliner is currently MI, but Maple Prof. has got a similar feature. Now, if you need trans-file search for outliner files, your best bet is some tool like "File Locator". (The internal text search functions of both the XY and SC file managers do it, too, but don't find all occurrences, as hopefully FL does; interestingly, they overlook the same occurrences, which are listed in FL (free).)

Whenever you search in outliner files with such a tool (most indexing search tools refuse to index outliner files to begin with), you will (hopefully, see before) at least see where to look further, but no exterior tool will get you to the right item there, of course, and this makes for the tremendous interest of such an internal trans-file search tool: When you click on a hit there, the proper item will be shown.

Since Maple Prof. has got such an exceptional feature (as said, together with MI), it deserves its own review after all, even in the absence of tree formatting and Boolean search. Let me put this straight: Permanent absence of tree formatting makes me discard this program, which otherwise has lots of potential, even if I would like it to implement Boolean search immediately, not some day in the future, and I would even be willing to live with the rather bad rtf editor which blows up imported/inserted pics.

Or in other words, the day Maple would replace their substandard tree component, I would switch over to Maple.



Now some brief words on that age-old competition UR-MI.

MI development is much more active than UR's; here are the key gaps on both sides:

- UR has no global replace, and the developer is unwilling to implement it (this is the deal breaker for me, since trying to replace some term in 20 or 50 items, by external macro, is crazy and unreliable)
- UR has no trans-file search, but then, you use both programs more or less as "global data repositories", i.e. few people will create multiple files in any of them, so this is less important

- MI (whose cloning feature is a much more recent development than UR's) still has a missing detail in its cloning function which makes it almost unusable: Whenever you add child items (or grandchildren and such) to a cloned (parent) item, they are then absent from the other clones of that item. Now there are very few instances where this absence would indeed be welcome, but in most practical uses, this is totally awful:

Not only for "one subject in different contexts in general", but especially in the "ToDo" part of your big tree: This absence makes it impossible to put some subject into the ToDo list, and then to work upon that subject from the ToDo list; instead, you will need to CROSS-REFERENCE the subject in the ToDo list, i.e. to jump to the "natural" (and unique) context, and to work from there then.

On the other hand, MI's developer could work on this, and then, his program would undoubtedly be the far better program, at least speaking of Prof. version in both instances, since only MI Prof. has the global replace feature (which everybody will need in the end, even if he thinks he can do without, that's blatant wishful thinking), and which UR presumably will never get.

(I'm repeating myself here: In UR, tree formatting is possible with a trick... whilst in MI, it's available straight on.)

Thus, with a little more development, MI will be number one (and, if you're willing to cross-reference instead of cloning, it beats UR even today).

And yes, Maple competitors should adopt Maple's hit table, in the frame of the tree, at least by option.

12
donleone, in this

https://www.donation...ndex.php?topic=37935

RN thread, said,

"- RightNote can do internal Quick-Linking to another note using e.g. the shortcut CRTL+SHIFT+K,
but the quick-link only remembers as long as you refer to an item on the same page/tree.
For when namely a quick-link is made to a note, that then gets dragged over unto another page/tree,
it breaks the quick-link and says "This item has been deleted" (even though it's just on an other tab)
So the ability to sustain note-links across pages, is a missing ability yet or bug."

and I said,

"Correct me if I'm wrong, but "other tab" is other file (except for hoisting of course), other db, and you describe a problem that currently harms any one of those db-based outliners, whilst the text-based ones are even worse, do NOT allow even for intra-file cross-referencing. OMG, I see I develop this too much here, so I cut it out to a new thread!" - here it is:

That's why I, 22111, in outlinersoftware, some months ago, devised the concept of a better db-based outliner, in which there would not be 3 distinct db's for 3 tabs/trees, but where the trees = outlines would be stored separately, as lists, from which the trees would then be created at run-time, from a set of ALL items. In that db, the items would be totally independent from each other, i.e. there would be 2 db's: one for all items / single bricks, and another one for multiple architectures, for which all those bricks would be available in every possible combination (order, hierarchy, cluster, whatever macro compounds).

Of course, there are some conceptual difficulties with such a construct, since in that second db, the one containing the trusses from which the individual trees would be created, there should be some "combine" functionality; it would be devoid of sense to ask the user to build each tree up from zero, so multiple "partial trees" would have to be combined, in myriads of compilations and combinations.

And of course, there would be a third (distinct) db (part) from which you would have access to these compounds listed in part 2. The interaction of everything within 2, and/or of 1 accessing compounds in 2, or managing some of that combination work, is both conceptually demanding and especially difficult to present, since most prospects are likely to run away immediately if such a project isn't presented to them in a way that makes them feel very comfortable; the fear of being inadequate vis-à-vis such a "difficult" framework would make people not touch it to begin with. The French call this phenomenon "l'embarras de richesse": the completeness of such a system would also be its evident complexity, whilst you must hide complexity instead, in order for the prospective user to give your IMS a chance.

In other words, part 1 (part 3 above; let's rearrange it top-down instead: 1 = project level, 2 = compound level, 3 = innumerable, independent items) should give access to compounds (trees, lists, e.g. from search results), but should be as clear as possible, whilst in part 2, all the possibilities should be waiting; it's evident such a system should start piano-piano here, whilst in fact, here would lie the incredible force of such a system.

In fact (and I developed this at length over there), today's THREE-pane outliners just put an intermediate flat list between tree and content/single item, instead of shuffling the tree into the middle pane and creating a new, master tree within pane 1. And you see from the implications of imagining such a second tree hierarchically below the first that, at a strict minimum, there should be some floating pane with a THIRD tree, FROM WHICH TO CHOOSE (i.e. single items, or whole trees/subtrees); that pane could then contain any subtree from tree 1, or also any search result - both, as said, to choose from, for tree 2, the single project tree you are going to populate. And of course, that "target tree" - the tree to be constructed, or then, afterwards, to be maintained from tree 1 (i.e. in tree 1, you select the tree to be displayed in pane 2, or in the special "source" tree pane) - should be able to contain part-trees from other trees, both in their synched, original / currently-maintained-over-there (i.e. in the original source) form, and in individualized forms, i.e. some items of the original sub-tree cut out here since not needed here, others changed here, and so on; I'm speaking of cloned parts, and of copied parts, or rather of cloned parts that get into a "just-copied"/augmented state later on, and this individualized for sub-parts of those originally-cloned sub-trees...

Just imagine somebody in the legal profession who, for some trials/proceedings, needs some legal provisions in their current state, and others in the version that was in effect at the time of the facts!

Thus, there should always be complete clarity of the respective state of deliberate de-synch (be it item contents, be it item versions, be it similar but partly different item groupings), in any which context, so part 2 will not only be basic lists of item IDs and the hierarchy info for the respective tree, but these tree-info-db's in db2 will contain lots of info...

And of course cross-referencing info: to sub-tree/heading, to item, to paragraph within an item... of whichever tree; you clearly see here the interest of separating item info from simili-item info, which in fact is entirely dependent on the respective occurrence of a(n even perfectly identical) item in multiple trees:

Not only does an item have some position in one tree and an entirely different position in some other tree, but the item in tree a might be cross-referenced to item xyz in tree c, whilst the same item in tree b might be cross-referenced to heading mno in tree pqr, and so on, in countless possible combinations.

All this is suddenly possible with that overdue separation of items and trees, and as I developed over there, in a corporate environment, there could be multiple item db's, but again, there should be a "management layer" between all those item db's, and their tree use.
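
To make the above less abstract, here is a minimal sketch, in Python (all names are mine and purely illustrative, not any existing outliner's API), of that separation: one store holds the independent items, another holds trees as mere hierarchies of item IDs, and the visible tree is rendered at run-time - so the same item can live in any number of trees, which is live cloning for free.

items = {   # item store: independent items; they know nothing about trees
    1: "Contract law notes",
    2: "Meeting with client X",
    3: "ToDo: prepare brief",
}

# Tree store: a tree is just nodes of (node_id, parent_node, item_id);
# the SAME item id may occur in several trees.
trees = {
    "Projects": [(0, None, 1), (1, 0, 2)],
    "ToDo":     [(0, None, 3), (1, 0, 2)],   # item 2 "cloned" here
}

def render(tree_name):
    # Build the visible tree at run-time from the item store
    # (assumes parent nodes are listed before their children).
    depth = {None: -1}
    for node_id, parent, item_id in trees[tree_name]:
        depth[node_id] = depth[parent] + 1
        print("  " * depth[node_id] + items[item_id])

items[2] = "Meeting with client X (rescheduled)"   # edit the item ONCE...
render("Projects")   # ...and the change shows up in BOTH trees
render("ToDo")

Cross-reference info, per-tree de-synch flags and the like would then hang off the tree nodes, not off the items - exactly the "lots of info" mentioned above.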

As it is, cross-referencing between items in different files, and then their maintenance beyond renames and moves, IS technically possible, but would take lots of "overhead"; the above-described setup of independent items, plus a structure maintaining multiple trees, together with any linking info, in a separate "just-trees-and-their-info" db, is by far both more functional and more elegant;

again, there's a construction problem, and then the problem of how to "sell" such a sophisticated structure to the user, "selling" meaning here: how to devise the gui in a way that the prospective user will start in it with confidence in his grasping it all, in time... ;-)


EDIT: Some more development of the "multiple trees, with live cloning, in just one outline db" concept in "Reply number 9" = post 10 in this RightNote thread here:

https://www.donation...ex.php?topic=37935.0

13
Hello, Rael (= the developer), Hello, prospects (and "you" sometimes means the first, sometimes the latter, but that will be evident from respective context),

I'm currently extensively trialling RN. Some first observations:

a) There should be a forum to ask questions in. (It's not mandatory for such a forum to be integrated in your site, so there are solutions which would cost you nothing.)

b) Why are the Tree shortcuts different from general standards, i.e. why is "bold" "Alt b" here, instead of "control b"? The SCOPE of the shortcut will determine its action, so there is no problem in changing it to the standard shortcut, and that I did, but there are many such idiosyncrasies in your shortcut assignments, i.e. there is a lot of manual tweaking work to do for a new user. (This Alt b instead of control b is just one example among many.)

b2) Tree: I miss "italic" and "underlined" (but there is color and background color formatting).

c) Tree: If I change shift-enter to Ins (via "Special") for "Add child note", it then does not work. Of course, the change is a matter of taste / personal appreciation, but the assignment should work if it's offered by the "Special" list of possible key assignments.

d) Tree: If you select an entry with the mouse, instead of by kb navigation, it will become underlined, even when you then change the selection to another entry/item. This is visually awful, and distracting. (And if there is any sense behind this, i.e. if "it's a feature, not a bug", please make it available by option only.)

e) "History" and "Recent" are totally awful, since those lists are populated by all those dozens of items you just touched a fraction of a second, by kb tree navigation, so those lists become totally useless since for identifying those "real finds" there to which you would like to navigate, using those lists as a "shortlist", you would have to read dozens of irrelevant "false hits". Solution to this problem is very easy: Just have items entered into those list JUST whenever they did appear for more than one second or such on screen (which is not the case for items "touched" by navigation only, and perhaps better, for more than 2 seconds (= opening of parent items), and even better, make it an option for the user to determine the length of display necessary in order for items to be included in those lists; I would probably choose 3 seconds then.

f) The looks: RN is currently one of the worst-looking outliners/PIM's out there (unless I'm missing some possible adjustments?). I really beg you to have a quick look at some competitors, in order to do something about this. Tree, content (with title/tags), history/search panes - you imagined it "purist", but it's just ugly. As a first step (if you want it purist), please consider the abolition of all those thin lines / thin frames, and the grey background. Especially the title and tags frames, above and beneath the content field, are almost unbearable, as are the "lines" made up from the grey background between them and the content frame, between tree and content frame, and between the latter and the search/history frame. (I'm speaking of its visual appearance in XP here.) Of course, such inferior appearance / visual appeal greatly harms the commercial outlook of any program, so it's important to work on it.

g) RN is MUCH, MUCH better than I had ever thought (except for the fact that a db-driven PIM should offer clones of course), but the help file does not reflect the power of this program (neither do the menu entries), and I invite newcomers/prospects to have a thorough look into the virtually endless "Tools-Customize Shortcuts" list, by which the hidden power of this fine program can be fully appreciated: There are many hidden gems there to be discovered! (It is one thing to just pretend, "RN's rtf editor is much superior to Ultra Recall's", e.g., but it's quite another thing again to discover the virtually endless possibilities in RN's editor(s), incl. not only tables, but extensive, powerful manipulation facilities for tables, too.)

h) The Boolean search problem (and I don't have to tell you how important such functionality, even without NOT or NEAR, is, for not getting endless hit tables). As said, there is a gulf between the power of this fine program and what the help file tells you about it, and I obviously did not try EVERY possible variant in my extensive trials with the search function. In fact, from my understanding, and from the help file, it seemed "Fast Search" was something LESSER than "full-featured" search, but for the time being, it seems to be the only search flavor that correctly processes AND and OR search terms. (My fault being, I seem to have left out this "Fast Search" in my frenetic search tries, and I'm indebted to PIMlover, from outlinersoftware.com, for having very kindly mentioned this point to me.) So, yes, indeed, AND and OR WORK, for the time being, but just in this special variety of search which you might have had a tendency to overlook from reading the help file. (It goes without saying that I hope this functionality will be extended to all three search flavors.)

i) There is no distinction in search between "just tree titles" / "just in the tree" and "overall" / "tree and content". If you remember just some key word(s) from the title, such a distinction would be more than helpful, since it would spare you perhaps dozens of "false hits" within the hit table, through which you would otherwise have to browse unnecessarily in order to find the one item you would like to work on, or need to access for reading.

j) Import and Export both seem very limited at first sight, but they both handle file hierarchies (even of rtf, and possibly html files, not trialled yet) to be built from the tree, and to build the RN tree from, and some other outliners/PIMs offer similar functionality, plus special competing formats; that means that for many outliner file formats, you will be able to import your stuff into RN, and even to export your RN stuff into competing offerings, whenever the necessity might arise. (Of course, I'll have to check the quality of html/docx export, which at the end of the day are the most important features in this respect, in order to further process the "product" you produce, from your stuff, in an outliner/PIM.)

k) I do not think (yet) the tagging function(s) is/are quite neat, but that's perhaps (partly) due to my possible misunderstanding of parts of it/them, but even then, the possible fact that I don't intuitively grasp that functionality, even with the help of the help file, should indicate that some work on this feature (group) could not do any harm ;-)

l) Tree: F2 currently opens a full-fledged item properties dialog, whilst in most cases, you would just like to adjust the item title a little bit, e.g. for eliminating a typo; so the regular F2 for "edit title in tree" function would be very welcome, and the current properties dialog could be opened by shift-F2 or whatever.
__________

All this being said, I redirected Rael, the developer, to this thread, in order to comment here, as long as he will not have got a forum on his own, and anyway, I'll post more findings about RN here, and I invite fellow users to do so, too, since from my experience, positive observations should be shared widely, and criticism should be made public, too, in order to sufficiently motivate the respective developer to amend sub-standard functionality.

As for the above, point e) would need immediate attention, since the current absence of a usable history function (or then, the presence of a history function that forces you to navigate by mouse only!!!) makes this otherwise fine program almost unusable... and then, point f), the looks, seems to be primordial to me. ;-)

14
For any better/additional idea, I would, that goes without saying, give full credit.

This being said, my musings certainly are of general interest, since even if you do just SOME macros, here and there, with any (free or paid) tool, the same probs will quickly arise, although to a lesser extent.

I

Now, as stated before, I've heavily been into AHK macroing lately.

Thus, I have to cope with numerous sw's internal key shortcuts / shortkeys, and also with their respective menu shortcuts (Alt-xyz).

And of course, I always try to assign identical / similar commands to the same shortkeys, in different programs, i.e. I either re-assign the original shortcut (if such re-assignment is available) to my "standard" shortkey for that function, or I intercept the (un-re-assignable) shortcut of a given applic, and then reassign it to my standard AHK shortcut for that function.

Now, bec of the various internal shortcuts in many progs, such a system will quickly create an almost incredible mess: In many progs, I've spent hours and hours re-assigning their multiple internal shortcuts, or simply doing away with them (which is a pity, and which is in order to keep them from interfering with my AHK macros... ok, with AHK, such interference is technically not possible, thank God, i.e. AHK key (combination) assignments will prevail, but from my personality, I'm not able to just "overwrite" internal shortkeys without bothering about such things, so I have to look them all up, one by one, and that notwithstanding any further development of my macro system: so you easily understand all this is a conceptual nightmare...).

As described here, trying to overlay your personal macro system onto your set of possibly 100 applics and tools of any kind is virtually impossible, so what can we do?

In this connection, I tried various ways of administering at least my own macros (let alone the internal shortkeys of every given prog): by sorting in spreadsheets, by putting comments into my macro lines and then filtering them by editors/regex, etc. - chaos it remains, and for every change in my macro system (and these occur daily), I would have to search for possible incompatibilities (with native shortkeys) in dozens of applics. As said before, it's a nightmare.
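
For what it's worth, here is the kind of helper I mean, sketched in Python (all the data is invented for illustration; the approach is the point): keep ONE machine-readable registry of your global macro keys plus each applic's native shortkeys, and let a script report the collisions, instead of hunting them by eye.

my_macros = {            # my global (AHK) assignments
    "F9":      "window switcher",
    "Alt+F5":  "paste boilerplate",
    "Numpad7": "per-applic slot",
}

native = {               # native shortkeys per applic, gathered by hand once
    "UltraRecall": {"F9": "toggle pane", "Ctrl+S": "save"},
    "Jarte":       {"Alt+F7": "Hot Connect", "Ctrl+S": "save"},
}

def collisions():
    # Report every native shortkey that my global macro layer would shadow.
    for app, keys in native.items():
        for combo, action in keys.items():
            if combo in my_macros:
                print(app + ": " + combo + " (" + action + ") shadowed by macro '"
                      + my_macros[combo] + "'")

collisions()   # -> UltraRecall: F9 (toggle pane) shadowed by macro 'window switcher'

Re-run it after every change to the macro system; that replaces the daily spreadsheet archaeology lamented above.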

II

So, what's a viable STRATEGY to macroing?

I've adopted this one: For every applic, I delete/reassign/note any Alt-Fkey combination, i.e. I do NOT accept any alt-F-key shortkey in any of my applics: I "need" them for my own macros, and this comprises Alt combinations.

Then, for every applic that is NOT about "calculation", i.e. except for calculators, spreadsheets, statistical sw and so on, I "sacrifice" the numkey block, and whilst /, *, -, +, numenter and numcomma/dot have global assignments, the ten digit keys there are all available for individual shortkey assignments of the particular applic in focus; it goes without saying that wherever possible, I assign often-used commands to numkey keys, whilst lesser-used commands "go" to Alt-Fkey keys.

Also, I've sacrificed the 4 keys F9 to F12 to "global scope", i.e. have reassigned any applic-specific key assignments from them to some other key (combination); the same is true for "special keys" like "PrintScreen", "Pause" and such.

This is to say that I deliberately refrain from doing "mnemonic" key combinations like "control-alt-p" and the like, instead having to memorize some "Num9" for some command, in order not to have to endure the above-described chaos, triggered by happily mixing up your own macros and "native" applic-bound shortkeys all over the "place" = all over your keyboard.

To tell you the truth, for the time being, I have also preserved (i.e. not yet reassigned) a dozen or so shift-control-xyz key combinations, all for global var(iable) toggles of my AHK system, and that obliges me to also check for such shift-control-combinations in any of my applications, but from my experience, in most such applics, these are very rare, so perhaps I'll even maintain those.

I also have got a Cherry 4700 keypad, since it's the only low-price keypad out there which is "programmable", i.e. to which keys you can reassign other key combinations which then will be interceptable by AHK: shift-control-alt-a... in my case, i.e. cramped combis never ever originally assigned by any applic of my knowledge.

III

From the above, you see that my point is: try to separate, as far as possible, your own macro system (which you will have to memorize, more or less, notwithstanding the fact that at the beginning, and for rarely-used commands, perhaps in rarely-used applics, you will need some sort of reference table system, be it on screen, be it on paper) from any possible shortcuts of your various applics - use (perhaps) shift-control keys, use Alt-Fkeys (and yes, for special commands not that similar to commands in other applics, why not assign them to "unused" Alt-combis in that particular applic? = not yet assigned to a (useful-to-you) command there, nor (and especially) to a menu shortcut over there)... and be brash* enough to sacrifice your numkeys in every applic where you won't enter digits more than here and there anyway - and any macroing will become so much more straightforward for you!


* = How come some Continental knows such terms? Well, it's a remembrance from an Auden poem in Visconti's Conversation Piece (1974, and no, not on YT (yet))

15
1

Jarte is the (free and paid) text processor for people who like Liberace. (But on that awful gui, you'll find 3 sets of icons; click on the left one in the central group, and then choose "Minimal design", and you will have got something far from beautiful, but perfectly palatable.)

Now even the free version has got a unique feature: By Alt-F7, you can theoretically trigger its "Hot Connect" command. For this to work, there are two steps:

Tools-Options-Hot Connect > dialog "Hot Connect Options", and there you first must check "Enable Hot Connect"

Then, it's important to know that the hotkey (Alt-F7, can be changed in that dialog) will not work within Jarte, for fetching the content of that other text window, but will work in that other text window, to duplicate its formatted (rtf) text into Jarte.

2

Now what is this feature good for?

Many 2-pane outliners just have got 1 text/content field (= the one of the currently selected item in your tree), very few have got 2 such panes (= a second one from some other item, not current any more from then on), by option, but even those almost never allow for concurrent editing in both panes.

Independently of this prob, many such outliners don't offer a good text/content pane, i.e. their respective component is primitive: just very basic rtf, but not much more. There are even former Ultra Recall users (or to be exact, at least one who told us so) who, for that reason of getting better text processing, switched to RightNote, an otherwise quite terrible applic which I wouldn't have for free (= so this proves the degree of suffering with substandard text panes).

Now there might be many other possible setups where "main applic plus Jarte" could be really and highly useful - please list them if you find any. (And yes, even writing such posts in Jarte, instead of the DC text entry field, might not be that bad an idea, especially when you live in the Congo and have frequent power cuts. Oops, I didn't find an automatic save-every-x-minutes in Jarte, but I might have overlooked it, and then, you can always save manually, or so you think...)

3

Again, what is it good for, finally?

Whenever you press Alt-F7 (see above), the rtf text will be replicated to Jarte (if it's running then), and instead of editing in the original applic, you do the editing in Jarte (and with Jarte's assumed better capabilities - of course, there will remain the question of how many / which ones of Jarte's additional goodies will then translate back to (e.g.) Ultra Recall's content field).

In theory, you do this by saving the text in Jarte (control-s), by which it will (in its then current state) be replicated back into the original applic: Brilliant!

4

And now for the probs:

This replication back into the original applic will put the text back there, but not trigger a SAVE there (which in the case of a browser would not be applicable anyway).

And your control-s does the replication into your original applic INSTEAD of saving in Jarte, not in addition to it (= so much for our Congo prob). (I then did Alt-F4 in Jarte, and it closed down without any warning, and when I reopened Jarte, among the recent files, there was no trace of this buffer possibly having been backed up somewhere, under perhaps some automatically-given name.)

In most (or at least very many) cases, you will not need the Jarte replica for better editing, but for editing an ALTERNATIVE text, i.e. you will want to switch the current text in your main applic to something else, for some combined source-and-target setup including Jarte:

I've got really smart readers here, so I could end my little review here, but just in case google brings some additional readers to this one, I feel obliged to explain further:

Control-s in Jarte will overwrite the current text in the currently displayed item/file/tab/whatever in your main applic (and of course I tried this: no warnings / "Do you really want to..." dialogs whatsoever ever appear), and in many a case (and in most cases in every workflow I could imagine) that would be one of your "source" items/etc. (And this totally independent of your using a 2-pane outliner as that source applic, or any more traditional sw.) (Ok, then you could try your luck with some control-z: best of British!)

And in most outliners, if by chance focus is in the tree (or in some other pane) when you press control-s in Jarte, that Jarte text will be put into the tree - not as a single new item, but, e.g., as dozens of new tree entries, one tree entry per Jarte text paragraph.

5

Now as you see, this INTERNAL Jarte "macro"/routine should have been done in a much more elaborate way, with lots of security sub-routines, checking the respective names of files/tabs/items, and then perhaps even switch over to those stored "names" defined as target in such a routine, then switching back to your current source (and also, triggering a save of that source-target applic).

But since the respective commands for such a thing highly depend on your respective main applic, you'll quickly see that in Jarte, internally, this would only be possible for some standard applics (e.g. Word, and perhaps some others), whilst done by an EXTERNAL macro, all this would be possible, more or less easily, depending on the respective accessibility of these "name" infos above, for AHK or other tools, and also on the respective accessibility of "go to" commands "from the outside", implying the (not always given) existence of such commands in that given applic.

E.g., not every outliner has got commands like "go to item x", or a "visited items' history", but even then alternative solutions are perfectly possible with tools like AHK: e.g. you make another file/tab, within the source applic, just to contain the target item (to be frequently updated from your secondary, editing applic, e.g. Jarte); then, your macro will just have to switch between applics, and between different tabs in one of them, and after the update there, it will shift back to your (then current) source tab - all this is easily possible in most applications "doing" tabs (i.e. for AHK, at least the respective file names of those tabs will be accessible).

Of course, if you build such a macro yourself, you won't need Jarte as target-editing sw, but any other such word processor with formatting will do.

6

I don't have to say that such probs arise because you want to have source and target before your eyes, concurrently, but in applics that don't allow for a second window, nor for a second instance to be loaded into memory. E.g. FF has got a very useful feature, accessible via the right-click menu on a tab: "Move to new window", and voilà, FF windows on two screens, without even running a second instance for that (which would be possible, too).

So we're speaking here of overcoming limitations of applics which at the end of the day are more or less inexcusable... but where for one reason or another, we're stuck with some bad prog since we rely on features not otherwise available (= in that particular combination, perhaps)...

But my point here is, with AHK (or similar), you can overcome such limitations rather easily... and now I'm going to write the macro... which will be around 50 lines, but just for the necessary (see above) security checks our Jarte routine evidently does NOT do. ;-)
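
As a taste of those security checks, here is the core guard, sketched in Python via ctypes (Windows only; the expected title is an invented example, and in AHK, a WinGetActiveTitle plus a comparison does the same in two lines): before the macro fires the replicate/save keystroke, it verifies that the foreground window really is the intended target, which is exactly what Jarte's built-in routine fails to do.

import ctypes

user32 = ctypes.windll.user32

def foreground_title():
    # Title of the window that currently has focus (plain Win32 API calls).
    hwnd = user32.GetForegroundWindow()
    length = user32.GetWindowTextLengthW(hwnd)
    buf = ctypes.create_unicode_buffer(length + 1)
    user32.GetWindowTextW(hwnd, buf, length + 1)
    return buf.value

EXPECTED = "target item - Ultra Recall"   # invented example title

title = foreground_title()
if EXPECTED not in title:
    # Abort instead of overwriting whatever happens to have focus.
    raise SystemExit("Wrong target window: " + repr(title) + " - nothing sent.")
# ...only now would the macro send the replicate/save keystroke...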

P.S.: In rare cases, it helps to try to install the (crippled, but not time-restricted) trial version, too, of your respective main applic, or then, a former version of that, but from my experience, most progs don't allow for such "in-house combinations", registry-wise or for whatever reason.

16
repub of http://www.bitsdujou...ry-pro#comments82213 :

S(yncovery, ex VeryComplicatedAncientNameIDon'tEvenRemember) has been on offer here quite a few times; in the past, I refrained from buying, since even half-price was "too much" for me, considering the "competition", which was half-price of that - obviously, that was a big, big mistake of mine, and I'll happily buy today.

Above, that was the executive summary, here's the longer story:

I

Most users do a lot of renaming of folders, and of moving files into other (sub-)folders (renamed beforehand, or not). Then again, they synch (if they are smart; years ago, I was foolish enough to NOT synch, and boom, one day I lost some data I very much cherished, and which I never could replicate).

Now there are quite some synch tools out there, some even for free, and for years, I have been using Vice Versa Free, which, graphically, is the most beautiful, and the most "functional", synch tool I ever encountered ("functional" by immediate "acting about things", i.e. by acting upon subsets of files, manually, by right-click menu, which implies the graphical presentation of folders and files there, and that presentation is absolutely outstanding - not functional overall, though, see below), and I trialled them all (= between 0 and some 200$, that is).

Now, why didn't I just BUY VV Prof., for living happily ever after? (= Is 60 bucks for the perfect synch tool too much? OF COURSE NOT!)

Bec those folks simply refuse to implement monitoring of renames and moves of files, or renames and moves of folders (which from the programming pov is NOT that difficult to implement; I did some 80,000 lines of code in my amateur-programmer's life, so I've got at least some MINIMAL credentials to speak out here).

Two (or three) effects arise from this refusal:

- wear-out of your hdd's: of course, it makes a difference whether your synch tool is "smart" enough to just rename/reassign 30,000 pics in the target directory, or whether it insists on copying/deleting 30,000 files, for hours, as my (graphically beloved) VV and 90 p.c. of S' competitors out there will do

- NO synch tool of my knowledge does verification of source file against target file (i.e. by checksums et al.), so when S really synchs just perhaps 200 really new files, instead of doing idiotic "synching" of 30,000 files of which in fact 28,800 are already there in the target directory, your risk of getting broken files in your "safe" repository will, with S, be divided by 1,000 (or by several hundred, to be exact ;-) )

- As I say here, I've done a lot of manual work within VV Free - why? Simply bec of my wanting to minimize those two aforementioned probs with synching - but anytime I do so, I do, for multiple minutes, completely unnecessary manual work, with VV Free on one screen and my file manager on the other, in order to replicate, at least, all possible folder renames in my target (parent) folder / target hdd / stick / whatever, in order to then minimize the real synching: THIS IS CRAZY LABOUR! And which S will from now on spare me, if it functions well! (See the sketch below for what such rename monitoring boils down to.)
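
Since "monitoring renames and moves" may sound like magic: it isn't. Here's the general technique, sketched in Python (my sketch of the principle, certainly not Syncovery's actual code): pair up the "new" source files with the "orphaned" target files by content fingerprint, then rename instead of copy+delete.

import hashlib, os

def fingerprint(path):
    # Content hash of a file, read in 1 MB chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_moves(source_root, target_root, only_in_source, only_in_target):
    # Files present only on one side but with identical content are
    # renames/moves, not new files: rename them in the target instead of
    # copying them anew and deleting the "old" copies afterwards.
    by_hash = {fingerprint(p): p for p in only_in_target}
    for src in only_in_source:
        old = by_hash.get(fingerprint(src))
        if old is not None:
            new = os.path.join(target_root, os.path.relpath(src, source_root))
            os.renames(old, new)   # cheap rename: no copy, no delete

30,000 "moved" pics then cost 30,000 cheap renames instead of hours of copying; real tools of course keep the fingerprints from the previous run in a database instead of re-hashing the whole target every time.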

II

Similar effects (i.e. for points 1 and 2, no means to avoid point 3 above) arise from VV's (and most other competitors') refusal to do delta copying, i.e. if you've got a 1 giga OL (Outlook) db file, it WILL make a difference if the synch tool of your choice just replicates the changed parts of that file (= which is "delta copying"), or if it will replace that 1 giga file, as a whole, again and again and again.

And yes, S DOES delta copying... and very few competitors out there do (a partial/"easy" synonym for delta copying would be "incremental synching", but a correct use of that term would apply to backup tools only).
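
Delta copying, reduced to its naked principle (a Python sketch with fixed-size blocks; the real thing, like rsync, uses rolling checksums so that insertions are handled too, which this toy is not capable of):

import hashlib

BLOCK = 64 * 1024   # fixed block size for this toy

def delta_update(old, new):
    # Rewrite only the blocks whose checksums differ; returns the
    # reconstructed target plus the number of blocks actually "copied".
    out, copied = [], 0
    for i in range(0, len(new), BLOCK):
        nb = new[i:i + BLOCK]
        ob = old[i:i + BLOCK]
        if ob and hashlib.sha256(ob).digest() == hashlib.sha256(nb).digest():
            out.append(ob)        # unchanged block: nothing to transfer
        else:
            out.append(nb)        # changed/new block: transfer just this one
            copied += 1
    return b"".join(out), copied

# A 1-giga Outlook db file with one appended mail -> a handful of changed
# blocks travel, not the whole giga.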


III

And now it's time to share my experience with "GoodSync", bought here, some time ago, half-price. Well, I don't want to do defamation, so I'm obliged to tell you that it was "GoodSync2Go" (so there's a chance the original "GoodSync" is better), BUT I had installed "GS2Go" on my main usb stick, from which I run my 6,000 (or is it 7,000 by now?) lines of AHK macro code without that stick ever failing me: So why does it fail me, every other minute, with "GS2Go"?!

In fact, I paid "GS2Go" here 15$ plus VAT, which is "nothing", BUT I *GOT* NOTHING, too!

In fact, GS is another one of those, very rare, synching offerings that claim to monitor file/folder renames/moves, for synching.

Well, sue me, "GoodSync", but I call you "BadSync" now - in fact, here and there (i.e. less than 10 p.c. of such files), such a folder rename/move was tracked indeed by "GS2G", and my files to be synched got a mention "renamed to xyz" when then I started the GS synch (and were then, I suppose, properly processed), but about 90 p.c. or more of my "moved" files - most of the time, not really moved, but just within a renamed/moved folder/sub-folder - were faithfully copied anew by GS, and which then were faithfully deleted in their respective folder of origin (as said, about 29,800 out of 30,000, or make it 10 p.c., i.e. 27.000 - all this abysmal).

And for such a "result", I had been prepared to live with with GS's gui, which is, from my humble pov, the worst I ever encountered (not just graphically: also, have a look at their not "opening" any other folder (= not even by option), above just 1,000 files listed, and which could you cause much probs, incl. data loss and all, within their source files' listing)...

And yes, I tried to discuss those probs with those people... without any success... they did not even answer me anymore...

And that explains why, being the "proud" (in fact, ashamed: I'd been a fool to buy!) owner of GS, I had reverted to VV Free, and then did all my necessary synch work about once a month (whilst it should be done every 2 or 3 days, let alone by the advocates of doing it daily, which of course is best!), within VV Free, for hours each time...

IV

So why didn't I buy S at full price, between bits offerings? Bec, after my experience with "GoodSync", I partially lost my faith in developers' claims: Better to lose 30€ (= 30$ plus VAT) on anything, than 60€ on some other offering that possibly doesn't live up to those claims, right?

This being said, I rate the risk of S not fulfilling its claims at about 5-10 p.c., not more, and I'll happily:

- buy upgrades then, full price (i.e. about same price as half-price here, today, for the full product), and

- tell you that S has entirely lived up to its feature claims, next time it appears here on bits,

if, and that's my 90-95% very strong expectation, S is capable of faithfully monitoring my folder renames and all that.

And yes, I just would be entirely happy if S tried to optimize its GUI... just have a look at VV in order to get brilliant ideas. ;-)

Btw, aforementioned VV probs are replicated with SyncBack, another commercial (and otherwise "renowned") synch tool, but which is worthless for me bec of these probs it comes with.

V

There's another aspect to consider in synching, which is "versioning". Well, of course, real versioning is quite another task, but some sort of "simili-versioning" should be possible with every good, paid synch tool: Not just "overwriting" previous "saves", but preserving those, either by an automatic numbering system within the original target folder, or by automatically shifting those previously "saved" files into an automatically created sub-folder within the target folder.

No need to say that this is the core difference between most free synch sw tools and the paid ones, and it seems that S executes this task in the best possible, optimized way (i.e. sub-folders, upon request). Such a feature is only really important for really important files (and will "eat" lots of unnecessary hdd space on your target medium if it's done "either-or, globally"), so my wish for S, to become really perfect, would indeed be: allow users to specify, within a synch job, which ones of your multiple folders/sub-folders will be replicated-with-replication-of-the-previous-saves, and which ones will just overwrite the previous savings - if S introduced such an individualisation of target folder treatment within the same synch job, it would be something really perfect - outstanding from the "competitors' crowd" it is even today (if, by all chance, it processes renames/moves correctly).


So you see, Syncovery will be the very best "individual" synch tool there is (i.e. in the non-corporate range, for individual users and at affordable prices) if both its delta copying and its renames/moves monitoring function as advertised, and so 30 bucks for this will be an absolute treat.

In every sw category out there, we've got a "winner", and there's every chance that Tobias' offering here is very much the winner in this category.

As said, I'll buy this presumably in-a-class-of-its-own tool today (and my given name "Schleifer" stands for burying bad sw), and I'll be eager to give S a rave review whenever it just does what it pretends to do, next time it appears here! ;-)

(And yes, I understand that for some users, compatibility with this or that online storage offering might be important, too, but CORE functionality and its perfect realization is my subject, and it very much appears that Syncovery Pro is the leader of the pack for that core functionality every paid synch tool should have in the year 2014 - bec in the year 2525, we'll all be dead, and we've got a right NOW to profit from what has been technically possible even years ago... and Syncovery Pro seems to be the only offering out there that really delivers. So what's 30$... even plus VAT? ;-) )

17
This is part of my AHK tutorial here,

https://www.donation...ex.php?topic=34948.0

but since GK will be on bits in some days, I searched for a review of that tool, not finding any though. Even here, the search term "karnaugh" just brings "You may have meant to search for Karna." So... but please, do NOT read on if you think it's a crime to treat two related subjects in one post.

Now, have a look into the wikipedia article on Karnaugh Map: http://en.wikipedia....rg/wiki/Karnaugh_map

Then, perhaps, follow some of its links: to the overview, of course:

http://en.wikipedia....olean_algebra_topics

But especially to Venn Diagram, since that's the really useful thing here, in many cases:

http://en.wikipedia....rg/wiki/Venn_diagram

And you can do this (and also 0/1 = Yes/No tables, etc., so-called "truth tables", but variants of them are also very useful for numerical variables, see below) on squared paper...

But following the link to the Quine-McCluskey algorithm will be instructive too,

http://en.wikipedia....3McCluskey_algorithm

since that's the more specific thing, for programmers/scripters, but we'll come back to K-maps' more general (or more specifically: rather deviant) scope in a moment.

First, have a look here, http://gorgeous-karn...ugh-for-programmers/

and try to understand the examples the developer gives there. (It is understood that professional programmers will look upon all my posts with deep repulsion, but we poor non-professionals have to find ways to get by, too, and that's why I think my posts can be helpful; so my invitation to "try to understand" addresses people like myself.)

You will see that the K-map enormously facilitates combinations of conditions... but of conditions, only, and that's the prob, for programmers/scripters.
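For us scripters, even without drawing the map, the pay-off is easy to check by brute force; a little Python sketch (my own example, not from the "Gorgeous" site): a K-map would collapse (A and B) or (A and not B) or (not A and B) into A or B, and running the full truth table confirms the two expressions are equivalent.

from itertools import product

def original(a, b):
    return (a and b) or (a and not b) or (not a and b)

def simplified(a, b):   # what the K-map grouping yields
    return a or b

# Exhaustive truth-table check, exactly like on squared paper:
for a, b in product([False, True], repeat=2):
    assert original(a, b) == simplified(a, b)
print("equivalent on all input rows")

A thicket of ands, ors and nots shrinks to a single "or"; with five or six condition variables, that is exactly the mechanical clean-up the map delivers.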

So have a look into the "Gorgeous" developer's site in general, and you will quickly see that not only the K-map was invented for electrical engineers (circuitry), but that department is where it clearly excells.

Now, the prob is, K-maps greatly refine input, but what about output? It's not by accident that the "Gorgeous" developer made up his examples (on the linked page above) with just a true-false = if/else structure, and here we are back at my second subject (I've written some 70,000 or more lines of code, so I had ample occasion to make architectural and construction mistakes, and then ample occasion to amend them).

In many "longer" routines (i.e. spanning over 1 or 2 pages; remind yourself: you should do a sensible max of subroutines in order to not repeat code, but see below), the first part is the "gathering" part, the second part being the "executive" one. In reality, that's not entirely true, but in so many cases, that first part has a very evident penchant to gathering data and to making decisions, whilst the second part more or less "DOES DO things", and that's why you should not totally mix up these more or less "natural" parts of a routine.

Of course, when there is a "check" result that will discard the routine, more or less, in two lines of code, do it at once, no prob: e.g.

else if (blahblah) ; some quick "discard" check
{
   aCertainGlobalVariable := 0 ; = you do e.g. a variable reset to its default value
   return ; you leave the routine
}
else if (blahblahblah) ; etc., etc.

But if a certain condition will trigger 5 or 10 or many lines of code, perhaps with sub-routine calls, returns from there, etc., you should use a GOTO: goto x, goto y, goto z..., i.e. you combine the elements of the trigger part, and then you combine the elements of the execute part; a minimal sketch follows below.
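
To make that more concrete, here's a minimal AHK v1 sketch of that two-part layout; the flag names, the labels and the MsgBox lines are made up, of course - they just stand in for your real checks and your 5, 10 or 20 lines of code:

; quick demo: set one flag, then call the routine
fileIsStale := true
gosub, MyRoutine
return

MyRoutine:
   ; part 1: gathering - just decide, don't DO anything yet
   if (fileIsMissing)
      goto, HandleMissing
   else if (fileIsStale)
      goto, HandleStale
   return ; nothing applies, leave at once

   ; part 2: executing - each label holds one coherent block
HandleMissing:
   MsgBox, re-creating the file... ; stands in for your 20 lines
   return

HandleStale:
   MsgBox, refreshing the file... ; stands in for your 5 lines
   return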

"But my prog language does not allow goto's!" (whining, whining)

No prob! That's what variables are for, among other things. (Have a look at the truth table above again, and bear in mind I said it's also great for numerical variables.) So, without goto, have it exactly as stated, but instead of goto x, goto y, goto z, have a variable jumpto (or whatever you name it, but have it local!), and then write

if (blahblah)
   jumpto := 1
else if (blablabla)
   jumpto := 2
etc., etc.

And then, for the execute part, have a similar conditional structure

if (jumpto = 1)
   ; ... your 20 lines of code ...
else if (jumpto = 2)
   ; ... another 5 lines of code ...
etc., etc.

And bear in mind, if the code of such a part runs to too many lines, making your routine flow over more than 2 pages or so, call subroutines, on other "pages" (= when printed out, or other screen "pages" / outliner items).

But now for the "see below" above, re "use subroutines". Well, there could be some subroutines, for code you will use again and again, on many occasions, but which ask for much specific data: If you can use global variables for that data, and/or if that data is within such variables anyway, very good: use the subroutines. But I have had many cases where I would have had to write many lines of data/variable assignments, just for the subroutine to get that data, whilst the subroutine itself, without that part, would only have been 2 or 3 lines of code. In such cases, it's not useful to multiply these subroutines, and this is only logical: Whenever you get many lines out of your routine by calling a subroutine, do it; but if such a call does not straighten out your code, don't use a subroutine, and keep your code - repeated or not - together. (Of course, a very good solution to such a prob would be a function, and indeed, you should use functions (instead of subroutines) whenever "applicable", i.e. whenever that's possible; see the sketch below.)
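
A minimal AHK v1 sketch of that difference; the file names and the two BackupFile routines are made up: the subroutine can only read globals, so every call needs set-up lines, whilst the function takes its data as parameters:

; subroutine way: 2 set-up lines for each and every call
srcName := "notes.txt"
trgName := "notes.bak"
gosub, BackupFile

; function way: the data travels with the call itself
BackupFile2("notes.txt", "notes.bak")
return

BackupFile:
   FileCopy, %srcName%, %trgName%, 1 ; 1 = overwrite; reads the globals
   return

BackupFile2(src, trg)
{
   FileCopy, %src%, %trg%, 1 ; src and trg stay local to the function
}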

A related remark: Even when you need code that will NOT be re-used for other routines, write a subroutine notwithstanding, whenever your routine "becomes too long", and then why not replace 10 or 20 lines of code by just two lines: the subroutine call, and a comment line; and even if that call will need 5 or 6 lines (because of data to be transferred, i.e. because now you need variables that, without breaking up your code, you would not have needed): if you can put 20 lines within a subroutine, that will have "got" you 14 lines less (in our example).

Also, don't make the "advanced beginner's" mistake of thinking that just a minimum of lines of code "will be best": Of course, there are some high-brow algorithms "no one" understands, except real professionals, but you see, these have been created by more-or-less-geniuses, and then they are used again and again, on multiple occasions, by many programmers, but within strict application scope, i.e. it is known (from high-brow books) how to put which data into them, and what then to expect where as the outcome... but those algorithms function as black boxes: No need to try to do the same, and by this to create algorithms that look elegant but then give faulty results... ;-)

Now back to truth tables, with their above-mentioned numeric variants (= technically, they are not truth tables anymore then, but they are really helpful indeed, whatever you name them, and all you need is one leaf of squared paper from the exercise book of your girl or boy).

In the above example I gave, i.e. several conditions "in", then several distinct procedures "out", in the second "half", well: In real life, it's quite a bit more complicated, and that's why I can't see the utility of K-maps here, not even for the "input", i.e. for the "first half" of the task... and in the second part, it's the same: programming is all about variants.

Which means, you will not have, as in the above link, dozens of factors and then just a yes or a no. With many of the main factors/elements, there will be secondary, subordinate factors, which will NOT influence the true/false outcome as in the linked example, and which will not determine which one of several main "outcomes" in the "execute" part will be triggered; instead, they will determine variants WITHIN these main "outcomes". And whilst some of these factors will apply to just one main outcome, and trigger a switch within there, other such factors will trigger similar variants within, or FOR (i.e. execution afterwards, another "goto" FROM there), SEVERAL such main outcomes, or even for all, or most, of them.

Now, how to manage such complexity? Very simple, by just "encoding" those variants, within both the first and the second part, by numeric variables, INSTEAD OF CODING the processes: First, do the right construction; then, in a copy of your code, write the real coding lines - but don't mix up the thinking about structure and the real coding, whenever it gets a little bit complicated.

Now, how many such variables? There are, let's say, 4 main outcomes, so var_a will get the value 1, 2, 3 or 4. Then, you will have a certain variant in some cases, wherever that might be, and you do var_b with its values 0, 1 (if it's no/yes), or 1, 2, 3... for several possibilities (and by defaulting the value to 0 beforehand), and again with var_c, etc.

So, your point of departure, in your code structure, is simply building up the logical structure. Then, "on arrival", you will not replicate the building-up structure from above, but you will build a logical structure

if var_a = a
else if var_a = b
else if var_a = c
etc.

and for each "if var_a = xyz", you think about possible variants, and then you either include them there, or you just "call" them there; i.e. some of these var_b = xyz cases should not be integrated within the main if's, but should be processed afterwards, i.e. you will not leave the "else if var_a = c" part with a "leave routine" command (= in AHK: return), but with a goto xyz command (or just with "nothing" if the goto target is positioned immediately beneath the main var_a structure), where var_b will then be checked.

And so on, i.e. you will have to understand (and then to construct, accordingly) that var_b is NOT (necessarily) "subordinated under" the var_a structure (but perhaps dependent on it, i.e. without any "hit" within the var_a structure, var_b will become irrelevant... but not necessarily so); it's just another logical category, different from the "var_a range" (with its respective values). And then, perhaps, var_c is clearly subordinated, logically, to the var_a range, whilst var_d is perhaps subordinated, too, but will only apply to 3 of 5 values in the var_a range, and var_e will only apply in one special var_d case, etc., etc. (The little sketch below puts this pattern together.)
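
Put together, a minimal AHK v1 sketch of that layout; all the item flags, values and MsgBox lines are stand-ins, of course:

; demo: set some "is" state, then run the routine
itemHasChanged := true
itemIsReadOnly := true
gosub, ProcessItem
return

ProcessItem:
   ; part 1: gathering - decide WHAT applies, don't do anything yet
   var_b := 0 ; default: no variant
   if (itemIsNew)
      var_a := 1
   else if (itemHasChanged)
   {
      var_a := 2
      if (itemIsReadOnly)
         var_b := 1 ; a variant, checked only AFTER the main dispatch
   }
   else
      var_a := 3

   ; part 2: executing - a logical structure all of its own
   if (var_a = 1)
      MsgBox, creating... ; stands in for your 20 lines
   else if (var_a = 2)
      MsgBox, updating... ; stands in for your 5 lines
   else
      MsgBox, skipping...

   ; the var_b check sits beneath the main var_a structure
   if (var_b = 1)
      MsgBox, ...and restoring the read-only attribute
   return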

As for the input structure, write this output structure down on squared paper, in order not to overlook possible combinations... but then, as said, not every possible permutation will make sense, so do your thinking "over" your squared paper, over your "adapted truth tables" (and yes, use several colors here). And then, when you write the code outline (see above), do it strictly from your paper design, and whenever you have doubts about structure, don't mess up your code, but refer to your paper again: Be sure before rearranging code lines.

You might call my style of coding the "variable-driven" style, meaning variable values as pointers; you multiply such values, instead of "doing things", and then you check for these values again... but by this, you'll be able to structure your programs' actions in perfect logic, which greatly reduces construction probs. Professionals might have other programming styles, but then, they might even understand Quine-McCluskey: do you? I don't. But so what: We've got a right to write elaborate code, too, don't we? (And yes, doctors hate the web, and no lawyer's happy when you have read some relevant pages before consulting him - it's all about "expertise" and laymen's challenging of that.)

And finally, in languages like AHK, you can then even replace some of the guide variables with goto's, again, before writing (in order to save the "if/else if" lines "on arrival"); and no, don't call execute-part sub-routines from the first, the "gathering" part: prefer to write some unnecessary lines, and then put those calls deep into the second part, precisely at that position where that call logically belongs, might it be deep down:

Program in perfect, understandable, VISUAL LOGIC, even if that means lots of "unnecessary" lines.

And yes, there might be even 1,000 programmers worldwide who really need Gorgeous Karnaugh (and legions of electrical engineers), but for the rest of us, it's the same problem as with the Warnier system: It cannot guide us all the way. (And yes, I know you can apply the K-map to conditional structures, to multiple else-if's, etc., but that doesn't resolve the inherent prob: it's too confined to a minute part of the structural prob; as said, similar to Warnier: it's a step beyond the chaos it left behind, but then you'll become aware of its limitations.) And yes, try Venn diagrams if you like them visually; I prefer squared paper, and then the combination of a "checklist" with an outline, and then "manual thinking work upon that". (And yes, you should keep and reference your paperwork.)

Even professionals who laugh about such structural devices should consider the possibility that their customer, in some years, will have to put lesser people on the code they will have left behind for them to understand, and with which they will then have to cope. It is evident that a more "primitive" style, but one which is highly recognizable in its actions, will be preferred both by the customer and by his poor coder-for-rent then. And yes, I know I explained my style of procedural scripting here, object orientation being something else yet.


EDIT:

I forgot, above: You can further "simplify" your variable encoding (and shorten your code), whenever var_x is really and unequivocally subordinated to some other variable, or to just one/some value(s) of a specific other, by changing the regular values of the "priority variable" to intermediate values; and in some cases, this is even useful - but in most, it's not...

Example: var_a can have the values 1, 2, 3, normally. But in its value-2 case, there is var_k with values 0 and 1, or var_m with values 1, 2 and 3. Now you can change the values of var_a to 1, 2/3, 4 instead: 2 being original-2 with var_k = 0, and 3 being original-2 with var_k = 1; 4 being original-3, of course; or 1-5 for the var_m values 1, 2, 3 instead.
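
In code, a minimal AHK v1 sketch of the var_k case, with the packed values from the example above (the MsgBox lines are stand-ins):

var_a := 3 ; try 1, 2, 3 or 4 here

; packed encoding: 1 = original 1; 2 = original 2 with var_k = 0;
; 3 = original 2 with var_k = 1; 4 = original 3
if (var_a = 1)
   MsgBox, case 1
else if (var_a = 2 or var_a = 3)
{
   MsgBox, the common part of the original value-2 case
   if (var_a = 3)
      MsgBox, ...plus the var_k variant
}
else if (var_a = 4)
   MsgBox, case 3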

As you can see, this structure flattens out your if / else if (in your routine, you would ask for if var_a = 1, else if var_a = 2 or 3 (and then, within that code, if var_a = 2 / else)), but it also complicates the logical structure, so for most cases (where originally I happily used it again and again), I would never ever touch this anymore.

On the other hand, there are specific cases where I really love this encoding, and where I use it even today, to best effect, and without it getting my structural thinking mixed up: It's indeed where two factors are deeply "interwoven", to the effect that none of them is "superior" to the other, and where, in an outline, I would have a hard time deciding if it's element-of-kind-1 as multiple parents, and then element-of-kind-2 as its child, or the other way round.

Here, I systematically do just ONE var_a, and then the odd numbers (1, 3, 5...) are aspect 1 without aspect 2, and the even numbers are aspect 1 with some value, but with aspect 2 present, too. This is a valid construction only, I think, when aspect 2 is a toggle only: to have ranges 1,2,3, then 4,5,6, then 7,8,9 would again be chaotic. (See the little decoding sketch just below.)
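
Decoding such an odd/even packing is just a bit of arithmetic; a minimal AHK v1 sketch (Ceil and Mod are built-in AHK functions; the rest is made up):

var_a := 4 ; try other values here

aspect1 := Ceil(var_a / 2)     ; 1,2 -> 1 ; 3,4 -> 2 ; 5,6 -> 3 ...
aspect2 := (Mod(var_a, 2) = 0) ; even number = aspect 2 is present
if (aspect2)
   MsgBox, aspect 1 has value %aspect1% - WITH aspect 2
else
   MsgBox, aspect 1 has value %aspect1% - without aspect 2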

But as said, even today, I love such structures 1,2, then 3,4, then 5,6..., whilst I would not redo those above-mentioned "truth tables" with entries of 8, 9 or higher numbers, generated by my previous, too excessive combining of several such variables into just one, and then counting their values higher and higher.

As said, "more elegant" style can be less readable, and "multiplication" of such "pointer variables" might not be high-brow, but assures perfect readability, so today I do it in a more "primitive" way than earlier, and since this is another one of those multiple, counterproductive "I'll do it in a more sophistaced way" traps, it's worth mentioning that I today, from experience, refrain from it, except in those even-odd cases, and even those are debatable.


Oh, and I forgot some explanation: In the course of my programming and scripting, I discovered that (always in non-evident, i.e. a little bit longer routines) the "gathering" stage ("what is the "is" state? if, else, multiple else if's") usually has a "logic" that is totally different from the "natural execute logic" thereafter ("do this if, do that if..."), and my above-described system completely CUTS those two logical structures apart; or in other words, it enables you to build the "ToDo" structure as naturally as possible, without undue consideration for the status quo.

So my system - other systems might do the same, so we're not speaking about superiority over other ways of coding, just about superiority over spontaneous coding - puts a level of ABSTRACTION between the "real conditions" and the execute part, with its "procedure conditions", which in most cases do not have that much (if anything) to do with the former. Whilst constructs like "if x, do y" (= "all in one") mix it all up.


And perhaps I should more clearly indicate what the "status quo" is, what the "gathering" is. In fact, it comprises the "what do we want to do?" part, the DECISIONAL part, but that is - and that's the "news" here if I dare say so, "news" for beginners in programming/scripting at least - NOT identical with the "how is it to be done?" part, and that part will then have a logical structure all of its own; hence the "necessity" for abstraction between the two - and a real necessity, without any quotes, whenever you aim at producing highly readable code... whatever means you apply to that aim; mine, described here, is just one of some ways to realize that necessary abstraction.


And also, I forgot to specify that in fact, the "check list" is about the "what is, what should be done" = part 1, and the logical structure (= the more or less "truth table") which is checked against the checklist is, in most cases, part 2 = "which way do we do it"; or, in complicated cases, we need that, in a formalized manner, for both parts. In real life, you'll do it not that formalized in most instances, and even when it is necessary, you'll do it just for the parts that really need observation and thorough checking whether every possibility has been dealt with.

The "message" in my description being: keep the "task" (= together with the analysis of what the task will be) separated from the execution of the task, and in most cases this means multiple crossings of the "lines" from a certain element in part 1 to the "treatment", the "addressing" of that very same element in part 2 (where in most cases, it has become something very different anyway). Or in other words: the logical grouping in part 1 is very different from the "steps to be done" grouping in part 2, or at least should be: hence the necessity to build up TWO logical structures, for just one (compound) task, and to coordinate them, without making logical errors, and without leaving "blanks" = dead ends = cases not treated.

And in this coordination work, squared paper helps enormously, whilst "input simplification tools" are not really helpful, in most cases, since they counteract your need to realize variants in part 2 from variants in part 1: You will need those variants there, again (but in other constellations), instead of having them straightened out in-between. In this respect, it's of interest to know that K-maps are the tool of choice for signal processing in alarm systems and such, since there, multiple combinations of different signals will trigger standardized reactions = transmission of other, "higher-level" signals, but only in combinations; and this is a task quite different from traditional programming, where we don't have "complicated input, and then standardized output", but "complicated input, and even more complicated output".

18
Well, posting general considerations in a DO fanboy thread was naïve, since they could only be met there with admirers' blah-blah, not addressing the underlying conceptual considerations. So I take the liberty of opening a new thread for those, be there some arguments to come or not.

Unchanged, that other post:

Features, not benefits

Andus is right, and I want to spit in the soup anyway, or so it might seem to some.

- Update scheme: I think 2007, 2011, 2014 (full 3 years between 2011 and 2014) is acceptable. It's just that people buying on bits (or worse, paying full price when the current major version has been on the market for some time already) are a little bit fuc**** up, since the asking price is high, the update price is high, accordingly, and on bits (that's my impression at least), this prog tends to be offered whenever the next major update, without being imminent, is not THAT far away: was it spring / early summer 2013, last time, i.e. 2/3 into the "life span" of the then-current major release? And this means, if you want to profit from this prog, the release of a major update is the time to buy (at full price, unfortunately, but full price for 3 years of it being up-to-date is better than paying half-price or more for just 10 months, right?).

- Scriptability: Well, that's been offered before, and also by XY, SC and some others. As I said before, I don't see the real value in a given file commander's scriptability, since I've got an AHK macro system which gives me access to almost any "command" I am in need, or just in want, of, from ANY file manager today, and also (here I have got some more work to do, though) from "anywhere where it applies", i.e. it's possible to access the respective file M functions from within another applic, from within which there would be an "interest" in having access to the respective file M function, i.e. without switching to your file manager and then triggering the relevant commands from within there.

So this is a common prob of all those scriptable file managers: Finally, they let you do lots of elaborate macroing, but from within that given file M, and I prefer it light, smooth: Why would I have to bother with a file manager, or worse, with a specific file manager, when I can do the work in a much more elegant way? (= Any file manager is an additional access gui to what you want to do, under the hood, so being able to have it done under the hood, without going by a file manager for that specific task that arises within your REAL task, the one you are doing in your main prog at that time, IS a much more elegant way to do things, with no file manager coming into your way.)

And as said, I do such things from ANY file manager even today, so for me, even FC XE does not have any limitation, e.g. compared with X2 (I don't own DO but don't see any additional functionality in there): from both (and from several others), I trigger the same AHK commands, instead of relying upon what one file manager might be able to do, and another not. (A tiny sketch of the principle follows below.)
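
For illustration only, a minimal AHK v1 sketch of such a file-manager-agnostic command; the hotkey and the editor are made up, and it relies on the fact that after Ctrl-C in virtually any file manager, reading the clipboard as text yields the selected files' full paths:

; Win-E(dit): open the file(s) selected in ANY file manager in your editor
#e::
   ClipSaved := ClipboardAll ; save whatever was on the clipboard
   Clipboard := ""
   Send, ^c ; let the file manager copy the current selection
   ClipWait, 1
   if (ErrorLevel) ; nothing arrived within 1 second
   {
      Clipboard := ClipSaved
      return
   }
   Loop, Parse, Clipboard, `n, `r
      Run, notepad.exe "%A_LoopField%" ; or your editor of choice
   Clipboard := ClipSaved ; restore the original clipboard content
   return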

- As said before, there is additional functionality both with XY and with DO, for pic browsing, and I use XY extensively for this, but at the end of the day, hadn't I my license to it, I'd do it with XN or other freeware; and Fast Picture Viewer is also there, as the superior prog to anything else. (I've been extensively using the free version for years, because the version has been 1.95 for years now (or so it seems to me), and I'd be happy to buy 2.0, but would have been extremely unhappy to buy 1.95 just some months before 2.0 came out. In other words, the developer will have prevented many prospects from buying, by having upheld his 1.95 version number for years or so now, and I very much hope this will change soon, i.e. he will bring out 2.0, finally.)

I also said this before: general preview, both with XY and DO (i.e. not only for pics), seems to be superior to their competition, but then, in order to be fully functional (and unparalleled), DO's preview pane has to be spiced up with an external add-in that costs (if I remember well) 40 euro, i.e. some 55 dollars. Also, even if you're willing to pay for that upgrade, too, you'll realize that most of those additional formats you just bought are legacy formats, often from the VERY early days of personal computing...

So here is a viable concept, but one which lacks realization quality for today's needs - many current file formats you would be eager to have preview access to in your file manager are simply not available here (and I don't have the impression that there is much development going on on that subject, either).

- Also, in both DO and X2 (not XY, as said before: they have their own, proprietary format for that, and in TC, it's even weirder, with their maintaining the venerable descript.ion format), the ADS-NTFS meta data concept (which has been dumped by MS) has been upheld (and probably, it's even fully interchangeable from X2 to DO and vice versa, but that's not for sure in every possible circumstance), but both are unwilling to discuss specifics: Whilst XY just isn't helpful, DO's lapdogs, in their usual style, even become rude when some paying user (not me) dares to bring up the subject. So, at the end of the day, it's more than doubtful that ADS file attributes could ever be an argument pro DO or pro X2, since you will never know what their respective intentions with that are, let alone those of MS.

- I said this before: File managers like XY, X2, DO, SC, etc. try to justify their price by integrating some additional functionality into them, e.g. batch rename, file search, and even some file synch, duplicate search, etc. As said before, most of the time, there are (even free) tools available that do this even better (= with more options to choose from, i.e. more fine-tuning allowed), and so what really remains from this is their argument that it's more convenient to have it all in one applic.

Well, I don't share this pov, for a double reason: I prefer having the very best tool available for any given task, not some tool from the file manager which more or less limits what I want to achieve; and even those tools "integrated" into those file managers are often just triggered by the file manager in question, appearing then in their own, additional window, which means they are not as fully integrated into the file manager as they try to make you believe - and then, if there is no full integration anyway, why not trigger, by a shortkey, the additional tool of your choice to do the work in question?

Here, the promised convenience seems to lie in the fact that most users don't have immediate access to anything, by simple shortcut and/or custom menus (AHK again), but would have to fiddle with opening those additional tools, instead of them being instantly available. So, for me, this argument of "the convenience of having all (???) needed tools in one tool box" falls flat, all the more so since, again, at least in some cases, non-file-manager-dependent tools of the same kind are MUCH more sophisticated than the (more or less) "integrated" ones.

- So, from my pov, it's just looks, and yes, DO's pretty, and given that, perhaps it's not that bad an idea to buy DO 11 LITE on bits when it reappears there, since that would add another file manager to your collection that might be a pleasant-looking replacement for FC XE (or anything else), for triggering your AHK commands from there, like you would do from any other such file manager. Just don't expect file managers to be "complete" in any way, since even the most expensive ones are just compromises in anything they do. I did buy that bunch of file managers over time, for lack of understanding then that even by being willing to buy them "all", and at any price, I would NOT get really good stuff, not even by combining their respective highlights, by switching between them. And in conclusion, I seriously think that especially fervent DO advocates have not yet seen this evidence:

You will have to build up your own toolbox; by waiting for some perfect toolbox, comprising it all, and delivered to you, at any price you're willing to pay, you'll just grow old - and that includes DO, notwithstanding their snooty pretending otherwise.

And now some additional remarks, re that 1001st DO thread:

"just a pint of beer frequently cost >= $15 USD"

As we all know, Norway (the country of the poster of that "argument"), as well as Switzerland, has highly inflated prices AND highly inflated wages, so almost ANY price will be considered "cheap" in direct comparison with local prices and local wages in those countries (and in oil countries: Kuwait, etc.). BUT THAT IS NOT THE POINT. The point is, what does a 100 bucks sw DO MORE for you than, say, a 0 bucks sw? I.e. it's all about comparing apples with apples, not with the purse of your Lady.

"Auto filters on folders?  That's just plain cool."

Of course it is, but then, again, this automatic filtering of files should be accessible from within your main application, automatically presenting you a dataset reflecting what you're doing in your real work environment: Any file commander is just INTERFERING, badly or slightly, but interfering, with that "natural workflow" which in most cases, today, has NOT yet been realized for most people.

Of course, for people JUST doing file M, the "mileage" varies, but for most of us, a file manager is something that gives us access to some "material FOR" our real work, and for this task, today's file managers do NOT "deliver", or not in a way any manager who's into optimization of business processes could ever find even reasonably satisfying. (Developers are more or less aware of this fact, so they invent "virtual folders/collections/whatever", but they lack the imagination to do this in any really useful way.)

Btw.:

We've got a similar prob with MS Excel, which is misused by legions of managers for "data crunching", "data analysis", as a data BASE, and so on, more appropriate sw not being (financially, organizationally or intellectually) available (for most people) - with the difference, though, that in file M, there does not even exist such an ideal solution... because here, the ideal solution would be complete integration of all relevant file M functionality into what you're REALLY DOING. And it seems that this paradigm is more than most people's minds can cope with, conceptually, hence the blatant absence of relevant solutions.

Thus, DO might be "best" or "among the best ones" within very low confines, within a really flat world.

19
(I shifted this post from the "bury your software here" sub-forum.)

I revive this 5-year-old thread since I wanted to see reviews for it. Now they've got an editorial "review" from PCWorld, and a user review on CNet (no, not mine), both 5 years old, and the user told them the awful dark tree background had to be done away with - no reaction to this in 5 years; they've got 13 screen shots on their website, 5 years later, and every one of them makes you wish to flee - this software's gui is incredibly ugly - which is probably a shame since it has much functionality, of which much cannot be found elsewhere in that combination. But of course, developers "so bad in looks" should better listen when kind people give them some design advice, no?

And what about having changed the db engine in 5 years? No info on that (none that I would have found, at least).

And then, why did I search for a review of it in the first place? Because they had had the chutzpah to more or less ask for a review from Prof. Kühn, in his famed takingnotenow blog, and said he might download a trial from their website for this.

Of course, my immediate, spontaneous reaction to this was thinking, are those people nuts? I think that if you want a review on a blog that will give your product lots of exposure, the most basic thing on earth would be to offer a free version to the blogger, instead of inviting him to download a trial himself.

Some people are so bad at marketing (incl. not listening to the only real reviewer they've got in five years) that their lack of success cannot but be worded as "they buried it deliberately".

I'm writing this in order to kindly invite other developers to learn from this exorbitant example of what-not-to-do.

And oh, yes, they raised the price from 22 to 50 $, but that's not the problem at all.

But allow for my shifting this post into the general software forum, since, as said, it is meant as a general example.

20
I kindly ask for some knowledge about folder symbols that some experts here might have; I'm going to trial some tools and am very willing to share my experience with them afterwards; it's just that I don't have any knowledge on this subject yet, so I hope to better adjust my approach with some additional info.

There are some tools; one free tool just has got the same (closed) folder symbol we all know, in just a few (ugly) colors; I would need many more, and especially many more (and brighter) colors for that same symbol. It seems there are many symbol collections out there, both free and paid, so the question here would be in what format such a symbol should / would have to be imported into the tool, in order to be as tiny and problem-free as possible.

Then, there is a paid tool in two versions, where the free (or cheaper) one just replaces your folder symbols on your current system, but not for any use, for example by usb stick, on another pc (where the symbols of the folders in question you might have re-colored before would then revert to the original yellow).

Of course, this intrigues me, not because I want to just buy the cheaper version but have the functionality of the more expensive one, but because I would like to understand what's going on.

Of course, I know I can replace folder symbols by delving into the right-click menu, without buying any tool for this, but here again, the question arises whether such a replacement will only work on the current pc, in light of what can be concluded from the different versions of the above-mentioned tool.

Why would such replacements (another symbol and/or just another color) not be persistent to begin with? What does Windows do with such symbols - meaning, does it process them in any way, instead of just displaying them?
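
(From what I have read in the meantime - take this as an assumption to be verified, not as expert knowledge: when you assign a custom icon via the right-click menu, Windows just writes a hidden desktop.ini file into the folder, pointing to the icon file, and sets the folder's read-only attribute so that Explorer bothers to read that file. On XP, its core would look like this - the path is made up:

[.ShellClassInfo]
IconFile=C:\Icons\red_folder.ico
IconIndex=0

On Vista and later, a single line IconResource=C:\Icons\red_folder.ico,0 replaces those two. If that is so, it would also explain the usb stick prob: the path to the .ico file is stored as-is, so if the icon file lives on c: of pc number 1, pc number 2 has nothing to display and reverts to the standard yellow - whilst a relative path, pointing to an .ico stored on the stick itself, should survive the move.)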

Of course, we have the problem here of two concurrent folder symbols, closed and open; I don't assume I would be able to use them both; I just would be happy to have the "closed" symbol, but in as many different colors as I need them.

And if Windows does indeed do some processing of such folder symbols, the question arises whether this processing differs between my XP, Vista, 7 and now 8.

And finally, it would be of interest to know whether such new symbols work invariably in any file commander or whether, depending on the file manager in question, you could be up for bad surprises, again, in some cases, because of some special processing the layman would not expect to be done behind the scenes.

That's why I would greatly appreciate some more info, perhaps also via links I didn't find (yet) on my own.

21
For Q-Dir, there is a common problem with partial underlining, depending on the settings for "compute directory size". This problem is addressed wherever you search for "Q-Dir underline".

I am NOT speaking of this particular problem. I am speaking of the GENERAL problem that EVERY entry in Q-Dir is underlined FULL-LENGTH on my system (XP), where every other file manager I bought does NOT underline these entries, be they folders or files.

Of course I asked the developer, but he did not respond, just deleted my question from his blog; and there is no answer to my question world-wide, and it goes without saying that I spent HOURS trying to get rid of this underlining.

So I would delete this Q-Dir shidt from my system, were it not for its being able to display 4 (!) different folders simultaneously, which is not possible with any of my expensive, paid file managers.

Hence my question here how to get rid of general underlining of every entry, folder or file, in Q-Dir.

Thank you!

Do you know of other web sites where I could post such a question? I already did on superuser.com, but don't know any others.
