
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - evamaria

See simultaneous, previous post!

Programming solution of the original problem

We have to distinguish between "Moved Lines"

from what "they" say, these programs can process them "properly" (this notion will have to be discussed later):

ExamDiff Pro, Code Compare, Meld, WinDiff, WinMerge, UCC, Compare Suite (?), XDiff/XMerge

As said, I tried WinMerge to no avail, but I tested with blocks, so perhaps it works with just single lines (and see below)...

and "Moved Blocks"

here, we can assume that no tool will be able to process these properly if it isn't able to process moved lines, to begin with, so this group should be a sub-group of the above.

Also, there might be "Special, recognizable blocks"

Which means, some tools try to recognize the used programming language in your text, and then, they could try to "get" such blocks or whatever such tools then understand by this term, and recognize them when moved... whilst the same tools could completely fail whenever normal "body text" is concerned.

In the case of WinMerge, perhaps this tool is an example of this distinction-to-be-made, but then, it would be helpful if the developers told us something about it in the help file; as it is, WinMerge does NOT recognize moved blocks in my trial.

The Copy vs. Original problem

In my post above, I mused whether I had overlooked something important, since the solution I presented is easy to code, so why the absence of proper moved-block processing in almost all (or perhaps all) relevant tools? I know now:

In fact, my intermediate solution relies on working on at least one copy of the original file, on the side where moved blocks are detected, and then deleted in order to "straighten out" the rest of the file, for any "non-moved-blocks" comparison.

On the other hand, all (?) current "differs" use the original files, and (hopefully) let you edit them (both).

The file copying could be automated, i.e. the user selects the original, then the tool creates a copy on which it will then work, so this is not the problem; also, editing on a copy could then be replicated on the original file, but that's not so easy to code anymore.

Now, we have to differentiate between what you see within the panes, and what the tool is "working" on; or rather, let's assume the tool, for each of the two files, will process, in memory, two files: one with the actual content, the other for what you see on the screen.

Now, I said, the tool should delete all moved blocks (in one of the two files, not in both): Yes, it could do this within an additional, intermediate copy, just for speeding up any block compare later on, OR it could do these compares on parts of the actual file, reading, from a table/array/list, the respective lines to compare the actual block to:

- first, write the block table (array a) for file 1 (by the BlockBeginCode, e.g. Alt 0165): block 1 = lines 1 to 17, block 2 = lines 18 to 48, and so on up to EOF (end of file); also, write the very first chars, perhaps from char 1 to char 10, of the block (our block 2 example: chars 1 to 10 of line 18 of file 1) into the array
- then, write the block table for file 2 (array b), in the same way

- then, FOR EVERY block in file 1 (= for block 1 to block xxx, detailed in the corresponding line x in array a):
-- run array b (= the one for file 2), from line 1 to line yyy, meaning: check, for any line in array b, if the first characters of block y (= the characters from the line "startnumber of block y" in array b, put into the array earlier) are identical with those read from line x in array a
-- and only IF they are identical:
--- on the first hit for this block x, read the text of the lines of block x of file 1 into buffer "a" (for non-programmers, that's a file just existing within working memory), and in any case:
--- put all lines of this block y of file 2 into a buffer "b",
--- and run a subroutine checking if the respective contents of buffer "a" and of buffer "b" are identical (in practice, you would cut up this subroutine into several ones, first checking the whole first line, or better, the whole first 50 chars, etc., breaking out of the whole subroutine on the first difference)
--- and IF there is identity between the two buffers, the routine would not delete the corresponding block y, but put a "marker" into both line x of array a and line y of array b
-- and just to make this clear: even if there is identity for this block x, all remaining y blocks will be checked notwithstanding, since it's possible that the user has copied a block, perhaps several times, instead of just moving it

- then, the routine would have unchanged files 1 and 2, in its buffers 1 and 2, respectively, but would have all the necessary processing information within its arrays a and b
- it would then create, in buffers 1a and 2a, files 1a and 2a, the respective "display" files onto which then to process

- then, it would not only process both display panes, according to the array info, but also the display of the "changes ribbon" (which most "differs" also display):
- the prog then checks for all those "markers" in array a, and will display just a line, instead of the block, saying something like "Block of n lines; has been moved in file 2", or "Block of n lines; was moved/copied 3 times in file 2"
- similar then for all "block markers" in array b: the display just shows a line, saying "Block moved/copied here"

That's all there is to it. Bear in mind the program will constantly access both arrays (which in reality are more complicated than described above), and thus will "know", e.g., that this specific "Block moved" line somewhere in pane 2 both is line 814 of buffer 2a, and in reality represents lines 2,812 to 2,933 of buffer 2 and actual file 2. And so, if the tool is coded in a smart way, it would even be possible to move that "Block moved" line, manually (= by mouse or by arrow key combination), on screen, to let's say line 745 of screen buffer 2a; the program will then properly move lines 2,812 to 2,933 of buffer 2 to right after the line in buffer 2 which corresponds to line 745 in buffer 2a.
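For the programmers among you, the two-array pass above might look roughly like this in Python. This is just a sketch of MY idea, assuming "¥" (Alt 0165) as the BlockBeginCode and a 10-char prefix as the quick pre-check; all function and field names are my own inventions, not from any existing "differ":

```python
# Sketch of the moved-block detection described above (hypothetical;
# assumes "¥" starts each block, 10-char prefixes as the cheap pre-check).

def block_table(lines, begin_code="¥"):
    """Array a/b: one entry per block, holding start/end line numbers,
    the first 10 chars of the block's first line, and a "moved" marker."""
    starts = [i for i, ln in enumerate(lines) if ln.startswith(begin_code)]
    table = []
    for n, s in enumerate(starts):
        e = starts[n + 1] if n + 1 < len(starts) else len(lines)
        table.append({"start": s, "end": e,
                      "prefix": lines[s][:10], "moved": False})
    return table

def mark_moved_blocks(lines1, lines2):
    a, b = block_table(lines1), block_table(lines2)
    for x in a:                                 # for EVERY block in file 1
        buf_a = None
        for y in b:                             # run through array b
            if y["prefix"] != x["prefix"]:      # cheap 10-char pre-check
                continue
            if buf_a is None:                   # read block x on first hit only
                buf_a = lines1[x["start"]:x["end"]]
            buf_b = lines2[y["start"]:y["end"]]
            if buf_a == buf_b:                  # full compare of both buffers
                x["moved"] = y["moved"] = True  # marker; block is NOT deleted
            # keep scanning: block may have been COPIED several times
    return a, b
```

The display layer would then replace every marked block by its one-line "Block of n lines; has been moved..." summary, exactly as described above.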

As you can see, there is no real problem whatsoever in implementing my way of "moved-blocks processing" into existing "differs"; it's just a little bit more info to be put into those arrays that are already there anyway, in order for all those "other differences" and their encodings to be stored after checking for them, the bread-and-butter tasks of any "differ" out there.

Hence, again, my question: where's the tremendous difficulty I don't see here, and which hinders developers of tools like BC, etc., from implementing this feature we all crave?

wraith808, you are perfectly right, and of course, there are BIG differences, and it's not true that "with such character traits, you could run a concentration camp, too", as some German politician once said about some adversary. It's just that I have a problem with closing down threads (over there) that undeniably contain (sometimes highly) constructive content, for being "not constructive", AND I have a VERY BIG problem with what we've seen from ALL our European governments, these last weeks, when at least two thirds of our population(s? can't speak so much for other countries than Germany) would have much liked to grant asylum to Edward Snowden, and when all we see is EVERY government out there would like to "get him", in order to make a package out of him straight to Obama, the Nobel prize. So yes, I mixed it all up here; I will try to export my political views to a second thread, in order to clean this up as far as possible. To do something constructive:

(Tinman57, you're not speaking of proper moved-blocks processing, which is the subject here?) ;-)

So, problem and askSam solution

As said, I hadn't got my newest outliner files at hand; I only had backups, and not the newest ones.

I assumed they were sufficiently recent - I was wrong for two big files; but I was smart enough, at least, to work on copies, not on the old originals, and within a specific, new folder to which I copied the files I needed; it would have been much smarter, though (see special prob below), to classify these copies as "read-only", since for any new stuff, I had created a new file "NEW" anyway.

Now, my file "c" (= in fact, my "Computer-related things" "Inbox" (!), always much too big) is in 3 versions:
- May (964 items, the old backup)
- MayJuly (964 items on start, a copy of "May" I worked on in July, and 964 items in the end = pure coincidence)
- June (497 items, the current file before doing that mess in MayJuly, and after having moved out, in June, many items to other files)

Special problem: I had mistakenly assumed the old May backup was rather recent (or rather, after some days, I simply forgot it was as old as it was...), so I moved lots of items within the MayJuly file, which of course should have been avoided, so I worsened my original problem considerably.

Task: Identify any item in the MayJuly file that had been altered, added, or deleted, compared with the May version, and replicate the changes / additions within the June file, taking care of the aforementioned special problem of the order / position of many items having been mixed up between May and MayJuly.

My outliner allows for exporting items (title and content) as plain text, and for separating items by a code sign before each title; it does NOT allow for exporting items in a flattened-out, then sorted "list" form. (Remark: Some (especially db-based) outliners will allow for identifying changed/added items within the application itself, so you will not need external tools in such cases, but we try to find a solution for any case where this easy way out is not possible, i.e. in "lesser" outliners, in editors, or in any text processing application.)


In your outliner or text file:

- Do the plain text export, for May and for MayJuly file, with a Yen sign (Alt 0165) before each item.

In askSam:

- Open a "blank" file in AS (if you don't have a license, the trial version will do)

- "File-Import", then "after the last item" (there is an AS header item there, so after import, there should be 1 plus 964 = 965 items); other settings there: "Document delimiter: STRING", then Alt 0165 or whatever, and you can also check the box "remove string"; file origin is Win Ansi (default), but the important thing here is, for every new import, you have to enter the "string: alt..." anew: it's AS, there's nothing intuitive here... - after import, you'll get a message: "Imported: 964 items" (and so, in this case, the AS file comprises 965 records)

- Do the same for the second file, i.e. open a second blank file in AS, and import your second .txt file here (as said, pay attention to enter the string delimiter again)

Then in AS, for the first file, then for the second one:

- If my description does not suffice, there is a subject "Saving a file in sorted order" in the AS help. So:

- Do not do anything along the lines of "Actions-Sort", but:

- Do the menu "File"-"Export"-"Select Documents" (= NOT "entire file")
- Click button "Clear All" if not greyed out
- Click button "Sort : None" (sic - we're in AS country here..., and don't click "Help": it's NOT context-sensitive...)

- Now, in the "Sort" dialog, click in the grey field beneath "Sort on", then select "first line in document" (this will also show "Type:text" and "Order:ascending")
- Click OK

- This will bring you back to the "Search" dialog, where it always shows "There are no items to show" (= not intuitive, rather deterring, but it's simply the info that your previous "ok" just made settings, but didn't trigger the actual sort yet), but where the button changed to "Sort Defined"
- Click OK here, again (Don't click on "Clear All" now, since the Sort button would revert to "Sort: None", of course)
- This brings the "Export" dialog, finally, into which you'll enter the target file name, and again "Ok"

- You'll get the message "Exported: 965" (or whatever: number of your items plus the one "header" item exported, too)

You'll do all this for both files, then import them into the "differ" of your choice.

Now, perhaps a new problem arises: AS will have sorted your items just by the very first line of them, which is, by the outliner export above, the title you gave to your items within the outliner; similar if you exported from within an editor: first line.

Problem in my case: I often title items "again", or "ditto" or such, for indented (sub-) items, and here, a "sorting tool" like AS creates chaos, since there is no further sub-sorting by line 2, 3, etc. In fact, I've got dozens of such "ditto"s among these, so my "sorted" .txt files didn't work any better in my "differ" than the original ones, and I needed several hours to work it out "manually" - it would have been smarter to number all my dittos first, then sort, then take those unwanted numbers away again... - This problem also shows the importance of sorting QUALITY, i.e. of the need, for a good sort, to not do it just by one criterion but by several. (AS is not really to blame here, since if you sort by field content there, it lets you combine several fields.)
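Just to illustrate the fix: sub-sorting by ALL lines of a record, not just the first, is trivial wherever you can script it. A hypothetical Python sketch (nothing AS itself offers in this form); ties on identical "ditto" title lines are then broken deterministically by the body text:

```python
# Hypothetical fix for identical titles ("ditto", "again", ...): sort
# records by ALL their lines, not only the first, so ties on the title
# line are broken by line 2, 3, and so on.

def sort_by_all_lines(records):
    return sorted(records, key=lambda r: r.splitlines())

records = [
    "¥ditto\nsecond note about B",
    "¥ditto\nfirst note about A",
    "¥alpha\nsomething",
]
for r in sort_by_all_lines(records):
    print(r.splitlines()[0], "|", r.splitlines()[1])
```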

But then, in a programming environment, you will probably give "expressive" names to your routines, etc., so as to not have identically-named "items" to be sorted. As said, in case of necessity, do it with commented-out lines before the beginning of the routine, etc., using the special comment character you use here as the record divider character. By "special comment character", I mean the ordinary comment character plus a second character, e.g. a ";", plus a second ";" here (whilst for ordinary comments, you'll just use the single ";" character), since AS accepts strings as "record code". If you do something similar with a tool that only accepts ONE such "record code" character, for programming, you will have to do the same within your code, i.e. use the regular comment character plus a second special character, i.e. double the comment character or anything else... but before export from your editor, or before import into the sorting tool, you'll have to replace the "double comment char" with something special again, since you would certainly not want the sorting tool to separate ordinary comments from code because it "thinks" there is a "new record code".
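The "double comment char to record code" replacement itself is a one-liner, e.g. in Python. A hypothetical sketch only: it assumes ";" comments, ";;" as the special divider, and "¥" as the record-begin code the sorting tool expects; ordinary single-";" comments are left untouched:

```python
import re

# Hypothetical pre-import step: turn each special ";;" divider comment
# into the single "¥" record-begin code, leaving ordinary ";" comments alone.

def to_record_codes(source):
    return re.sub(r"^;;", "¥", source, flags=re.MULTILINE)
```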

So this is a viable solution in case we will not discover a "differ" doing "block identification" in a really reliable way; in my case, the only problems I encountered arose from my identical naming of different items, hence the need to pay attention to make these very first lines of your code entities distinct. If they are code, they will be distinct automatically, but then the "new record code" will be difficult, so it will be a comment line, and some numbering will perhaps do the necessary distinction. In most outliners, you will be able to put the special character before your items afterwards (as seen above), but even in an editor or such, in order to properly "fold" your code by various criteria, properly "encoded" comment lines starting each code block will certainly be a good idea: first the comment char, then some special characters by which you will fold your code in different ways, then any "real comment" here.

This reminds me: Some writer (I believe it was a novelist) mentioned somewhere that a programmer friend of his tweaked KEdit for him to write all his stuff within just this editor (which I explained elsewhere in this forum just some months ago, since it's one of the best (and weirdest, yes) you can get), and I'm sure that writer uses such "code chars", too, e.g. for chapters, subchapters, etc., and even paragraphs to be refined, etc., in order to "fold" by them (and no, he didn't explain his system any further, so I don't see the need for searching the relevant links).

(Here, originally rant about the Snowden affair.)

Oops, that'd be difficult. Of course, my problem arises because of my having left behind my AC/DC converter AND my backup stick elsewhere, some weeks ago, so I had to work on less recent backups, and now I'm facing a 960-item, multi-MB outliner file and its dump into ".txt", ditto from May, ditto from July(fromMay), and from June, with heavy moves of items between May and June: so this is the (unshareable) real-life prob / file collection here; a dummy thing would be some work of any author-within-the-public-domain, and then some chapters mixed up. Will try and find such a text, then mix up chapters, and try to upload the mixed-up version and give a link to the original version. Problem is to find something truly without any rights, in order to not "get problems"... ;-)

Ok, I found "", and then, a short story by E A Poe will do. Will try

Update: Just a poem of some 15 lines. Several other sites do pretend to have a "copyright" on the texts there: It's obvious they inserted some typos within the texts they copied from elsewhere, anyway... So this is high-risk territory.

Does anybody know "safe" sites, where not only the classic authors, but the published texts, too, are within the public domain?

Ok, I found at last:

Ok, the poems of Edgar Allan Poe, two times: the version as published, and then mixed up, both in plain .txt format; this second version is slightly longer since, deliberately, I inserted one bit from the original twice into the mixed-up version, in different places.

I did NOT scramble minor parts, too, but only mixed up "complete entities", i.e. bits beginning with

       *       *       *       *       *

and then down to (and including) their last line above the next "       *       *       *       *       *".

Thus, with a "separator" like "       *       *       *       *       *" or just like "*", this would be a really difficult file comparison for most "differs".

EDIT: Sorry, Shades, I had overlooked your first post mentioning ExamDiff Pro, and indeed, it's on my list in order to trial, as the other very few mentioned below.

Thank you, Shades, for sharing your experience. In fact, I'm heavily googling for such db dump and compare tools, and there are several ones that bear 2 price tags: structure only, and then structure plus content, so I thought about dumps indeed, but such a dump then would be nothing more than I have got with my text files and their item = "records" divider code character: Those "records", then, are in total "disorder", from any text compare tool's pov, so putting my things into a db (which will be a problem in itself), then extracting it, dump-wise, will not get me many steps forward!

"Not many steps", instead of "no step", because, in fact, there IS some step in the right direction here: In fact, many text compare tools and programmable editors and such allow for SORTING LINES, but as you will have understood, my items aren't just single lines, but bulks of lines, so any "sort lines, then compare" idea is to be discarded.

And here, a db and its records would indeed help, since in a db, you can sort records, not just sort lines, and that's why...

HEUREKA for my specific problem at least:

- I can import my "text dump" (= text file, from my 1,000 "items", divided by a code character into "records") from my file 1, into an askSam file.
- ditto for my file 2.

- I then sort both AS files by first line of their records.

- I then export both AS files into respective text files.

- I then can import those NEW text files into ANY text compare tool (e.g. BC that I "know"), and then, only some 10 or so new or altered items would be shown, not 200 moved ones.

- The same would be possible with any sql db and then its sorted records, dumped into text files, then compared by any tool.

- In any case, this is a lot of fiddling around (and hoping that buggy AS will not mix up things but will work according to its claims).
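For those who'd rather skip the AS detour: the sort-records-then-compare pipeline can be scripted directly. A hypothetical Python sketch, assuming "¥" (Alt 0165) as the record divider and "Win Ansi" (cp1252) files; the two sorted output files then go into BC or any other "differ" of your choice:

```python
# Sketch of the sort-records-then-diff pipeline (hypothetical): split each
# text dump into records on the "¥" divider, sort records by their first
# line (as askSam does), and write the result back out for the differ.

def sort_records(dump_path, out_path, divider="¥"):
    with open(dump_path, encoding="cp1252") as f:   # "Win Ansi" origin
        text = f.read()
    records = [divider + r for r in text.split(divider) if r.strip()]
    records.sort(key=lambda r: r.splitlines()[0])    # first line of record
    with open(out_path, "w", encoding="cp1252") as f:
        f.write("\n".join(records))

# sort_records("may.txt", "may_sorted.txt")
# sort_records("mayjuly.txt", "mayjuly_sorted.txt")
# then load both *_sorted.txt files into the differ
```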

Back to the original problem:

My googling for "recognize moved text blocks" gave these hits, among others:

= "6 Ways to Recognize And Stop Dating A Narcissist" (= hit number 43 or 44)
Now hold your breath: This got 1,250 comments! (= makes quite a difference from dull comp things and most threads here...)
Very funny article, by the way: laughed a lot!

So we are into Damerau-Levenshtein country here (not that I understand a mumbling word of this...), of "time penalties", and of "Edit distance"
( http://en.wikipedia....g/wiki/Edit_distance )
(Perhaps, if I search long enough for "edit distance", I'll get back to another Sandy Weiner article? "Marriages come and go, but divorce is forever", she claims... well, there are exceptions to every rule, RIP Taylor/Burton as just ONE example here)
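For the curious: the plain (non-Damerau) edit distance is just a small dynamic program. A textbook sketch only; I'm NOT claiming any of the tools above use exactly this:

```python
# Textbook Levenshtein edit distance (insert/delete/substitute, each
# costing 1); the Damerau variant would additionally allow transpositions.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))          # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]
```

E.g. edit_distance("kitten", "sitting") gives 3, the classic example.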

Also of interest in this context, "Is there an algorithm that can match approximate blocks of text?" (well, this would be another problem yet):


And also, this Shapira/Storer paper, "Edit Distance with Move Operations":


(A very complicated subject of which I don't understand much, but my, call it "primitive", way definitely sidesteps this problem: equal-block elimination (!) first, THEN any remaining comparison.)

Now, not being a programmer (and thus, not knowing "any better"), I would attack the problem like this, for whenever the setting for this speciality is "on":

- have a block separator code (= have the user enter it, then the algorithm knows what to search for)
- look for this code within file 1, build a list for the lines (= block 1 = lines 1 to 23, block 2 = lines 24 to 58, etc.)

for any block "a" in file 1 (see the list built up before):
- apply the regular / main (line-based) algorithm, FOR block "a" in file 1, TO any of the blocks 2 to xyz in file 2
- mark any of the "hit" blocks in file 2 as "do not show and process any further"
- this also means, no more processing of these file 2 blocks for further comparisons with further file 1 blocks
end for

- from file 2, show and process only those lines that have NOT already been discarded by the above
(for other non-programmers: all this is easily obtained by first duplicating the original file 2, then just deleting the "hit" parts of that duplicate, progressively)
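And for the non-programmers again, here's what that "duplicate file 2, then delete the hit parts progressively" approach could look like. A hypothetical Python sketch; any regular line-based "differ" would then run on what remains:

```python
# Sketch of "equal-block elimination first" (hypothetical): split both
# files on a user-given separator, drop from a COPY of file 2 every block
# that is identical to some file-1 block, then diff only the rest.

def split_blocks(text, sep="¥"):
    """List of blocks: each runs from one separator line down to the
    line above the next separator."""
    blocks, current = [], []
    for line in text.splitlines():
        if line.startswith(sep) and current:
            blocks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        blocks.append("\n".join(current))
    return blocks

def eliminate_identical(text1, text2, sep="¥"):
    blocks1 = split_blocks(text1, sep)
    remaining = split_blocks(text2, sep)   # work on a copy of file 2
    for a in blocks1:
        if a in remaining:
            remaining.remove(a)            # "do not show / process further"
    return remaining                       # only these blocks get diffed
```

Removing a hit block also means it cannot match a second file-1 block, exactly as demanded above; a block copied several times in file 2 simply leaves its extra copies in the remainder.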

So where, good heavens, are the difficulties in such an option / sub-routine, by which otherwise fine progs like BC (where users have been begging since 2005, and where the developers always say, "it's on our wish list" - what?! a "wish list", not for users, but for the developers? I'd always thought their wish list was called a "road map"! or who's deemed to do the work, for these developers, then?!) or Araxis Merge (about 300 bucks, all the same!) seem to be unable to realize it?

Do I overlook important details here, and is it really by Damerau-Levenshtein et al. that all this would have to be done, instead?

Good heavens, even I could do this, in AHK, and with triggering the basic algorithm, i.e. the "compare tool part", again and again, for each text block, from within my replicate-then-delete-all-identical-parts-consecutively script! (Of course, in such a hybrid thing, there would be a lot of time-consuming shuffling around, since the "compare tool part" triggered again and again would need to be fed with just the separate text BITS from both files, each time.)

At this moment, I can only trial FREE tools, since I don't have time to do a complete image of my system (and I want to stay able to trial these tools later on, too), but it seems most tools do NOT do it (correctly): E.g., I just trialled WinMerge, which is said to do it, with the appropriate setting ("Edit - Options - Compare - General - Enable moved block detection"), and with both yes and no, it does NOT do it, and btw, it does not ask for the necessary block divider code character either...

Other possible candidates: XDiff, XMerge, some parts within Tortoise (complicated system), ExamDiff Pro, and finally Diffuse which is said to do it - for any of these, and CompareIt! of course, the question remains if they are just able to handle moved LINES, or really BLOCKS, and without a given, predetermined block identifier character, I think results will be as aleatoric as they are in that so-much praised WinMerge.

At the end of the day, it could be that there is NOT A SINGLE tool that does it correctly, and it's obvious many people over-hype (if such a thing is possible) free sw just because it's free (WinMerge, e.g. has navigation to first, prev and last difference, but lacks navigation to next difference, as incredible as that might look...).

EDIT: Well, I should add two things that go without saying, but for the sake of a "complete concept": first, that of course, the algorithm "doing" all these blocks can be much simpler than the regular algorithm that afterwards will do the remaining comparison (= time-saving), and second, that of course, we're only into "total identity or not" here, so it's obvious this sub-routine will speed on to the next "record" as soon as it detects the tiniest possible dissimilarity; both simplifications are NOT possible if I have to use a "regular", "standard" compare tool within a macro, for this part of the script.

There's a thread asking a similar question here:


"Which file comparison tool can handle block movement and multiple revisions?"

unfortunately closed by an overzealous mod, but then, many of the answers are false hits, and 21 "up" votes for the mention of a program that precisely doesn't offer the wanted feature, well... but in any closed thread, false info will stay forever, instead of being corrected...!

As for the similar problem in databases, there's this thread:

http://stackoverflow...-two-mysql-databases :

"Compare two MySQL databases"

which also has been closed by an overzealous mod, for "being not constructive", and here, I'm a little bit speechless, though, since as we all can see, it's a highly constructive question, and many answers seem to be more than helpful, so it would have been a good thing to add more of such.

Thank you, Ath, will try! There also seem to be some more of this kind now, will report if there are positive results.

"ExamDiff Pro" seems to have a "moved blocks" function, but is 35 dollars per year. That's not so expensive after all if it functions well, but I totally hate subscription schemes...

"dropping a question, without documentation, at the rate the OP is dropping them"

Well, you couldn't blame me for doing this, could you? And this shows the advantage of "NO explanation questions" vs. "LENGTHY explanation questions". ;-)

The REAL questions here would be:

- what might be the real advantages of AA over free and elaborate solutions like AHK and AI?, and then:
- what might be the advantages of AA over its competitors, like iMacros (rather often on bits, rather bad reviews) or WinTask (no reviews at all, but a very strong contender)?

Any insight here? (Also in the line of "AA does not even do... which AHK/AI (or WT...) does", of course.)

This goes without saying, since without doing this, you'd have even many probs for your very first pics in such a special page. The question remains, how to quickly get to pic number 1,000 or whatever, when you will have closed your session and open a new one.

Here on DC, there is an old "comparative" review of Beyond Compare:


It states, there is one prob with BC, which is lack of in-place editing. Now, the review being from 2005, this prob has been resolved long ago.

But there is another prob with almost all of these text compare tools, which is they are unable to compare "displaced text parts", or whatever you would call them.

"All" these tools just compare line by line, and the respective line content, and are thus unable to "see" that a bunch of 10 or 50 lines is UNCHANGED, whenever that bunch of lines has been displaced elsewhere in the second text.

For every programmer who displaces routines within his global set of programming lines, this makes these ordinary tools almost unusable, and existing tools which lack such functionality, unfortunately are NOT amended in such a way (and the developers of BC (of which I have a license) are very friendly but don't do the necessary development to their otherwise fine program either), but there should be SOME tools at least that DO such a thing.

Now let me explain. I know that for discovering that a single LINE has been displaced elsewhere, first, such a tool would need a much more complicated algorithm, since without being really sophisticated, it would process lots of false "hits": It goes without saying that in normal texts here and there, but in programming LOTS of lines would be identical without being displaced; it's just that the same lines occur, again and again, within many DIFFERENT contexts. So, these respective contexts would have to be analyzed, too, by such a tool.

Also, the same problem COULD occur with "PARAGRAPHS": Since in programming, a line is also a paragraph for such a tool, checking for "paragraphs" first would not be helpful in order to avoid such "false hits". On the other hand, most paragraphs in normal texts would be "enclosed" by blank lines, whilst in programming, these lines would be paragraphs, but normally NOT "enclosed" by blank lines, so the algorithm could check for "REAL" paragraphs vs. paragraphs that are just separate lines, and then try to find just these "real paragraphs" elsewhere whenever they are missing in the place where they would be expected in text 2. So this would be ONE OPTION for such a tool.

A SECOND OPTION for such a tool (and this could be realized even within the same tool, "by option") would be to look for a special character or any other "divider character combination" or such, i.e. it would not even check for displaced "real paragraphs", but only for displaced "paragraph groups" / "text entities" or such, meaning: when text enclosed in these "divider codes" is there in text 1, but missing in text 2. Such "divider codes" could be TWO blank lines, or a really special character that does not occur EXCEPT for separating your "programming entities" within your big file (e.g. the Japanese Yen character, or anything you want), and it would be easy to put such a very special character into your programming text whenever needed, since any programming language has got a special character for "comment line", and you would only put such lines between your sub-routines, in the form

CommentLineCharacter and then SeparateEntityBeginCode

Also, such a tool, with such functionality, would be a relief for anybody doing his work within an outliner, since outliners "invite" you to re-arrange all your stuff again and again, in order to have, in the end and ideally, all your stuff within a "meaningful context", in the same way some people don't so much multiply separate paper files, but put them, whenever possible, into lever files, in order to group them. (It goes without saying here that this is a good thing for "reference material", but not really advisable for separate customer files or such.)

Now, most of these outliners have got an EXPORT function, to plain text at least, and for a text compare tool, this is the format in which such "outliner files" could be read and compared, and it immediately becomes evident what the above-mentioned functionality would be able to do here:

Any outliner item that you just would have displaced would be checked by such a tool, and then discarded as identical, which in fact it is, and this tool would only show those items in which you would have done ADDITIONS or CHANGES, or NEW items, or (in the other pane, respectively) DELETED items, but it would NOT show all those, perhaps numerous, items that are unchanged, but just displaced to another position within your tree.

So, here as with programming bits above, the question is: which TEXT compare tools are able to do such a more sophisticated comparison, without all those "false hits" showing up as in less elaborate tools,

AND, there should perhaps also be some DATABASE compare tools that are able to do such a comparison, by database "RECORD CONTENT" comparison. Here, the very first problem is that most database compare tools do NOT EVEN compare content, but only structure, and those that are (possibly) able to compare content are "just for MySQL", so the question is: are they able to compare "records" in plain text format, and with the "record begin code" of your choice? (Of course, it would be possible to use the special character the db compare tool then "needs", or to replace the "commentlinecharacter plus recordbegincharacter" with the special "recordbegincharacter" the db compare tool needs in order to function properly.)

But there is also the question whether these db compare tools, comparing content, are able to compare the content of any record to the content of any record, or if they, too, as the usual text compare tools do, just compare content of record 1 in "text" 1 to content of record 1 in "text" 2, then record 2 to record 2, and so on, which would be devoid of any sense. (Of course, there is an additional prob with db compare tools, price: some of them are 1,000 dollars or even more, so I'm looking out for such a tool, in case there is one that's not as expensive as that.)

Hence my questions:

- any insight into text compare tools, with respect to these details?
- or any insight into database compare tools, ditto?

I won't get on fellow DCers' nerves by endlessly repeating how happy I am to have switched from IE to Chrome; you will have understood this. But now, it's my task to "personalize" Chrome a little bit - which is a pure joy, having all those "extensions" around, BUT some of these get just rave reviews, and then turn out to be crap, or something similar at least.

First, there is much ERRONEOUS "info" about Chrome's cache M (M for management): Much of this "info" will make you believe the setting "Menu - Settings - Show advanced settings - Privacy - Content settings - Cookies - Keep local data only until I quit my browser" (my gosh!) will clear cookies, history and cache after closing down Chrome - some even outright state so.

This is all rubbish.

It seems that at some time Chrome HAD a similar setting (in its early "twenties" perhaps), but they have done away with it, the better to sell your browsing history (and there is no tool to automatically (!) do away with all those awful Flash cookies anyway, so you have to regularly go to that macromedia site on your own).

The current state of affairs is: you have to use an extension for clearing your history and your cache (or do it manually; and with such an extension, you don't need those Chrome settings above anymore, anyway...).

As for relevant extensions, multiple sites propose Click&Clean, so I installed that one, but in fact, there are many more, and I suppose they ain't any worse:

History Eraser (from the same developer), ClearCache, OneClickCleaner, CleanTheJunk, NoHistory, SimpleClear, Browser Privacy Clean-Up Assistant, or then BetterHistory (which is something different and could be useful in some instances). ("The winner takes it all": nowhere do these alternatives get any coverage, so "they all" install C&C; but well, it seems to do what it promises, then!)

And now for tab M and use logic hampered by developers' technical incompetence:

In IE8, this was totally awful, the prob being that you get from one link to the next, and at a certain moment in your browsing session you will have opened 60, 80 or 120 tabs: on my XP system with 2 GB of working memory, response times are totally down in such circumstances.

How to manage such links "for later"? Doing bookmarks? Not handy! So many a time, I left my comp on for the next day, and even further days, and of course, at some point all of this will become totally corrupted, and you will lose all the finds you will not have properly "processed" by then.

Now this in Chrome: First, memory M is MUCH better than with IE, even without that incredibly effective AdBlock running, but WITH Adblock running, it's pure joy to have dozens of Chrome tabs open, in direct comparison with IE: Acceptable response times, no real problems.

But then, from a less compare-it-with-total-sh** but more objective, "2013" pov, having dozens of tabs open in Chrome isn't THAT fun either, since you will have to process them (= some checking, some hdd storage if there is something of value for your current research subject, etc.) in a row, which is not really possible without leaving your comp on for days, depending on the subject; and that's not speaking of any "navigation" between such pages, which is virtually impossible here, i.e. you have to go "one by one", process the page that presents itself to you, and close it, in order to have, many hours (and / or some days) later, a Chrome state from which you could then close down Windows and your pc.

So there is one extension that gets nothing but rave reviews yet is almost useless all the same: OneTab.

I installed it, b/o all those rave reviews, and the idea behind it is simple: close down all your currently opened tabs, but have them stored in one "container" tab, so it's sort of a "local, intermediate bookmarking service" - this also frees your working memory, but my pov is twofold here:

- for one, this freeing of the working memory, most of the time, isn't even really necessary
- and then, whenever you click on such a page, it has to be reloaded, from the web it seems (judging by the response times then)

So, in the end, immediate availability of these pages would be preferable, in most circumstances, but an OPTION to have them cleared from working memory in extreme cases, would indeed be helpful.

Now for my saying it's almost useless, and to explain that part of the title saying that some developers don't code in a logical way, from the users' pov:

In practical use, whenever you have got too many tabs open in your browsing session, you would click on OneTab's symbol, and then have all these pages relegated to the container tab.

But afterward, you will need to "open" these tabs again, in order to process them: "Restore all" will do this, and since you can even form "groups", this doesn't seem too bad at all.

Where things GO BAD, though, is when you will be PROCESSING these re-opened tabs:

Of course, you assume that they will vanish from the OneTab list whenever you close them down, or, to tell the truth:

There are THREE scenarios in which you would need such a tool:

- in the scenario above where the tool serves for clearing "tab clutter", when there are too many tabs opened at the same time
- in a totally different scenario where you would like to constitute groups of bookmarks for further use
- in a combination of these two where you would use it to de-clutter your tabs, but here and there, you would even want to preserve SOME tab for further use, instead of getting rid of it after processing it

Now, getting rid of tabs is almost impossible with OneTab, since whenever you close one of your re-opened tabs, it will NOT vanish from the OneTab list, and unfortunately, this makes the tool almost unusable for its intended main use: you never really know which of all the pages listed there is ready to be deleted from the list, manually, too, and which one has to be preserved, since most of the time, doing research, these pages are rather similarly named, and, as already said,

there is no back-synch whatsoever from your closed tabs to their listing in the OneTab list.

But again, OneTab is not marketed as a permanent-bookmark tool, but as a tab M tool, and as such, evidently, it's a total failure.

So let's muse about the background: the developer, presumably, has no knowledge / expertise to DO that back-synch between closed-down tabs and his list. Of course, two routines would be needed:

- the usual control-F4 one, for "close current tab AND delete this page from the OneTab list", and then
- a special key combi, let's say, shift-control-F4, for "close current tab BUT preserve it in the OneTab list for further access to it"
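Functionally, those two routines amount to very little; here is a toy model (in Python, purely illustrative - this is not actual Chrome extension code, and all names are made up) of the back-synch behavior asked for above:

```python
class TabList:
    """Toy model of the OneTab container list, WITH the missing back-synch."""

    def __init__(self, urls):
        self.saved = list(urls)   # pages stored in the container tab
        self.open = []            # pages currently re-opened for processing

    def restore_all(self):
        """Re-open every saved page (each stays listed until closed)."""
        self.open = list(self.saved)

    def close_and_delete(self, url):
        """Ctrl-F4 analogue: close the tab AND drop it from the list."""
        self.open.remove(url)
        self.saved.remove(url)

    def close_and_keep(self, url):
        """Shift-Ctrl-F4 analogue: close the tab but keep it listed."""
        self.open.remove(url)
```

With this model, the container list always mirrors what still remains to be processed, which is exactly what the real extension fails to do.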

It's evident the first alternative would be needed in most cases, for making OneTab the tab M tool it wants to be, AND, indeed, the second alternative would be most helpful for SOME such tabs, but only some... and it's evident the developer doesn't know how to code this, so he codes some appealing offering that at the end of the day is almost useless.

Question is, why those rave reviews, then, on https://chrome.googl...lloiipkdnihall?hl=en ?

Some interesting extensions I'll have to check out:

Tabs Outliner (very interesting thing, with which OneTab won't work together anyway...)
Tabman Tabs Manager
Awesome New Tab Page
Sidewise Tree Style Tabs
and perhaps some more

But as we see here

1) sheer programming incompetence of developers prevents many good ideas from being realized, and worse:

2) instead of doing nothing, they then realize what they are able to do, and which is not much, and worse:

3) they even get rave reviews for such deceptive tools

Awful. And yes, why not pay some 20 bucks for a tool that lives up to its promises, instead of good-enough free tools that only do the minor part... the part the developer was technically able to program, leaving out the relevant, really important part... life's too short to be endlessly bothered by such minor software.

Tinman57, thank you very much for this hint. As always, my posts were rather lengthy, so I will certainly not criticise you for not having read them in their entirety, but instead, let me repeat the real problem here:

I don't need the addresses of individual pics, as individual files / tabs, but I would like to identify some pic in order to get back, WITHIN THE "CATALOG" home page, as fast as possible, to that pic, in order to continue my "browsing" which I had left off another day.

At this time, I suppose it's quite impossible to have ONLY those pics, in the "catalogue home page", that REMAIN to be viewed?

Let's assume I reload the same "catalogue home page" with the option "don't show pics"; perhaps my "endless", necessary pressing of the "end" or "pgdn" key(s) would then proceed faster - I've had this idea myself, but haven't tried it.

But then, in order to do my further browsing from "pic number 1,500" (or whatever) on, I would have to reset the option to "show pics again", and here, I suppose, the browser would first load all those 1,499 pics (in our example) "that come before", and only THEN would I be able to browse from pic 1,500 on?!

My own research brought the correct search term, and numerous hits for SCRIPTING (but not for handling, as a user) such a page:

It'd be "ajax dynamic scrolling pages"

Currently, I'm musing if the page

or some elements mentioned there could be of any help to find a better approach to my problem detailed here, my underlying prob being that I know NOTHING of Ajax and all this...

And this being a "consumer", not a "developer" prob, I could scarcely justify delving into these probs for a week or so, all the less so since I'm not sure at all I'll ever find a solution by my own means (provided there IS any solution to begin with, which has not been established yet...).

Hence my question again, is there a web developer in this forum who'd be willing to share some of his expert knowledge with us?

Kudos for this ad. Since I didn't really understand the principle from your description, I went to their site, and there, I understood immediately, but I also understood why I hadn't understood your post, spontaneously:

Unfortunately, there is a logical fault in this approach (if I understand it correctly); but first an aside: very funny currently, Putin's gesture, right on their home page ("From Russia with love"), and I have to say that I have much difficulty NOT liking Putin these days (whilst remembering all the BAD things he did before and will do in the future), considering his stance in the Snowden affair: outrageously smart AND not too harmful to Edward (if I dare call him by his Christian name).

(As for the Putin vid, for readers at a later time, whenever the visual link will have vanished from their homepage: I LOVE that (photoshopped) photo.)

As for the logical fault in this concept: when you look at vids, you will NOT buy books and such, and vice versa. So, whenever I buy something at those "affiliates", I would have to remember the referring site, go back there, then click the link there, then do my buying... or just do my research?

Since on top of this, 99 p.c. of the time, I do my RESEARCH on amazon, but then do my "buys" within the interlibrary loan system between Germany's university libraries...

(and then, their customer service has become so bad these days; whenever a problem occurs (and problems multiply with them lately) and you try to phone them, they interrupt the communication, or when you then choose their "chat" function, they leave you without any answer for 10 or 15 minutes, until you finally close down the communication for lack of any result)

but anyway, be it for research or for buying, having to go back to a specific site in order to access the relevant site from there, in order to hand them over some p.c. of your possible purchase, is simply not realistic; and even when I buy there, it's from price-comparison links, since I want price comparison, and amazon's prices are far from being "good" most of the time, so SOMETIMES the amazon price shown in idealo (or any other price comparison tool in the web) is "good" for me, but most of the time it is not. Thus, in such instances, not going from my price comparison tool to the vendor, but going back to a specific (video or other) site, in order to access the vendor?

"Come on"! Such a behavior is virtually inexistent; even when I really LIKE a specific site, I would not do it, it being far too much fuss for me!

The inherent problem of all advertizing is: for us, the intellectual "elite", most offerings in ads are simply sub-optimal, or more precisely, they are sub-optimal (as a serious offer to us, I mean) in almost ANY case, but the "underclass" will not bother (but buy, whenever the advertizing is sufficiently strong and the price isn't TOO outrageous, comparatively, and within their reach).

Thus, my - strictly personal - "answer" to this problem is: make available info in order to sell something else, which is related to your info (so there is a chance to sell your product / service), but which is not so intimately related to your info that your info has to become biased / dishonest / fraudulent, i.e.

Have your info as honest and complete as possible, in theory. And yes, when I write my content, I sometimes get stuck on some content I would so much like to publish, but which would harm my business... but seriously, I try to publish to the max, withholding a strict minimum of details, preferring to put them in a way that hopefully the effect of publishing them, i.e. of people "seeing" my honesty here, will overcome the dissuasive effect.

This is walking a tightrope, but whenever in doubt, I prefer to "say" it, and all the worse for my business lost to less honest contenders - but I'm not a saint or such: whenever I "speak out" in such a way, I make sure my "open wording" will harm my contenders' businesses, too, if ever it's able to harm mine.

So I'm accustomed to "speak out" "difficult things".

And that's why as for "advertizing" in that sense we all encounter it all the time...

This being said, I could imagine technical ways to implement such a "deferral" you have presented here, and which would DO IT:

Some "special cookies", stored differently, stored elsewhere, but with the user's specific allowance to begin with, and which then, whenever you arrive at one of the participating sites, would "pop up", asking you: "This is a site from which we, in case, would get some p.c. - would you allow us to do a referral here?"

Without any hesitation, I would click "yes", even for real buys, for many more than just one site I regularly visit...

and of course, it would become VERY amusing whenever we'd get two or more such pop-ups, for the SAME site, and we would have to really choose then, who's going to get our money now?

And yes, I'd prefer ONE standard-pop-up dialog here, letting me choose. Remember, we're speaking here, exclusively, of referrers to which originally you will have given the permission to pop up this dialog! (And in such a dialog, there should be several options: "Yes", "Not now", AND "I revoke my permission to you to ask me that question ever again!".)

Under these circumstances, I'd be happy to have referrers have "my" money.

Yesterday, I had a similar problem: 100 p.c. CPU load on my 1-core system, terrible, distributed between Chrome and YTD Downloader (4.3), and after (!) I had downloaded a group of files with YTD Downloader... I had to shut down YTD Downloader in order to get even Chrome to revert to its normal state, so I suppose if it's not some "special" site that slows down Chrome in such a way, it's the INTERACTION with something else.

This is weird and should not occur, but I like Chrome too much to switch to FF, and in my tries with FF, I also had moments where things didn't go as smoothly as they should have, so there seems to be an "aleatoric factor" in these things.

In my case, that'd be: Switch to another YT downloader, but stay with Chrome; I hope you find a similar cause and solution.

"you should randomly click on anything you see anywhere to make it as hard as possible for the advertisers to make sense of their click data"

Oh yeah, reading such an advice makes you spring up 1m up in the air!

Well, as for newspaper sites, I often HATE those newspaper sites even more than those advertisers, so I don't want them to get "my" money, even if this brings me the occasion to reprimand the advertiser (I said already there's unhealthy musings going on in reactions to ads...)! ;-)

What I already said above: sometimes, when I wanted to "bring" some money to some blog, I even went into "navigating" within the site clicked by me, for fear "just a click" would "not do it" and that, on top of that, I had to simulate some "real interest" - I presume for most such linked ads, that's totally dispensable, though!

"You can get the web site owner in trouble if you do this repeatedly" - you're speaking of the advertizers' checking the respective IP addresses, of course, and that's another detail that really complicates things.

From what I "hear" on the web, for those google ads it seems indeed to be possible to click the ads of your competitors, again and again, every new day anew, from an identical or very similar IP address, AND THEY HAVE TO PAY!!! INCREDIBLE!!! And from what I see of my own IP address, my respective IP addresses between browsing "sessions" are VERY similar, so "manual checking" would indeed tell the advertizer it's probably the same person / "household" (hence the problems you are speaking of, since the advertiser would also (here, wrongly) suppose this clicking again-and-again was, be it technically, be it "socially", triggered by the owner of the respective site).

All the less is it understandable that in similar circumstances (i.e. similar / identical searches in order to get the ad again and again, then clicking on it to trigger the payment to google), google makes advertizers pay?!

Is there any first-hand knowledge available on this matter?

This being said, I'm VERY happy to "surf the web" with Chrome and AdBlock now, AND in the course of this, I encountered some lines within a blog that I think are worth sharing. The German original went roughly like this:

"If you like this blog, please consider turning off your ad blocker just for this blog here, and consider clicking on an ad here."

This is, without any doubt, a very smart way of handling ad blockers on the part of that blogger: his script checks if you have blocked his ads (that might be easy, just some lines of standard script you will probably find on the web), and if the check is positive, you get such a hint, to which you will hopefully react if indeed you think his blog is worthwhile.

The irony here is that it was an awful blog I will certainly never return to, so I didn't do as he wanted; but the lesson is: HAD I considered his blog worthwhile, I CERTAINLY would have tried to do as he said (I don't know how to do it yet, but will check next time I get such an invitation, and WILL click on some ad), and many other people hopefully do the same whenever they like some site (enough to take this little effort).

So this is certainly a very good idea, worth sharing for anybody maintaining a blog or any other "real estate" within the www.


"On data storage and applications going cloud (Surfulater, Mindjet et al.)
« on: January 13, 2013, 06:35:34 AM »":

first post there, by "helmut85":

"2. On the other hand, there's cloud-as-storage-repository, for individuals as for corporations. Now this is not my personal assertion, but common sense here in Europe, i.e. the (mainstream) press here regularly publishes articles convening about the U.S. NSA (Edit: here and afterwards, it's NSA and not NAS, of course) having their regular (and by U.S. law, legal) look into any cloud material stored anywhere in the cloud on U.S. servers, hence the press's warning European corporations should at least choose European servers for their data - whilst of course most such offerings come from the U.S. (Edit: And yes, you could consider developers of cloud sw and / or storage as sort of a Fifth Column, i.e. people that get us to give away our data, into the hands of the enemy, who should be the common enemy.)

3. Then there is encryption, of course (cf. 4), but the experts / journalists convene that most encryption does not constitute any prob for the NSA - very high level encryption probably would but is not regularly used for cloud applications, so they assume that most data finally gets to NSA in readable form. There are reports - or is it speculations? - that NSA provides big U.S. companies with data coming from European corporation, in order to help them save cost for development and research. And it seems even corporations that have a look upon rather good encryption of their data-in-files, don't apply these same security standards to their e-mails, so there's finally a lot of data available to the NSA. (Even some days ago, there's been another big article upon this in Der Spiegel, Europe's biggest news magazine, but that wasn't but another one in a long succession of such articles.) (Edit: This time, it's the European Parliament (!) that warns: - of course, it's debatable if anybody then should trust European authorities more, but it's undebatable that U.S. law / juridiction grants patents to the first who comes and brings the money in order to patent almost anything, independently of any previous existence of the - stolen - ideas behind this patent, i.e. even if you can prove you've been using something for years, the patent goes to the idea-stealing corporation that offers the money to the patent office, and henceforward, you'll pay for further use of your own ideas and procedures, cf. Edit of number 1 here - this for the people who might eagerly assume that "who's nothing got to hide shouldn't worry".)

4. It goes without saying that those who say, if you use such cloud services, use at least European servers, get asked what about European secret services then doing similar scraping, perhaps even for non-European countries (meaning, from GB, etc. straight to the U.S., again), for one, and second, in some European countries, it's now ILLEGAL to encrypt data, and this is then a wonderful world for such secret services: Either they get your data in full, or they even criminalize you or the responsible staff in your corporation. (Edit: France's legislation seems to have been somewhat lightened up instead of being further enforced as they had intended by 2011. Cf http://rechten.uvt.n...ryptolaw/cls2.htm#fr )"

and, especially, the post by "clean" there (far down in page 1), almost in its entirety, and just for an example:

"Why this is so harmful?


Even with their "patent frenzy" (cf. their allowing for "patents" for things not new at all, but just because you have the necessary money to pay for the "patent" of these processes et al. perhaps known for years; or have a look at U.S. sw "patents" which cause scandal world-wide in the "industry"), the U.S. do invent less and less, and with every year, this become more apparent. Thus, the U.S. government is highly interested in "providing" their big corporations of "nation interest" with new info about what would be suitable to make some development on (= forking findings of third parties), or simply, about what U.S. corps could simply steal: Whilst a European corp is in the final stages of preparing patents, those are then introduced by their U.S. competitors just days before the real inventors will do it.


It's not only the Europeans who are harmed: Whilst the Japanese ain't not as strong anymore as they had once been, it's the Chinese who steal less and less from others but who invent more and more on their own and who risk to leave trailing the U.S. industry anytime soon."

and there again:


And it's not just inventors, etc., abroad that are at risk: It's perfectly sensible that some innovative, small U.S. companies are spied for the benefit of big U.S. companies, be it for simple stealing their ideas alone, and / or for facilitating their taking over for cheap.


Of course, it's not only and all about inventions, it's also about contracting (Siemens in South Africa? Why not these same contracts be going to General Electric, by using core info? I just made up this example and I'm not insinuating that GE might want or go to "steal" from Siemens, but yes, I'm insinuating that some people might be interested in "helping" them to do so.)"

and there again, some "practical advice" (see below):


So it might be time, about 30 years after Orwell's "1984", to store "old" comps (Win 8 and Office 2003, anyone? har, har!), "old" sw, and to divide your work between comps that are connected to the net, and those that are not, and to transfer data between them with secure USB sticks, in readable, "open" (and not proprietary) data formats (perhaps XML instead of "Word", etc.), in the end."

and then, of course, that now superbly confirmed:


The purpose of this post is to show that I'm not speaking out of paranoia, but that reality (here: brand-new MS Office 2013, probably the biggest impact in sw for the coming years, except for operating systems) outstrips fears by far."

As for the "advice" to shift around data between your regular network and your secured pc's: well, that was a little illogical, but I'm sure what "clean" wanted to express was:

Do NOT make core data "they" might be interested in available on any pc to which they could ever have access, be it by MS "backdoors", or by special secret tools "they" might have loaded onto your system for this purpose; AND "clean" wanted to express: whenever you PROCESS such data, prefer data formats you are able to look into, in order to check that the data itself has not been infected by a secret sending tool ("Trojan horses") or whatever else "they" might have invented or might invent.

We should perhaps add here that the real cause of the current situation is this: any of our web usage causes unknown amounts of data to be transferred BOTH ways, each time, with content unknown to us, and we have accepted this state of things, instead of not allowing any such "outbound sending" whose contents we don't know / cannot check in a "readable" format. We should have fought about this "constant data sending" some 10 years ago: now, the web can never become "safe for us" again. So...

- So, physical separation between "data they can have access to" and private data (= core knowledge of your corporation, big or small) is paramount today

- There has been an (unfruitful) discussion about whether these encryption tools have backdoors or not; we should assume they have

- Some smart people prefer "replace tables" or such, which seems to be the "Enigma principle": I'm not an expert in cryptography, but I once wrote a rather lengthy such table, the principle of which was to replace not just "e" with "1a" and such (of course not), but to replace "e" with "1a" if it was in the xth position in a line / paragraph, with "1b" if it was in the yth position there, and / or if it was the first "e" there, with "1c" if it was the second, etc. (And you can mix up this "position vs. occurrence" principle freely if your table is big enough; there are also other possible combinations, e.g. counting occurrences of other specific characters before, within a subset (line, paragraph, or sets of 100 original characters or whatever). And then, you could even have different such tables for different subsets, e.g. one table for the first 100 original chars, then another table for the next 830 chars, or whatever; also, it would be possible to have the following subset / table BEGIN according to the CONTENT of the first subset: e.g. first table for the first 100 original chars, and then for as long as there ain't 100 original "e" in that subset, BUT go to table 2 immediately IF (even without 100 original "e") there are more than 67 original "a" - or whatever: endless possibilities here, and I suppose any brute force will be totally ineffective; Enigma's concept was much simpler than mine, I suppose, so...)
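A toy sketch of that "position vs. occurrence" substitution idea (in Python; the table values are made up, and of course this illustrates the principle only, it is NOT serious cryptography):

```python
# Occurrence-dependent substitution: "e" is encoded differently depending
# on how many "e"s have already occurred in the text. Toy example only -
# not secure, and not even unambiguously decodable for arbitrary input.

TABLE = {
    ("e", 0): "1a",  # first "e" (occurrence count mod 3 == 0)
    ("e", 1): "1b",  # second "e"
    ("e", 2): "1c",  # third "e", then the cycle restarts
}

def encode(plain: str) -> str:
    counts: dict[str, int] = {}
    out = []
    for ch in plain:
        n = counts.get(ch, 0)
        out.append(TABLE.get((ch, n % 3), ch))  # unmapped chars pass through
        counts[ch] = n + 1
    return "".join(out)
```

Each "e" in the input then gets a different code depending on its occurrence count, which is exactly what defeats simple letter-frequency analysis.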


No, it wasn't: a short look into the respective wikipedia article taught me so; Enigma was pure genius, even by today's standards, so considering its time... (and they got hold of it by other means, not by breaking the code it produced...)

So I'm intrigued by the question of how to COMBINE "keys" with tables, and, of course, by the question: why does it SEEM to be impossible, even for the developer of encryption software, to decrypt data which was encrypted with his own program? Why that missing backdoor today, as they want to make us believe? As soon as you can decompile a program, or have the source code to begin with, why not de-compile all these algorithm-triggered processes, table-assignments, etc., too? It seems that today's "keys" trigger rather basic processes? But then, as soon as those tables are within the algorithm, you just need the algorithm's source code and the tables are worthless, so there should be DIFFERENT keys, one key being some additional "concordance" table (as detailed above, in all its complications) for another key. But this logically means: as soon as "they" don't have your key, and it's sufficiently long and complicated, your key could very well combine the "key" part AND such "table" parts... which means "keys" and "tables" are NOT conceptually different (as they are presented on some web sites), but are the same thing; it's all just about two things:

- the processes the right key must then trigger must be as complicated and time-consuming as possible: here, Enigma was outstanding, too: since it was an electro-mechanical, not a computerized system, application of brute force was technically impossible

- parts of the ALGORITHM, i.e. of the command structure "what is to be done with the elements within the tables", must remain unknown: just hiding the table is insufficient if the instructions are all there and available to the decryptor, because then again, they'd simply brute-force all data combinations within that table, which would be high numbers, but not "endless"... but as soon as parts of complicated instructions for the same table / key are missing, things become interesting!

- a possible "solution" to this problem could be to have the algorithm itself as vague as possible, meaning that within your algorithm, there would be hundreds of variables that would then "decide" upon further processing, and which would only be delivered to the algorithm in "real time", BY the intro of that key; which means the key would not only trigger the "transposition code" for fixed-length bits of the data, but would be a table in itself, deciding in real time upon the lengths of those data bits, and upon the sub-processes by which these are to be processed

- IF you construct encryption software this way, that would indeed be a perfect means to prevent backdoors: there would be no possible "decompilation of what the code DOES" in this scenario

- And this means (I hope): if you write your algorithm yourself and make it sufficiently complicated and vague, i.e. not predetermined by its own construction, but "needing too much exterior data in order to proceed in any sensible way, all of which is missing without the key", then your key, if sufficiently long, might even hold up. A brute-force approach on the key would then internally trigger so much "response time", through over-complicated table comparisons triggering all sorts of DIFFERENT things for each possible permutation, that the scheme should be practically unbreakable by information-technology means, just as the Enigma was in its day

- Of course, you could have "two keys", one being the key proper and the other being your "table", but it's evident that one key of sufficient length could contain the "key" and the additional tables; there is no conceptual difference between them. You only have to look after the utmost complication of the KEY-WISE processing instructions, whilst the algorithm-wise processing instructions will be "in their hands"

- So, the result of my additional musing here is (and I suppose encryption experts will have known all this for ages, but it's so sweet to re-invent the wheel, so much more satisfying than doing crosswords): do not only check encryption software source code for backdoors, but also for its level of complication, i.e. the pre-determinedness of processing the key = all those keys their brute force will try. If the algorithm is rather simple, brute-forcing will be simple; but if processing is totally aleatoric AND CANNOT BE RE-SIMPLIFIED by them in the process, they should be up for a real task

- In other words, a good key is a really good key whenever its components, its characters, aren't introduced as a whole into a simple transposition algorithm, but have to be read into as many different entry variables as possible, spread all over a complicated, variable-VALUE-wise inter-dependent code structure (missing chars in your code: default values... and even those could be aleatoric-at-arrival, depending on the existing chars in your code string? I'm not sure here, in case they know the algorithm) - it seems to me such a thing is virtually unbreakable


- As we all know, "they" will end up getting your tables even in this scenario, since you need them in working memory, and you will also have to store them somewhere (but perhaps encrypted by some other means?). What's perhaps of utmost interest here (as long as normal people don't get tortured in our "democratic" nations in order to "give up" their commercial secrets):

- What about Windows writing "working memory" to the hard disk (page file, hibernation), AND how to prevent it? And how to be sure of having prevented it? In this scenario, you "just" would have to cut the electricity (allowing some seconds for the memory chips to lose their current state of activity, though)

- Also, it occurs to me that perhaps SOME currently available (traditional, key-based, non-table) encryption programs do NOT yet have such backdoors but, thanks to the developments of these last days, all of their future updates will. So, if you continue to use your current encryption tool, but only in physical circumstances where they can't even update it without your knowledge (!), you MIGHT be relatively safe with YOUR data

- But what about emails?

- And, as in ancient times, what about staff of yours whom "they" simply and secretly "buy out"? BUT, that's for sure: ancient methods require much more effort than all this new electronic harm "they" have done to us these last years, so forcing "them" to rely upon human spying does not exactly facilitate "their" task

- So there's plenty of room for TECHNICAL discussion here!
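To make the "keys and tables are the same thing" point from above concrete, here is a toy sketch (emphatically NOT real cryptography, just an illustration; all names are my own): a single key string is stretched into pseudo-random material that serves both as a substitution "table" and as a transposition "key", so there is no conceptual boundary between the two roles.

```python
# Toy illustration only - NOT a secure cipher. One key string supplies both
# the "table" part (a 256-entry substitution table) and the "key" part
# (a block transposition order), showing they are the same key material.
import hashlib

def derive_material(key: str, n: int = 256) -> bytes:
    """Stretch the key into enough pseudo-random bytes for both roles."""
    out = b""
    counter = 0
    while len(out) < n + 8:
        out += hashlib.sha256(key.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out

def toy_encrypt(plaintext: bytes, key: str) -> bytes:
    material = derive_material(key)
    # "table" part: a substitution table (a permutation of 0..255) built
    # by sorting byte values according to the key material
    table = sorted(range(256), key=lambda i: (material[i], i))
    # "key" part: a transposition order within each 8-byte block,
    # derived from further key material
    block = 8
    order = sorted(range(block), key=lambda i: (material[256 + i], i))
    padded = plaintext + b"\x00" * (-len(plaintext) % block)
    out = bytearray()
    for pos in range(0, len(padded), block):
        chunk = padded[pos:pos + block]
        for i in order:                  # transpose within the block...
            out.append(table[chunk[i]])  # ...then substitute via the table
    return bytes(out)
```

The point of the sketch: whoever holds the key automatically holds the table, because both are derived from the same material; hiding one without the other buys nothing.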

As for spies, if I was Edward, I'd marry Anna, have some beautiful kids with that overly cute and certainly very smart woman, shut my mouth, and be as happy in the residential outskirts of Moscow as it gets...

Since we all see now that humankind (which is myriads of people and their respective governments) simply ain't worth being nailed to a cross - and that's all I'll say about the "political implications" of this current affair.

As for technical discussions, why not share our respective knowledge, and our respective hints and ideas?

Yours Truly,

from wikipedia: "In October 2010, Chapman posed on the cover of Russian version of Maxim magazine in Agent Provocateur lingerie." - hasn't this been a truly wonderful idea?

Thank you very much, Curt, I hadn't been aware of that. There is a difference between the thread title and the individual titles of the respective posts, and I assumed the thread title could not be changed except by a moderator. This is good news for my AHK intro, too, since I had to learn that Google is NOT able to make the "link" between "ahk" and "autohotkey", so my intro is virtually lost in Google, when in fact it was meant to produce "traffic" for DC! (and to be read by a maximum number of people who would otherwise buy some (mostly inferior) macro tool), so a better thread title would come in handy...
So thank you very much, very helpful hint!

As for identifying known photos there, well:

- when I have scrolled down to photo #1,000 (as an example), I KNOW the - special - url of that photo, meaning the url it has when, in the scrolling page, I click on that photo

- but it seems that knowing that special url is useless here, since, in order to browse photos (example, again) #1,001 to #1,500 here, in the scrolling page, I would need to identify that special photo #1,000 not as a "singularity", but WITHIN the scrolling page:

- in order to "open" that SCROLLING page "at photo #1,000" (or whatever its number, whatever its name / identification means there)

So the technical question here is:

How to identify a given photo (or other element) WITHIN the scrolling page, one which is currently NOT YET shown there:

Let's assume the scrolling page currently just shows photos 1 to 50, but I need photo #500 - and yes, I would know "something about it", from my previous scrolling down to it, from my previous "scrolling session":

How then to make such a page scroll down to that "element #500", or more precisely, two steps:

- First, how to KNOW which way "photo 500" IS identified, within the "scrolling page"? (Since just clicking on it will open its url in a NEW "window")

- Then, next time, how to make the scrolling page scroll down to that photo, identified in the previous session?

From looking at the source code of such "scrolling pages", it seems evident that the special url of "photo 500" is NOT yet listed somewhere in the code of them, when current state of affairs is that they just show photos "1-300"

So, there must be some "trick" to have them shown, and I fear in order to know this trick, "you" need to have better knowledge of HOW such "scrolling pages" WORK, to begin with?

From Google, I did not get any help accessible to me; in German, it would be "nachladende Webseite" (literally, a page that "reloads" more content), and if I translate this literally to English, I get numerous web sites dealing with firearms. So, first, I'd need the specific English TERM for such scrolling pages...

Alternative: you don't shut down Windows in the evening but run your comp for a week or so, and each new day you try to browse some 500 more photos there. The prob here: Windows will "choke" after some days, and I don't have the time to do this every day in a row.

So I hope for a better alternative.

Btw, people who opt for such pages, in tumblr or elsewhere, do not seem to have seen this problem that all their "previous" photos are literally LOST this way, for people "new" to their respective blog.

I'm aware this is a difficult question, but there are many blogs in this weird format, so an answer how really "to do it" would be tremendously helpful.
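For what it's worth, the usual English terms for such pages seem to be "infinite scrolling" or "lazy loading", which should help with googling. Under the hood, such a page typically fetches each new batch of photos via a background (XHR) request to an offset- or cursor-based endpoint; if you can spot that request in the browser's developer tools (Network tab), you can often jump straight to a given photo without scrolling at all. A minimal simulation of the idea - the endpoint shape, batch size, and names below are my own assumptions, not tumblr's real API:

```python
# Simulation of how an "infinite scrolling" page usually works underneath:
# JavaScript requests the next batch from an offset-based endpoint, e.g.
# something like /api/posts?offset=450&limit=50 (hypothetical). Here the
# network request is replaced by slicing a fake in-memory archive.

ALL_PHOTOS = [f"photo_{i:04d}.jpg" for i in range(1, 1501)]  # fake archive

def fetch_page(offset: int, limit: int = 50):
    """Stand-in for one background request the scrolling page would make."""
    return ALL_PHOTOS[offset:offset + limit]

def get_photo(n: int) -> str:
    """Jump directly to photo #n (1-based) via the batch that contains it."""
    limit = 50
    offset = ((n - 1) // limit) * limit   # start of the batch containing n
    batch = fetch_page(offset, limit)
    return batch[(n - 1) % limit]
```

With a real site, `get_photo(500)` would cost ONE request instead of minutes of scrolling through the first 499 photos - which is exactly why finding the underlying endpoint is worth the effort.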

General Software Discussion / Ad blockers, newspaper sites, etc.
« on: July 02, 2013, 06:16 PM »
So, I said, after 12 or 15 years of IE, I switched to Chrome, and first thing I installed was AdBlock, oh my! So read here my defence speech!:

As for "ads needed in order to finance the sites": well, I understand the argument, but I mainly browse such news sites, and during these last months there has been lots of propaganda there along the lines of "web users should PAY us for our quality journalism" (= on top of looking at ads, and of clicking on them, please...), when in fact the only interesting thing there is the users' comments, and certainly not any "quality journalism", which is blatantly absent from such sites.

Also, they all cite the example of the NYT, and there, there IS quality journalism (as there is elsewhere, but in NOT ONE of the many German or French newspaper or weekly magazine sites), and so their constant reminders that they deliver "quality journalism" have gotten greatly on my nerves these last months. All the more so since all of them deliver (heavily biased) "news" those same newspapers would have been deeply ashamed of just 10 or 12 years ago (cf. Der Spiegel today vs. that same weekly newsmagazine 20 years ago, at its time the best one in the world), AND heavily censor those aforementioned user comments.

In fact, in order to get some "backup" news, some details that shine a new light on the news they present, some info that makes you better understand what you hear and see, you now have to rely virtually exclusively upon certain user comments. And this means you must be "thankful" to them for any such instructive user comment they do NOT censor as soon as possible in order to "mainstream" their sites as much as they can. And yes, here and there you even have the impression that they deliberately "leave some important info in" such user comments instead of deleting it: because, here and there, they're just too ashamed of holding back ALL relevant facts, and, not being allowed by their owners to present those facts themselves, they at least leave such info alone when it comes from some well-informed user.

All this has brought me, once a heavily and long-time paying reader of Der Spiegel, Die Zeit (both expensive weeklies) and the FAZ (Frankfurter Allgemeine Zeitung), to seriously thinking: get paid by your owners, and by those powerful people you do your daily propaganda for, and if you don't get paid enough that way, go to he**.

And this way, I never considered clicking on any of their ads in order to get them some click money: I just endured those ads, since for IE there isn't a good ad blocker. Here, with Chrome, AdBlock works tremendously well, if I dare say!

As for blogs and other sites where ads "should" be clicked: well, it seems you can do some setting in AdBlock or Adblock Plus in order to see them again there... but I have to say, not being sure whether a click was enough, or whether I also had to do some clicking within that ad page in order for the blog etc. to get my click money (= without me buying anything there), I very rarely took the effort to click on such an ad and then navigate within a site I wasn't interested in; and in any such case I wondered whether what I did there was perhaps completely pointless without my buying the crap on offer there.

So, having installed an ad blocker now, after 12 or 15 years of browsing WITH ads, is also a means of getting out of the schizophrenia of just SIMULATING interest in ads in order to "help" bloggers, etc.

A similar phenomenon with Google ads (which ain't blocked this way): very often, instead of getting good hits (within the very first 30 or 50 hits there), you just (or mostly) get crap, plus the ads of some overpriced offerings, when in fact you want info, not to buy unnecessary goods/services; and the very fact of not finding what I was searching for triggered my clicks on such ads, in order to make their unwanted advertising COST them.

But this is weird, unhealthy, or, as the French say, louche et malade (shady and sick)!

We all pay a monthly fee for our browsing experience (in my case, 35 euro), so why not distribute some of this money (let's say 15 euro; or make it 25 plus 15 = 40 euro) evenly among those sites we spend our time on? OK, this would undeservedly advantage those "newspaper" sites (in my case at least), since they would be paid for my reading "their" user comments... but it would certainly be a much healthier approach to financing the web than all this unwanted advertising now.

And not speaking here of all those Google ads like "lawyer (specialty) (town)" on which 90 p.c. of the clicks come from "another lawyer (same specialty) (same town)".

So discussing ads is discussing visual clutter and burnt money... and whenever I want to buy something, I search in vain for offers that'd read "look here, we're NOT more expensive than our competitors, but we have a really good product/service: here's proof:..." Never ever. Every time I want to spend money, I have to search for hours, delving into biased "reviews" and offers made as intransparent as possible in order to make them as in-comparable as it gets, etc.

What's blatantly missing, especially, is a thing honorable vendors could easily do now, in most countries, and yes, even in Germany:

They could compare their product/service to similar ones from their competitors, and they could do it in an honorable way: listing not only their own products' strengths and their competitors' products' weaknesses, but making a balanced, real comparison - on this condition = at this "price" of total honesty, they could even do it in Germany, where "comparative advertising" had been forbidden for many years.

This way, products' and services' quality would be literally multiplied within a few years, and you know what this would imply for the "economies" of the Western Hemisphere:

They would literally roar up!

But no, "everybody" agrees instead on lying to us, on taking every effort to keep us from knowing the negative core aspects; in a word, they treat us like idiots: they BLUR our knowledge instead of widening it.

And that's why I'm very happy to use Chrome, with AdBlock, now, at last.

Thank you, Tinman57; as soon as I have both comps side by side, I will make the effort to check / compare the settings (that's why I leave them in their original state here, with the old IE8 version) - and that will be my very last contact with IE - but I owe this (and then reporting my findings) to the very big help I've got here with this matter.

Also, I'm thankful to cmpm, since he guided me to an alternative I was willing to trial, and that was a big step for me in leaving IE behind. I've already switched to Chrome now, in spite of the rather intense blue tabs cramped right under the "ceiling", and in spite of not having found a way to change that color to something less intrusive - all these "skins" ain't neat enough, i.e. they put some unnecessary coloring "behind" the tabs that then appears between them, and so on, so I'll have to live with blue tabs. And I, after 10 or 12 or 15 years of browsing with various IE versions, will not look back at all those ugly ads there slowing down my system: having installed AdBlock, my screen is much "lighter" now than it was before, and much more pleasant to look at, even with too-blue tabs there.


Well, I ended up opening a new thread for this; it's perhaps a subject worth separate discussion!

app103, Curt, Carol and StoicJoker,

thank you all very much for these kind ideas! As said before, I should have held back these questions until I was able to try the respective ideas, since on comp 2 I've just got that working old version for now; but I'll try them out and will report, especially in view of the fact that, from what you've found, app103, it's obviously NOT the new version that is at fault, but some setting of mine, so "reset to factory settings" is primordial here.

VERY funny here: even the hiding of the last lines in this very text entry field, which made it impossible to enter more than just a few lines of text here in IE8, is gone now... now that I've changed to Chrome, for good (what an incredible relief)... no wonder nobody ever bothered to look after this; IE8 compatibility isn't really a serious objective anymore (if you don't sell things, that is...):

___The macro trap___

I also installed Maxthon cloud browser, which also seems to be quite OK (but in spite of trying "skins", I couldn't get rid of some baby-blue background). And then I installed Google Chrome, as they call it. Well, it seems that in the end I have found my new browser... (And nobody today will believe anymore that Chrome will store more of your things, when fact is, all of them will store all of your things for their masters-in-the-dark - and yes, I know there are so-called stealth Chrome versions, but I don't believe in them either.)

Back to macros

So Chrome doesn't have a caption, and the tabs are on top - at least they are neater than in Avant (where at least it is possible to hide the awful toolbar, so that the address line begins where it should begin to begin with: at the left of your screen, not somewhere in the middle). I didn't get back to my macro file (I "lost" my USB stick together with the AC/DC thing - but this way, at least, hdd and backup are in different places, for once!), but I tried the AHK Window Spy. So, the good news is, all these windows have "titles" and "classes" indeed, independently of any visible caption: you just have to look them up, once and for all, and then write your scope macros; the caption is really only there for your being content with what you see on screen, or let's call it "for historical / tradition-compatibility reasons".

And also, there is a way to retrieve the content of the address line, by "visible text in slow title match mode", just as in IE. And all the necessary keyboard shortcuts are there.

I should have looked all this up months ago, but it seems I first had to "lose" my traditional working environment in order to get rid of this crazy IE8 sh**.

At the end of the day, as soon as you know the necessary commands (i.e. how to navigate between tabs, how to address the address line, how to create a new tab, and so on), transposing even numerous macros from one such browser to another should take 20 minutes... but the macro trap exists nevertheless: you build up your macro system around some specific applications, and then you're stuck with them - not by real necessity, but because you "invested some work" into them... when in fact that was an investment you had to make anyway, independently of any such application (e.g. dozens of specific urls, all two-key on two F keys, together with a second, abc key then), and in that conceptual work the non-transferable "specifics" of that specific application were only a very minor, negligible part (that's why serious applications (not Ultra Recall, though!) have a function "replace x with y"). And this means most of your work is NOT lost by a transfer to alternatives!
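The "replace x with y" point about macro portability can be sketched concretely: keep every browser-specific keystroke in one table per browser, and write all macros against abstract actions only. Switching browsers then means swapping one table, not rewriting dozens of macros. (The keystroke strings below are illustrative AHK-style send sequences I made up for the sketch, not verified bindings.)

```python
# Table-driven macro portability sketch. Each browser gets one keymap;
# macros are lists of abstract action names, expanded on demand.
# Keystroke strings are illustrative examples, not verified bindings.

IE8 = {"address_bar": "!d", "new_tab": "^t", "next_tab": "^{Tab}"}
CHROME = {"address_bar": "^l", "new_tab": "^t", "next_tab": "^{Tab}"}

def build_macro(actions, keymap):
    """Expand a browser-independent action list into concrete keystrokes."""
    return [keymap[action] for action in actions]

# The portable part - written once, independent of any browser:
open_url_macro = ["new_tab", "address_bar"]

# Only the table changes between browsers:
# build_macro(open_url_macro, IE8)    -> ["^t", "!d"]
# build_macro(open_url_macro, CHROME) -> ["^t", "^l"]
```

The design point: the per-browser "specifics" shrink to one small dictionary, which is exactly the negligible, replaceable part described above.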


___Tabs above address line___

There are two conceptual points of view for this:

Mine is:

Address line first, since there are some MORE addresses out there on the web, NOT YET present in my tabs. So "address line urls" are "farther away", whilst those addresses in tabs are "nearer to me" - hence the logical order: address line, then tabs, then the content of the current tab.

Another one is:

Tabs first, since they are the topmost category. Then the address line, since your choice here determines the content of the selected tab... or even: its current content is, again, the title of the current tab, but in full length... and then that content. This point of view seems to be as logical as mine above, but we see here that you should at least have a CHOICE, PLEASE!!!


___So it's Chrome or FF, depending on what you like best... but why others?___

Chrome vs. FF seems to be a question of personal style... but IE8 or Safari 5 seem to be for lazy people only, preferring lots of fuss to some 3 or 4 hours of delving into alternatives.

Opera perhaps, for some? But it seems (I trialled a previous version of it, didn't like it, but wouldn't pretend to really know it now) it's for some special uses only - or had it been the vanguard some 5 or 8 years ago, and then got left behind?

I understand Avant has got several alternative rendering "engines", in order to display sites "like" Chrome, "like" (a newer) IE, "like" FF - would this be a real compatibility advantage? Are there many sites Chrome renders faultlessly but FF does not, and vice versa?

I'm not really happy with Chrome, but I can very well live with it, whilst living with IE8 had become a chore.

Thank you, fellows! You asked me the right question: why bother?

The lesson here: masochism should at least be transitional, if it cannot be avoided altogether, which would be best.

Does anybody know how to go far back within those tumblr "archives" that aren't organized as, e.g., 12 new photos on each new page (where you can select page 124, by its number, within the address line), but which are made of just ONE endlessly scrolling page? Let's say you want to see what they published there some 3 months ago... The prob here: you'd have to scroll down, again and again, through the same 1,000 or 1,500 photos before even reaching the first photo you might be interested in! I won't give you response times for this in IE8 - they are incredibly awful even for the first 500 such photos... but Chrome isn't THAT much better here on my system (with 2 GB of memory). So how to do it, or what to do alternatively, except for making a spider attack in the night?

I even thought of "hiding photos" while endlessly scrolling down, and then "unhiding photos" after having scrolled down by 1,000 photos. Any better idea than that, anyone (since my system will THEN break trying to display all those 1,000 "unnecessary" pictures before displaying any "new" (i.e. sufficiently old) one)?

So this is quite an awful web page format, but quite common on tumblr, and perhaps some web specialist here knows how to handle it best, from the outside?


I'm sorry, I should have put a question mark at the end of the title line.

Another example of these special "back-loading" pages - or what's the correct denomination of these, in order to google for an answer? - is this one:


Here again, every "pgdn" or "end" pressing will load more pictures, again and again. Are there elements in the source code that could be of interest, to check, or even to manipulate, in order to get to the "depths" of such a page, more quickly than by incessant, endless scrolling down?

4wd: I don't think it's a cookie-settings issue, since without cookies almost nothing would work, so I allow them (but have them automatically deleted afterwards).

Of course, I don't have comp 1 running, so I should perhaps have asked my questions once I had got my hands on it in running state again, but I've been too eager to find an alternative RIGHT NOW! ;-)

cmpm, Avant Browser is not something really brilliant, but it works. Thank you for this idea; I installed it, and for some weeks I'll go by it!

F5 is refresh, F6 is address-bar focus, F8/F8 is prev/next tab, Alt-s is search-bar focus, Ctrl-d is add to favs, Ctrl-n is open in new tab (they call tabs "windows" (well, to be precise, it IS a - tabbed - window, after all...), but very fortunately don't mean independent windows by this, which would have been unacceptable) - so I don't need a menu.

But I don't know yet if my AHK macros will work: the content of the address line must be identifiable (meaning not by loading it into the clipboard, but as such), and there is no real caption, but tabs instead, right at the top - it's because of that design detail that I never stayed with the Google browser or its derivatives...

But IE and MS have become unacceptable for any pc user with a minimum of respect for himself or herself, that's for sure, so...

Will share my experience with Avant when I will have tried to transpose my IE macros.

Btw, for IE on current pc 2, the version is "8.0.6001.18702 IS" (per "Help - About"), so I tried to find a download of version "8.0.6001", but it seems the respective links have all been updated to download the latest "8" version, which is a common prob with all these download links for freeware or shareware.

And btw, again IE: it seems this old 8.0.6001 version hasn't got the probs with Disqus I opened this thread for, but has got many other probs, especially its ridiculously bad memory management. Of course, it's a different pc, but with the same processor and the same 2 GB of memory (and plenty of virtual memory in both cases), so I suppose the old IE version is the culprit, not my pc or pc settings.

That's why I will now try and really find an alternative to my IE crap - I'm ready for a change, finally.

Some months ago, I had briefly tried the Apple browser, but found it hadn't the necessary keyboard shortcuts then. Will trial the whole bunch again.

Btw, Avant got 10th place out of 10 tested browsers in the test, but then, from their testing of software I'm more intimate with, I can tell that their criteria, and their test results, are often rather weird...

At the end of the day, AHK is able to identify a window even without a traditional caption, so... but am I the only one to prefer captions, just out of visual tradition? Anyway, having all these tabs right at the top... it's a real prob for me!

And then, (almost) all other software does NOT do that, so it cannot be considered a "new standard" or such; it was just a design idea by Google, then shared by some and abhorred by many...

Be this as it might be... in any case:

Carol, you're right, IE (whatever version) isn't worth my attention, efforts or loyalty anymore.

Some update here: Safari for Windows seems to be "dead": I had tried v. 5, and v. 6 does not seem to have been ported to any Win version. So that is out.

Yes. But I only change things when really necessary, and whenever I change without real necessity, within 1 or 2 years at the latest I put all my stuff back into my ancient things. I know this is not really the way I should do it.

Btw, on this comp 2 (with the about 1-year-old Win XP (and IE8) version(s)), installation of Adobe Reader XI... and then even X (!!!) FAILED!!! So I installed Foxit Reader 6, albeit with the traditional toolbar. Now, this is much more pleasant than my Adobe Reader X experience had been on comp 1: visually very light, fast, tabs (!!!)... only drawback: the "hide the (really ugly!!!) toolbar" setting isn't persistent, so the ugly toolbar reappears with every new pdf I get from the web.

As for FF: whenever I have switched to it, I have quickly gone back to IE8. There's too much "middle grey" in the FF layout (I know there are other color schemes, but don't know how to try them), and especially, I like that in IE the address line is ABOVE the menu, whereas in FF it's beneath the menu! I'm aware these remarks might appear ridiculously minor, but that's why I always switch back to IE as fast as possible... And btw, do you think the iCult would have grown the way it has if looks and ergonomics didn't count at all?

Thus, any ideas how to transfer my running, 12- or 14-month-old IE8 version from pc 2 to pc 1, in order to install it there too? Or which of the possible settings (out of all those 300 IE settings there) should I check first, in order to try to fix those probs there? Unfortunately, I think it's NOT a settings prob, though: a year or so back, I think I remember having read that IE had NEW problems with some web sites... but of course I had thought they were working on them (ha, ha!), so I was eager to install all possible updates. So now, a year later, and in view of the perfect running of my "old" IE version, the best tactic seems to be to "go back a year", IE8-wise - but how? (And it might be the old IE8 version AND something in the old Win XP version, too.)

So, my question in a more general way: How to DOWNGRADE that MS crap, instead of monthly updates?

I normally use comp 1, with IE8 and the latest MS updates; unfortunately, this IE8 doesn't work properly (anymore) with many websites (I've got XP, so IE9 won't install). So, in order to get past such probs, I also installed FF, using it just for such problematic sites.

Now I "lost" the AC adaptor for my comp 1 (I will have to travel 160 km to retrieve it, in some weeks), and so I unpacked pc 2, which in fact I hadn't used for months (or was it years?), and without having done the "necessary" MS update installs.

Now, a revelation: many of the websites that don't show up properly in IE8 on my up-to-date comp 1 don't cause any problems here, with an IE8 that's a little bit older (or is it just different settings here?), and certainly with an XP SP3 version many months older than on comp 1.

To give a precise example: on the website, it's the users' comments that are really interesting (as in lots of other press offerings, and many more). Now, in order to read those comments, I had to revert to FF on my comp 1, "in spite of" doing every possible MS update (when in fact it's probable those updates CAUSED those problems!)... and here, on comp 2, I CAN read the comments again (they would not open on comp 1 anymore, for many, many months now)! I don't have to say what an incredible relief this is for me; it's so much more pleasant to use just ONE browser, and I'm fine with IE8 when it works!

Hence my big question: how to ensure that on pc 1 (when it works again) I can read the "Disqus-powered" user comments again, like in old times?

Should I check for some special settings? And yes, I've got very old backups, but they would destroy ALL new things in my pc 1 system, so going back a year or two isn't a viable solution.

(In the normal course of things, you'd assume that you could resolve problems with updates; in MS' case, it seems to be the other way round. How to get an early version of IE8 for download, then, after having de-installed the current version - if that is possible at all? Or, to put that question another way: on pc 2, I obviously have got a working, old (but installed, not fresh) version of IE8 - how to retrieve that, in order to install it over my new crap version on pc 1? (All those are English versions, which I prefer.))

Hello Sicknero

"But the second screenshot on the page shows a "Search for images" dialog with add/remove image buttons, which looks like it will fit the bill."

I didn't get this by looking myself; now I understand. If you're right, this screen would represent a "virtual folder" with 1 or more such photos, to which all the others would then be compared: this would indeed be an almost perfect solution for the task, especially if you collect some photos here that are similar in a certain way - though it's doubtful whether the algorithm is then able to "get" that common feature... This relates to:

"Face recognition": in fact, in practical use, it would go way beyond identifying multiple photos of a certain person, as in "wedding" photography (or similar situations). Take a certain detail on numerous photos, let's say a landscape with a barn, and that barn from numerous perspectives (front, sides, back), in different areas of the respective photos, and at various zoom levels: it would be helpful to have software able to identify that barn, OR even similar barns, upon request - which is two different tasks, in fact:

- a setting "try to find every instance of this object / person / barn / whatever" (which would imply that within a certain photo you do a virtual cropping of some area, in order for the software to "know" what exactly you are after - and why not such "selective zones" on more than one photo at a time!); this means the algorithm would have to make guesses, and, when in doubt, EX-clude the find

- a setting "try to find similar objects / barns / persons...", which would mean that when the target is a woman, the algorithm would IN-clude other women, on condition that their hairstyle / hair color or face shape or some such is similar: there could be options like "by predominant color", "by predominant shape", or combinations of these
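Just to show the basic idea behind such "find similar" settings is not magic: the classic cheap trick is a perceptual "average hash", where similar pictures get nearly identical bit patterns. Below is a very reduced, purely hypothetical sketch in Python - the function names, the `max_distance` knob, and the tiny 4x4 "images" (plain lists of grayscale values) are all mine, not from any actual product; a real tool would first decode the file and shrink it to something like 8x8 pixels:

```python
# Sketch of "find similar" via a perceptual average hash.
# Images here are just 2D lists of grayscale values (0-255).

def average_hash(pixels):
    """One bit per pixel: 1 where brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; lower means more similar."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def find_similar(query, library, max_distance):
    """Names of library images whose hash is close to the query's."""
    qh = average_hash(query)
    return [name for name, img in library.items()
            if hamming(qh, average_hash(img)) <= max_distance]

# Tiny 4x4 demo "images":
barn_front = [[200, 200, 10, 10],
              [200, 200, 10, 10],
              [10, 10, 10, 10],
              [10, 10, 10, 10]]
barn_side  = [[200, 190, 10, 10],   # nearly the same scene
              [200, 200, 10, 20],
              [10, 10, 10, 10],
              [10, 10, 10, 10]]
meadow     = [[10, 10, 200, 200],   # a very different picture
              [10, 10, 200, 200],
              [200, 200, 200, 200],
              [200, 200, 10, 10]]

library = {"barn_side": barn_side, "meadow": meadow}
print(find_similar(barn_front, library, max_distance=2))  # → ['barn_side']
```

The "by predominant color" / "by predominant shape" options I wished for above would amount to swapping in different hash functions and letting the user pick which ones count.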

"want to search for similar pictures on my hard drive to create a slideshow" - here again, the algorithm should be able to search for shapes or tonal ranges, with settings made in palettes or the like. Of course, development would be rather demanding, and if you think twice, it probably occurs to you that it's Adobe indeed who should have implemented such functionality in their image CATALOGING software, the brand-new Lightroom 5, since

- LR alone is sold in sufficient quantities to justify the development cost, and

- such "find similar" functionality has its natural place in image catalog software, i.e. it really becomes interesting for professional photographers who have collected a high 5-digit, or even a 6-digit, number of photographs:

It's all about building up "virtual collections" at any time, even years later if needed, instead of being forced to foresee your later needs and do heavy tagging up front. You could even say that photo compare software is a software category that would not exist at all, had Adobe done their homework right!

"They're in the news again for that this week" - that's why I mused about how to get sms by pretending to take photos - indeed, at the end of the day, it would probably be "sufficient" to have google image search's functionality only... but for your own stuff... and without google indexing your own stuff and putting it onto the www at the same time!

"I was thinking of it as a possible workaround" - of course, a simple macro would do that, but the above "virtual folder" or "collection" functionality seems to be really good. Another example, btw, of how NOT to do screenshots: They should have put just SOME photos there in that screenshot, in order to make people see that it's a kind of "form" in which to enter "examples to search for" - but no, they filled it all up, so that for me (and some others, I suppose) the intended functionality was obscured.

Age of components: I'm not against the respective "design" of the screen, but I think the "texture" is awful - by which I mean the visuals of the "background" of the components used there, which "look like really old Delphi stuff" - not the arrangement of the controls, etc., which is a completely different thing. (Btw, the brand-new Foxit Reader 6 is a good example of very "modern-style" visuals.)

As for FS Viewer, neither aspect is the way I like it: the components are obviously very old, for one, but worse, FS doesn't give you the option to get rid of some controls (and they don't even answer your mails when you say you're willing to BUY MULTIPLE (!) licenses if they introduce such an option), i.e. you cannot hide all those toolbars cluttering the whole screen - and so, for just VIEWING photos (for which you simply don't need all these controls), it's really, really ugly. The Swiss product "Fast Image Viewer" (a free version is available if you can do with just some standard formats) is at the other extreme: Really fast - the fastest thing I ever trialled among picture viewers (well, it pre-loads pictures, among other measures) - and a really beautiful screen (and yes, you can access the palettes, for processing your photos, by shortcuts, or by moving the mouse to specific screen borders) - highly recommended!
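That "pre-loads pictures" trick is worth a word: while you look at picture N, the viewer already decodes picture N+1 in the background, so the next keypress feels instant. A minimal sketch, assuming a hypothetical `load()` callback standing in for the real image decoding (class and method names are mine, not from FIV or any other product):

```python
import threading

class PreloadingViewer:
    """Toy read-ahead cache: decode the next image while showing this one."""

    def __init__(self, files, load):
        self.files = files
        self.load = load          # callback: filename -> decoded image
        self.cache = {}
        self.lock = threading.Lock()

    def _prefetch(self, index):
        # Decode one image ahead of time and park it in the cache.
        if 0 <= index < len(self.files):
            name = self.files[index]
            img = self.load(name)
            with self.lock:
                self.cache[name] = img

    def show(self, index):
        """Return the image at index, then prefetch its successor."""
        name = self.files[index]
        with self.lock:
            img = self.cache.pop(name, None)
        if img is None:
            img = self.load(name)           # cache miss: decode now
        t = threading.Thread(target=self._prefetch, args=(index + 1,))
        t.start()
        t.join()  # joined only to keep this demo deterministic;
                  # a real viewer would let the thread run while you look
        return img
```

In a real viewer the background thread keeps running (and you'd prefetch the previous image too, for backwards browsing); the join here just makes the toy predictable.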

Again with regard to FS: their FS Capture, even in the latest paid version, is unable to switch the target of the screenshot back and forth, from clipboard to file and back, with a keyboard shortcut - which would be a very simple thing for the developer, but he just doesn't do it. And in order to toggle by mouse, you have to display the program window to begin with, when in fact the big interest of such a program lies in its being available and ready for use even when minimized. The same applies to multiple other commands in FSC that are likewise only reachable through heavy mouse movement and screen clutter. So, ergonomics-wise, FS products are catastrophic, and the FS developers don't take any advice whatsoever.

As for photo viewers, I suppose DO isn't bad at all here, and XYplorer did a really good job with their special viewer pane last year (except for hiding it within the File (!) menu...) - I'm VERY pleased with my XY lifetime license now! (Ok, that's not free where FSV is, but as said, FIV is free too if you don't need it for special file formats.)

"I never progressed further than writing text adventures in BASIC on a Commodore Pet..." - that's why I so strongly recommend AHK: It's easy, and its returns are tremendous, meaning just a few lines of code will greatly facilitate your tasks! It's like VBA, but much easier, and it works for your whole system, not just for MS applications!

Any experience with the MindGems thing? Will ask again next weekend. ;-)
