Show Posts



Messages - helmut85

1
Ath, I don't understand this symbol. All I can say is that Paul Keith is my friend. His way of presenting things is terrible, but very rare are those unique people who try to THINK instead of repeating common "truths" in unison. I'll continue this thread in "They're still standing."

2
"I created this analogy so that you would understood too why full web clipping has a unique future over partial web clipping." Paul, again, most of the time, I clip the whole text of a web page but then quickly bold important passages, or do it later on. It's not so much about cutting out legit material there (but cutting out all the "crap" around), but of FACILITATING FURTHER ACCESS upon that same material: You read a text, you form some thoughts about it or simply find some passages more important than others, and so you bold them or whatever, the point being, your second reading, months later perhaps, will not start from scratch on then, but will concrentrate on your "highlighted" passages, and also, these are quickly recognizable - now for simply downloaded web pages: you'll start from zero, and even will have some crap around your text. I think it RIDICULOUS that these pim's do their own, sub-standard "browsers" (e.g. in MI, in UR...), but don't think about processing nor of clearly distinguishing of bits within these "original" downloaded pages. All this is so poor here, and in direct comparison, the pdf "system" is much more practical indeed. This being said, I hate the pdf format, too, for its numerous limitations. But downloaded "original" web pages are worse than anything - and totally useless; as said, in years, I never "lost" anything by my way of doing it.

An example among thousands: You download rulings. You quickly bold the passages appearing important for your current context. Then you clip, from this whole text body, some passages - probably you'll do this after some days, i.e. after having downloaded another 80 or 120 rulings in your case, i.e. you won't do this before really knowing what's decisive here, hence your need to read, and to "highlight passages within", many more such rulings. So what I'm doing here then: I trigger my clipping macros on some passages within these bold text blocks (or even between them, if in the meantime it occurred to me that my initial emphasis was partly misplaced), and I paste them, together with the origin, url, name of item and such, into the target texts.
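
For illustration - a minimal sketch in Python of what such a clipping macro could look like (the target file name, the metadata format and the clip() helper are assumptions of mine, not any pim's actual API; pyperclip is a third-party clipboard library):

```python
# Hypothetical clipping macro: take the currently copied passage from the
# clipboard and append it, with its source metadata, to a target text file.
import datetime
import pyperclip  # pip install pyperclip

TARGET = "rulings-notes.txt"  # illustrative target text

def clip(source_url: str, item_name: str) -> None:
    passage = pyperclip.paste()  # whatever was last copied / selected
    stamp = datetime.date.today().isoformat()
    with open(TARGET, "a", encoding="utf-8") as f:
        f.write(f"\n[{item_name} | {source_url} | clipped {stamp}]\n{passage}\n")

clip("http://example.org/ruling-4711", "Ruling 4711")  # placeholder source
```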

What do YOU do (rhetorical question here) with your "downloaded original web pages"? You all must read them a second time before doing any clipping. That's what I'm after here: The original web page becomes a hindrance as soon as you have quickly read it once: After this first reading, it should become something more accessible than the original web page is. Pdf is much better here, and my system of downloading plain text, then re-formatting it to my needs, is best... if you have just a few pictures, tables, formulas, that is - hence the ubiquity of pdf in sciences, and rightly so.

But mass downloading of web pages in their original format is like collecting music tunes (incl. even those you don't even like). It's archaic, it's puerile, it's not thought-out, and it's highly time-consuming if done for real work. And that's why I'm not so hot about progs that do this downloading "better" than other pim's do, but which ain't among the best pim's out there.

It's all about choices - but today's consumers make lots of wrong choices, that's for sure. Hence my "educational stance" I get on so many people's nerves with. But then, somebody explain to me why it would be in anybody's interest to have a replica as faithful as possible to the original within your Ultra Recall browser (! and which probably is much better in your Surfulater browser), months after downloading that web page, the real prob being that both force you to begin at zero with its respective content then.

This way of doing things is simply, totally crazy, and 95 p.c. of people doing it this way is no reason to imitate their folly. Oh yeah, such pim's sometimes offer a "commentary" field for you to enter your thoughts about that web page and such. Ridiculous, as said - far worse even than so-called "end notes".

Do you realize that in the end, it's again about the "accessibility of information"? Let's say we both read all these things, and stored all these things. Then, in order to really have them available, in their bits, for our work, I have to browse 100 rulings for their important passages (remember the highlighting was done on first reading, so perhaps my reading time there was about 120 p.c. of yours) - whilst you will read all these 100 rulings again, which makes more than double reading time, since your second reading will be slowed down by your fear of not having "got" all the important passages (when, in my work flow, it's probably time for underlining sub-passages now), and then, as I do, you'll export your clips.

Ok, in one single situation your way of doing things would appear acceptable: When you download pages without even reading them first in the slightest. But then, is it sensible to do this?

There's some irony here, too: There's a faction of pim users who say, I'm happy with plain text. I'm reverting web pages to plain text, whilst many people probably fear loss of information when they don't "get" the formatted text of the web page in its original form - and who then say, I need formatting capabilities in my pim or whatever. So, "do your own formatting". Once more, most of the time, I get the text in whole, then "highlight" by formatting means. It's rare that I just clip a paragraph or so, because indeed I fear my clipping criteria will change. And indeed, it's very probable you need some ruling for a certain aspect now, but for another aspect in the future, so it's sensible to download it in full, then clip parts from this whole text body - but even then, it's of utmost utility to have important passages "highlighted", from which you'll clip again and again.
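
For anybody wanting to try this plain-text way: a minimal Python sketch of "reverting web pages to plain text", assuming the common requests and BeautifulSoup libraries (the url is a placeholder, and which tags count as "crap" is my own assumption):

```python
# Fetch a page and strip it down to its plain text for later re-formatting.
import requests
from bs4 import BeautifulSoup

html = requests.get("http://example.org/ruling").text
soup = BeautifulSoup(html, "html.parser")
for tag in soup(["script", "style", "nav", "header", "footer"]):
    tag.decompose()  # drop the "crap" around the real text
text = soup.get_text(separator="\n", strip=True)
print(text)  # plain text, ready for your own bolding / highlighting
```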

And in general, please bear in mind that you choose at any given moment. Ok, it's the whole page you download, but from a site containing perhaps 150 pages or more, i.e. even when you try NOT to "choose beforehand" in order to preserve the material in its entirety, your choice will have been made in deciding which pages you download in full, and which you didn't download.

So, put a minimum of faith into your own discernment: When choosing web pages to download, AND when choosing the relevant parts of the web pages you do download.

And Paul, it's of course about bloating the db or not, as you rightly said: It's about response times and such (whilst some db's can get much bigger now, cf. Treepad Enterprise, the higher-priced version of Treepad). But the real point is, try not to bloat your own data warehouse with irrelevant material, even when technically you're able to cope with it: Your mind, too, will have to cope with it, and if you have too much "dead data" within your material, it'll outgrow the available processing time of your mind.

And as I said elsewhere, finding data after months is greatly helped by tree organization, in which data has got some "attributed position", from a visual pov. Trees are so much more practical than purely relational db's, for non-standardized data. Just have 50,000 items in an UR db, and then imagine the tree was missing and you had to rely exclusively upon the prog's (very good) search function.
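
To make this concrete, a toy Python sketch of what a tree gives you: items are found by walking a remembered path, not by full-text search alone (structure and names are my invention, not UR's actual format):

```python
# Minimal outline tree: every item has an "attributed position" you can walk to.
class Node:
    def __init__(self, title, content=""):
        self.title, self.content, self.children = title, content, []

    def add(self, child):
        self.children.append(child)
        return child

def find_by_path(node, path):
    # e.g. find_by_path(root, ["Rulings", "Case 2013/17"])
    for title in path:
        node = next(c for c in node.children if c.title == title)
    return node

root = Node("db")
rulings = root.add(Node("Rulings"))
rulings.add(Node("Case 2013/17", "bolded passages here..."))
print(find_by_path(root, ["Rulings", "Case 2013/17"]).content)
```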

All the worse then that there'll never be a REALLY good pim, i.e. that UR and all its competitors will remain with their innumerable faults, omissions and flaws. And don't count on Neville to change this - I'd be the happiest person alive if that happened, but Neville won't do it, it's as simple as that.

EDIT : It just occurred to me that I never tried to download web pages into a pim, but when you do, you will probably never get them out and into another pim, so even when downloading them, having them in a special format or special application, then just linking to them, seems preferable, independently of the number you tend to download... And that would be .mht for just some pages, in my case, or WebResearch for pages in numbers, in your case - that'd be my advice at least, if you cannot leave this web page collecting habit behind you. Stay flexible. Don't join the crowd of "I've got some data within this prog and that, which I otherwise don't touch anymore" - I read lots of such admissions, and it's evident these people did something wrong, at some point.


3
Paul, I know I haven't answered some points you made, and it's my lack of time these days (will be better in Feb when I recoup some of it, promised - it's just that I have to read you 5 times VERY SLOWLY before having a chance to get some points, in a minimum of order, and I miss this time).

"There's still SO MUCH lacking in PIMs it's crazy.", you say. Oh yes, and that's about CHOICES. So back to Neville for once.

Neville, you made your choices, which is your right, but to be frank, I'm unhappy with your choices. Fact is, from the price of your editor, and from the number of copies you claim to have sold, anybody can do the maths, and even spread over the years, it's evident you got millions for your work on your editor. This is very fine, and I'm happy for you.

Problem is, you didn't invest much of this money, time-wise, in the perfection of your editor, which is exactly what I would have done. I know a little bit about editors, as I know a little bit about pim's, and as far as I can say - I trialled your editor - you stopped development of it at a rather early stage, i.e. I know lots of functionality available in other editors, even for a much lesser price, that's missing in yours. So it's a FACT when I state your editor is overpriced, and I'm too "dumb" to see where the "real" quality of your editor might hide. Stability? Lots of good editors are stable. Ease of use? Not so much. So I don't get it, but acknowledge your numbers... and then I ask myself, with such numbers to back further development, why stop development?

Because, I assume, marketing considerations got in your way: Instead of developing the perfect editor - which yours is rather far away from, in spite of its price - you saw (I assume) that traditional editors are "dead", so further development would have "cost" you big time, without procuring any substantial returns, especially when compared to those you had already got with your editor as-it-is.

And then, Surfulater. Some very good ideas, some real brilliance, and then, instead of developing a really good pim which, on top of being a really good pim, is the best pim-like web page downloader (well, one page at a time, manually - let's not start dreaming too much here!) - instead of developing such a beast, for marketing reasons, you stop development, and you do it from scratch again, more or less web-based, and in that proprietary fashion I wrote about above. All this is up to you, it's your product line, and for your purse, I don't have the slightest doubt that your decisions - to stop development on your editor in order to make gains from Surfulater, and then again, to stop development on desktop Surfulater in order to bring out a new product - are highly beneficial.

But then, it's developers like you, Neville, brilliant developers whose eyes are too much on their purse instead of on the excellence of their product, who are responsible for what Paul says: "There's still SO MUCH lacking in PIMs it's crazy."

There will never be a brilliant editor, never a brilliant pim, never anything really outstanding in any general sw category - because those brilliant developers who could do it, at a given moment, stop, and then do something else from scratch instead, because of the financial gains they see lying there, and they want them. There are sociologists who say everything above 5,000 euro / 6,500 bucks will not make you any happier than those 5,000 euro, so there should be lots of room for successful developers to develop a product further than economic "reason" will tell them. But it's simply not done, anywhere.

And don't say, "hold it slick, people want it slick" - people want to have easy access to elaborate functionality, they want intuitive ways to do their work, and the simpler this gets, the work the developer has to do. But no, they don't do their work, they have too much to do in their strive for the dollar. (It's similar for file commanders and many other sw categories: Nothing REALLY good out there, they all stop by that "further work wouldn't be enough return" for me point.

And that's why, 35 years after the intro of the pc, and the pc "dying" now, pc sw has never reached a mature state - not even Word, which doesn't become a good text processor except by applying lots of internal scripting to its core functionality (but which at least allows for such internal scripting - most pim's do not).

And then again: Where's Surfulater's functionality to smoothly PROCESS what you got from the web with it? That's my point. Developers do have the right to stop development early on, but I then have the right to not be happy about what I see, and to let developers know (not that this makes any difference to them: that, indeed, I've seen a long time ago).

4
I

"There has been some debate here about political and related topics. And many at DC (including our host) feel this is not really an appropriate venue for it."

Thank you, 40hz, that explains a lot. On the other hand, this way, the owners (the owner and his "men") of this forum have to ask themselves: on which side of the table do we place ourselves by this stance?

Just today, there's press coverage of an adjacent subject I missed covering in my "essay" above, which is packet identification and paying for some packets: payment by the sender (here Google) for the "infrastructure" of the web provider (here: Orange, in France), in order for the customer (= you and me) to receive the content in question.

It's obvious that these Google vids, with all their traffic, constitute a prob for those "providers", but then, in my country, you pay them 50 bucks a month for a "flatrate", and some of these "providers" don't offer a real "flatrate", but impose a limit of 50 GB per month.

So the real problems here are: soon there will be a time where with one provider you'll get "everything", whilst with another, you'll get "anything but Google vids", and then, in the end, "anything but a, b, c...z" - and you cannot change your provider each month, you have minimum contract terms, and periods of notice. That's prob 1.

Prob 2 is, more and more it will become accepted to have these packets inspected, and eventually, they could even refuse to transport encrypted packets on the pretext that these could contain, not even illegal content, but simply non-contractual content.

Thus, I had thought that DC was a "users'" forum, and did not represent the "industry".

( Ironic here: on the "Die Zeit" site, today, they speak about a possible "perfume" or such that will enhance your natural body odour, and somebody leaves the commentary: well, this is new indeed, for the first time they will sell you something you've already got by nature! Somebody else: a good work-out could enhance this natural body odour as well (the point in the article being that females would like to smell this odour in order to feel attracted (or not), a case of "biological matching", by "matching genetic material"). Why am I speaking about this? Because above, I said the "industry" sells academic papers, at horrible prices on top of that, to the general public who's already the owner of these academic findings, having financed them all to begin with. )

I acknowledge I perhaps shouldn't have posted these "political" things here, in "sw", but in the "general" part of the forum, but then, I also wanted to explain the mutual reverberations between scraping sw (Surfulater, WebResearch) and pim's, AND the web in general and content in general - at the end of the day, we're speaking of external content here, and even when we speak about simple pim's, we're speaking of their ability to handle content originally belonging to third parties, so it's all some mix-up where everything I'm discussing belongs to something else within this lot.

II

"I think the "Of course Surfulater can also grab entire webpages was what lead to helmut85 saying it was for web collectors."

Thank you, Paul, that was my point in this respect. In fact, whenever you clip bits only, any such pim will be more or less apt (and certainly will be with some external macro boosting), whilst those two "specialist" offerings are there in order to render whole web pages (much?) better than the task is executed by your ordinary pim. On the other hand, if it's not about whole web pages, I don't see the interest of these "specialists", since as pim's, both ain't as good as the best pims out there.

This is addressed to nevf = Neville, the developer, and I perfectly understand that you defend your product, but then, there have been lots of customers (or, in my case, prospects) who eagerly awaited better pim functionality in your product, which never came, and fact today is, as a pim, it's not within the premier league, and that's why I call it a specialist for special cases - but I don't see many of these special cases, because for downloading web pages for legal reasons - I said this elsewhere - neither your product nor your competitor, WebResearch, is able to serve this special purpose either.

You've made a choice, Neville, which is: have the best scraper functionality among pim's, together with WebResearch - it seems Surfulater is not as good as WR here, but then, as a pim-like application, it seems to be much better than WR, so it might be the best compromise for people wanting to download lots of web pages in full. But as said, then you have two probs: not enough good pim functionality (since it was your choice not to develop this range of functionality to the fullest), and - I repeat my claim here, having asked for info about possible mistakes in what I say, but not having received such info yet - for annotating these downloaded web pages, what would there be? (Just give me keywords for searching your help file, and the url of that help file, and hopefully there are some screenshots, too.)

As soon as you do clips, whether from web pages on the web or from downloaded web pages, much very different functionality is needed, where some pim's are much better than others, and where no pim is that good in the end, but where you can add some functionality with external macros, especially when your pim offers links to items (which Surfulater does, if I remember well; so it's not my claim that Surfulater can't be used for such a task, my claim being, lots of other pims are equally usable here, and they offer more pim functionality on top of this).

Paul, as for pains with pdf's, you should know that most sciences today have lots of their stuff in pdf format, and certainly more than 90 p.c. of their "web-available" / "available by electronic means" stuff in this format, hence the interest of pim's able to index pdf's, hence the plethora of alternative pdf "editors" and other pdf-handling sw, allowing for annotating, bookmarking, etc., so your claim (if I understand well) that pdf is a receding format is not only totally unfounded; the opposite is true.
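
As an aside, "indexing pdf's" starts with plain text extraction; a minimal Python sketch, assuming the third-party pypdf library (the file name is a placeholder):

```python
# Extract the text of a pdf so it can be fed into a search index.
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("paper.pdf")
full_text = "\n".join(page.extract_text() or "" for page in reader.pages)
# A pim's indexer would now tokenize full_text for its full-text search.
print(full_text[:200])
```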

Neville, this brings me to the idea that any "specialised sw", specialised in the very best possible rendering of web pages as-they-are (since, as said, it's uneconomical to download lots of web pages just because, with your sw, it's "possible"), should go one step further and also do pdf M, thereby blurring the distinction between downloaded web pages and downloaded pdf's - but then, it should also offer lots of easy = half-automated web page annotation / bookmarking features, too.

Paul, with lots of your writings, I have a recurrent problem, and please believe me that my stance isn't a denigrating one, nor a condescending one: I too mix up lots of aspects, but then try to bring a minimum of discernment there, by numbering / grouping. In your texts, every idea stays mixed up with everything else, and so, most of the time, for perhaps about 80 p.c. of your text bodies, I simply don't get what you try to express, and as said, this is a recurrent problem I have with these texts of yours. I'm not a native speaker, as we all know, but then, I get your English words, but I don't get the possible meaning behind them, and very often, I have the impression (as a non-native speaker, as said) that your sentence construction is in the way, so perhaps, after posting, could you re-read your texts, and then partially revise them (as I do, and be it just for typos, in my case)? I repeat myself here: My "criticising" your texts has the only objective of "getting" better texts from you that I'd then better and more fully understand, since I suppose up to now I don't get many good ideas buried in them, and they stay buried even when I try to read you, and that's a pity.

You must see that when Ath applies condescension to us both, giving us advice to write in blogs instead, i.e. telling us to be silent here, it's, for one, that most people in "chat rooms" prefer short postings between which they can then jump, as a bee or other insect would between many different flowers, and also because they don't want to think a lot: Here, it's spare time, it's not meant for getting new insights except when they come very easy - but it's also because the effort of reading some people doesn't seem rewarding enough - a question of formatting texts, of inserting sub-headers, of trying to offer "one bit at a time", and so on. And when, in your case, there's also a debatable sentence structure, and ideas not developed one after another but thrown together, then perhaps discussed in fragments, these discussions thrown together again, introducing new sub-ideas, then "re-opened" many lines below... well, we can't blame people for refusing to read us when we wouldn't like to read ourselves in the end, can we?

III

Some other off-topic theme that has got some connections, though:

STOP 'EM THINKING!

I haven't had a television set for ages: I couldn't bear them stealing my time anymore. I always thought - it's different with good films, where you dive into the atmosphere of the film in question instead of wanting it to hurry up, but there ain't many good films left in European television programming these days - that they slow down your time on purpose. They do some news, which costs you 10 minutes. Instead, they could have presented you a "magazine article" or something in which you could have read the same info in 3 or 4 minutes, if not in 2, very often. Much worse even, anything that is "entertainment" there: They always slow down what's going on, it's absolutely terrible, and at the same time, you might be interested in what will follow, so they force you to do "parallel thinking": You try not to spend these moments exclusively on the crap you're seeing, but at the same time, that very crap is interrupting any other thinking you're doing, at any moment (since it IS continuing, but at a pace that virtually kills you).

Hence my thinking that tv is meant for stealing time, for making people not think, for filling up the spare time of people in a way that their thinking processes are slowed down to a max - they call this "suspense". Of course, you can remind me of tv "being for everybody", so it "has" to be somewhat "slow" - but to such a degree? Just a little bit slower yet, and our domestic animals could probably follow! This is intellectual terrorism.

Where's the connection? Well, my topic is the fractionizing and then re-presentation of information / content, and this "tv way" of doing it, needing 1 minute for the presentation of a fact that should need 8 or 13 sec., at the opposite end from what Paul seems to be doing, i.e. mixing up 5 different things in 5 sentences, then mixing 3 of them up in the following one, then mixing up again 2 from the first and 1 from the second with another 2, is another apotheosis of information rape.

EDIT:

The French legislator has postponed the subject of a proposed law on these "data sender having to pay, too" issues ad infinitum, meaning they want to see first how it all goes wrong in every which way, then perhaps they'll do something about it. Bear in mind that authorities, and especially the French ones, have historically been highly interested in data content, so they certainly rejoice at this move by Orange / France Télécom (or told them to make this move in the first place: acceptance is everything, so they have to play it cool, first).

Paul, I fully understood your very last post, after 5 or 6 readings now. As for the preceding one, I'm still trying. My prob here being, I never had similar probs with anybody else's posts, not here, not elsewhere. So it should partly be a prob in your writing, just as in my writing conception there are certainly some flaws, too.

5
I'm very sorry you didn't find any idea applicable to your own workflow here, and indeed I was just a little bit disappointed that Aaron Swartz' premature death didn't give rise to any obituary here before mine, and that it didn't trigger any thought about that guy and his mission expressed here. As for my wearing out the servers, I'm not so sure that some text takes up so much more web space than the lots of unnecessary and often rather silly graphics adult men regularly post here just for fun, so I hope I will not attract too much wrath on my head too early, by trying to share some ideas in a constructive way. Thank you.

6
I recently had a good web look into such sw, and I'm afraid most prof. offerings were on "quote" only, i.e. you can be happy if they are not more than 500 bucks. So 90 bucks, IF that sw works well, seems very reasonable.

This vid is good; just scroll down a little bit beneath the article before it (which says a 67-year-old woman stole meat worth 2,000 bucks (which she didn't eat but stored in her fridge), hundreds of hair coloring kits, 400 silk stockings, and much more): It shows a burglar who forgot his mask and so put on a waste basket as his makeshift hat:

http://www.welt.de/v...t-von-1500-Euro.html

7
Radio Erivan to Ath: You'd be right, at the end of the day. But these are only the appetizers.

Or more seriously: I'm always hoping for good info and good counter-arguments, both for what's expressed in the saying "Defend your arguments in order to armour them", and for finding better variants, and there are better chances for this to happen in a well-frequented forum than in a lone blog lost out there in the infinite silent web space (Kubrick's 2001 - A Space Odyssey, of course).

In the end, it's a win-win situation I hope, or then my arguments must really be as worthless as some people say.

Since nobody here's interested in French auteur cinema, something else: Today, they announce the death of the great Michael Winner, of "Death Wish" fame, and somewhere I read that back in 1974, advertising for this classic had, among other things,

"When Self-Defence Becomes a Pleasure."

Can anybody confirm this? (It's from Der Spiegel, again, in German: "Wenn Notwehr zum Vergnügen wird." - so perhaps they only had this one-liner over there in Germany?) Anyway, I have to admit I couldn't stop laughing about this, and it pleases me so much that it'll be my motto from this day on.

8
ad 11 / 12 supra

Sometimes, some things are so natural for me that I inadvertently omit mentioning them. In the points above, I presented my very exotic concept of stripping web pages that most people would download instead (hence the quality problems they face in non-specialised pim sw, i.e. anything other than WebResearch or Surfulater or similar).

Above, I spoke of condensing, by making a first choice of what you clip to your pim and what you'll omit. I also spoke of relevance, and of bolding, and of underlining, i.e. of bolding important passages within the unformatted text, then of underlining even more important passages within these bolded passages, and of course, in rare cases, you could even have yellow background color and such, in order to highlight even more important passages within these bolded-underlined parts of your text.

I should have mentioned here that this "first selection" almost never leads to "passages that are missing", i.e. in years, I never had a situation where I would have wondered: hadn't there been something more, and shouldn't I go back to the original web (or other) page in order to check, and download further material, if it's hopefully still there? So this is a rather theoretical situation, not really to be feared.

Of course, whenever in doubt, I download the whole text, hence the big utility of then bolding passages and perhaps underlining the most important keywords there.

But there is another aspect to my concept which I have neglected to communicate: It's annotations in general. For pdf's, many people don't use the ubiquitous Acrobat Reader but (free or paid) alternative pdf readers / editors that allow for annotations, very simple ones or more sophisticated ones, according to their needs.

But what about downloaded, original web pages, then?

Not only do you download crap (alleviated perhaps by ad blockers) around the "real stuff" there, but also, this external content stays in its original form, meaning, whenever you re-read these pages, you'll have to go thru their text in full in order to re-memorize, more or less, the important passages of this content - let alone annotations, which in my system are also very easy: I enclose them in "[]" within the original text, sometimes in regular, sometimes in bold type.
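
A side benefit of this "[]" convention: the annotations stay machine-findable. A minimal Python sketch (the regex simply mirrors my convention; the sample string is made up):

```python
# Pull all bracketed annotations back out of a stored plain-text page.
import re

stored = ("The court held X [check against ruling 2012/334] "
          "and further Y [weak point, re-read].")
annotations = re.findall(r"\[([^\]]+)\]", stored)
print(annotations)  # ['check against ruling 2012/334', 'weak point, re-read']
```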

So my system is about "neatness", "standardization", "visual relief", but its main advantage is, it allows for re-reading just the formatted passages when re-reading these web pages in a work context, just as many people do with their downloaded pdf's. Now compare you, with your downloaded web pages: It's simply totally uneconomical, and the fact that out of 20 people, perhaps 19 do it this way, doesn't change this truth.

So, "downloading web pages" should not just be about "preserve, since the original content could change / vanish", but it's even more about facilitating the re-reading. (Of course, this doesn't apply to people who just download "anything", "in case of", and who then almost never ever re-read these downloaded contents, let alone work with these.)

Hence my assertion that sw like Surfulater et al. is for "web page collectors", but of not much help in real work. I say this under the proviso that these progs, just like pim's, don't have special annotation functionality for the web pages they store; if I'm wrong about this, I'd be happy to be informed, in order to then partially revise my stance - partially, because the problem of lacking neatness would probably persist even with such pdf-editor-like annotation functionality.

And finally, I should have added that I download tables as rectangular screenshots, and whenever I think I'll need the numbers in some text afterwards, I also download the chaotic code for the table in order to have these numbers ready - in most cases, I just need 2, 3, 4 numbers there later on, and then, copying them by hand from the screenshot is the far easier way to get these into my text. (For pages with lots of such data, I do an .mht "just in case". We all know that "downloading tables" from html is a pain anyway if ever you need lots of the original data, but if you do, and frequently, there is special sw available for this task.)
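
And for that "special sw" case: when you really do need all the numbers out of an html table, a library such as pandas can parse it. A sketch, assuming pandas (with its lxml dependency) is installed; the little table is made up:

```python
# Parse every <table> in an html snippet into a DataFrame.
import io

import pandas as pd

html = """<table>
<tr><th>Year</th><th>Rulings</th></tr>
<tr><td>2011</td><td>80</td></tr>
<tr><td>2012</td><td>120</td></tr>
</table>"""
tables = pd.read_html(io.StringIO(html))  # one DataFrame per <table>
print(tables[0])
```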

9
Almost every newspaper / weekly journal and such has run an article on variant 1 in the meantime, so it seems that subject intrigues LOTS of people.

While my interest is only in variant 2, the "original" one if I dare say so (since some of these commercial subcontractors are able to do excellent programming work, as we see here, but as for M or law, I'd prefer brilliant local people, and that necessarily means students, since you couldn't pay anybody else from your own income), this applies to both:

It's very ironic that corporate myths ask their staff for "entrepreneurship", i.e. for managing their tasks as if they were independent business people, as much as possible, and not in the traditional way of doing them more or less as public servants do (which even today is the big flaw of most corporations, hence the incredible advantage of corporations like Apple and such over the public-service style applied by their competitors) -

and then, when somebody really tries to organize his own work in this entrepreneurial way to the fullest, there are legions of people who'll tell you that's not possible, not allowed, not legal hence not possible...

In fact, the real prob is elsewhere: Most people in corporations simply don't have the guts to be entrepreneurs, and even when they try, they fail: Their psyche is simply much too similar to that of public servants.

There are only some exceptions to this rule, and it's these people alone who are psychically able to spend 4,000 bucks monthly, out of a 6,000 bucks regular income, on personal staff, whilst your usual public servant, serving your usual public authority or some corporation, will never ever do this, for financial reasons, i.e. he prefers spending "his" money on "his" "needs", i.e. cars, furniture, travel, instead of "giving it away".

So, any "risk" consideration here is pretextual, since there is no "risk", this scheme being properly executed. It's simply the ordinary miserliness of the ordinary public servant, not "giving away "his" money to anybody else" that aborts this scheme early in the consideration stage.

Most people just ain't entrepreneurs.

So any fear that if many people do this, "everybody" will need to follow, is totally unfounded: Those who dare will reap big benefits - if they are able to appear a little enigmatic, so that their easygoing "I'll have a look into this", followed by fast production of first-rate results, remains credible and is not devalued by obvious stupidity, such as taking dumb positions too early in discussions: hence the need to appear enigmatic all the time, which will give you time to check, and have things checked, first, then produce results that'll show superior quality.

But then, I said it's possible and it'll work - I didn't pretend it'd be easy for any limited mind.

10
Ok, the heavenly tale (that I wouldn't really call a joke) was on variant 1 of the scheme: Here you take big chances, and whilst our cat video viewer was very lucky to have such competent contractors, his own technical way of handling this gave him away, i.e. he should at least have taken the care to get the outsourced work "delivered" not into the corp system, by his ID, but by vpn to his private smartphone or such, then entered the data / steps of work manually into the corp system from his seat, perhaps pasting this or that, here and there, by usb stick. We see that such a scheme would have been much more work for him, personally, so he avoided this completely, as described above, and from then on, it was only a matter of time before he'd get caught.

As for variant 2, as I said above, in most countries it would be "illegal" to outsource work containing "real data" or "giving away info", and in many of them, this would even apply without any such data being brought to risk, i.e. it's even in the legislation that you are expected to do your work exclusively on your own, without any terms of your employment contract needed to implement this, i.e. your employer could indeed bring in such a clause in order to further confirm what's stipulated in the labour law anyway.

Hence my emphasis, above, on minimizing your risks here, and I thoroughly explained how to choose somebody who will NOT give you away, and by reading me again, anybody can see that my idea is NOT about having this outsourced help for cheap: brilliant students don't have much time, and still, I'm speaking of about 1,500 bucks for each of them each month, which means that in some professions (where you wouldn't get 6,500 net as a beginner), you'd even have to pay them, per hour, more or less the same amount you get from your employer for the same time: It's all about producing superior results, and nowhere did I suggest making some immediate financial benefit on your private staff; on the contrary, I even said, if they have got problems with declaring this revenue from you, don't even deduct it from your own expenses, even if that means you'll lose money, because of your higher tax rate.

"Illegal" meaning, they would have the right to "set you free", as said, but it doesn't mean there will be a criminal offence or such, and as for damages, there will be no damages without damage to begin with! If you deem this necessary, you could give "cleaned papers" to your private staff, i.e. with names and addresses changed in legal papers, or with numbers changed in the corporate world - as far as this won't affect the work your private staff is expected to do on it, of course. And then, there is lots of much less problematic work to do, e.g., in legal work, find antecedents, condense commentators' points of views and current juridiction on special legal problems (without your contractor even needing to know the "case"), and I'm sure in other fields, too, there'll be plenty of room for having much outsourced work done each week, without giving away your corporation's secrets for this.

It's 2) all about smartly selecting what you can have done by others, and which are the parts you'd better do yourself, but as a young, high-paid executive, you're supposed to KNOW how to do smart delegation, so let's take this part of your task for granted. And it's 1) all about smartly selecting the kids working for you - am I expected to repeat here the details from above, for sceptics who wouldn't ever dare try such a scheme (i.e. finding the right contractor(s) for it), considering those same sceptics will probably not even get that they should NOT exploit their staff? (My point being, you'll get your reward tenfold from your employer and further employers, by your career; it's not about searching for collaborators-for-cheap.)

On the other hand, as said, having your academic work written by third parties, in most countries, very well IS a criminal offense, and it is being done all the time, by millions of rich people, of whom nobody is ever exposed. Theoretically, these academics would be expelled and get criminal prosecution; in real life, it's they who get the best employments, by family background, and by perhaps better exam results, both from brilliant papers and from having had the time (= not spoiled with writing papers) to better prepare the exams by learning.

So, what the rich people do from the start, non-rich people can replicate from the moment they get some substantial income - and the smart will do it, being very happy that the masses can only think of exploiting their aides, and just see the "problems" they'd get into because they ain't smart enough.

At the same time, the born-rich, and then the really smart, do what I describe here (but they don't speak about it: there might be people eager to do the same).

Btw, people who tell you it's not possible, some people might not like it (even when they gain big from what you do), have their role in this society, just as my role is presenting slightly off-beat stuff: I provoke some people to do some things valuable but out of the norm, and those big prohibitors are there in order to prevent the masses from joining the smart few.

And the real joke here is, they ain't even paid by the rich to make you back off from their playfields: They love doing it too much, even for free.

11
Years ago, I had a really good idea, together with some necessary legal and practical considerations, none of which constituted a real obstacle, nor do they today: you've just got to be smart in the execution of the scheme I advocate.

The legal side is quite simple: you don't have the right to let third parties have knowledge of your firm's details (so they'd have the "right" to flip you, but would they do it, after all? I doubt it, and no harm will be done anyway, except if you try to get some work for really cheap, from somebody without a brilliant future of his own - just don't go for desperate people with lots of time; easy is dangerous), so you must select your contractors with care, but then, it's not a crime to simply delegate work to people you pay for it, contrary to academic work you are supposed to have concocted all alone - whilst we all know that most rich people's children have their academic work written by ghost writers paid by their parents, beginning with their homework in the very first year (since you cannot spend 3 months on legal homework and cruise the Mediterranean on the family yacht at the same time, everybody will understand this), and up to their doctoral "dissertation". So...

Today, Der Spiegel publishes a variant of my original idea, so I think I'd better present both, publishing mine for the first time, in order also to retain some moral rights on the latter.

I

In this link, http://www.spiegel.d...edigen-a-877990.html

they tell of a high-paid (big six figures) programmer / IT executive in a U.S. corp who didn't do anything on his pc, within the corp, except surf the web all day long, especially looking at cat videos! (Can I blame him? Not really.)

At the same time, some Chinese regularly accessed the highly-secured computer network of this corp and did all sorts of things, nothing harmful though, getting access with a special ID chip card, via a reader, that had been issued to the cat video viewer.

So eventually, an external IT security service provider dug up these facts: The employee in question (since fired) had privately outsourced all his work to a Chinese IT service corporation, and they did all his work for him, from China; he paid them high in the 5 digits, so this scheme was financially highly profitable for the man in question, who was considered the very best IT man within this corporation (I don't know if this was so even before, when he still did his own work, or if this favorable appreciation of him was the direct result of his contractors doing their work so well).

Now, the question is, why - especially if he was so good at it even by his own means - this man so entirely avoided doing any of his work, i.e. why he was obviously unable to derive the slightest satisfaction from it.

II

Years ago, I had not quite that very same idea; my idea was, why shouldn't a young executive (or a programmer, as we have got here, but at the time, I had executives or young lawyers in mind) outsource parts of his work, in order to be considered a very promising young executive. I didn't think about China, but in a very conservative way, I thought of brilliant students he'd hire, in law or business administration - the same should be possible in the sciences, etc. - and who would work for him, executing parts of the tasks he himself had been assigned. I mused, he perhaps would be paid 10k a month, 6.5k net, so why not, instead of wasting his money on travel, furniture, cars, give 1.5k each, net, to 2 young students working for him on week-ends, etc.

This way, our young executive would be able, not to watch cat videos instead, but to deliver almost 2 times what he was expected to work on, in his office, and I thought that such an investment on his part could be extremely beneficial for his career. I thought to myself, there are employees with higher IQ, who work much faster than their colleagues, so they will "make it". Ok, there's also, and very importantly, that "way with people", called "emotional intelligence / EQ" today, and with this career aspect, "private collaborators" will not help, but I also thought, with equal EQ, with equal IQ, and with equal work measured by time (let's say 40, 45, 50, 55 hours a week), PLUS two student collaborators, our young executive should be able to have very quickly a career that paid back tenfold the 3,000 bucks he spent out of his monthly income, for the very first years of his career.

I was aware that this couldn't work unless his superiors thought it was him who did this exceptional work load (and with correct results, of course); I was aware that the "alternative" of "working more", individually, wouldn't work out, since smarter people than he was could - and would - work longer hours, too, and thus ensure the lead they had on him grew even bigger: secret delegation AND hard work seemed to be a viable solution, though, especially since there certainly would be some intimidation effect on his peers, and even on colleagues smarter than him, erroneously assuming that his was the superior intelligence.

I was also aware of the risks of such a scheme: First, our man should be smart enough not to appear really stupid in the office or law office; it should be ok that his peers and his superiors are astonished by what he's capable of, but they shouldn't be outright incredulous at what he delivers, for too much inconsistency with what he's capable of orally.

Then, if our man delegated work to inferior students who didn't make it on their own, there was a risk of extortion: "Give me a bigger share, or your superiors will know who did the work." On the other hand, brilliant students would never do this, since their own career would be put at risk by such a move. Of course, brilliant students don't have so much time, hence my idea to not rely on just one such student, but to take two (or even three - it depends on your risk perception: with 6,500 net each month, you could finance 4 such students and live yourself on 500 bucks a month, with your own income exploding, after two years, to 500,000 bucks a year; a very risk-averse person would have just one such student, and still have 5,000 bucks a month for his expenditures, but would raise his income by perhaps just 20 or 25 p.c. (but then could get a second private contractor)).
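
To spell out the arithmetic behind these variants (all figures being my assumptions from above: 6,500 bucks net monthly income, 1,500 bucks per student per month):

```python
# Worked numbers for the 1 / 2 / 4 student variants described above.
net_income = 6500   # monthly net salary, per the assumption above
per_student = 1500  # monthly pay per student contractor

for students in (1, 2, 4):
    paid_out = students * per_student
    left = net_income - paid_out
    print(f"{students} student(s): {paid_out} paid out, {left} left to live on")
# 1 student(s): 1500 paid out, 5000 left to live on
# 2 student(s): 3000 paid out, 3500 left to live on
# 4 student(s): 6000 paid out, 500 left to live on
```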

I was also aware that our man should delegate with caution, and should do the really difficult parts himself, without, on the other hand, relegating his private staff to menial tasks only - and he should control this work, perhaps with some cross-control also, student 1 checking tasks executed by student 2, and vice versa.

Also, I thought to myself that all this should be organized in a very private way, our man getting work out of the office in order to "work on it at home", then passing parts of it to his contractors, i.e. I was aware he'd not be well advised to have any phone or mail conversation with them in the office or on his office pc; as for giving them access to the corporate network, I didn't even consider such an outrage. And, of course, in order to preserve such discretion, I was aware our man would have to collect the necessary data himself, within the office, if data wasn't available anywhere but there, e.g. specialised / too expensive db's, available in the office / corporation, but not in the university (this situation is much better now for students in the university itself, so today it's probably the other way round, the students having access to data he himself will not have access to - which in some cases might even bring up the problem: "But this is brilliant! Where did you find it?").

In countries where there is tax secrecy, you could try to deduct these expenses from your own income, the interest here lying in your collaborators' lower tax rate. On the other hand, this will complicate things for them (their parents deducting them from their income, social security, your paying them more because of their tax / social security expenses), so that sometimes, it could be preferable to just pay them net, and bear the worse deal from your higher tax rate (I know in some countries, this will then create the legal problem of "illegal employment" or such, but after all, you don't really employ them): See your tax advisor in any case, but in countries like Sweden, e.g., your superiors could ask you, "why do you declare an income of just 2,000 bucks when we pay you 6,000 net?!" So my advice is, beware of unnecessary complications, don't be too stingy-smart-alecky here; the same applies to how you treat your subcontractors.

So this is my idea from some years ago, and I think it holds up. The core element here is, don't tell your superiors you have contractors: It's not your investment of 5,000 bucks out of the 6,500 you get from the corporation that will make them promote you in an exceptional way; only their misconception that you're incredibly gifted will make your fortune.

Later on, they will assign you so many collaborators in-house that nobody will ever discover the little secret of your early years if you continue to delegate in a smart way.

12
I

1. Just today, the community has received news of its recent loss, two days ago, of a prominent data-availability activist, Aaron Swartz. Interesting here, the criminal prosecution authorities seem to have been much more motivated to have him treated as a big criminal than even his alleged victim, MIT. (Edit: Well, that doesn't seem to be entirely true: JStor is said not to have insisted on his being prosecuted, but M.I.T. wanted him to be made to "pay" - so without them, he'd probably be alive. And it's ironic that M.I.T., a VICTIM of this "resell them their own academic papers, and at outrageous prices" scheme, made itself a prosecutor of Aaron, instead of saying, we're not happy about what he tried to do, but he tried for all of us. M.I.T. as another Fifth Column representative, see below.) So there is cloud for paid content, and getting access without paying, big style, then perhaps even re-uploading, gets you 20 or 30 years in jail if the prosecutors have their way, and that's where it goes (cf. Albert Gonzalez).

(Edit: First he made available about 20 p.c. of an "antecedent rulings" db, absolutely needed for preparing lawsuits in a common-law legal system based on previous, similar cases' rulings (they charge 8 cents a page, which doesn't seem that much, but expect to download (and pay for) perhaps 3,000 pages in order to get those 20, 30 that'll be of relevance); then he tried to download academic journal articles from JStor, the irony here being that the academic world is paid by the general public (and not by academic publishers or JStor) for writing these articles, then pays high prices for them (via their universities, so again the general public pays at the end of the day; university staff and overall costs being public service on the Continent anyway, whilst in the U.K. and the U.S., it's the students who finance all this, then charge 500 bucks the hour in order to recoup this investment afterwards). The prosecutor asked for 35 years of imprisonment, so Swartz would even have had to be called "lucky" had the sentence stayed under 25 years. (From a competitor of JStor, I just got "offered" a 3-page askSam review from 1992 or so, in .pdf, for 25 euro plus VAT, if I remember well...))

(Edit - Sideline: There is not only the immoral aspect of making the general public pay a second time for material it morally owns already (having financed it to begin with); there is also a very ironic accessibility problem that now becomes more and more virulent: Whilst in their reading rooms, universities made academic journals available not only to their staff and their students, but also to (mildly) paying academics from the outside, today's electronic-only papers, instead of being ubiquitously accessible now, are, in most universities, not even available anymore to such third parties, or not even bits of those texts can be copied and pasted by them, so in 2013, non-university academics sit before screens and are lucky to scribble down the most needed excerpts from the screen, by hand: The electronic "revolution" thus makes more and more people long for the Seventies' university revolution, the advent of photocopiers - which, for new material, in most cases ain't available anymore: Thanks to the greediness of traders like JStor et al, we're back to handwriting, or there is no access at all, or then, 40 bucks for 3 pages.)

2. On the other hand, there's cloud-as-storage-repository, for individuals as for corporations. Now this is not my personal assertion, but common sense here in Europe, i.e. the (mainstream) press here regularly publishes articles agreeing about the U.S. NSA (Edit: here and afterwards, it's NSA and not NAS, of course) having their regular (and by U.S. law, legal) look into any cloud material stored anywhere on U.S. servers, hence the press's warning that European corporations should at least choose European servers for their data - whilst of course most such offerings come from the U.S. (Edit: And yes, you could consider developers of cloud sw and / or storage as a sort of Fifth Column, i.e. people who get us to give away our data, into the hands of the enemy, who should be the common enemy.)

3. Then there is encryption, of course (cf. 4), but the experts / journalists agree that most encryption does not constitute any prob for the NSA - very high level encryption probably would, but is not regularly used for cloud applications, so they assume that most data finally gets to the NSA in readable form. There are reports - or are they speculations? - that the NSA provides big U.S. companies with data coming from European corporations, in order to help them save costs for development and research. And it seems even corporations that keep an eye on rather good encryption of their data-in-files don't apply these same security standards to their e-mails, so there's finally a lot of data available to the NSA. (Even some days ago, there was another big article on this in Der Spiegel, Europe's biggest news magazine, but that was just another one in a long succession of such articles.) (Edit: This time, it's the European Parliament (!) that warns: http://www.spiegel.d...ie-usa-a-876789.html - of course, it's debatable if anybody should then trust European authorities more, but it's undebatable that U.S. law / case law grants patents to the first who comes and brings the money to patent almost anything, independently of any previous existence of the - stolen - ideas behind this patent, i.e. even if you can prove you've been using something for years, the patent goes to the idea-stealing corporation that offers the money to the patent office, and henceforward, you'll pay for further use of your own ideas and procedures, cf. the Edit of number 1 here - this for the people who might eagerly assume that "who's got nothing to hide shouldn't worry".)

4. It goes without saying that those who say, if you use such cloud services, at least use European servers, get asked, for one, what about European secret services then doing similar scraping, perhaps even for non-European countries (meaning, from GB, etc., straight to the U.S. again), and second, in some European countries it's now ILLEGAL to encrypt data, and this is then a wonderful world for such secret services: Either they get your data in full, or they even criminalize you or the responsible staff in your corporation. (Edit: France's legislation seems to have been somewhat lightened up instead of being further tightened, as they had intended by 2011. Cf. http://rechten.uvt.n...ryptolaw/cls2.htm#fr )

5. Then, there are accessibility probs, attenuated by multi-storage measures, and probs of providers closing down the storage, by going bankrupt or by just plain commercial evil: It seems there are people out there who definitively lost data with some Apple cloud services. (Other Apple users seem to have lost bits of their songs bought from Apple, i.e. Apple, after the sale, seems to censor unwanted wording within such songs - I cannot say for sure, but I read some magazine articles about such proceedings - of course, this has only picturesque value in comparison with "real data", hence the parentheses, but it seems to show that "they" believe themselves to be the masters of your data, big-style AND for the little, irrelevant things - it seems to indicate their philosophy.)

(Edit: Another irony here: our own data is generally deemed worthless, both by "them" and by some users (a fellow here, just weeks ago: "It's the junk collecting hobby of personal data."), whilst anything you want or need access to (even a 20-year-old article on AS), deemed "their data", is considered pure gold, 3 pages for 40 bucks - so not only do they sell, instead of just making available to the general public its own property, but on top of that, those prices are incredibly inflated.

But here's a real gem. Some of you will have heard of the late French film auteur Eric Rohmer, perhaps in connection with his most prominent film, Pauline at the Beach. In 1987, he did an episodic film, 4 aventures de Reinette et Mirabelle, of which I recommend the fourth and last episode, which on YT is in three parts, in atrocious quality but with English subtitles; just look for "Eric Rohmer Selling the Painting": it's a masterpiece of French comedy, and do not miss the very last line! (For people unwilling to watch even some minutes of any French film: you'd have learned here the perfect relativeness of the term "value" - it's all about who's the owner of the object in question at any given moment.) If you like the actor, you might want to meet him again in the 1990 masterpiece La discrète (there's a Washington Post review in case you want to countercheck my opinion first - and yes, of course there's some remembrance of the very first part of Kierkegaard's Enten-Eller to be found here)...)

II

6. There is the collaboration argument, and there is the access-your-data-from-everywhere argument - without juggling with usb sticks, external harddisks and applics like GoodSync Portable - and, I'm trying to be objective, there is a data loss problem, and thus a what-degree-of-encryption-is-needed prob, too: your notebook can be lost or stolen, and the same goes for those external storage devices. But let's assume the "finder" / thief here will not be the NSA and, in most cases, not even your competitor, but just some anonymous person who dumps your data at least when it's not immediately accessible; i.e. here, except for special cases, even rudimentary encryption will do.

7. I understand both arguments under 6, and I acknowledge that cloud services offer much better solutions for both tasks than you can obtain without them. On the other hand, have a look at Mindjet (ex-MindManager): it seems to me that even within a traditional workgroup, i.e. collaborators physically present in the same office, perhaps in the same room, collaboration is mandatorily done through their cloud services and can't be done just by the local workgroup means / cables - if this is correct (I'm not certain here), this is overly ridiculous, or worse, highly manipulative on the part of the supplier.

8. Whenever traditional desktop applications "go cloud", they tend to lose much of their previous functionality in the process (Evercrap isn't but ONE such, very prominent, example; there are hundreds), with "we like to keep it simple" and similarly idiotic excuses as the justification. And even when there's a highly professional developer, as in this case with Neville, it seems that the programming effort for the cloud functionality at least heavily slows down any traditional "enhancement" programming, or even the mere transposition of the functionality there has been - of course, how much transposition is needed depends on how-much-cloud-it-will-be in that particular sw's future. As a general rule, though, users of traditional sw going cloud lose a lot of functionality and / or have to wait for years for their sw to recoup afterwards from this more or less complete stalling of non-cloud functionality. (Hence the "originality" of Adobe's CS "cloud" philosophy, where the complete desktop functionality is preserved, the program continuing to work desktop-based, with only the subscription functionality (? or some additional collaborative features, too, hopefully?) laid off cloudwise.)

III

9. Surfulater is one of the two widely known "site-dumping" specialist applics out there, together with the German WebResearch; the latter is reputed "better" in the sense that it's even more faithful to the original for many pages being stored, i.e. "difficult" pages are rendered better, and that it's quicker (perhaps especially with such "difficult" pages), whilst the former is reputed to be more user-friendly in the everyday handling of the program: sorting, accessing, searching... whatever. I don't have real experience (i.e. beyond short trials) with either program, so the operative term here is "reputation", not "the facts are...". It seems to be common knowledge, though, that both progs do this web page dumping much better than even the heavyweights of the traditional pim world, like Ultra Recall, MyInfo, MyBase, etc.

10. Whilst very few people use WebResearch as a pim (but there are some), many people use Surfulater as a general pim - and even more people complain about regular pim's not being as good for web page dumps as these two specialists are. For Surfulater, there's been that extensive discussion on the developer's site that has been mentioned above, and it seems it's especially those people who use the prog as a general pim who are most worried by their pim going cloud, more or less, since it's them who would be most affected by the losing-functionality-by-going-cloud phenomenon described in number 8. Neville seems to reassure them: data will be available locally AND in the cloud, which is very ok. But then, even Surfulater as it is today is missing much functionality that would be needed to make it a real competitor within the regular pim range, and you'll be safe in betting on these missing features not being added high-speed too soon: the second number 8 phenomenon (= stalling, if not losing).

11. So my personal opinion on Surfulater and WebResearch is: why not have a traditional pim, with most web content in streamlined form? I.e. no more systematic dumping of web pages into your pim or into these two specialist tools, but selecting the relevant text, together with the url and a download date/time "stamp", and pasting these into your pim as plain text, which you then format according to your needs - meaning, right after pasting, you bold those passages that motivated you to download the text to begin with. This way, instead of having your pim collect, over the years, an incredible amount of mostly crap data, you'll build yourself a valid repository of neat data really relevant to your tasks. Meaning, you do a first focusing / condensing of the data right on import.

12. If your spontaneous reaction to this suggestion is "but I don't have time for that", ask yourself whether you've been collecting mostly crap so far: if you don't have the 45 or 70 sec. for bolding those passages that make the data relevant to you (the pasting of all this together should take some 3 sec. with an AHK macro, or better, via an internal function of your pim, which could even present you with a pre-filled entry dialog to properly name your new item), you probably shouldn't dump this content into your pim to begin with. Btw, AHK allows for dumping even pictures (photos, graphics) from the web site to your pim, 1-key-only (i.e. your mouse cursor anywhere in the picture, then one key, e.g. a key combination assigned to a mouse button), and ideally, your pim should do the same. Of course, you should dump as few such pictures as absolutely necessary into your pim, but technically (and I know that in special cases this is almost necessary, but in special cases only), it's possible to have another AHK macro for this, and your pim, again, could easily implement such functionality.
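
To make the "3 sec. with an AHK macro" concrete, here's a minimal sketch (AutoHotkey v1) - the F8 hotkey, the Ctrl+L address-bar trick (works in Firefox / Chrome) and the commented-out paste target are my assumptions, adapt them to your setup; the macro just assembles selection + url + time stamp on the clipboard:

F8::                                    ; clip selection + source + time stamp
    Clipboard := ""
    Send, ^c                            ; copy the selected passage
    ClipWait, 2
    text := Clipboard
    Clipboard := ""
    Send, ^l                            ; jump to the browser's address bar...
    Sleep, 100
    Send, ^c                            ; ...and copy the url
    ClipWait, 2
    url := Clipboard
    Send, {Esc}                         ; back out of the address bar
    FormatTime, stamp,, yyyy-MM-dd HH:mm
    Clipboard := text . "`n`nSource: " . url . " - clipped " . stamp
    ; WinActivate, Your Pim Window Title    ; hypothetical: activate your pim, then Send, ^v
return

One ^v into the pim, then the bolding; the picture variant would just replace the first ^c with your browser's copy-image context menu sequence.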

13. This resets these two specialists to their specialist role: dumping complete web pages in those rare cases where this might be necessary, e.g. for mathematicians and such who regularly download web pages with lots of formulas, i.e. text with multiple pictures spread all over it - but then, there are fewer and fewer such web pages today, since most of them, for such content, link to pdf's instead, and of course, your pim should be able to index pdf's you link to from within it (i.e. it should not force you to "embed" / "import" them to this end). Also, there should be a function (if absent from your pim, then by AHK) that does this downloading of the pdf, then the linking to it from within your pim, 1-key style, i.e. sparing you the effort to first download the pdf and then do the linking / indexing within your pim - which is not only an unnecessary extra step but also will create "orphaned" pdf's that are not referenced from within your pim.
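
Same idea for the pdf part, again as an AHK (v1) sketch - it presumes you've just done a right-click > copy-link on the pdf link, and F9 and the D:\pdf folder are, once more, my own placeholders:

F9::                                          ; download the pdf, prepare the link line
    url := Clipboard                          ; the pdf's url, copied beforehand
    SplitPath, url, name                      ; SplitPath parses urls, too
    UrlDownloadToFile, %url%, D:\pdf\%name%
    if ErrorLevel
        MsgBox, Download failed: %url%
    else
        Clipboard := "pdf: D:\pdf\" . name . " (source: " . url . ")"   ; link line, ready to paste into the pim
return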

IV

14. This means we need better pim's / personal and small workgroup IMS / information management systems, but not in the sense of "do better web page import"; rather in the sense of "enhance the overall IM functionality, incl. PM and incl. half-automated web page CONTENT import (incl. pdf processing)". Please note here that while "better" import of web-pages-as-a-whole is an endless tilt at windmills that blocks the programming capacity of any pim developer to an incredible degree - every one of them, and worse today than it ever did -, such half-automation of content dumping / processing is extremely simple to implement on the technical level. This way, pim developers wouldn't be blocked anymore by such never-ending demands (never-ending almost independently of their respective efforts to fulfill them) to better reproduce imported web pages (and to import them quicker), but could resume their initial task, which is to conceive and code the very best IMS possible.

15. Thus, all these concerns about Surfulater "going cloud", and then how much, are of the highest academic interest, but of academic interest only: in your everyday life, you should a) adopt a better pim than Surfulater is (as a pim), then enhance, by AHK, core data import (and processing) there, and then b) ask for the integration of such features into the pim in question, in order to make this core data processing smoother. Surfulater and WebResearch have their place in some special workflows, but in those only, and for most users, it's certainly not a good idea to build up web page collections, be it within their pim, or be it presumably better-shaped collections within specialized web page dumpers like Surfulater and WebResearch, whose role should be confined to special needs.

V

16. And of course, with all these (not propaganda-only, but then, valid) arguments about cloud for collaboration and easy access (point 6), there's always the aspect that "going cloud" (be it a little bit, be it straightforwardly) enables the developer to introduce, and enforce, the subscription scheme he yearns for so much, much better than would ever be possible for any desktop application, whether it offers some synch feature on top or not. Have a look at yesterday's and today's Ultra Recall offer on bits: even for normal updates, they now have to go the bits way, since there's not enough new functionality even in a major upgrade. So loyal, long-term users of that prog mostly refuse to update half-price (UR Prof 99 bucks, update 49 bucks, of which, after the payment processor's share, about 48.50 bucks should go to the developer), and some of them at least "upgrade" via bits instead, meaning the prof. version starts there at 39 bucks, of which 19.50 go to bits and 50 cents or so to the payment processor, leaving about 19.00 bucks for the developer (Edit: prices corrected). It's evident that with proper development of the core functionality (and without having to cope with constant complaints about the web page dump quality of his prog), the developer could easily get 50 bucks for major upgrades of his application, instead of dumping them for 19 bucks - and that'd mean much more money for the developer, hence much better development quality, meaning more sales / returns, hence even better development...

17. As you see here, it's easy to get into a downward spiral, but it's also easy to create a quality spiral upwards: it's all just a question of properly conceiving your policy. And on the users' side, a rethinking of traditional web page dumps is needed imo. It then becomes a faux prob to muse about how to integrate a specialist dumper into your workflow: rethink your workflow instead. Three integral dumps a year don't ask for integration, but for a link to an .mht file.

18. And yes, I get the irony in downloading for then uploading again, but then, I see that it's for preserving current states, where the dumped page will change its content or even go offline. But this aspect, as against the policy of "just dump the content of real interest, make a first selection of what you'll really need here", could in most cases only prevail for legal reasons, and in those special cases, neither Surfulater nor WebResearch is the tool you'd need.

VI

19. As said, the question of "how Neville will do it", i.e. the distribution between desktop (data, data processing) and cloud (data, data processing again), and all the interaction needed and / or provided, will be of high academic interest, since he's out to do something special and particular. But then, there's a real, big prob: we get non-standardization here, again. Remember DOS printer drivers, as just one everyday example of the annoyances of non-standardization? Remember those claims, for this prog and that, "it's HP xyz compliant"? Then came Windows, an incredible relief from all these pains. On the other hand, this buried some fine DOS progs, since soon there were no more drivers for then-current printers, among other probs; just one example is Framework, a sw masterpiece by Robert Carr et al. (the irony here being that Carr is in cloud services today).

20. Now, with the intro of proprietary cloud functionality, different for each of the many applications going cloud today, we're served pre-Windows incompatibility chaos again, instead of being provided more and more integration of our different applications (and in a traditional work group, you at least had common directories for linked- and referenced-to shared files). That's "good" (short-term only, of course) for the respective developers, in view of the subscription advantage for them (point 16), but it's very bad for the speed of setting in place really integrated workflows, for all of us: i.e. instead of soon being provided a much better framework for our multiplied and necessarily more and more interwoven tasks (incl. collaboration and access-from-everywhere, but not particular to this applic or that), for which, from a technical pov, "time's ripe", we have to cope with increasing fragmentation into whatever particular offerings developers in search of a steady flow of income (cf. the counter-example in point 16, and then cf. Mindjet) think are "beneficial" for us.

All the more reason to discard any such proprietary "solution" from your workflow when it's not necessarily an integral part of it. Don't let them make you use 5 "collaborative" applications in parallel just because there are 5 developers in need of your subscription fee.

13
I don't want my analysis of what Surfulater and other such offerings will probably bring us to be buried within a thread titled "Surfulator", so I'm opening up a new one.

14
rjbull, thanks for the warm re-welcome!

I never bought dtSearch, but it's decidedly the best of those search tools: it's all about finding things or not, in proprietary file formats, and especially with accented characters like ü and é and ù - here, dtSearch excels whilst Copernic and X1 are very bad (for standard file formats, X1 seems first-rate, though). (As for Archivarius, I know many people are fond of it; in my special case, it didn't work well, then crashed...) - As said here or elsewhere, the problem with external search tools is that you then have to go back to your "db" / "pim" / text program, etc., and do another, now more specific, search in order to get to the real "hit" in your application (it occurs to me at this moment that some search tools might be able to send you right to that "hit" by a mouseclick in their hit table, when it's a standard prog like Word - but forget this for more exotic file formats).

Both Ultra Recall and MyInfo allow for Boolean search, as does IQ, and only SOME other pim's do: I remember one which had it, but without NOT, and with no hit table; another had the hit table, but no Boolean search, and so on - so it's no wonder that many people use UR or MI in spite of the respective problems each of them otherwise causes.

I once stumbled upon DB/TextWorks, and I'd be willing to pay 1,200 bucks for a prog that really "has it all", but I discarded it then because of their "ask us for a trial (instead of just downloading it) and for a quote (instead of giving the price)" policy - so I never even got to a screenshot of it, let alone a trial. Then, it's a flat db, i.e. not a tree superposed upon such a db, as UR and MI and IQ and others are, and as even the later AS got with its trees-on-the-fly (by first line, or by field content - a very smart thing, the only prob being that with 5-digit record numbers, this regularly took minutes or even crashed; they dumped their forum because there really was much too much negative feedback from almost everybody). As I said today in my KEdit thread, lately it's MI that seems to leave UR trailing, not because MI got so good suddenly, but because there is steady if slow development there, whilst UR don't do much about their roadmap ("not much" being a euphemism for "nothing" here).

On the web, we use Boolean search all the time, in google (let alone ebay, or specialized sites like the Dialog you mention), and most people do it even without knowing: in google, they enter two or three search terms in order to refine their search from the start: a b is a AND b: people do it intuitively there. (It's for OR that google asks for some knowledge, since that must be written out - a OR b - which is far from intuitive, but some of us know it. NOT a is -a, etc., and so it's possible to find things.)

Whilst in a non-Boolean pim, you CAN'T search for a b: entering a b there would search for the literal string "a b", but not for records with both a and b in them - so these desktop pims really are three steps behind what we use on the web all the time, without even paying attention.
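
Just to show how small the missing piece is, here's the complete "a b means a AND b" logic as a toy AHK (v1) function over the lines of a text file ("notes.txt" is invented, and InStr is case-insensitive by default, which is what you'd want here anyway):

SearchAnd(file, query) {                      ; keep every line containing ALL words of query
    hits := ""
    Loop, Read, %file%
    {
        keep := true
        Loop, Parse, query, %A_Space%
            if !InStr(A_LoopReadLine, A_LoopField) {
                keep := false
                break
            }
        if keep
            hits .= A_Index . ": " . A_LoopReadLine . "`n"   ; line number plus line
    }
    return hits
}
; e.g.: MsgBox % SearchAnd("notes.txt", "dog leash")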

And there's another thing: many such "basic" desktop pim's do not even allow for searching "just in the tree" / "text only" / both, but invariably search everywhere - yet it's evident that a search "tree only" will perhaps render 5 hits, from which you choose the right one, whilst the same search "everywhere" will get you 200 "hits", among which you'll then have big problems identifying the one you need, without any possibility of refining your search with a second term that must also be within the same record, since in such progs, as said, a b will not work this way - so you are lucky when you remember a second term that might also be in the record you need, but even that might still leave 120 "hits"... So, no discussion here: if a pim only allows for "normal search, and then everywhere", it's to be qualified as CRAP, whatever its other qualities might be.

As for NoteFrog, if I understand this prog well (without ever having trialled it for more than 2 or 3 minutes or so), it relies exclusively upon searching, since there is no tree: on the left, it's the hit table!

Since I used askSam for almost 20 years or so, from early DOS on, and it got its tree-on-the-fly from version 6 only, I know both worlds: search-only (but in the spectacular AS way), and trees. And I must say, I function with trees: they hold related info together and also offer "a dedicated place" for your info, i.e. within a big tree you remember, more or less (depending also on the good construction of your tree), "whereabouts it must be", and I rely very heavily on this, i.e. I "search" for my contents by approaching them physically, by opening up headers, then sub-headers: for me, this is an extremely natural way of getting to info.

On the other hand, my memory for real searching often fails me, and even Boolean search doesn't help too much: I remember one search term: hundreds of hits; I suppose another one should also be in those records (but if I'm mistaken here, I'll inadvertently exclude the very record I'm searching for!), and even with the combination, I get too many hits, and then I don't really see a third search term that might have been in there - or perhaps not? And then, there are all those more-or-less-synonyms, which no current program handles as equal!

So I must say that with searching, even with good searching, I've got some problems, hence my interest in sophisticated trees. But of course, searching is of the highest interest wherever you have put something OUTSIDE of its tree-heading-subheading "way": somewhere else! There, with a hit table and Boolean search, it's 100 times better than with "just normal search and everywhere": I may have to spend several minutes on such a search sometimes, but in the first case, I find the thing; with only basic search, and 100 hits to then be accessed one by one...

But my point here is, even Boolean search isn't good enough; it should include "semantic search", i.e. half-automatic synonym provision. Meaning: you search for dog, and before searching, the program would list up breeds, "puppy", "cute", "ferocious", whatever, in order for you to decide which of these terms should be searched for (and it should even be possible to put some of these into different OR groups).

Couple this with an index, and the prog would only present you such breeds (in our example) as are really present somewhere in your texts, and not unnecessarily clutter this first "what to search" table with search terms taken from a dictionary but which ain't in your texts!

Then, all this for several languages, and for combinations of languages. And finally, the program could give you hit numbers while you work within that "what to search" window, meaning your processing of the search terms there would give you real-time results of how many hits you'd then get.
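
The basic mechanics of this are cheap; here's an AHK (v1) sketch, where the toy thesaurus and the toy index (with hit counts) stand in for what the pim would of course maintain itself:

BuildQuery(term) {                            ; expand a term into an OR group
    syn := {"dog": "puppy,hound,breed"}       ; toy thesaurus
    index := {"dog": 12, "puppy": 3, "hound": 0, "breed": 5}   ; toy index: words actually in your texts, with hit counts
    query := term
    list := syn[term]
    Loop, Parse, list, `,
        if (index[A_LoopField] > 0)           ; offer only synonyms really present in your texts
            query .= " OR " . A_LoopField
    return query
}
; e.g.: MsgBox % BuildQuery("dog")   ; -> "dog OR puppy OR breed" ("hound" has 0 hits, so it's dropped)

The hard part is not this, but the UI around it: the "what to search" table with its live hit counts - bread-and-butter work for a pim developer, though, not research.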

I know of at least one very early DOS text db which offered some semantic search (though not in the sophisticated form I describe here) - askSam was a best-selling program then and "killed" it, by way of most people buying AS instead. One of the big ironies here: after having got its then really comfortable market position, which it held for years, AS was NOT able to implement any semantic search functionality. So today, we're worse off than we were 20 years ago, except for visuals: of course, Windows ( / Mac ) is pleasant to the eye where DOS quickly gets unbearable, because visually at least, we get so much "more" today.

Thank you very much, also for the free DOS progs link. I've encountered another such link, with defunct sw, Windows and DOS combined, like early WordStar versions, 1-2-3 and such, but wasn't really enthusiastic about those. Your link is for defunct progs that are much more special and much more interesting, incl. Inmagic Plus - citation: "These are not trivial products." - right they are. Will have a good look into this site!

Btw, this semantic search, google does it all the time for you, even without asking your advice on whether they should. Hence the interest of having such a system at home, but with you controlling what's found, and what's discarded from the results. But no, even those specialised tools, incl. dtSearch, don't do semantic search, let alone let you control it. And this, 35 years after the intro of personal computing. /rant

15
The intro to the New Yorker article speaks of "macro-driven" or something - of course, if his special version even gets enhancements from within...

Even with other editors - and I should have mentioned this particular problem of all editors - it's all about "wrapped lines" vs. "long lines", the latter being a paragraph of your text set as one long line, in order to work on that paragraph as a "line". The problem here is the length of such lines: if you filter long lines by some term, the "hits" will NOT be in the center of your screen, with equal amounts of "context" before and after, but there will be a lot of horizontal scrolling, which is awful. (This horizontal centering of "lines with hits" around the hit is, though, what several dedicated search tools do, as well as some specialized translator tools.)

That's why you "flatten out" your paragraphs within your "word processor", then export, and then, in KEdit or such, you should do ("soft") word wrap, and only THEN filter "lines" - which would not constitute paragraphs, but more or less aleatoric parts of your paragraphs. I hope KEdit is able to filter such "sub-lines" after doing "soft" word wrap there? (I tried it not for such texts, but for data mining, where it "failed" for me for the above-mentioned reasons.)

THE (= The Hessling Editor) is mentioned by some, but in order to play around with such an editor, KEdit is fine, since there is no trial period here, just absence of storage for files bigger than a few lines; so you can load files of any size into KEdit and play around with them, and even paste the final results back into any other editor / text program - of course, abusing the trial this way in a systematic manner, to do real work with it, would be illegal.

As with many other sw, KEdit should have been developed further: all the above-mentioned negative points could and should have been exterminated over the years, incl. the formatting prob. In fact, some time ago, I searched very seriously for an .rtf-capable editor, but didn't find any. It was only afterwards that I understood the interest of html export even when you do not bring your text to the web afterwards, html export being much more macro- and editor-friendly for further processing (and even for further html-"upgrading") than .rtf export; so .rtf is a more or less defunct format: much too complicated in practice, and not stable enough. It wasn't until I had written complete macros to clean up such .rtf exports that I became aware it wasn't even stable, let alone all the fuss; html is much less chaotic - the chaos is an .rtf problem, whilst the lack of stability could of course be the fault of my exporting program.

I could write similar things about the only available Warnier-Orr sw, called b-liner: there also, we've got tremendously good ideas, but end of development, with many details never worked out (and, in b-liner's case, bugs that'll remain forever); it's a pity that so much sw outstanding from the crowd isn't developed further: development stops when the point of no (further financial) "return" for possible further development is reached, and so these progs never attain real maturity.

But xtabber, if you use KEdit on a regular basis, why not share some tips and tricks? Perhaps KEdit's possibilities go further than what I discovered by playing around with it.

As said, it's a very intriguing concept, and then you realize it can't do all these tasks you thought it could when you first read its description. I'm not calling them liars; it's just that such features trigger some wishful thinking that is then not fulfilled, because the real sophistication of which such features are theoretically capable is not implemented. (And yes, I know that the last 20/30 p.c. of the realization of good ideas takes as much work as the realization of the previous 70/80 p.c. - but why is it that everywhere we look in sw, we find ourselves with just "promising", instead of outstanding, sw? This applies to every field of sw, and also where there's enough money to realize the missing 30 or 20 p.c.: cf. my rant re MindManager for an example, so this is not a 1-developer-house-only phenomenon.)

16
General Software Discussion / Text post processing with KEdit, etc.
« on: January 10, 2013, 08:20 AM »
I started this thread as a derivative of the Smart Edit thread, since indeed, KEdit is a very interesting (but not too widely known) thing: https://www.donation...ex.php?topic=33574.0

EDIT: I just found this intro pdf: http://www.wolffinfo...hes%20(DATABASE).pdf - and they ain't called "Russian editors" but "Eastern Orthodox editors"

They don't develop it further and say so - no Unicode / UTF-8 support -, but some weeks ago, a minor update was released (1.61, from 1.60). It costs 129 bucks (not the update!).

It's the commercial version of the "Russian editor" type (XEdit / Rexx and their emulators), i.e. the kind that permits working on "subsets" of your lines (so you can have the same functionality for free, with some competitors - never trialled those, though). It's similar to, but not identical with, a "folding editor" - or more precisely, it IS a folding editor, but one which, on top of folding, allows for folding cascades.

I trialled KEdit, was very intrigued by this concept. Let me explain:

- you say you just want all lines with "abc" in them
- then you get a result in which you work as if the whole text consisted of nothing but these lines (so it's different from a search results table as in askSam or in an editor like TSE): it IS a search results table, but you work within the "search results"

- now you say that, of those lines, you want only the ones withOUT "xyz" (the "less" command; cf. the "more" command below, which re-adds lines)
- so you'll get a SUBSET of the previous thing, i.e. since you didn't revert to "whole text body" before, you're now with the subset "+abc -xyz"

- you can do this in cascading style many times (don't know if there is a limit to this cascade)
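
In command terms, the cascade above looks like this (from my trial notes - if I've mixed up MORE and LESS here, correct that, too):

ALL /abc/        (show only the lines containing "abc")
LESS /xyz/       (of those, hide the ones containing "xyz": the "+abc -xyz" subset)
LESS /uvw/       (and so on, each step narrowing the previous one)
ALL              (no operand: back to the whole file)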



Now for the problems (and that's why I didn't buy it), PLEASE CORRECT ME IF I'M WRONG:

1) You can't easily get back in this cascade: first result, "all abc lines"; second result, "all abc lines without xyz in them" - now, in order to go back to "all abc lines", you have to revert to "all lines" and then opt anew for "all abc lines", or you try to re-add the excluded lines with the "more" command.

Of course, in this example, this is not a real problem, but if, within a cascade of 5 or 6 such subsequent subsets, you want to go back from step 6 to step 4 or step 3 (e.g. in order to branch off different subsets from that point on): helluva!

2) Similarly, within such a cascade of refinements of your "search" / "selection", even without going back, you'll get lost early on; i.e. with KEdit, you'll always have a legal pad near your keyboard, on which you write down your refining cascade by hand - and that's really a pain in the youknowwhere. Ok, some people might be able to memorize their 5 or 6 consecutive steps here; I get lost at step 3, or even at step 2 if my multitasking capabilities have been in demand in-between.

3) You can only do one subset at a time, meaning you cannot enter "(+abc -cde) OR ( (+ijk) OR (+lmn) )" or such, whilst you can do exactly this in a prog like askSam.

Such a feature would have resolved both of the above-mentioned problems 1 and 2, since you'd have your search code within the command line, and you'd make adjustments to your "search" command there: good both for "going back" and for "remembering the current state". Cf. the askSam search line, which works exactly this way.

In fact, you CAN do it by command line in KEdit, in this style, e.g.

"ALL ~WORD /abc/"

which means "all lines not containing "abc" as a word" (irrespective of "abc" occurring as part of a longer word). But you CAN'T COMBINE such commands in your command line, and that's a big problem.

4) As with every "folding editor", you only see the lines in question, whilst for many tasks, you'd need the text beneath those lines. In askSam, e.g., you can put the hit table into one window, select hits there, edit the "lines beneath" (= the respective records) within your main window, switch back to your hit table, select another line / record there, etc. (Of course, you can have your hit table beneath your records, or to the right of them, but for this latter variant, buy a large screen in order to "get" the context of those lines - a second screen is perfect for making heavy use of this feature.)

It's MyInfo 5 that had found a really very clever solution to this problem. In fact, MI, in version 5 - well, that was 2 years ago... - had a unique feature, allowing, by option, for two more subsequent lines (regular style) to appear beneath each search result line (bold). Thinking about it, you'll quickly grasp that this not only made MI 5 rather suitable for programming needs, but especially for client db's and such: knowing you've got such a fine feature at your fingertips, you'll design the very first 3 lines of your clients' / prospects' records in a special way, so as to search for content within line 1 of these records, and then have specific content even within your hit table, not only after going from there into specific records.

So this was a (rather hidden) gem of MI 5, which the developer didn't even advertise, whilst it was unrivalled (is there a similar function in other sw? please tell me!) - and from MI 6 on, it's absent, the developer saying it will be back some day. (So much for using sw from a 1-developer venue for your business.)

MI has got other details that are much better than the corresponding solutions (if there are any) in UR, and people have understood this lately: UR's forum is as dead (if you abstract from my posts there) as MI's forum was a year ago. With the absence of UR development, and MI going steady (if slow indeed), more and more people go over to MI, and I certainly don't blame them. (UR will - AGAIN - be on bits in some days: it's becoming ridiculous: with a little development on their side, they could easily sell UR full price instead.)

5) A minor problem, but a very ironic one: for its outstanding "subset of lines" feature, i.e. its USP, KEdit does NOT have any keyboard shortcut, so you must do a macro activating the menu. This would have been really easy to implement, and its absence is ridiculous: yeah, hide your best feature, make it difficult to reach. (Ok, not really difficult for us macro / AHK users, but you see what I mean.)



So where's the real advantage of KEdit? It lies in its facility to directly EDIT these "hits", as said above, and this means it's not so much suited for programming and data mining (whilst that's a bit its false "promise": seeing that feature, your first idea is to use it in this big way, and then you realize it's not possible, for the problems listed above), but for text editing: for writers (!!! hence your find!), for translators, for editors of texts written by others, etc.:

In fact, you search for some search term, or even for some synonyms ("abc OR def OR ghi"...), and then you edit some of the hits right within the "hit table" you get - no need for switching back and forth between search results and real text, as in programs of the other kind (AS, TSE, UR (Ultra Recall), MI (MyInfo) and many more). At the end of the day, THAT's the real purpose of this program, and at this task it excels.

Again, if there are errors in my description, please correct me, since that would mean that KEdit's range of usefulness would be far greater than described above.

(If you trial KEdit, know that you can display the filtered lines in the form hitline, hitline, hitline, or in the form hitline, "x lines not displayed", hitline... - the latter is the default, and it makes things ugly (and I don't have the slightest idea for what task this feature might be of use) - but you can do without it and then have much cleaner results.)

(You can also do subsets by "selection level", but it doesn't seem possible to combine this with the filtering above, so I don't think you'll get specific headers together with their text body, as you (partly, and cleverly) got in MI 5 (point 4 above); and as for "selection level" alone, I didn't find a task in which this feature would make sense for me. Theoretically, the "selection level" command is very clever, but I couldn't find a way to assign a certain selection level to a certain "more" command; i.e. you do a "more" command, then could do a "set selection level" command, but not, it seems, just for these "more" hits / lines - the "selection level" setting would mix those "new" lines up with the old ones, i.e. assign the same selection level to all of them, which is not what we want. AND the "selection level" is line-specific, not group-specific, meaning you cannot use it to have "+abc" = sl1, "more def" = sl2, and so on - such a function could come in handy, but doesn't seem to be there - so in the end, I never understood what they think "sl's" might be meant for.)

(And yes, instead of filtering lines, you can have them marked yellow, i.e. see the hits within the global context.)



I can't read John McPhee's New Yorker 1/2013 article on KEdit, since the link is for subscribers only, and buying the issue here in Europe wouldn't cost me 6.99 but 25 euro or something - magazine prices triple and quadruple over the Atlantic - but perhaps he explains even better than I did why, for writers / editors, i.e. certainly for specific POST-PROCESSING tasks on long texts (technical as well as belles lettres), KEdit is indeed a hidden gem.



P.S.: A last hint: such plain text editors make you lose any previous traditional formatting, which certainly is not acceptable. So there are two solutions: working with mark-up codes from the beginning, or doing an export to html (from Word / .rtf / any formatted text), then working on this intermediate text from then on. "Post-processing", I said. Btw, it's easy to do a macro that will re-code the ugly html codes "back" into something more pleasant to the eye, e.g.

from <b>bold words</b> or such (sorry this line resolved into bold text here, but you know the <> coding)

to |bold words|| or such,

then work on this text, then have another macro doing the back-to-html transposition. (The same would apply to traditional publishing: from formatted text to html (since that export is often very stable, when export to .rtf isn't necessarily, not to speak of either product's respective neatness), then macro translation into "intermediate mark-up" (with which you can live visually), then post-processing, then macro translation into the respective codes for PageMaker, InDesign, FrameMaker...) But this formatting issue is certainly the reason why goodies like KEdit remain exotic and ain't developed further.
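
For what it's worth, the two "translation" macros are a few lines of AHK (v1) each; the |...|| markup is just the example from above, and a real macro would of course also cover the i, u, etc. tags:

HtmlToLight(s) {                                    ; <b>bold words</b>  ->  |bold words||
    return RegExReplace(s, "i)<b>(.*?)</b>", "|$1||")
}
LightToHtml(s) {                                    ; ...and back again
    return RegExReplace(s, "\|(.*?)\|\|", "<b>$1</b>")
}
; e.g.: MsgBox % HtmlToLight("some <b>bold words</b> here")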

17
xtabber, very interesting mention, so I started a new thread.

18
- I expected the guitar thing, with a Porsche; now it's a very expensive bass guitar, but the argument was in the air. Well, there's net profit in the price of such physical goods (in the European price of a Porsche, that would be 25,000 euro; perhaps it's less in the U.S.), but there's also a lot of cost in producing them - now compare with sw, where any additional copy only produces very low marginal cost.

- It's right that you can't compare Photoshop with Aids drugs for South Africa, hence the intervention of the authorities for the latter, whilst we won't get the same for the former.

- I personally am only very mildly interested in Photoshop, but this is the piece in the package that the masses want (no pun intended), so I spoke mainly about that.

- We're not speaking of forcing Adobe to give it out at a "social" price (hence no authority action needed or asked for); we just discuss the INTEREST Adobe has in doing the best here, given the situation, whatever might have led to this new situation for them: and that's an update / upgrade scheme with no questions asked about codes previous to the ones given out here.

- Why do I say it's in their interest now? The situation is a fact: anybody wanting to use Photoshop CS2 is henceforward technically able to do so, by tweaking his modern comp or buying an older comp for it. So, many amateurs will do exactly this, and they will not upgrade to a subscription scheme but use their (probably illegal, but even that's not sure, see below) copy of Photoshop CS2 for long years. Many of them would not have bought CS5 or a subscription anyway, but some of them would have; now, with their "free" CS2 version, they will NOT: from CS2-free to CS5-or-subscription-very-expensive is a jump most amateurs will never make. My point here is: with the availability of "free" (legal or not) CS2, Adobe BLOCKED a big part of its amateur market, for years - hence their interest in DE-BLOCKING it, by luring a maximum of these possible future prospects, now locked into their old-version use, into a new product, by pricing it correctly for them.

- Thus, a special update / upgrade for these special codes / CS2 is of high interest, since for many of those happy, "served" amateurs, it would not be a jump from old-and-free to new-and-very-expensive, but from free-but-old to something brand-new and decently priced (for them). Here, many amateurs would make the jump indeed (depending on the upgrade price: I'd say Adobe should not take more than 40 p.c. of the full version's street price if they want to generate big sales here).

- It's not a valid argument imo that somebody else could just program something similar, then sell it for 200 bucks instead of 800 / 1,000 bucks: we all know this will never occur, since nobody else can make up for the advance in expertise Adobe has gained over the years. No, it's the other way round: we're speaking of a de-facto monopoly here, with the monopolist realizing his abusive price policy by way of his monopoly power.

- As for tech support for amateurs: the tech support I got, as a 3,000-or-more-bucks customer (as said), has been abysmal (and thus very cost-effective for them), and anyway, it should be possible to couple the special upgrade scheme with a special tech support scheme, where you would have to buy service vouchers, say 25 bucks apiece.

- And now my legal AND marketing core argument (no morals here). We all know there are special sites giving away illegal (activated) copies, or illegal codes (in order to activate copies): you download the sw here, you get your illegal code there, and you're done. I've never even visited such sites, and all my sw on any comp I have is legal / paid for (which is why quite some of it is quite old).

- But I have never ever seen such illegal downloads, such illegal codes, ON THE SITE OF THE DEVELOPER HIMSELF - NEVER EVER IN 25 YEARS OF PC. Have you? Where? From whom? What sw house has ever published, themselves, their own "illegal" codes? Now one has. Not by accident, but deliberately.

- And here, it's deliberate now. Whatever their possible accident might have been, one day later the "illegal" (= illegal for non-CS2 users without previously bought codes) page was on again, together with the download links, together with the codes, and not even any prompt for "your Adobe ID" anymore.

- Whatever might have happened on January 7 and before, there's a NEW SITUATION from January 8 on: it's deliberate now, and that changes a lot.

- Here we get, for the very first time in the history of the personal computer (incl. Mac), a developer who himself publishes the "illegal" codes, which means he incites people to "steal" his sw - if he really wants it to qualify this way - and not by accident, but because he pretends this to be a "normal" way of sending codes to paying customers,

- when in fact this way of doing things

(instead of e.g. sending them by mail to paying customers upon request, or even in a mass sending, and afterwards, upon request, in cases where paying customers had changed their mail address and so didn't get the code, etc.)

- IS A WORLD PREMIERE,

- as would be a home owner who not only puts the key to his front door prominently on top of his letterbox (= the download links), but also puts up a sign there on which he has written the code of his safe (= the activation code).

- From this DELIBERATE way of doing things, I deduce that in many jurisdictions, and perhaps even in the U.S., they would LOSE their case if ever they brought an action against some NON-bona-fide downloader-activator.

- And they know this, and they do it this way all the same. So they do it on purpose, or because they've become delusional. They're smart people, so I opt for the former. You know what they invariably say on bits when they prolong a sale for a second day?

Enjoy!

19
Having thought about it, I think we have just a few standard "formats", meaning just a few very standardized ways of encoding any possible 2-dimensional sequence of color and brightness values, from which there is no way out, via screenshots or anything else, since the moment you save your screenshot or any "processed" photo, it will again be re-encoded the very same way, producing very similar results, the encoding algorithm being followed without any "fuzzy" deviations. You change colors: it stays similar. You change brightness values: it stays similar. It's only when you change forms that you'll get a really different, new encoding, but then your photo will not remain similar anyway. And as for details: similar again, since they're just sub-parts of the code. Which means the internal encoding of bitmaps (photos are (perhaps a special form of) bitmaps, I suppose) is much more similar to the internal encoding of vector graphics than I'd ever have thought. And from that on, photo search becomes "easy": no ai needed to look at the photo from the outside; they recognize it from the inside, so to speak, by its particular (and astonishingly little variable) coding structure and the sub-parts thereof.

20
It's an interesting subject indeed, as soon as we presume they use more of the underlying code of pictures (more than we thought), and not so much ai checking those pictures from the outside. But if we were totally right (which we are probably not: they'd do a combination of both, I suppose), then once a picture is "screenshot" and restored from there, they wouldn't recognize it anymore, and I think they're a lot smarter than that.

Also, when manipulating the underlying "text", the question is: will the thing you've got after that manipulation be a real picture, or will it be a pseudo-picture, i.e. recognized as a picture by google, but no longer openable in a viewer (having become "defective" in the process - or not)?

Any google insiders here willing to speak?

21
General Software Discussion / Re: A new harddisk for my old notebook?
« on: January 08, 2013, 11:55 AM »
Remember Laura Nyro.

22
"apparently that is a problem for a lot of users. No help from Nuance, as usual."

That's right two times: PaperPort isn't worth much, and Nuance support is worth nothing. (On the other hand, I once bought, for several hundred euro then - I believe it was 270 euro = 380 bucks, or was it even 340 euro? - an early version of OmniPage: terrible! Now I own a free and crippled OmniPage version 16 (never bothered updating, in view of my big loss), and I'm rather happy with it: it's much, much better than the outrageously-priced early version had been!)

In general, living in Europe and never buying from amazon.com (but often from amazon.co.uk/.de/.fr, all with no probs whatsoever), I don't buy much anymore without first, and systematically, checking the product both on the web and especially on amazon.com - and of course, there you'd have been informed on both counts before spending your money. It's amazon.com, not .co.uk, not .de, not .fr, because I want the sheer numbers: with 300 reviews, there's less chance of them all being manipulated than with just 12 or so on amazon.de/.fr for the same product.

On the other hand, if you have a recent version of OmniPage, re-install it and try to tweak it a little bit; it's not that bad at all for ordinary originals.

23
Exactly, "Ultimate" they call it, and the Enterprise versions. OEM for 49 euro or something? That's not that good a deal, I think, but you're right: these OEM versions, in Germany, because of a ruling of their BGH (highest court there), may travel to another comp - in theory. Since they phone home, I don't know if MS Germany will give you the needed code when they know it's not the original pc anymore. So I wouldn't buy OEM, but a regular version for under 50 euro as well, which travels without problems. Especially Win7 could become highly valued in some years, in case even Win9 isn't much better than the 8 flavor.

On the other hand, it's always one Win, one pc, whilst for other sw, OEM versions often install on a single pc only, when the ordinary version installs on three; a recent example for me: I was wondering whether to buy Adobe Acrobat 9 OEM or, for the same price, Acrobat 8 for up to 3 pc's... well, that's probably become a defunct prob for many people now, and at least old Acrobat ebay prices will go down, I suppose.

And then, my posts except the very first here prove that these language settings ain't as important as I once thought, since you can tweak any language a little bit. Ok, not thoroughly: German pops up in my system for dialog boxes and such, so I'd clearly prefer a genuine English Windows.

Btw, for English Windows, the simplest solution (for Continental Europe) is to order from British vendors: postage is from 2 to 5 pounds, prices are decent, the delay is a week or so - I've bought lots of sw in GB during the last years and never had problems. Buying from the U.S., though, often means problems: customs (not customs duty proper, but vat, calculated on the price you pay - they even look at the corresponding ebay page in order to know what you really paid!), delay (up to 1 month is ok, but once I waited 3 months and had been sure by then the seller was a crook, which he wasn't - after arrival, I verified he had sent the parcel 3 months before!), transportation cost (often 30 bucks and even more, and some commercial sellers do not even ship to Europe). And on my latest try, last year, I got student sw = an "educational" version, after having bought (and paid for) the ordinary version - that's ebay, the trans-Atlantic variety... (After he got my red warning, he even contacted me, saying he did this (= stealing?) for a living, so would I please take away the negative score in order not to further harm his business - oh yeah.)

So, buying in GB is always a good idea if the price is right (and it mostly is) - I would have bought an English Windows version there, were it not for the need to then re-install all your stuff, so I live with German pop-up boxes...

Vistalizator? We all become acquainted with a lot of formerly unknown things here, that's good! But the bad news is, "It still requires you to download the official language packs from Microsoft though." means there's a high probability it'll be illegal! (= the download and use of the language packs, I mean, outside of their legitimate use framework.) But tell them to do the same for XP (and in a live system, please), and I'll install it notwithstanding! (Just dreaming.) ;-)

24
Interesting info! Somebody with sufficient time and motivation could open up pictures with a text / hex editor and change various lines there, in order to identify "header parts" or such that will prevent google from finding them again, whilst perhaps preserving the original picture, more or less. Variant: don't open the original files, but blow parts of them up, take a screenshot of the details, and save these details as completely new files: this part should work for sure. Then do the same with bigger parts of the original picture, and see at which size google's recognition function kicks in. Two additional probs here: between tries, you must wait for google to process the thing, a very big annoyance (upload intermediate results in groups, then wait, then the next: booh!). And you won't get good quality by blowing up to almost original size and then screenshotting the thing. (Or do the screenshot from one of these new high-resolution Apple screens.)

Results could make a good journal article, though.

25
Remember Laura Nyro.
