
Author Topic: On data storage and applications going cloud (Surfulater, Mindjet et al.)  (Read 18155 times)

helmut85

  • Participant
  • Joined in 2013
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
I

1. Just today, the community has received news of its loss, two days ago, of a prominent data-availability activist, Aaron Swartz. Interestingly, the criminal prosecution authorities seem to have been far more motivated to treat him as a major criminal than even his alleged victim, MIT. (Edit: Well, that doesn't seem to be entirely true: JSTOR is said not to have insisted on his being prosecuted, but M.I.T. wanted him made to "pay" - so without them, he'd probably be alive. And it's ironic that M.I.T., itself a VICTIM of this "resell them their own academic papers, at outrageous prices" scheme, made itself Aaron's prosecutor, instead of saying: we're not happy about what he tried to do, but he tried it for all of us. M.I.T. as another Fifth Column representative, see below.) So there is cloud for paid content, and getting access to it without paying, big style, then perhaps even re-uploading it, gets you 20 or 30 years in jail if the prosecutors have their way - and that's where it goes (cf. Albert Gonzalez).

(Edit: First he made available about 20 percent of a database of prior court rulings, absolutely needed for preparing lawsuits in a common-law legal system based on the rulings of previous, similar cases (they charge 8 cents a page, which doesn't seem much, but expect to download (and pay for) perhaps 3,000 pages in order to get the 20 or 30 that will be relevant); then he tried to download academic journal articles from JSTOR. The irony here is that the academic world is paid by the general public (and not by academic publishers or by JSTOR) for writing these articles, then pays high prices for access (via the universities, so again the general public pays at the end of the day: university staff and overall costs are a public expense on the Continent anyway, while in the U.K. and the U.S. it's the students who finance all this, and who then charge 500 bucks an hour afterwards in order to recoup the investment). The prosecutor asked for 35 years of imprisonment, so Swartz would even have had to be called "lucky" had the sentence stayed under 25 years. (From a competitor of JSTOR, I was just "offered" a 3-page askSam review from 1992 or so, as a .pdf, for 25 euro plus VAT, if I remember well...))

(Edit - Sideline: There is not only the immoral aspect of making the general public pay a second time for material it already owns, morally and by having financed it to begin with; there is also a very ironic accessibility problem that becomes more and more virulent: Whilst universities used to make academic journals available in their reading rooms not only to their staff and students but also to (mildly) paying academics from outside, today's electronic-only papers, instead of being ubiquitously accessible, are in most universities no longer available to such third parties at all, or cannot even be copied and pasted by them in part. So in 2013, non-university academics sit before screens and are lucky if they can scribble down the most needed excerpts by hand: The electronic "revolution" thus makes more and more people long for the Seventies' university revolution, the advent of the photocopier - which for new material is, in most cases, no longer an option: Thanks to the greed of traders like JSTOR et al., we're back to handwriting, or there is no access at all, or else it's 40 bucks for 3 pages.)

2. On the other hand, there's the cloud as storage repository, for individuals as for corporations. Now this is not my personal assertion but common sense here in Europe, i.e. the (mainstream) press regularly publishes articles agreeing that the U.S. NSA (Edit: here and afterwards, it's NSA, not NAS, of course) takes its regular (and, under U.S. law, legal) look into any material stored in the cloud on U.S. servers; hence the press's warning that European corporations should at least choose European servers for their data - whilst of course most such offerings come from the U.S. (Edit: And yes, you could consider developers of cloud software and / or storage a sort of Fifth Column, i.e. people who get us to hand our data over to the enemy - who should be the common enemy.)

3. Then there is encryption, of course (cf. 4), but the experts / journalists agree that most encryption poses no problem for the NSA - very high-level encryption probably would, but it is not regularly used for cloud applications, so they assume that most data finally reaches the NSA in readable form. There are reports - or are they speculations? - that the NSA provides big U.S. companies with data coming from European corporations, in order to help them save research and development costs. And it seems that even corporations that take care to encrypt their data-in-files rather well don't apply the same security standards to their e-mails, so in the end a lot of data is available to the NSA. (Just some days ago there was another big article on this in Der Spiegel, Europe's biggest news magazine, but it was only the latest in a long succession of such articles.) (Edit: This time it's the European Parliament (!) that warns: http://www.spiegel.d...ie-usa-a-876789.html - of course, it's debatable whether anybody should then trust European authorities more, but it's undebatable that U.S. patent law grants patents to whoever comes first with the money to patent almost anything, independently of the prior existence of the - stolen - ideas behind the patent; i.e. even if you can prove you've been using something for years, the patent goes to the idea-stealing corporation that offers the money to the patent office, and henceforward you'll pay for further use of your own ideas and procedures, cf. the edit to number 1 here - this for the people who might eagerly assume that "who's got nothing to hide needn't worry".)

4. It goes without saying that those who say "if you use such cloud services, at least use European servers" get asked, first, what about European secret services doing similar scraping, perhaps even for non-European countries (meaning: from GB etc. straight to the U.S. again), and second, in some European countries it's now ILLEGAL to encrypt data, which makes for a wonderful world for such secret services: either they get your data in full, or they even criminalize you or the responsible staff in your corporation. (Edit: France's legislation seems to have been somewhat lightened instead of being further tightened, as had been intended by 2011. Cf. http://rechten.uvt.n...ryptolaw/cls2.htm#fr )

5. Then there are accessibility problems, attenuated by multi-storage measures, and there is the provider closing down the storage, by going bankrupt or out of plain commercial evil: it seems there are people out there who definitively lost data with some Apple cloud services. (Other Apple users seem to have lost bits of songs bought from Apple, i.e. Apple, after the sale, seems to censor unwanted wording within such songs - I cannot say for sure, but I read some magazine articles about such proceedings on their part. Of course this has only picturesque value in comparison with "real data", hence the parentheses, but it seems to show that "they" believe themselves to be the masters of your data, big-style AND in the little, irrelevant things - it seems to indicate their philosophy.)

(Edit: Another irony here: our own data is generally deemed worthless, both by "them" and by some users (a fellow here, just weeks ago: "It's the junk collecting hobby of personal data."), whilst anything you want or need access to (even a 20-year-old article on askSam), deemed "their data", is considered pure gold, 3 pages for 40 bucks - so not only do they sell to the general public what is the public's own property, instead of just making it available, but on top of that, the prices are incredibly inflated.

But here's a real gem. Some of you will have heard of the late French film auteur Eric Rohmer, perhaps in connection with his most prominent film, Pauline at the Beach. In 1987 he made an episodic film, 4 aventures de Reinette et Mirabelle, of which I recommend the fourth and last episode: on YT it's in three parts, in atrocious quality but with English subtitles; just look for "Eric Rohmer Selling the Painting". It's a masterpiece of French comedy, and do not miss the very last line! (For people unwilling to watch even a few minutes of any French film: you'd have learned here the perfect relativity of the term "value" - it's all about who owns the object in question at any given moment.) If you like the actor, you might want to meet him again in the 1990 masterpiece La discrète. (There's a Washington Post review in case you'd like to countercheck my opinion first. And yes, of course there's some echo of the very first part of Kierkegaard's Enten-Eller to be found here.)...)

II

6. There is the collaboration argument, and there is the access-your-data-from-everywhere argument - no more juggling USB sticks, external hard disks and applications like GoodSync Portable - where, I'm trying to be objective, there is also a data-loss problem, and thus a what-degree-of-encryption-is-needed problem too: your notebook can be lost or stolen, and the same goes for external storage devices. But let's assume the "finder" / thief will not be the NSA and, in most cases, not even your competitor, but just some anonymous person who dumps your data as soon as it isn't immediately accessible; here, except in special cases, even rudimentary encryption will do.

7. I understand both arguments under 6, and I acknowledge that cloud services offer much better solutions for both tasks than you can get without them. On the other hand, have a look at Mindjet (ex-MindManager): it seems to me that even within a traditional workgroup, i.e. collaborators physically present in the same office, perhaps in the same room, collaboration must now be done through their cloud services and can't be done by local workgroup means / cables alone - if this is correct (I'm not certain here), it is beyond ridiculous, or worse, highly manipulative on the part of the supplier.

8. Whenever traditional desktop applications "go cloud", they tend to lose much of their previous functionality in the process (Evercrap is but ONE such, very prominent, example; there are hundreds), accompanied by arguments like "we like to keep it simple" and similar idiotic excuses. And even with a highly professional developer, as in the case of Neville here, it seems that the programming effort for the cloud functionality at the very least heavily slows down any traditional "enhancement" programming, or even the mere transposition of the functionality there has been - of course, how much transposition is needed depends on how-much-cloud-it-will-be for that particular piece of software. As a general rule, though, users of traditional software going cloud lose a lot of functionality and / or have to wait years for their software to recover afterwards from this more or less complete stalling of non-cloud development. (Hence the "originality" of Adobe's CS "cloud" philosophy, where the complete desktop functionality is preserved, the programs continuing to work desktop-based, with only the subscription mechanics (? or hopefully some additional collaborative features too?) handled cloud-wise.)

III

9. Surfulater is one of the two widely known "site-dumping" specialist applications out there, together with the German WebResearch. The latter is reputed to be "better" in the sense that it's even more faithful to the original for many stored pages, i.e. "difficult" pages are rendered better, and in the sense that it's quicker (quicker perhaps especially with such "difficult" pages), whilst the former is reputed to be more user-friendly in the everyday handling of the program - sorting, accessing, searching, whatever. I don't have real experience (i.e. beyond short trials) with either program, so the operative term here is "reputation", not "fact". It seems to be common knowledge, though, that both programs do web page dumping much better than even the heavyweights of the traditional PIM world, like Ultra Recall, MyInfo, MyBase, etc.

10. Whilst very few people use WebResearch as a PIM (there are some), many people use Surfulater as a general PIM - and even more people complain about regular PIMs not being as good at web page dumping as these two specialists are. For Surfulater, there's been the extensive discussion on the developer's site mentioned above, and it seems it's especially those people who use the program as a general PIM who are worried by their PIM threatening to go cloud, more or less, since it's they who would be most affected by the losing-functionality-by-going-cloud phenomenon described in number 8. Neville seems to reassure them that data will be available locally as well as through the cloud, which is very much OK. But even Surfulater as it is today misses much functionality that would be needed to make it a real competitor within the regular PIM range, and you'd be safe betting on these missing features not being added any time soon either: the second number 8 phenomenon (= stalling, if not losing).

11. So my personal opinion on Surfulater and WebResearch is: why not have a traditional PIM, with most web content in streamlined form? That is, no more systematic dumping of web pages into your PIM or into these two specialist tools, but selecting the relevant text, together with the URL and a download date / time stamp, and pasting these into your PIM as plain text, which you then format according to your needs - meaning that right after pasting, you bold the passages that motivated you to download the text to begin with. This way, instead of having your PIM collect an incredible amount of mostly crap data over the years, you build yourself a valid repository of neat data really relevant to your tasks. Meaning: you do a first focussing / condensing of the data right on import.

12. If your spontaneous reaction to this suggestion is "but I don't have time for this", ask yourself whether you've mostly been collecting crap so far: if you don't have the 45 or 70 seconds for bolding the passages that make the data relevant to you (the pasting itself should take some 3 seconds with an AHK macro if really needed, or better, via an internal function of your PIM, which could even present you with a pre-filled entry dialog for properly naming your new item; a minimal AHK sketch follows below), you probably shouldn't dump that content into your PIM to begin with. Btw, AHK also allows for dumping pictures (photos, graphics) from a web site to your PIM with a single key (i.e. with your mouse cursor anywhere in the picture, it'd be one key, e.g. a key combination assigned to a mouse button), and eventually your PIM should do the same. Of course, you should dump as few such pictures into your PIM as absolutely necessary, but technically (and I know that in special cases this would be almost indispensable, but in special cases only) it's possible to have another AHK macro for this, and your PIM, too, could easily implement such functionality.
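For illustration only, here is a minimal sketch of such a clip-with-source macro in AutoHotkey (v1). Everything in it is a placeholder assumption - the hotkey, the date format, the Esc keystroke for leaving the address bar, and the target program (shown as a hypothetical YourPim.exe) would all have to be adapted to your own browser and PIM:

    ; Minimal sketch: copy the browser selection, append the page URL and a
    ; date / time stamp, and leave the combined clip on the clipboard,
    ; ready for pasting into the PIM as plain text.
    ^!c::
        Clipboard :=                      ; empty it, so ClipWait sees the new copy
        Send, ^c                          ; copy the selected text
        ClipWait, 1
        Selection := Clipboard
        Send, ^l                          ; focus the browser address bar
        Sleep, 100
        Clipboard :=
        Send, ^c                          ; copy the page URL
        ClipWait, 1
        Url := Clipboard
        Send, {Esc}                       ; give focus back to the page
        FormatTime, Stamp,, yyyy-MM-dd HH:mm
        Clipboard := Selection . "`n`n" . Url . " - " . Stamp
        ; then e.g.: WinActivate, ahk_exe YourPim.exe   followed by   Send, ^v
    return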

13. This resets the two specialists to their specialist role: dumping complete web pages in those rare cases where this might be necessary, e.g. for mathematicians and the like who regularly download web pages with lots of formulae, i.e. text with multiple pictures spread all over it - but then, there are fewer and fewer such web pages today, since for such content most of them link to PDFs instead, and of course your PIM should be able to index PDFs you link to from within it (i.e. it should not force you to "embed" / "import" them for that). Also, there should be a function (if absent from your PIM, then via AHK) that downloads the PDF and then links to it from within your PIM, one-key style (sketched below), sparing you the effort of first downloading the PDF and then doing the linking / indexing within your PIM - which is not only an unnecessary step but will also create "orphaned" PDFs that are not referenced anywhere in your PIM.
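Again purely as a hedged sketch (AHK v1; the hotkey and the folder are made-up placeholders), a one-key download-and-link step could look like this - it assumes you have just copied the PDF's link to the clipboard:

    ; Minimal sketch: download the PDF whose URL is on the clipboard into a
    ; fixed folder, then put the local path on the clipboard so the PIM entry
    ; can link to it at once - no "orphaned" PDFs left unreferenced.
    ^!p::
        Url := Clipboard
        SplitPath, Url, FileName          ; crude: derive the file name from the URL
        Target := A_MyDocuments . "\PIM-PDFs\" . FileName
        FileCreateDir, %A_MyDocuments%\PIM-PDFs
        UrlDownloadToFile, %Url%, %Target%
        if ErrorLevel
            MsgBox, Download failed: %Url%
        else
            Clipboard := Target           ; paste this into the PIM item as the link
    return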

IV

14. This means we need better PIMs / personal and small-workgroup IMS / information management systems - not in the sense of "do better web page import", but in the sense of "enhance the overall IM functionality, incl. PM and incl. half-automated import of web page CONTENT (incl. PDF processing)". Please note here that while a "better" import of web-pages-as-a-whole is an endless tilting at windmills that blocks an incredible share of any PIM developer's programming capacity - every one of them, and worse today than it ever did - such half-automation of content dumping / processing is extremely simple to implement on the technical level. This way, PIM developers would no longer be blocked by never-ending demands (never-ending almost independently of their respective efforts to fulfil them) to reproduce imported web pages better (and to import them quicker), but could resume their initial task, which is to conceive and code the very best IMS possible.

15. Thus, all these concerns about Surfulater "going cloud", and how much so, are of the highest academic interest, but of academic interest only. In your everyday life, you should a) adopt a better PIM than Surfulater is (as a PIM), and then enhance core data import (and processing) there by AHK, and b) ask for the integration of such features into the PIM in question, in order to make this core data processing smoother. Surfulater and WebResearch have their place in some special workflows, but in those only; for most users it's certainly not a good idea to build up web page collections, be it within their PIM, or be it as presumably better-shaped collections within specialized web page dumpers like Surfulater and WebResearch, whose role should be confined to special needs.

V

16. And of course, with all these (not propaganda-only, but then, valid) arguments about cloud for collaboration and easy access (point 6), there's always the aspect that "going cloud" (be it a little bit, be it outright) enables the developer to introduce, and enforce, the subscription scheme he yearns for so much, far better than would ever be possible for any desktop application, whether it offers some sync feature on top of being desktop or not. Have a look at yesterday's and today's Ultra Recall offer on Bits: even for normal updates they now have to go the Bits way, since there's not enough new functionality even in a major upgrade, so loyal, long-term users of that program mostly refuse to update at half price (UR Prof 99 bucks, update 49 bucks, of which, after the payment processor's share, about 48.50 bucks should go to the developer), and so some of them at least "upgrade" via Bits, meaning a Prof version at a starting price of 39 bucks, of which 19.50 go to Bits and 50 cents or so to the payment processor, leaving about 19.00 bucks for the developer. It's evident that with proper development of the core functionality (and without having to cope with constant complaints about the web page dump quality of his program (Edit: prices corrected)), the developer could easily get 50 bucks for major upgrades of his application, instead of dumping them for 19 bucks - and that would mean much more money for the developer, hence much better development quality, meaning more sales / returns, hence even better development...

17. As you can see here, it's easy to get into a downward spiral, but it's also easy to create a quality spiral upwards: it's all just a question of properly conceiving your policy. And on the users' side, a rethinking of traditional web page dumps is needed, imo. It then becomes a false problem to muse about how to integrate a specialist dumper into your workflow: rethink your workflow instead. Three integral dumps a year don't ask for integration, but for a link to an .mht.

18. And yes, I get the irony of downloading in order to then upload again, but then, I see that's for preserving current states, since the dumped page may change its content or even go offline. But this aspect, over and above the policy of "just dump the content of real interest, make a first selection of what you'll really need here", could in most conceivable cases only prevail for legal reasons - and in those special cases, neither Surfulater nor WebResearch is the tool you'd need.

VI

19. As said, the question of "how Neville will do it", i.e. the distribution between desktop (data, data processing) and cloud (data, data processing again), and all the interaction needed and / or provided, will be of high academic interest, since he's out to do something special and particular. But then, there's a real, big problem: we get non-standardization here, again. Remember DOS printer drivers, as just one everyday example of the annoyances of non-standardization? Remember those claims, for this program and that, "it's HP xyz compliant"? Then came Windows, an incredible relief from all these pains. On the other hand, this buried some fine DOS programs, since soon there were no more drivers for then-current printers, among other problems; just one example is Framework, a software masterpiece by Robert Carr et al. (the irony here being that Carr is in cloud services today).

20. Now, with the introduction of proprietary cloud functionality, different for each of the many applications going cloud today, we're served the pre-Windows incompatibility chaos all over again, instead of being provided with more and more integration of our different applications (in a traditional workgroup, you at least had common directories for linked and referenced shared files). That's "good" (short-term only, of course) for the respective developers, in view of the subscription advantage it brings them (point 16), but it's very bad for the speed at which really integrated workflows get set in place, for all of us: instead of soon getting a much better framework for our multiplied and necessarily more and more interwoven tasks (incl. collaboration and access-from-everywhere, but not particular to this application or that), and for which, from a technical pov, the time is ripe, we have to cope with increasing fragmentation into whatever particular offerings developers in search of a steady flow of income (cf. the counter-example in point 16, and then cf. Mindjet) deem "beneficial" for us.

All the more reason to discard any such proprietary "solution" from your workflow when it's not necessarily an integral part of it. Don't let them make you use 5 "collaborative" applications in parallel just because there are 5 developers in need of your subscription fee.
« Last Edit: January 13, 2013, 02:57 PM by helmut85 »

Paul Keith

  • Member
  • Joined in 2008
  • Posts: 1,989
Point 11 can be difficult, because that is precisely what many are doing and did do, but which Evernote and Surfulator (my apologies - I know you hate that spelling, but I gave my explanation in the previous thread for why) market and position as something you do not need to do.

The question is not "why not", but whether Neville will see profit in doing this when Pocket is a better-known service than Thinkery.me.

These don't have desktop equivalents, so I hope you don't see this as a direct overall comparison to Surfulator, but rather as a way for you to realize that what you are talking about has happened, and it has not only failed, but Neville's current development - like most current software development processes - simply does not work on a genie mentality, nor even an agile mentality, but on a tracker mentality.

Quick links:

https://getpocket.com/

http://thinkery.me/

(As you can see above, thinkery.me is even superior at providing a free no-registration demo, but Pocket is heavily advertised even outside the blogging community.)

Beyond these, you can also check out these two Firefox add-ons:

https://addons.mozil...ddon/scrapbook-plus/

https://addons.mozil...x/addon/grabmybooks/

They exist. They are praised by people who try them.

The problem is what product developers end up failing to realize: you need marketers for people to care more about your products than about the competition - and that is in favor of both the customers and the devs themselves, whether they realize it or not.

In the heyday of free software, this simply meant "you need to share more with the public for them to realize that the capability exists, and that they want that capability - hence they want not only that software, but they want to feel they own a piece of a one-of-a-kind software that only a few others know about and benefit from".

Unfortunately, that's how the software-providing industry has been trapped ever since.

While Evercrap, as you say, is smart at marketing and partnering with others, every other software developer tries to ride on the coattails of the desktop... then later the cloud... then later the tablet market.

Even those who do marketing ride on the coattails first of blogs... then of fad blog words... then of scam-site blog concepts like product launches... then of... nothing. Just more Facebook, just more tweets, just more content marketing, just more stuff inferior to Evercrap, while Evercrap improves by partnering and riding on the coattails of Moleskine, Samsung Galaxy Notes, printers, bloggers, etc.

It may seem like I'm jumping off-topic, but this has been the great dilemma marketers have in delivering marketing. It's simply easier to trick customers and clients than it is to tell customers: what you're looking for goes by this name, not by that name.

It's also easier to tell a dev "I want this feature" than for the dev to say: I want to hire a system, or a person, that will open up this feedback - not just present it as a list of features, nor make my decisions for me, but someone who can transform it and make me not only want to do this, but make the customer feel as if I've offered something that is not only exactly what they want but even better than what they want.

That's the first part of the overall reason why it's not done this way. The second part is that most individually skilled devs with their own businesses simply don't realize how much spec-writing management matters for the sequel of the software, including software updates - nor how much they could boost their customer base if they priced, sold and cliché-marketed their pitch not at the product but at the SCRUM-based spec development stage.

Quick link:

http://programmers.s...ing-management#40538

I do want to add an important disclaimer, though: I am not a marketer, nor am I a coder, nor am I passing down expert knowledge.

I am not saying I can do better than developers; I am simply saying developers tend not to want to do better than the Evercrap developers, when they could be far superior. Developers are developers often because they simply want to develop and treat everything else as a second-class citizen, especially when they can already profit well from a piece of software, and especially when they can already satisfy their needs with the default software and are simply improving the model as an upgrade.

The truth is, they simply don't care for the concept unless you can prove to them that you are a gazillionaire who will pay them this amount of money if they prioritize this feature, while they let all the other stuff - sales, website design, etc. - flow towards someone who's a better salesperson than the dev.

This creates two negative blockades against following through with your point:

1) People are raised to believe the fluffiest part of marketing, and so they create a self-fulfilling magnifying lens: not only do they give undue respect to the fluffy, trendy competitor, but even in cases where they slow down and claim to do the right, slow kind of marketing, they simply connect the finished product's features with the accounting aspect of marketing - which is its weakest part, something accountants or mere number-crunchers can do, so long as the product is already great and functioning. Which of course, in the hands of a great developer, guarantees that the customer will want the product, so that all the marketing has to do is the sale itself; and then the sales numbers self-fulfil the direction the dev wants, in the form of the customers who have come forth - which, as time goes on, will be the sound the dev hears most, rather than the wider, unheard-of opportunities offered by the free software that also gains some rumblings, but which, due to a missing part or a lack of speed in updating, could not yet gain a large enough piece of the pie to be a notable competitor.

2) The second blockade is that even with a constantly replying dev like Neville, you create a negative customer base, who, having been slighted by other products, view Neville's transparent attitude as a godsend rather than a base requirement.

This is good for the product short- and mid-term, and even long-term provided the profits keep coming in; but in the long run, it does not build a slope towards the concept, it builds the concept towards the sale. When things build up towards that paradigm, feedback goes through a natural process of becoming more about customer service and dev listening than about concept manufacturing.

What happens then becomes a case of eating one's own tail, where the exceptions who do get this point excel, and through their success stories the idea becomes some form of elite marketing rather than regular marketing. With this comes a changed baseline that accommodates not only the inferior marketing but the inferior success of the inferior products (or of the superior but more heartless products).

When you add that certain people just do not get the concept of the internet, much less the structure behind web pages, then something like "why does Dropbox succeed over its competitors" - which can be obvious to both a dev and a non-dev - becomes a sort of secret recipe to both groups, as opposed to a clear observation of how they simply focused on delivering a concept rather than delivering a feature; and that concept is what keeps them ahead, not just in delivering and marketing the same features, but also in being able to price their services the way they do, due to exclusivity.

It's not even a concept that started with marketing. It's a concept that goes back to why competition is good for improving things. When a person is way ahead in a race, they can slow down and even score a few extra naps - or, in this case, bucks. When a race is close, the temptation shifts from sales to getting ahead.

The only difference with software development is that, first, devs do not like to compete in concept development. Their race lies in delivering the most unique or the most useful features, rather than in building the Porsche of programs.

Add the complexity of software development, along with the basic tutorial advice for getting a piece of software off the ground as a beginner (the whole start-small, or abuse-plugins, or don't-reinvent-the-wheel thing), and even those who work on a Porsche and finish it don't focus on the Porsche. They tend to focus on the theme of the software, which is why it leads to an EverCrap.

Since the line has been moved to accommodate these lesser expectations, and since the line for successful reception has been raised by the rapid rise of technology, it's no surprise not just that software ends up getting ahead, but that it gets ahead through the formation of EverCrap rather than through the delivery of concepts "truer to the heart".

This is further compounded by the fact that for software, simplicity is good. And if simplicity is good, what incentive does a developer have to work towards usability, if good usability can simply be offering customers the latest technological fads - tags, GTD-specializing needs, web clipping with sharing buttons, clones of common cloud-style interface toolbars? Even devs who want to buck the trend don't fully realize their own irrationality about these concepts - first hating the Ribbon, and later not just liking but advertising the Ribbon by word of mouth, while thinking, inwardly, that they're just sharing their point of view.

It's all so easy, so why would they care about the concept at all, especially if it's not their concept but your concept?

Again, it's not just why would they care about losing a buyer, but rather: why would they care, if you're going to buy the software you're offering concepts on anyway?

People hated Vista; a name and skin change later, with some predictable maintenance, even smart techies not only no longer hate Vista - they love Windows 7. You're a dinosaur for sticking with WinXP.

This is not to say all devs think like a huge corporation like Microsoft, but all devs have this in the back of their mind whenever they're upgrading a software's features and whenever they're marketing their software. It's just too easy to fall into.

You're not a developer, or not a good enough developer, to prove their profitable idea wrong; so to you the concept seems a why-not, while to them it seems like a huge time sink, when what they are selling is a web clipper.

...and mind you: neither I myself nor you yourself would view these concepts as why-nots once we actually had to implement the feature. It's easy for you to type "PIM", and even if we grant that it's easy for them to vomit up a PIM at a thought's notice - what interface do you like? What do you yourself actually want to have, and are willing to waste years of your own life to create?

Again, this is not "pity the developer". This is "experience the pain of the developer".

When you can experience it, point 12 is not only less applicable from a manufacturer's mindset; it weighs on you hour after hour until it no longer feels notable. You're no longer thinking "AHK macro", you're thinking "how do these dumps translate into the concept", and as you grow more tired, you stop thinking about it at all, and the feature becomes a simple plug-in dump. You say it's so easy to do with AHK? Then AHK it.

The problem is that this is where the heart of the developer matters more than the heart of the remarker. This is the meta-level of why customers tend to be wrong and are not meant to be listened to.

Nowadays, popular web articles like to excuse this by saying that customers who become complainers are a poor metric, because the ones who like the product tend to stay silent until a problem comes up - but in truth, it does not matter.

The problem is that even those who can empathize with the actual dev tend to throw out remarks, and with the ease of online communication, remarks can seem more profound than they truly are.

At the first bridge, you might not be listened to because the developer views your suggestion as a remark rather than a suggestion; but at the second stage, it is more often the one suggesting - i.e. your AHK example - who is least interested in improving the concept and more interested in remarking on a feature, because you just want it released, whereas the devs don't want it to be this way, or else they would simply have allowed a plugin to do it.

This is the paradox of point 13, and it's often why, over time, internal and online feedback tend to fall apart.

You started out wanting to improve the concept; you ended up resetting the concept.

It doesn't seem so bad now, because it's just a post; but days and weeks and stress pass by, and you won't even have much desire left for delivering the concept. You just want to deliver some feature; and then if that fails, reset the feature to an inferior form; and when that fails, point 13 is no longer even concerned with chapter I.

It does not matter that you didn't intend for them to be connected. The issue is that there's only one piece of software, yet you want this one software to have 3-4 different stories, and you want it to stay consistent with all of them. When these 3-4 different stories actually have to be developed, that's when point 13 starts to connect to chapter I and to hurt both, because the inconsistency of point 13 with chapter I's goals ends up morphing point 13 into a virus against chapter I, even though they were never meant to deal with the same subjects. Again: because there's only one software.

As you are not yet being broken down by the demands of the hourly development stage, it's easy to make point 14 seem like a reasonable conclusion, when you are now essentially sending the message that Surfulator's task is not even to build upon Surfulator's previous mechanics, but instead to just randomly add all these different dynamics without specifying them.

By specifying them, I mean narrowing them down to a site and a certain concrete feature, so that the developer no longer has to guess what you mean and can compare it with what others want - because right now, even as a code-ignorant person, I understand where you're getting at, but I'm also reading "screw my needs, just follow your needs".

It's not wrong in the sense that everyone thinks like that; it just does not scale to the concept. You are essentially writing a long post in which you think you are saying you're concerned for the concept, but when it's time to develop the concept, it all reads as: you want people to follow only your feature.

This is not to say you are being rude; but for a person who knows how to code and understands AHK, you are offering a dev-ignorant suggestion with some merit, when, judging from the tone and effort of your post, what you wanted was to offer a suggestion full of merit through your own knowledge of development.

This is also not to say Neville won't consider your suggestion; but the question is, will he consider the dilemma of others who hold the same suggestion as you do, but don't want the process to be the same as you do?

It also does not mean Neville can provide exactly what would convince you to acquire and support the product for eternity, even as its price increases - simply because you're offering a mock-up problem without a mock-up, so even the button and the hotkey are essentially guesswork, unless you happen to be working in Neville's business, or you had sex with him and he wants to develop solely for your needs without considering the needs of others... including which specific colour you want Surfulator to be by default.

Point 15 is the same as point 14, only you are essentially saying Surfulator is a bad product, and that users who think Surfulator is a good product should not be convinced that it is one, because such products belong only in a specific workflow - when in fact it's the opposite.

Other clippers like Evernote belong in a specific workflow, because I just can't be sure they clip the web well; still, they create some copy that is essentially a bastardized save-as-MHTML plus auto-Dropbox.

Surfulator just clips. You'd be surprised how many people secretly just want that - including you.

Unfortunately, these programs don't quite "just clip", so we have this gamut of concepts being thrown around.

This is not so much a problem as a non sequitur, and a sadly timed one, as no one wants to read your post only to get this near the end.

IMO you're better off deleting it.

Data storage is data storage, and the cloud is the cloud, and the thread title is "On (not off) applications going cloud and on (not off) data storage going cloud".

I don't mean this in any antagonistic way, despite how it sounds. Simply: you can't be anti-the-concept if you are for the concept. It would actually keep you from presenting your concept.

As a horrible communicator myself, believe me, it can be tough to know the difference.

It's like offering everything you think you can to a reader, and the one thing the readers see is that it has no pictures and it's too long - before they jump all over what you say.

It's tough not only because you have a hard time knowing it, but because sometimes, even if you know it and did provide the difference, you find out that by adding images, not only do people sometimes feel you now have to add something else besides, but you also recreate your message into something less than what you intended to write.

In this case the problem is that it's holding you back from presenting your case.

Point 15 has substance, and its substance is built from the previous points; but by being weighed down by who's an academic and who's not, and which entity is which and which entity wants what, you kept your own concept from growing.

People who have read your post up to this point don't want to read "of course". If it's "of course" to you, then it's "of course" to them by now. They want your opinion on data storage and applications going cloud. Save points 15-20 for a thread, or a section, called The Dark Side of Desktop Web Clippers Going Cloud. The reader still hasn't gotten to the center of your previous points, nor of your entire thread. Give them that. Unleash the content, if you are going to write this much.


« Last Edit: January 14, 2013, 12:37 AM by Paul Keith »

helmut85

  • Participant
  • Joined in 2013
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
ad 11 / 12 supra

Sometimes things are so natural for me that I inadvertently omit mentioning them. In the points above, I presented my very exotic concept of stripping down web pages that most people would download whole instead (hence the quality problems they face in non-specialised PIM software, i.e. anything other than WebResearch or Surfulater or similar).

Above, I spoke of condensing, by making a first choice of what you clip to your PIM and what you omit. I also spoke of relevance, of bolding and of underlining, i.e. of bolding important passages within the unformatted text, then underlining the even more important passages within these bolded ones; and of course, in rare cases, you could even use a yellow background colour and the like, in order to highlight the most important passages within these bolded-underlined parts of your text.

I should have mentioned here that this "first selection" almost never leads to "passages that are not there": in years, I never had a situation where I thought, wasn't there something more, shouldn't I go back to the original web (or other) page to check, and download more, if it's hopefully still there? So this is a rather theoretical worry, not really to be feared.

Of course, whenever in doubt, I download the whole text - hence the great utility of then bolding passages and perhaps underlining the most important keywords within them.

But there is another aspect of my concept which I neglected to communicate: annotation in general. For PDFs, many people don't use the ubiquitous Acrobat Reader but (free or paid) alternative PDF readers / editors that allow for annotations, very simple or more sophisticated ones, according to their needs.

But what about downloaded, original web pages, then?

Not only do you download crap (alleviated perhaps by ad blockers) around the "real stuff", but this external content also stays in its original form, meaning that whenever you re-read these pages, you have to go through their text in full in order to re-memorize, more or less, the important passages of the content - let alone annotations, which in my system are also very easy: I enclose them in square brackets within the original text, sometimes in regular, sometimes in bold type.

So my system is about "neatness", "standardization", "visual relief"; but its main advantage is that it lets me re-read just the formatted passages when revisiting these web pages in a work context, just as many people do with their downloaded PDFs. Now compare that with raw downloaded web pages: it's simply totally uneconomical, and the fact that, out of 20 people, perhaps 19 do it that way doesn't change this truth.

So, "downloading web pages" should not just be about "preserve, since the original content could change / vanish", but it's even more about facilitating the re-reading. (Of course, this doesn't apply to people who just download "anything", "in case of", and who then almost never ever re-read these downloaded contents, let alone work with these.)

Hence my assertion that software like Surfulater et al. is for "web page collectors", but of not much help in real work. I say this with the proviso that these programs, just like PIMs, lack special annotation functionality for the web pages they store; if I'm wrong about this, I'd be happy to be informed, in order to then partially revise my stance - partially, because the problem of lacking neatness would probably persist even with such PDF-editor-like annotation functionality.

And finally, I should have added that I download tables as rectangular screenshots, and whenever I think I'll need the numbers in some text afterwards, I also download the chaotic code of the table so as to have these numbers ready - in most cases, I just need 2, 3, 4 of the numbers later on, and then copying them by hand from the screenshot is the far easier way to get them into my text. (For pages with lots of such data, I make an .mht "just in case". We all know that "downloading tables" from HTML is a pain anyway if you ever need lots of the original data, but if you do, and frequently, there is special software available for this task.)
« Last Edit: January 21, 2013, 01:50 PM by helmut85 »

Ath

  • Supporting Member
  • Joined in 2006
  • Posts: 3,610
TL;DR;

Shouldn't you guys be writing a blog (or 2) or something? :huh:

helmut85

  • Participant
  • Joined in 2013
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
Radio Erivan to Ath: You'd be right at the end of the day. But these are the appetizers only.

Or, more seriously: I'm always hoping for good info and good counter-arguments, both in the spirit of the saying "Defend your arguments in order to armour them", and in order to find better variants; and there are better chances of this happening in a well-frequented forum than in a lone blog lost out there in the infinite silent web space (Kubrick's 2001: A Space Odyssey, of course).

In the end, it's a win-win situation, I hope - or else my arguments must really be as worthless as some people say.

Since nobody here's interested in French auteur cinema, something else: today they announced the death of the great Michael Winner, of "Death Wish" fame, and somewhere I read that back in 1974, advertising for this classic included, among other things,

"When Self-Defence Becomes a Pleasure."

Can anybody confirm this? (It's from Der Spiegel again, in German: "Wenn Notwehr zum Vergnügen wird." - so perhaps they only ran this one-liner in Germany?) Anyway, I have to admit I couldn't stop laughing about it, and it pleases me so much that it'll be my motto from this day on.

Ath

  • Supporting Member
  • Joined in 2006
  • Posts: 3,610
Radio Erivan to Ath: You'd be right at the end of the day. But these are the appetizers only.
[...]
When writing forum posts that look like essays, who do you hope is going to read them?

Probably I don't understand...
I'm here mostly because of the technical nature of this 'beast', not to read 2 pages of text about some subject that isn't clear from the title (my fault, probably), and I usually fall asleep / stop reading after about 3 sentences, because of TL;DR;
I am known (by colleagues) for long, side-tracked, technical talks, but this...

Even your answer is quite side-tracked: about 90% of what you wrote is not an answer to the question...

PS: not a personal grudge against either of you, just checking to see if you know what's happening in other people's heads. (That's where I sometimes hit just beside the nail :-[)


helmut85

  • Participant
  • Joined in 2013
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
I'm very sorry you didn't find any idea applicable to your own workflow here, and indeed I was a little disappointed that Aaron Swartz's premature death didn't give rise to any obituary here before mine, and didn't trigger any thought about that guy and his mission. As for my wearing out the servers, I'm not so sure that some text takes up so much more web space than the lots of unnecessary and often rather silly graphics grown men regularly post here just for fun; so I hope I won't attract too much wrath upon my head too early by trying to share some ideas in a constructive way. Thank you.

40hz

  • Supporting Member
  • Joined in 2007
  • Posts: 11,857
I was a little disappointed that Aaron Swartz's premature death didn't give rise to any obituary here before mine, and didn't trigger any thought about that guy and his mission.

[image: two-persons-screaming-at--010.jpg]

There has been some debate here about political and related topics. And many at DC (including our host) feel this is not really an appropriate venue for it.

So if the membership doesn't jump more quickly on some of the topics you find interesting and important, please consider that some of us here (who do have very strong social consciences, and often outspoken and highly political opinions on many tech-related issues) have been making a conscious effort not to get into as much of this sort of thing as we have in the past.

Considering there are numerous other web venues where political discussions are both welcome and encouraged, it's not particularly burdensome for most of us to take much of it elsewhere.

With apologies for the silly graphic posted by an adult man up above. ;) ;D

Paul Keith

  • Member
  • Joined in 2008
  • Posts: 1,989
So this is a rather theoretical worry, not really to be feared.
-helmut85

It's not theoretical.

It is, at heart, something of the old dream of the advanced clipboard managers.

OneNote, RedNotebook, Makagiga, Knowsy Notes, ConnectedText... even the online version of Netvibes has this at heart.

But there is another aspect of my concept which I neglected to communicate: annotation in general. For PDFs, many people don't use the ubiquitous Acrobat Reader but (free or paid) alternative PDF readers / editors that allow for annotations, very simple or more sophisticated ones, according to their needs.

But what about downloaded, original web pages, then?
-helmut85

Send to Kindle for the rich guys.

For the others, it depends on the HTML, but for straightforward attachments, this is where sticky-notes software claims to be the best.

I personally worry less about how the annotations are made (I dislike both the PDF and the paper way of doing it) and more about how the thing would create a holistic PersonalBrain, but one with more room for notes. (I prefer Compendium.)

Once you figure out the latter, the former follows.

So my system is about "neatness", "standardization", "visual relief"; but its main advantage is that it lets me re-read just the formatted passages when revisiting these web pages in a work context, just as many people do with their downloaded PDFs. Now compare that with raw downloaded web pages: it's simply totally uneconomical, and the fact that, out of 20 people, perhaps 19 do it that way doesn't change this truth.
-helmut85

It does not work that way.

20 out of 20 people think they don't need a full webpage - until they lose a copy.

They just don't realize it until they lose it; but it's only 1 out of 20 who reads enough to actually lose a critical piece of data from a web clipper like Surfulator.

If you don't believe me, compare this with the resurgence of e-reader bookstores.

When my Kobo broke, I thought I was just losing the hardware; but because the experience had honed in me the habit of mass-sending the e-books inside, even when I could still restore the e-books (it works fine except for loading the books or syncing with the software), I could not get used to reorganizing these sets of e-books in Calibre or manually.

My tolerance level had simply disappeared - but I did not have this tolerance before I got an e-ink reader. Not even when I was viewing 1 or 2 e-books via tablet e-reader software, and I was using the first version of the Kobo, which was so slow at just flipping through the libraries of books.

Everyone has these tolerance levels, but they don't realize it until it hits them like a tornado destroying the habitat they created outside of nature. What you view as collectors are in fact a sub-breed that allows the non-collectors to exist. It's like a torrent-sharing ecology: in order for the poorer person to torrent a bunch of books or download a game, first you have to have a selfish sub-breed that simply collects and shares, and in turn replaces your RSS feed with a curating feed - until corps like Valve make idiots out of the sharing community by motivating some to say that Steam convinced them to stop pirating, because now the games are cheap... blah, blah, blah.

This is the evolution of the audience for this software, and the difference it has over Evernote and plain web clippers. You have to be hit by the internet; or be hit by the fact that most web-clipping social curation sites are online-only; or be hit by the fact that you need to quickly store manuals instead of just copies of receipts, because you're not even average at remembering troubleshooting software links. You have to be hit by lots of things to understand a "gift".

I'm one of those people who can back up to text files and then forget those text files, or even be reluctant to reload Scrapbook Plus files, out of an irrational fear of the lack of a one-click restore-and-merge button that's as friendly as the change-logging of online services like Dropbox, for example.

It's hitting that one range that really widens your eyes. Until then, it's like saying "I don't need an antivirus, I'm careful with my PC", or "I'd rather learn Linux because it has a legal way to slipstream LiveCDs" - until something breaks in Linux that doesn't have a .exe where Windows has one.

It's so peculiar because it's so interfacey. For lack of one button... even a complete webpage is gone. For lack of full webpage image design... the desire to read the text is gone. For lack of torrents under government pressure, no bridge towards UseNet.

Something full web clippers have over bare-bones text clippers with some images, and which the latter will never understand, is this accidental clipping of a site's primary web design imagery into our brain - something that simply can't be replicated by mere data, unless you work at the web clipper like it's a full-time PIM, which is where most people don't. That's what's creating the 19-out-of-20 illusion.

That illusion can be broken, but things first have to get to a point where content producers meet content archivers meet curators and sharers willing to build a new database of absorbed web information, going beyond what pay services like HyperInk do for blogs.

...and that can only march on so long as remnants like Surfulator keep marching on, while other services like Instagram can do less and earn more, and services like Quora make it harder and harder to just capture the public web.

Hence my assertion that software like Surfulater et al. is for "web page collectors", but of not much help in real work. I say this with the proviso that these programs, just like PIMs, lack special annotation functionality for the web pages they store; if I'm wrong about this, I'd be happy to be informed, in order to then partially revise my stance - partially, because the problem of lacking neatness would probably persist even with such PDF-editor-like annotation functionality.
-helmut85

It's not so much about being erroneous as about being ahead. The PDF editors' way is being phased out. It's an archaic way of doing things that only works because PDF is such a tricky thing to edit to begin with.

The landscape now is all about combining the feel of Google Docs collaboration but into a full blown personal webpage that's not so much preserved as extended.

So far as special annotation functionality...PIMs have these for awhile but it's web clipping they lack and a polished overall professional output.

Just look at how Knowsy Notes can deal with csv creation. Just look at how dotEpub can convert webpages into e-books.

What's not being brought to the casual user is that type of conversion without requiring any editor knowledge.
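To make the kind of conversion dotEpub does a bit more concrete, here is a rough sketch of webpage-to-e-book conversion in Python. This is not dotEpub's actual method, just an illustration; it assumes the third-party "requests" and "ebooklib" packages, and the URL and titles are placeholders.

Code: [Select]
# Illustration only: turn a fetched web page into a one-chapter EPUB.
import requests
from ebooklib import epub

url = "https://example.com/article"        # placeholder URL
html = requests.get(url, timeout=30).text  # fetch the raw page

book = epub.EpubBook()
book.set_identifier("clip-0001")
book.set_title("Clipped article")
book.set_language("en")

# Wrap the fetched HTML as a single chapter.
chapter = epub.EpubHtml(title="Clipped article",
                        file_name="chapter1.xhtml", lang="en")
chapter.content = html
book.add_item(chapter)

# Minimal table of contents and navigation so readers accept the file.
book.toc = (chapter,)
book.add_item(epub.EpubNcx())
book.add_item(epub.EpubNav())
book.spine = ["nav", chapter]

epub.write_epub("clipped_article.epub", book)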

What's worse is that some of the actual experiments are web services and not personal files.

(For pages with lots of such data, I save an .mht "in case of". We all know that "downloading tables" from HTML is a pain anyway if you ever need lots of the original data, but if you do, and frequently, there is special sw available for this task.)
-helmut85
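One readily available example of that "special software" for pulling tables out of HTML is pandas' read_html(), which returns one DataFrame per <table> on a page. A minimal sketch, assuming pandas (plus lxml) is installed; the URL is a placeholder:

Code: [Select]
# Extract every <table> from a page and keep the first one as CSV.
import pandas as pd

url = "https://example.com/page-with-tables"  # placeholder
tables = pd.read_html(url)                    # one DataFrame per <table>

print(f"found {len(tables)} table(s)")
tables[0].to_csv("table0.csv", index=False)   # preserve the original data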

...and that's one crux of it.

If you can't work towards preserving, then these things will always require some kind of software, or only fulfill one kind of need.

The more a piece of software can simply preserve, the more can be built on top of those preservations.
« Last Edit: January 22, 2013, 10:26 AM by Paul Keith »

nevf

  • Charter Honorary Member
  • Joined in 2005
  • ***
  • Posts: 115
    • View Profile
    • Clibu, accessible knowledge
    • Donate to Member
Hence my assertion that sw like Surfulater et al. is for "web page collectors", but not of much help in real work.


I don't see, and have never seen, Surfulater as a tool to collect web pages. On the contrary, its focus is the retention of selected content from web pages, i.e. select content and then either create a new Surfulater article containing it or append the selected content to an existing article. Of course Surfulater can also grab entire web pages, but you typically get too much extraneous content and bloat the database.

My workflow with Surfulater is precisely this: select the bits of content I want to keep and capture those, add appropriate tags and cross-references to related articles, add my own notes and possibly highlight specific content. Next ...  The evolution of Surfulater into a Web/Cloud app (1, 2, 3) retains this same focus.
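(For readers who want to picture that select-and-append workflow, here is a toy model of the data shape it implies - emphatically not Surfulater's actual internals, just a sketch with hypothetical names.)

Code: [Select]
# Toy model of a capture-oriented article: clipped fragments with their
# source URLs, plus tags, cross-references and personal notes.
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    clips: list = field(default_factory=list)     # captured fragments
    sources: list = field(default_factory=list)   # source URL per fragment
    tags: set = field(default_factory=set)
    see_also: list = field(default_factory=list)  # cross-referenced articles
    notes: str = ""

    def append_clip(self, text, url):
        """Append a selected fragment together with its source URL."""
        self.clips.append(text)
        self.sources.append(url)

art = Article("CSS grid tricks")
art.append_clip("Selected paragraph...", "https://example.com/css")
art.tags.add("webdev")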

-Neville
Neville Franks, Clibu a better way to collect, use, manage and share information across all devices.

Paul Keith

  • Member
  • Joined in 2008
  • **
  • Posts: 1,989
    • View Profile
    • Donate to Member
I think the "Of course Surfulater can also grab entire webpages was what lead to helmut85 saying it was for web collectors.

You both have the same conclusions: "but you typically get too much extraneous content and bloat the database."

Only helmut85 thinks if you remove that option/stop developing that feature you can better streamline Surfulater into doing a better job of "retaining selected content from web pages".

The quote was more directed at me, as a form of "ok, let me summarize my post in one sentence and treat it as a reply to your post without actually responding to the content of your post".

I recently had an issue in the Basement following a similar frustrating theme, and I think, for the good of a long thread, it's much better not to treat it as having any direct linkage to the original long thread. It does not come anywhere close to being a summary or a description of the poster's remarks, nor is it very close, emotion-wise, to what the poster desires.

There's nothing to be gleaned from the vague implications of that sentence. It's on a similar route to the old Opera lite vs. Opera bloat thread.

One side will say it's just an option and you don't have to use it.
The other side will say they still want a lite version of Opera.

This type of discussion does not go anywhere, because neither side implements the request of the other. It went on until Google Chrome was released and stole most of the "in-between-the-lines" points of the Lite crowd, and from there on, Opera just continued working on features that the developers were comfortable releasing, such as Widgets, Unite, modified keyboards, Speed Dial extensions, etc.

The features really don't matter. It's the speed, the robustness and the expectations from web clippers that these types of statements imply when they say it's for web page collectors.

Btw, had I bought Surfulater, I think I would have used it for collecting web pages rather than specific content. The program is not as lightweight as a text editor, so it doesn't satisfy my need for specific content. I'm also one of those who can't use OneNote properly as a notetaker and Evernote as a clipper. I just have too many other apps open at one time to really trust that I want those applications open for light text that I can copy-paste out anyway. Not so much that it slows down my PC if I open Surfulater, but enough that it would bury me in info confusion.

helmut85

  • Participant
  • Joined in 2013
  • *
  • default avatar
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
    • View Profile
    • Donate to Member
I

"There has been some debate here about political and related topics. And many at DC (including our host) feel this is not really an appropriate venue for it."

Thank you, 40hz, that explains a lot. On the other hand, this way, the owners (the owner and his "men") of this forum have to ask themselves: on which side of the table do we place ourselves with this stance?

Just today, there's press coverage of an adjacent subject I missed covering in my "essay" above, which is packet identification and paying for some packets: payment by the sender (here Google) for the "infrastructure" of the web provider (here: Orange, in France), in order for the customer (= you and me) to receive the content in question.

It's obvious that these Google vids, with their lots of traffic, constitute a prob for those "providers", but then, in my country, you pay them 50 bucks a month for a "flatrate", and some of these "providers" don't offer a real "flatrate" but impose a limit of 50 GB per month.

So the real problem here is: soon there will be a time when with one provider you'll get "everything", whilst with another you'll get "anything but Google vids", and then, in the end, "anything but a, b, c... z" - and you cannot change your provider each month; you have minimum contract terms and periods of notice. That's prob 1.

Prob 2 is, more and more it will become accepted to have these packets inspected, and eventually, providers could even refuse to transport encrypted packets on the pretext that they could contain, not even illegal content, but simply non-contractual content.

Thus, I thought that DC was a "users'" forum, and does not represent the "industry".

( Ironic here: on the "Die Zeit" site today, they speak about a possible "perfume" or such that will enhance your natural body odour, and somebody leaves the comment: well, this is new indeed, for the first time they will sell you something you've already got by nature! Somebody else: a good work-out could enhance this natural body odour as well (the point of the article being that females would like to smell this odour in order to feel attracted (or not), a case of "biological matching", by "matching genetic material"). Why am I speaking about this? Because above, I said the "industry" sells academic papers, at horrible prices on top of that, to the general public, which already owns these academic findings, having financed them all to begin with. )

I acknowledge I perhaps shouldn't have posted these "political" things here, in "sw", but in the "general" part of the forum; but then, I also wanted to explain the mutual reverberations between scraping sw (Surfulater, WebResearch) and pim's, AND then the web in general and content in general - at the end of the day, we're speaking of external content here, and even when we speak about simple pim's here, we're speaking of their ability to handle content originally belonging to third parties, so it's all some mix-up where everything I'm discussing belongs to something else within this lot.

II

"I think the "Of course Surfulater can also grab entire webpages was what lead to helmut85 saying it was for web collectors."

Thank you, Paul, that was my point in this respect. In fact, whenever you clip bits only, any such pim will be more or less apt (and certainly will with some external macro boosting, whilst those two "specialist" offerings are there in order to render whole page pages (much?) better than the task is executed by your ordinary pim. On the other hand, if it's not about whole web pages, I don't see the interest of these "specialists", since as pim's, both ain't as good as the best pims out there are.

This is addressed to nevf = Neville, the developer, and I perfectly understand that you defend your product, but then, there have been lots of customers (or, in my case, prospects) who eagerly awaited better pim functionality in your product, which never came. Fact today is, as a pim, it's not in the premier league, and that's why I call it a specialist for special cases - but I don't see many of these special cases, because for downloading web pages for legal reasons - I said this elsewhere - neither your product nor your competitor, WebResearch, is able to serve this special purpose either.

You've made a choice, Neville, which is: have the best scraper functionality among pim's, together with WebResearch - it seems Surfulater is not as good as WR here, but then, as a pim-like, it seems to be much better than WR, so it might be the best compromise for people wanting to download lots of web pages in full. But as said, then you have two probs: not enough good pim functionality (since it was your choice not to develop this range of functionality to the fullest), and - I repeat my claim here, having asked for info about possible mistakes in what I say, but not having received such info yet - for annotating these downloaded web pages, what would there be? (Just give me keywords for searching your help file, and the url of that help file, and hopefully there are some screenshots, too.)

As soon as you do clips, whether from web pages on the web or from downloaded web pages, much very different functionality is needed, where some pim's are much better than others, and where no pim is that good in the end, but where you can add some functionality with external macros, especially when your pim offers links to items (which Surfulater does, if I remember well - so it's not my claim that Surfulater can't be used for such a task; my claim is that lots of other pims are equally usable here, and they offer more pim functionality on top of this).

Paul, as for pains with pdf's, you should know that most sciences today have lots of their stuff in pdf format - certainly more than 90 p.c. of their "web-available" / "available by electronic means" stuff is in this format - hence the interest of pim's able to index pdf's, and hence the plethora of alternative pdf "editors" and other pdf-handling sw allowing for annotating, bookmarking, etc. So your claim (if I understand it correctly) that pdf is a receding format is not only totally unfounded: the opposite is true.
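As one concrete example of the pdf-handling sw alluded to above: highlighting passages in a pdf can even be half-automated. A minimal sketch using PyMuPDF (one option among many; the file name and search phrase are placeholders):

Code: [Select]
# Highlight every occurrence of a phrase across a PDF's pages.
import fitz  # PyMuPDF

doc = fitz.open("ruling.pdf")                  # placeholder file name
for page in doc:
    for rect in page.search_for("statute of limitations"):
        page.add_highlight_annot(rect)         # one highlight per match
doc.save("ruling_annotated.pdf")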

Neville, this brings me to the idea that any "specialised sw", specialised in the very best possible rendering of web pages as-they-are (since, as said, it's uneconomical to download lots of web pages just because, with your sw, it's "possible"), should go one step further and also do pdf management, thereby blurring the distinction between downloaded web pages and downloaded pdf's - but then, it should also offer lots of easy, i.e. half-automated, web page annotation / bookmarking features, too.

Paul, with lots of your writings, I have a recurrent problem, and please believe me that my stance is neither a denigrating nor a condescending one: I mix up lots of aspects too, but then try to keep a minimum of discernment there, by numbering / grouping. In your texts, every idea stays mixed up with everything else, and so, most of the time, for perhaps 80 p.c. of your text bodies, I simply don't get what you're trying to express; as said, this is a recurrent problem I have with these texts of yours. I'm not a native speaker, as we all know, but then, I get your English words without getting the possible meaning behind them, and very often I have the impression (as a non-native speaker, as said) that your sentence construction is in the way. So perhaps, after posting, could you re-read your texts and then partially revise them (as I do, if only for typos, in my case)? I repeat myself here: my "criticising" your texts has the sole objective of "getting" better texts from you that I'd then understand better and more fully, since I suppose that up to now many good ideas stay buried in them, even when I try to read you, and that's a pity.

You must see that when Ath applies condescension to us both, advising us to write in blogs instead, i.e. telling us to be silent here, it's, for one, that most people in "chat rooms" prefer short postings between which they can then jump, as a bee or other insect would between many different flowers, and also because they don't want to think a lot: here, it's spare time, it's not meant for getting new insights except when they come very easily - but it's also because the effort of reading some people doesn't seem rewarding enough - a question of formatting texts, of inserting sub-headers, of trying to offer "one bit at a time", and so on. And when, in your case, there's also a debatable sentence structure, and ideas not developed one after another but thrown together, then perhaps discussed in fractions, these discussions thrown together again, with new sub-ideas introduced, then "re-opened" many lines below... well, we can't blame people for refusing to read us when we wouldn't like to read ourselves in the end, can we?

III

Some other off-topic theme that has got some connections, though:

STOP 'EM THINKING!

I haven't had a television set for ages: I couldn't bear it stealing my time anymore. I always thought - it's different with good films, where you dive into the atmosphere of the film in question instead of wanting it to hurry up, but there aren't many good films left in European television's programming these days - that they slow down your time on purpose. They do some news, which costs you 10 minutes. Instead, they could have presented you a "magazine article" or something, in which you could have read the same info in 3 or 4 minutes, if not in 2, very often. Much worse even is anything that is "entertainment" there: they always slow down what's going on, it's absolutely terrible, and at the same time, you might be interested in what will follow, so they force you to do "parallel thinking": you try not to spend these moments exclusively on the crap you're seeing, but at the same time, that very crap is interrupting any other thinking you're doing, at any moment (since it IS continuing, but at a pace that virtually kills you).

Hence my thinking that tv is meant to steal time, to make people not think, to fill up people's spare time in a way that slows their thinking processes down to the max - they call this "suspense". Of course, you can remind me that tv is "for everybody", so it "has" to be somewhat "slow" - but to such a degree? Just a little bit slower yet, and our domestic animals could probably follow! This is intellectual terrorism.

Where's the connection? Well, my topic is the fractionizing and then re-presentation of information / content, and this "tv way" of doing it, needing 1 minute for the presentation of a fact that should need 8 or 13 seconds - at the opposite end of what Paul seems to be doing, i.e. mixing up 5 different things in 5 sentences, then mixing 3 of them up in the following one, then mixing up again 2 from the first and 1 from the second with another 2 - is another apotheosis in information rape.

EDIT:

The French legislator has postponed the subject of a proposed law on these "data expeditor having to pay, too" issues ad infinitum, meaning they want to see first how it all goes wrong in every which way; then perhaps they'll do something about it. Bear in mind that authorities, and especially the French ones, have historically been highly interested in data content, so they certainly rejoice at this move by Orange / France Télécom (or told them to make this move in the first place: acceptance is everything, so they have to play it cool, first).

Paul, I fully understood your very last post, after reading it 5 or 6 times now. As for the preceding one, I'm still trying. My prob here being, I've never had similar probs with anybody else's posts, not here, not elsewhere. So it should partly be a prob in your writing, as there are certainly some flaws in my writing conception, too.
« Last Edit: January 23, 2013, 09:35 AM by helmut85 »

Paul Keith

  • Member
  • Joined in 2008
  • **
  • Posts: 1,989
    • View Profile
    • Donate to Member
I acknowledge I perhaps shouldn't have posted these "political" things here, in "sw", but in the "general" part of the forum; but then, I also wanted to explain the mutual reverberations between scraping sw (Surfulater, WebResearch) and pim's, AND then the web in general and content in general - at the end of the day, we're speaking of external content here, and even when we speak about simple pim's here, we're speaking of their ability to handle content originally belonging to third parties, so it's all some mix-up where everything I'm discussing belongs to something else within this lot.
-helmut85

I can't speak for mouser, but this is not illegal, just discouraged, to keep decorum in the non-Basement areas civil - especially if it can keep people from registering and joining the community.

Especially because, as this forum ages, sometimes mouser can be extremely civil, but we as "fans of mouser" are less civil to anyone who tries to take a polarizing, argumentative stance on DC, politics or mouser as an admin.

Thank you, Paul, that was my point in this respect. In fact, whenever you clip bits only, any such pim will be more or less apt (and certainly will be with some external macro boosting), whilst those two "specialist" offerings are there in order to render whole web pages (much?) better than your ordinary pim does. On the other hand, if it's not about whole web pages, I don't see the interest of these "specialists", since as pim's, both ain't as good as the best pims out there.
-helmut85

Thank you too. It's rare that people acknowledge each other's posts nowadays.
(Or at least it seems this way from my own life experiences.)

Lately it just feels like the key to talking to someone is to talk over them and have them talk over you. (With the occasional "oh, I so-and-so agree" or "you know, this subject is like this subject too" that breaks up the monotony of any agreeable discussion, so long as there's no indifference between me and whomever I'm talking with.)

As far as PIMs go, it does not function as pristinely as this in reality. There's still SO MUCH lacking in PIMs it's crazy. Just the move from web to tablet to desktop to browser extensions has little rhyme or logic as far as most applications go.

For example, I have not found any PIM (Surfulater included) that can replicate the web bookmarking services of Diigo and Licorize when you are accessing the list instead of just adding an article and skimming it.

These two services raise web clipping to a different layer despite not being web clippers.

From an interface standpoint, you can only replicate a MakaGiga link inside MakaGiga, a Basket Notepads link inside Basket Notepads, a OneNote link inside OneNote and a Thinkery.me link inside Thinkery.me. In the past, the yellow-notepad style of Diigo was unique to the service too, but nowadays it's more of a white-text highlighting service when you go to its library, so it's not even a linear upwards route of innovation and improvement for everyone.

These services do cheat by not clipping the full page, but they also cheat because, I would assume, they lack the manpower or desire to push that data back onto the desktop like Dropbox does, and they lack Surfulater's capability to fully clip the web back to the desktop. (This is where Surfulater is ahead.)

There's really a whole world out there that PIMs are not yet able to take advantage of. In fact, I often wonder how a non-coder like me can find more diverse interface ideas than the actual PIM makers, and one does not have to look long and hard. Browser extensions and PIM services are vastly different in interfaces too, even at a quick glance, when you are not even thinking or musing about a new interface.

When you actually narrow this down to features rather than mere aesthetics, this is where the world opens up. Many programs are still not apt, and some of the top PIMs still fail at providing both a minimal and an advanced interface, easily readable export (rather than backups of their files) and, most importantly, true single-file products. (Like a .doc or .txt that can be easily synced to Dropbox without needing the program, because it's not profile data but a basic file.)

You can count on one finger the programs that come close to having all those requirements. Especially in the web-clipping world, no real service out there has really understood the vast potential of full-page web clippers. Only Evernote has really gone into creating a Moleskine product, a readability product and a printer product, and that's the hardware route first established by services like RTM, which had barebones, easy-to-load lists that worked well on mobile. The true fully loaded offline non-Wikipedia database has not yet been released to the masses to consume.

(Some phone apps can barely provide a decent editable online to offline thesaurus service much less web clipping.)

Paul, as for pains with pdf's, you should know that most sciences today have lots of their stuff in pdf format - certainly more than 90 p.c. of their "web-available" / "available by electronic means" stuff is in this format - hence the interest of pim's able to index pdf's, and hence the plethora of alternative pdf "editors" and other pdf-handling sw allowing for annotating, bookmarking, etc. So your claim (if I understand it correctly) that pdf is a receding format is not only totally unfounded: the opposite is true.
-helmut85

You misunderstood.

It's not the format but the process that I consider archaic.

Even Google has allowed for real-time collaboration on word processor files with Google Docs, so the same old non-real-time, non-smooth mechanical process that pdf editing plays by only exists because of the format, and not because it excels at annotating and bookmarking the format.

I created this analogy so that you would understand, too, why full web clipping has a unique future over partial web clipping.

PDF editing has a present now because it can provide full-featured editing of a PDF.

Surfulater and other full web clippers will have a future when the day comes when it's not about developing a full web clipper for Surfulater but developing a:

"pim's able to index Full web-clipped files, hence the plethora of alternative Surfulater "editors" and other Surfulater-handling sw, allowing for annotating, bookmarking, etc."

To repeat the sentence that preceded my mentioning of pdfs:

It's not about being erroneous so much as about being ahead.

There are so many directions this can go, too, which is why my posts sometimes get unnecessarily long, just to provide an example for each idea or sub-idea as the points of the recipient spiral around different shades of context and connotation.

Paul, with lots of your writings, I have a recurrent problem, and please believe me that my stance is neither a denigrating nor a condescending one: I mix up lots of aspects too, but then try to keep a minimum of discernment there, by numbering / grouping. In your texts, every idea stays mixed up with everything else, and so, most of the time, for perhaps 80 p.c. of your text bodies, I simply don't get what you're trying to express; as said, this is a recurrent problem I have with these texts of yours. I'm not a native speaker, as we all know, but then, I get your English words without getting the possible meaning behind them, and very often I have the impression (as a non-native speaker, as said) that your sentence construction is in the way. So perhaps, after posting, could you re-read your texts and then partially revise them (as I do, if only for typos, in my case)? I repeat myself here: my "criticising" your texts has the sole objective of "getting" better texts from you that I'd then understand better and more fully, since I suppose that up to now many good ideas stay buried in them, even when I try to read you, and that's a pity.
-helmut85

Oh, it's a common complaint about me. Don't worry, I don't find it denigrating that you do not understand, so much as I find it denigrating when people who do not understand either won't mention it or stop at the sentence they don't understand without providing any hints or clues.

For example, when I thanked you, I legitimately thanked you because it seemed like you actually took a word I said and applied it.

In this latter reply, you didn't do that; you simply said I did not revise my text. (I often do, so it's all the more tiring to hear accusations like these when you don't even address a word I wrote; whereas the reason my post seems mixed up is that I was trying to deal with your own long post.)

It's very easy to compare, to my eyes. I look at the keywords of your OP and I look at the keywords of my reply, and many words you wrote are mentioned by the reply that followed.

I looked at your reply, and many of the keywords I wrote are not mentioned at all; instead, the focus is on how I should revise my text with numbers (which wasn't possible, because I was replying to "your" numbers). So now, as a reply, I have to unnecessarily lengthen my reply just to follow your trails and crumbs and write a reply worthy of having understood its holistic entirety, with feedback that leaves no stone unturned as to how I read your post, where I felt you were branching off, where I felt you were writing about a different subject, and where I felt I disagreed and agreed. It's just something a person like me, with a communication problem, can't afford to under-read.

Don't worry about it though. Anyone who's a regular at this forum knows I openly admit that I have a communication problem.

At least you can give me a hint when my interpretation of your words is correct. Many don't.

You must see that when Ath applies condescension to us both, advising us to write in blogs instead, i.e. telling us to be silent here, it's, for one, that most people in "chat rooms" prefer short postings between which they can then jump, as a bee or other insect would between many different flowers, and also because they don't want to think a lot: here, it's spare time, it's not meant for getting new insights except when they come very easily - but it's also because the effort of reading some people doesn't seem rewarding enough - a question of formatting texts, of inserting sub-headers, of trying to offer "one bit at a time", and so on. And when, in your case, there's also a debatable sentence structure, and ideas not developed one after another but thrown together, then perhaps discussed in fractions, these discussions thrown together again, with new sub-ideas introduced, then "re-opened" many lines below... well, we can't blame people for refusing to read us when we wouldn't like to read ourselves in the end, can we?
-helmut85

Nah, you are overthinking it. You'll get used to it.

Here's how the pattern goes: when you write a long forum post... people will tell you to write a blog. When you write a blog, people will tell you that no one will read it because they don't understand, or that you should write a thesis instead.

This is sort of like a "Web Tradition in Implementing Sarcasm".

As a foreigner, I experience the phenomenon well above its usual rate, statements like the one written here included, so these things come off like clichés or uncreative memes at this point. Believe me, I experience it so much that I have interpreted, and received interpretations of, it in multiple ways: serious, sarcastic, funny, aggressive, pot shot... Even in the follow-up replies, sometimes you'll get the faux-kind "if you know about it already then maybe you should fix it", or the brutal "well then you should have learned and listened and changed it instead of being a stubborn idiot", or truly kind variations of these, or truly sarcastic ones with no ill will (not even friendly ill will), down to forum-specific and hive-culture-specific ways of saying the same keywords with the same meaning whenever you reply back. Even your interpretation - I considered it and have moved on past it.

For example, you say, "we can't blame people for refusing to read us when we wouldn't like to read ourselves in the end". Are you kidding me?! I love reading my posts, your posts, other longform writers' posts. That's why I use these apps to begin with!  :D

You're just reading this once; I sometimes have to go to the edit section to replace one word, or edit in whole paragraphs, which leads to delayed edits where repliers miss entire paragraphs just because I tried to edit too much. Then after that, months and years go by, and I reread and reorganize my Scrapbook Plus files, my Diigo highlights, my .txt edits of the words, my Wunderlist notes that hold whole text articles, my Opera sessions/notes, my tabs inside TabClouds, my Taboo (especially as the service died - thankfully their backups take images of the web pages and put them in a zip). I even recently synced with Pocket using DailySocial, and I recently signed up to Byliner. It's a whole process with me as far as reading goes. That's why I often don't reply to forums at all for long stretches.

...then I constantly move these files around, especially as I test-run my personal productivity system against these kinds of loads. Others like and are fine with using Zemanta; I dream of a Zemanta for my personal files when I'm notetaking, and something like Zemanta is highly unproductive for me compared to my own notes (when I can find them).

I reread every word carefully to see if there's a short sentence I could put into PopUp Wisdom, or an old post that I can connect with my new post, or a post that I forgot to move to Thinkery.me, my preferred review site for my written posts. (For example, it can scrape DC forum posts, except for the ones in the Basement.)

Compare the length and depth of my reply to you in this forum to this old blog post from when I was blogging - and that was a blog-reply blog post (with each bolded line being a section), not a real "think about this idea I'm writing about" blog post.

Believe me, I couldn't have written that long a reply to you if I had not read and reread your post, and read and reread my reply. Even with the numbers, you really made it a heavy subject, and the only reason I can type the reply "IMO you're better off deleting this" is because I followed all that you wrote, regardless of the length.

DC as a community is close to my heart. I don't care which user it is; I would never in my whole life write a reply to any user here that I don't want to consume myself, or that I don't want others to figure out. Whenever I reply here, I don't just consider wanting to read the thread (including/excluding my post); I consider whether I should put a pic here, or experiment with colors, or experiment with sections and other variants of this, and I also consider the speaker, my knowledge of the user, the topic, the things they might want or need to hear based on their post, and I try to really present it well. I'm probably the only person here who might get warned for not adding anything to a topic I made and just quoting a link (because I was trying to be concise), and then be at risk of being warned that I posted a thread that's too long and too quote-heavy and that I shouldn't quote an entire webpage I'm linking, when I didn't. (I just happen to quote some webpages and sometimes some comments, because sometimes I read and open all those too, with their 20 pages and other annoying loading of new/unpopular comments.)

So it should partly be a prob in your writing, as there are certainly some flaws in my writing conception, too.
-helmut85

No. It's fully my problem. Everyone I've connected with on DC knows this. (Not "this" as in I write too long, but "this" as in I openly acknowledge and work at fixing and editing my posts all the time, regardless of whether they believe me or not.)

Seriously, don't sweat it, and don't hesitate to put ALL the reason or blame or insult on me. I don't consider that rude/negative/other adjectives, and I openly cherish it, because sometimes it's the only clue I have to fixing my communication problems. These are the things that can't hurt or anger or sadden or insult me: being proven wrong, being talked with rather than talked to, being given hints as to what the other user does or does not understand. Things become un-problematic to me when communications return to being communications. Every next reply gets shorter. Every point gets moved on and clearly defined as still being up for discussion. Every example gets scrutinized. Every ignorance has a chance of creating new enlightenment. It's just a very rare thing to receive on the web.
« Last Edit: January 23, 2013, 12:46 PM by Paul Keith »

helmut85

  • Participant
  • Joined in 2013
  • *
  • default avatar
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
    • View Profile
    • Donate to Member
Paul, I know I haven't answered some points you made, and it's my lack of time these days (it will be better in Feb, when I recoup some of it, promised - it's just that I have to read you 5 times VERY SLOWLY before having a chance to get some points in a minimum of order, and I lack this time).

"There's still SO MUCH lacking in PIMs it's crazy.", you say. Oh yes, and that's about CHOICES. So back to Neville for once.

Neville, you made your choices, which is your right, but to be frank, I'm unhappy with your choices. Fact is, from the price of your editor, and from the number of copies you claim to have sold, anybody can do the maths, and even spread over the years, it's evident you got millions for your work on your editor. This is very fine, and I'm happy for you.

Problem is, you didn't invest much of this money, time-wise, in the perfection of your editor, which is exactly where I would have put it. I know a little bit about editors, as I know a little bit about pim's, and as far as I can say - I trialled your editor - you stopped its development at a rather early stage, i.e. I know lots of functionality available in other editors, even at a much lower price, that's missing in yours. So it's a FACT when I state your editor is overpriced, and I'm too "dumb" to see where the "real" quality of your editor might hide. Stability? Lots of good editors are stable. Ease of use? Not so much. So I don't get it, but I acknowledge your numbers... and then I ask myself, with such numbers to back further development, why stop developing?

Because, I assume, marketing considerations got in your way: instead of developing the perfect editor - which yours is rather far away from, in spite of its price - you saw (I assume) that traditional editors are "dead", so further development would have "cost" you big time, without procuring any substantial returns, especially when compared to those you had already got with your editor as-it-is.

And then, Surfulater. Some very good ideas, some real brilliance - and then, instead of developing a really good pim which, on top of being a really good pim, is the best pim-like web page downloader (well, one by one, manually; let's not start dreaming too much here!) - instead of developing such a beast, for marketing reasons, you stop development, and you start from scratch again, more or less web-based, and in that proprietary fashion I wrote about above. All this is up to you, it's your product line, and for your purse, I don't have the slightest doubt that your decisions - to stop development on your editor in order to make gains from Surfulater, and then again, to stop development on desktop Surfulater in order to bring out a new product - are highly beneficial.

But then, it's developers like you, Neville - brilliant developers whose eyes are too much on their purse instead of on the excellence of their product - who are responsible for what Paul says: "There's still SO MUCH lacking in PIMs it's crazy."

There will never be a brilliant editor, never a brilliant pim, never anything really outstanding in any general sw category - because the brilliant developers who could do it stop at a given moment and then do something else from scratch instead, because of the financial gains they see lying there, and they want them. There are sociologists who say that everything above 5,000 euro / 6,500 bucks will not make you any happier than those 5,000 euro, so there should be lots of room for successful developers to develop a product further than economic "reason" tells them. But it's simply not done, nowhere.

And don't say "hold it slick, people want it slick" - people want easy access to elaborate functionality, they want intuitive ways to do their work, and the simpler this gets for them, the more work the developer has to do. But no, they don't do their work; they have too much to do in their strive for the dollar. (It's similar for file commanders and many other sw categories: nothing REALLY good out there; they all stop at that "further work wouldn't bring enough return for me" point.)

And that's why, 35 years after the intro of the pc, and with the pc "dying" now, pc sw has never reached a mature state - not even Word, which doesn't become a good text processor except by applying lots of internal scripting to its core functionality (but at least it allows for such internal scripting - most pim's do not).

And then again: where's Surfulater's functionality to smoothly PROCESS what you got from the web with it? That's my point. Developers have the right to stop development early on, but I then have the right to not be happy about what I see, and to let developers know (not that this ever made any difference to them: that much I've seen, a long time ago).

helmut85

  • Participant
  • Joined in 2013
  • *
  • default avatar
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
    • View Profile
    • Donate to Member
"I created this analogy so that you would understood too why full web clipping has a unique future over partial web clipping." Paul, again, most of the time, I clip the whole text of a web page but then quickly bold important passages, or do it later on. It's not so much about cutting out legit material there (but cutting out all the "crap" around), but of FACILITATING FURTHER ACCESS upon that same material: You read a text, you form some thoughts about it or simply find some passages more important than others, and so you bold them or whatever, the point being, your second reading, months later perhaps, will not start from scratch on then, but will concrentrate on your "highlighted" passages, and also, these are quickly recognizable - now for simply downloaded web pages: you'll start from zero, and even will have some crap around your text. I think it RIDICULOUS that these pim's do their own, sub-standard "browsers" (e.g. in MI, in UR...), but don't think about processing nor of clearly distinguishing of bits within these "original" downloaded pages. All this is so poor here, and in direct comparison, the pdf "system" is much more practical indeed. This being said, I hate the pdf format, too, for its numerous limitations. But downloaded "original" web pages are worse than anything - and totally useless; as said, in years, I never "lost" anything by my way of doing it.

An example among thousands: you download rulings. You quickly bold the passages appearing important for your current context. Then you clip, from this whole text body, some passages - probably you'll do this after some days, i.e. after having downloaded another 80 or 120 rulings in your case, i.e. you won't do this before really knowing what's decisive here, hence your need to read, and to "highlight passages within", many more such rulings. So what I do here, then: I trigger my clipping macros on some passages within these bold text blocks (or even between them, if in the meantime it has occurred to me that my initial emphasis was partly misplaced), and I paste them, together with the origin, url, name of item and such, into the target texts.
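(helmut85's clipping macros are his own, presumably built in a Windows macro tool; purely to illustrate the idea, here is a minimal Python sketch of such a "clip plus origin" macro. It assumes the third-party "pyperclip" package, and all names are placeholders.)

Code: [Select]
# Append the currently copied passage to a target file, together with
# its origin: source name, url and date of clipping.
import datetime
import pyperclip

def clip_to_target(target_path, source_name, url):
    passage = pyperclip.paste()  # whatever was last copied
    stamp = datetime.date.today().isoformat()
    with open(target_path, "a", encoding="utf-8") as f:
        f.write(f"\n--- {source_name} | {url} | clipped {stamp} ---\n")
        f.write(passage + "\n")

clip_to_target("rulings_notes.txt", "Ruling 2012/345",
               "https://example.com/ruling")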

What do YOU do (rhetorical question here) with your "downloaded original web pages"? You all must read them a second time before doing any clipping. That's what I'm after here: the original web page becomes a hindrance as soon as you have quickly read it once. After this first pass, it should become something more accessible than the original web page is. Pdf is much better here, and my system of downloading plain text, then re-formatting it to my needs, is best... if you have just a few pictures, tables and formulas, that is - hence the ubiquity of pdf in the sciences, and rightly so.

But mass downloading of web pages in their original format is like collecting music tunes (incl. even those you don't like). It's archaic, it's puerile, it's not thought out, and it's highly time-consuming if done for real work. And that's why I'm not so hot about progs that do this downloading "better" than other pim's do, but which ain't among the best pim's out there.

It's all about choices - but today's consumers make lots of wrong choices, that's for sure. Hence my "educational stance" that I get on so many people's nerves with. But then, somebody explain to me why it would be in anybody's interest to have a replica as faithful as possible to the original within your Ultra Recall browser (! - and which is probably much better in your Surfulater browser), months after downloading that web page, the real prob being that both force you to begin at zero with their respective content then.

This way of doing things is simply, totally crazy, and 95 p.c. of people doing it this way is no reason to imitate their folly. Oh yeah, such pim's sometimes present a "commentary" field for you to enter your thoughts about that web page or such. Ridiculous, as said - far worse even than so-called "end notes".

Do you realize that in the end, it's again about the "accessibility of information"? Let's say we both read all these things, and stored all these things. Then, in order to really have them available, in their bits, for our work: I have to browse 100 rulings for the important passages (remember, this highlighting was done at first reading, so perhaps my reading time was about 120 p.c. of yours at that stage) - whilst you will read all these 100 rulings again, which makes more than double the reading time, since your second reading will be slowed down by your fear of not having "got" all the important passages (when, in my workflow, it's probably time for underlining sub-passages now) - and then, as I do, you'll export your clips.

Ok, in one single situation your way of doing things would appear acceptable: when you download pages without even reading them first in the slightest. But then, is it sensible to do this?

There's some irony here, too: there's a pim user faction who say, I'm happy with plain text, I'm reverting web pages to plain text - whilst many people probably fear loss of information when they don't "get" the formatted text of the web page in its original form, and thus need formatting capabilities in their pim or whatever. So, "do your own formatting". Once more, most of the time I get the text in whole, then "highlight" by formatting means. It's rare that I just clip a paragraph or so, because indeed I fear my clipping considerations may change. And indeed, it's very probable you need some ruling for a certain aspect now but for another aspect in the future, so it's sensible to download it in full, then clip parts from this whole text body - but even then, it's of utmost utility to have important passages "highlighted", from which you'll clip again and again.

And in general, please bear in mind that you choose at any given moment. Ok, it's the whole page you download, but from a site containing perhaps 150 pages or more - i.e. even when you try NOT to "choose beforehand" in order to preserve the material in its entirety, your choice will have been made as to which pages you download in full and which you don't download at all.

So, put a minimum of faith into your own discernment: when choosing web pages to download, AND when choosing the relevant parts of these web pages you'll download.

And Paul, it's of course about bloating the db or not, as you rightly said: it's about response times and such (whilst some db's can get much bigger now, cf. Treepad Enterprise, the higher-priced version of it). But the real point is: try not to bloat your own data warehouse with irrelevant material, even when technically you're able to cope with it. Your mind, too, will have to cope with it, and if you have too much "dead data" within your material, it'll outgrow the available processing time of your mind.

And as I said elsewhere, finding data after months is greatly helped by tree organization, in which data has got some "attributed position", from a visual pov. Trees are so much more practical than relational db's alone, for non-standardized data. Just have 50,000 items in a UR db, and then imagine the tree was missing and you had to rely exclusively on the prog's (very good) search function.

All the worse, then, that there'll never be a REALLY good pim, i.e. that UR and all its competitors will remain with their innumerable faults, omissions and flaws. And don't count on Neville to change this - I'd be the happiest person alive if that happened, but Neville won't do it, it's as simple as that.

EDIT: It just occurred to me that I never tried to download web pages into a pim, but when you do, you will probably never get them out and into another pim; so even when downloading them, having them in a special format or special application, and then just linking to them, seems preferable, independently of the numbers you tend to download... And that would be .mht for just some pages, in my case, or WebResearch for pages in numbers, in your case - that'd be my advice, at least, if you cannot leave this web page collecting habit behind you. Stay flexible. Don't join the crowd of "I've got some data within this prog that I otherwise don't touch anymore" - I've read lots of such admissions, and it's evident these people did something wrong at some point.

« Last Edit: January 23, 2013, 02:46 PM by helmut85 »

Paul Keith

  • Member
  • Joined in 2008
  • **
  • Posts: 1,989
    • View Profile
    • Donate to Member
Paul, again, most of the time I clip the whole text of a web page and then quickly bold important passages, or do it later on. It's not so much about cutting out legit material (but about cutting out all the "crap" around it); it's about FACILITATING FURTHER ACCESS to that same material: you read a text, you form some thoughts about it or simply find some passages more important than others, and so you bold them or whatever - the point being, your second reading, months later perhaps, will not start from scratch, but will concentrate on your "highlighted" passages, which are also quickly recognizable. Whereas with simply downloaded web pages, you'll start from zero, and even have some crap around your text. I think it RIDICULOUS that these pim's do their own, sub-standard "browsers" (e.g. in MI, in UR...), but don't think about processing, nor about clearly distinguishing bits within these "original" downloaded pages. All this is so poor here, and in direct comparison, the pdf "system" is much more practical indeed. This being said, I hate the pdf format, too, for its numerous limitations. But downloaded "original" web pages are worse than anything - and totally useless; as said, in years, I never "lost" anything by my way of doing it.
-helmut85

We're directly comparing the same things in different ways.

You're directly comparing present web clipping with the present PDF "system". (Although I admit I don't know what you mean by "system" here, as it's all about editing the format. I'm not familiar with any system outside of opening a PDF in a viewer or an editor.)

The picture you should consider is directly comparing future full webpage clippers with the present PDF system.

What do YOU do (rhetorical question here) with your "downloaded original web pages"? You all must read them a second time before doing any clipping. That's what I'm after here: the original web page becomes a hindrance as soon as you have quickly read it once. After this first pass, it should become something more accessible than the original web page is. Pdf is much better here, and my system of downloading plain text, then re-formatting it to my needs, is best... if you have just a few pictures, tables and formulas, that is - hence the ubiquity of pdf in the sciences, and rightly so.
-helmut85

The system is too slow for me. I also have sensitive and easily tired eyes.

I don't clip something I have read. That's either a bookmark or a copy-paste into a text editor. Any pictures, tables or formulas (though I admit I rarely do this) are screenshot and set beside the .txt.

When I clip I plan to either read it on the software or use the clipping process (highlighting to Diigo then reading from library) as an incentive to read.

PDF is much, much worse, since PDF editors can be bulkier than browsers. To copy one line, you must switch from the hand tool to the select tool, or at least some variant of that. To truly edit it, you must have an idea of how to edit it.

In contrast, web clippers are much clearer. If you select-and-clip, you already know why you're doing it, instead of just getting the text. (Maybe the text layout is much more readable; maybe the imagery of the layout helps better with data retention.) If you highlight, you know the limitations of the actual highlighter. (Diigo will make it easy to just read the highlights in the library; Scrapbook + can have different colors for different emotions/priorities; Surfulater's data will be delegated to metadata, so not all highlights need to be highlights.)

It's just more seamless even when you're stuck clip-consuming it in the traditional way.

An example among thousands: you download rulings. You quickly bold the passages appearing important for your current context. Then you clip, from this whole text body, some passages - probably you'll do this after some days, i.e. after having downloaded another 80 or 120 rulings in your case, i.e. you won't do this before really knowing what's decisive here, hence your need to read, and to "highlight passages within", many more such rulings. So what I do here, then: I trigger my clipping macros on some passages within these bold text blocks (or even between them, if in the meantime it has occurred to me that my initial emphasis was partly misplaced), and I paste them, together with the origin, url, name of item and such, into the target texts.
-helmut85

What you're describing is more the traditional way of doing things.

It's not bad, but it does not work for anyone who is on the victim side of unproductivity.

I can't highlight for highlight's sake. My brain is not smart enough to create the association in the future.

It's like, if there's a fire, and there's time to visit a web clip of a "what to do in a fire" article on the PC, that highlighting style blinds me.

Only it does not have to be as urgent as a fire. Coding philosophies, basic keyboard shortcut purposes, what to read and why to read something... all those overwhelm me. I need that data as visible to me as possible, and I need those highlights to still be copy-pasted into a different interface - a to-do list, a different PIM, a cloud service... It's not a double backup or a reference file. It's about a rotation of data consumption.

Much like opening a spreadsheet for one related issue, and opening a word processor for a related manual, and opening a text editor for the ReadMe that's in that format. As much as possible, I cannot afford a scenario where data is just clipped and reminded of. I need as big a holistic structure as possible, or else I'd forget my previous data associations.

I also need these layout things, such as aesthetics, that sometimes can only be captured in full webpages, because, again, my memory fails me. The monotony of text too often discourages me from associating and connecting data. (I can barely finish a PDF nowadays. I can read a couple of pages, but returning to it is like a task of separating what's in front of me from what I'm currently reading.)

the real prob being that both force you to begin at zero with their respective content then.
-helmut85

I don't use UltraRecall, but see, here's the thing: I can't start at bookmarked content; either I start at zero, or it paralyzes me into not reading the content at all.

Different people have different ways of taking things in and, as you said, it's all about education - but I would raise you this: it's not about an educated stance, it's about dispelling myths of what a proper stance is.

Do you realize that in the end, it's again about the "accessibility of information"? Let's say we both read all these things, and stored all these things. Then, in order to really have them available, in their bits, for our work: I have to browse 100 rulings for the important passages (remember, this highlighting was done at first reading, so perhaps my reading time was about 120 p.c. of yours at that stage) - whilst you will read all these 100 rulings again, which makes more than double the reading time, since your second reading will be slowed down by your fear of not having "got" all the important passages (when, in my workflow, it's probably time for underlining sub-passages now) - and then, as I do, you'll export your clips.
-helmut85

See, this is what smart people don't realize about dumb people like me... It's not about the accessibility but the utilization of data.

I mentioned this elsewhere here with regard to the subject of a personal productivity system: a decent person (even a failing-grade person in school) can consume a lot of underrated data, provided they have the access and the desire to consume that access.

A dumb person still has a hard time utilizing it. It leans more towards the genre of personal productivity systems, but I bring it up because it's a much simpler analogy.

Multiple highlights can be hard to test, as it's very user-based and the data might as well be liquid.

A single-entry to-do list like "reformat OS quickly" can work for you, but it cannot work for me.

Then the common personal productivity systems changed this issue into it not being your "next action", or it not being S.M.A.R.T. (Specific, Measurable, Attainable, Relevant, Timely).

They don't really understand how tough this is for a person who can't even remember a conversation on the radio as a future reference, or who jots down notes but can't even return to those notes, which creates more overhead.

It's very hard to empathize with someone who can only bring themselves to clean-install an operating system so long as they download an iso, rather than burning an iso or just picking up a CD.

For webpages this is more complicated. I cannot have a web-clipped WikiHow page tangled up or organized with a longer article on the same subject. Even when the fix is as accessible as just moving the mouse so that the WikiHow page is no longer visible, it traps my mind into thinking the next piece of data has to be treated with the same light-hearted mood as the WikiHow page.

Aesthetic, Dynamic and Mechanic

The more capable a user, the more they underrate Aesthetic.

The more studious-receptive a user is to a given subject, the more they can forgive the Dynamic.

Now an average person (most likely someone who does not need a personal productivity system, or can tweak their own) can access past recorded data where the aesthetic, dynamic and mechanic are on close to an equal scale. If they have partners or positive relationships, it's even possible to cheat, as that workflow produces rational sense by giving birth to irrational scenarios for the conscious mind to feed upon.

Now a dumb person, who has no one to defer to for his data (maybe they won't understand why data so simple that it could be bookmarked needs to be referred to by this person as a clipping), must access data on a broken, highly volatile scale: aesthetic first for motivation, and aesthetic last for memory retention or data context/location retention.

The difference between a nitpicked aesthetic and a necessary aesthetic is that if you just like a webpage to look a certain way, you have to push the scale of liking it very high at the beginning and just keep being reminded of that.

Necessary aesthetic is almost nil.

The beauty may help, but it's how regularly you type the URL of a storage site without bookmarking it, or how regularly you think to yourself: this data is clipped here not because it's a web clipper, but because it's the first thing that came to mind, and I recall my other data on this topic is here too.

At this point, the obvious cliché of just describing or re-describing accessibility ends here.

The dynamic of data must be extremely exportable to your head. Everything from why you are doing this to silliness such as how the window pops up will be crucial to how you do something.

This is the utility portion of the data. Every piece of data, even if you don't want to utilize it, must have utility to a dumb person, or the apathy towards it sets up apathy towards the next piece of data.

At this point, I would assume you still get it, but maybe the confusion comes from why you think a dumb person should scale so much of that information into utilization if they are just going to skip a text. Maybe you think it's just a form of appetizer, or a lack of knowledge.

I'll leave you to interpret that on your own because it's not like I have studied the issue.

The last part is what I think you are ignoring the most, because it is the most irrational. Just keep in mind, I'm not describing how Surfulater should be in the future to compensate for this need, but am attempting to paint you a picture in which you're two mountains back when you ask me whether I realize or do not realize the accessibility of information.

See, for someone who is capable, mechanic is not contingent on information. You don't fix a car by reading a book.

You read a book then fix a car.

Same thing with the fire analogy. You don't try to learn how to prevent a fire when there's actually a fire.

You create a space. You take advantage of that space. Then you consume the data.

For my lifestyle it just does not work that way. It does not matter whether I'm living my lifestyle or I've gone out of my lifestyle. It's just very hard to simulate mechanic.

It's irrational, I know, but I consider the reception of some services to be proof that such irrationality can exist in all people; some just have better adaptation than others.

Keep in mind that I'm not talking about the mechanics of software or information capture but the mechanics of information itself.

To give you a hint (but not a direct example) of this irrationality.

Why would you clip this?

Shouldn't you guys be writing a blog (or 2) or something?

...and how, by clipping it, would you understand that it's not as deep a subject as you first explained?

For an average person, they can answer why they would clip this without needing to clip this and the answer can move between shades of reminders and reference. i.e. the mechanic of information is to fulfill the mechanic of reminders and reference.

When you access information like this, sometimes the text, the table data, the formula image is fine. It could be better but it's fine.

For a dumb person (or at least for who I am), the text, the table data, the formula image must rise beyond utility and beyond... back to aesthetics.

Not every piece of data has to be this way, but every piece of information must have a particular peculiarity to it that makes it not simply about accessibility of information. Aesthetic simply replaces reminders. Mechanic of information is replaced by dynamic of information. Finally, mechanic is irrationally fulfilled by being able to unlearn the information; i.e. when a highlighted sentence stops being a sentence, then and only then does it start being accessible information.

Again, just to re-emphasize: it does not have anything specifically to do with the features and direction of Surfulater and full web clippers, but is rather simply an attempt to throw your question back at you.

Do you realize that even before information becomes information, or gets gathered as information, or gets read as information, there are things that can trigger information without being words? That there are singular 'words' that can convey information beyond what information can do? (So much so that there's a scam book called Power of Words being sold on Amazon, if the reviewers are to be believed.)

Do you know that there are certain concepts that can only be described in their native language?

Just a reply written by someone with a communication problem can confuse you, and you have an educated stance.

As you have remarked, you can read and understand the English words but some things seem to mix around each other.

Imagine how difficult it can be, even at just a monotonous level, to decipher and recognize information that is sarcasm, posters that are trolling, posters that send back confusing feedback as if you were throwing out an ad hoc argument, multiple contradicting instructions... and then... and then... to have a mechanic that can remember this situation even when you have no real research background, nor know how to instigate a study without the backing of a university or the safety of a group.

There's just no comparison to the access of information that you are describing. It's like every switch of tab or every wrongly highlighted word can act as an advertisement while watching television.

It's like every accessed information must not be backed up but be auto-recalled up to consciousness.

It's like every consumption of information can suddenly be filled by paranoia at a certain time so it has to be a different type of storage system.

It's just... I don't know how to put it any other way... but different people consume and absorb and read information differently, and when you add that a dumb person is inherently less capable, and that said dumb person happens to have a communication problem that's not exactly speaking in undecipherable jargon... consuming information just becomes different. Some traditional methods help. Some traditional methods don't. Some traditional ways of accessing information can be utterly irrational yet work wonders for that person.

Ok, in one single situation your way of doing things would appear acceptable: When you download pages without even reading them first in the slightest possible way. But then, is it sensible to do this?
-helmut85

It's tough to answer your question here because I have not revealed my way of doing things, so hopefully the added detail above will better narrow down what way you are thinking of when you say this.

See, it's one thing to say one did not read but there are different ways of "not reading".

Speed readers do not "read" so that they can read fast, but unless you're familiar with some of the theory, that does not make sense. To me, it does not make sense either, but it's a form of phonetics-dodging (suppressing the inner voice), I think. It's really hard to explain. The most common explanation I've read is how speed reading makes novel reading "not fun".

My way is more dependent on how I downloaded it. If it was from Scrapbook+, I'll read it a different way. Was it from an RSS reader? What's the topic of the RSS reader? If it was a .mht, then why do I want to associate this information with the Opera browser? There's really no fine line between how I bookmark links and how I download web pages.

It all comes back to expected utility, the time at which I am clipping the text, the potential interruption I may have... I know these all sound redundant, because we all plan how to read and consume and access information, but that's why I wrote what I wrote above. You don't really want to keep assuming we're consuming information the same way.

Something as simple as not reading text could mean, to you, simply not reading text.

To me, it can be the equivalent of seeing the header in DC and not reading that topic but instead of bookmarking it or archiving it, I clipped it.

Another variation could be that I don't read the sentence; I read the singular words or sets of words and then highlight from there, i.e. what's the emotional state this set of words is producing in me, and a) do I just copy-paste it into PopUp Wisdom, or b) do I just highlight it in Diigo, and then there's c), d), e), f), and the list will grow if I ever buy Surfulater.

There's some irony here, too: There's a pim user faction who says, I'm happy with plain text, I'm reverting web pages to plain text, whilst many people probably fear loss of information when they don't "get" the formatted text of the web page in its original form. And then: I need formatting capabilities in my pim or whatever. So, "do your own formatting". Once more, most of the time, I get the text in whole, then "highlight" by formatting means. It's rare that I just clip a paragraph or so, because indeed I fear my clipping considerations will change. And indeed, it's very probable you need some ruling for a certain aspect now, but for another aspect in the future, so it's sensible to download it in full, then clip parts from this whole text body - but even then, it's of utmost utility to have important passages "highlighted", from which you'll clip again and again.
-helmut85

Yep. All I can say to that is we humans are interesting.  :P

And in general, please bear in mind that you choose at any given moment. Ok, it's the whole page you download, but from a site containing perhaps 150 pages or more; i.e. even when you try NOT to "choose beforehand" in order to preserve the material in its entirety, your choice will have been made in which pages you downloaded in full, and which you didn't download.
-helmut85

Sadly as I have not found a software that can provide that option, I can't even make that choice.

I have to keep compromising and it's not easy.

The one positive is that while in PDFs you sometimes can't drop the afterword or the prologue or a chapter, webpages can be cheated that way by using multiple tools which provide multiple views.

But the real point is: try not to bloat your own data warehouse with irrelevant material, even when, technically, you're able to cope with it: Your mind, too, will have to cope with it, and if you have too much "dead data" within your material, it'll outgrow the available processing time of your mind.
-helmut85

Not from a software development outcome.

It's why I brought up the Opera lite and Opera bloat issue.

The data was not bloating and slowing Opera enough that they needed to fix it but to many of their users, it was an issue.

Since neither side was really invested in seeing it through, no one really got that the unmentioned information they were sending out was that there was a market for Google Chrome.

Google Chrome came; it was not Opera lite, and it was not as lightweight as bloated Opera, but first impressions last, and Chrome became one of the fastest pieces of software (not just browsers) to be adopted, not only by the general multi-browser users but by websites, Linux users, etc. (Some of that was helped by Google's marketing, but most of it was just being Opera lite plus extensions, no different from Firefox being ad-less Opera as an IE counterpart: it once again took advantage of replacing irrational dead data with the non-data that the masses wanted, i.e. no ads, no e-mail program, no RSS reader, cloud sync, tab-moving animations that give the illusion of speed, lightweight when it has few tabs.)

As far as dead data goes... let's just say it's why a recent theme in some of these Sherlock books is how Sherlock did not bother to know that the Earth revolves around the sun. (This being the prime example.)

...but as the critical review states:

The unconscious mind is not under your control

but then it gets replied with:

It may be naive to presuppose the unconscious can be controlled but it in fact, in certain contexts can, over time.

As an example, when I was younger, the thought of speaking before a group or approaching an attractive group evoked an immediate anxiety response which I sought to avoid. However, today, such actions cause no apprehension in me whatsoever. Clearly, my sense of dread was immediate, not subject to forethought, and not a matter of choice. It arose automatically.

I have over time "moved the needle" in numerous other ways.

I believe the fallacy in your argument is that you have taken a very mysterious part of the brain and created a maxim out of what is essentially a nominalization. Can you show me the unconscious? Dissect it?

Of course not. It is an abstract concept, real in whatever way it is, but certainly to say that it cannot be controlled to any degree is presumptuous.

It's a never-ending debate, and one that I would rather not raise in this thread nor this forum. I saw how apathetic some people can be to something they hold of value, like the United States Constitution; no way can this irrationality of dead data help this forum, even if we were both interested in discussing it. The power of irrationality requires multiple invested users to acknowledge it in order for two sides to understand what irrationality's notability is about.

It's just a context people are not ready to hear as a general conversation, and it's not like I have any reference except for my experiences as a human being. I've experienced irrational data that can resurrect dead data, and I've experienced the opposite. The trick is to communicate with people for as long a period as possible before they say or express "Duh!" or "you are intentionally adding verbosity and turning my own wording against me by making it obtuse."

As a notable topic for this thread, it's what I based the personal productivity system that I'm writing on, but it has no room for web clipping software that isn't my own or your own.

Web clipping is all about the irrational and the rational of the developer, not us, and about how they create a process to open up and insert the irrational and the rational of their potential and current customers. When they don't provide this structure, even when we just talk, we're not getting anywhere until we each can provide an equally skilled prototype that acts differently and gets different receptions... and then we discuss again, and then prototype again, regardless of whether we're just interested in discussing a hypothetical. Any legitimate dead-data-for-the-brain discussion is simply a Socratic-like, assumption-against-assumption way of discussing.

And as I said elsewhere, finding data after months is greatly helped by tree organization, in which data has got some "attributed position", from a visual pov. Trees are so much more practical than relational db's alone, for non-standardized data. Just have 50,000 items in a UR db, and then imagine the tree was missing and you had to rely exclusively upon the prog's (very good) search function.
-helmut85

Sorry, you won't get any agreement from me in this subject.

I know where you're getting at, but it does not scale well for someone who constantly loses backups of his data or forgets to import backups of his data when he is overwhelmed.

I do suspect that your relational db does not involve the relational db known as the brain, and that might create the confusion. I will say however that if you were demanding me to be more concise rather than more clear, I would have simply written: I don't agree, but I don't disagree. I use Workflowy for the same mechanical purpose, but I don't solely nor heavily rely on Workflowy as my system.
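
A toy sketch can make the two retrieval styles being argued over here concrete. This is Python, and the paths and note contents are invented for illustration; nothing here models how UR or Workflowy actually store things. Tree-style lookup trades on remembering roughly where you filed an item; search-style lookup trades on remembering wording inside it.

    # Toy contrast between tree-style and search-style retrieval.
    # Paths and note contents are invented; no real pim's storage is modeled.
    notes = {
        "law/rulings/2012/smith-v-jones": "ruling text... passage on liability...",
        "law/rulings/2011/doe-v-roe": "ruling text... passage on damages...",
        "software/pims/surfulater": "clipped review of web clippers...",
    }

    def by_position(prefix):
        # Tree-style: recall *where* you filed it (the "attributed position")
        return {path: text for path, text in notes.items() if path.startswith(prefix)}

    def by_search(term):
        # Search-style: recall *wording* somewhere inside the item
        return {path: text for path, text in notes.items() if term in text}

    print(by_position("law/rulings/2012"))   # narrows the collection branch by branch
    print(by_search("damages"))              # stands or falls with the recalled term

With 50,000 items, the first style degrades gracefully (you descend a few branches), while the second depends entirely on recalling the right term; that is the practical core of the claim, whichever side one takes.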

All the worse then that there'll never be a REALLY good pim, i.e. that UR and all its competitors will remain with their innumerable faults, missings and flaws. And don't count on Neville to change this - I'd be the happiest person alive if that happened, but Neville won't do it, it's as simple as that.
-helmut85

I disagree. Maybe for us there won't be, but many people don't realize that when writers praise Scrivener... they are praising it as a REALLY good pim instead of a word processor or whatever other label matches what they like. I would assume the same goes for NoteMap and lawyers.

It's that paradox. A really good PIM is not really a PIM anymore.

EDIT: It just occurred to me that I never tried to download web pages into a pim, but when you do, you will probably never get them out and into another pim; so even when downloading them, having them in a special format or special application, then just linking to them, seems to be preferable, independently of the number you tend to download... And that would be .mht for just some pages, in my case, or WebResearch for pages in numbers, in your case - that'd be my advice at least, if you cannot leave this web page collecting habit behind you. Stay flexible. Don't join the crowd of "I've got some data within this prog and that I otherwise don't touch anymore" - I read lots of such admissions, and it's evident these people did something wrong, at some point.
-helmut85

Yup, yup and yup.  :P

It's not just PIMs, it's all software, ESPECIALLY if you're not a coder and can't troubleshoot or develop a comparable piece of software. It's just a question of when we realize it for our needs.

When does it set in and when does it set out.

For an example of setting out, if I mentioned the gamification of information, can a software dev or software conceptualist move away from this or have they even considered it before?

It's just a never-ending quicksand until a person realizes what he realizes or realizes that he has realized enough and will do great work rather than just do work from that point forward.

« Last Edit: January 24, 2013, 04:27 AM by Paul Keith »

Ath

  • Supporting Member
  • Joined in 2006
  • **
  • Posts: 3,610
    • View Profile
    • Donate to Member
 ;D

helmut85

  • Participant
  • Joined in 2013
  • *
  • default avatar
  • Posts: 59
  • When Self-Defence Becomes Pure Joy
    • View Profile
    • Donate to Member
Ath, I don't understand this symbol. All I can say is that Paul Keith is my friend. His way of presenting things is terrible, but very rare are those unique people who try to THINK instead of repeating common "truths" / unison. I'll continue this thread in "They're still standing."

clean

  • Participant
  • Joined in 2012
  • *
  • default avatar
  • Posts: 32
    • View Profile
    • Donate to Member
Ad 2 et seq. supra

Today, in German papers, they speak about MS Office 2013.

a)

It seems you can buy a non-subscription home version, and individual non-subscription versions of Access, etc., but I'm not sure these are thereby non-cloud-synching, and even without such cloud synching, how much data could they transfer "home" anyway?

The main offering from MS is a subscription scheme, of course, for individuals, small businesses, and corporations (public authorities of course), and here, there's some info: There'll be continuous synching of your data on / with the MS servers; for small businesses, it seems the subscription, incl. the cloud storage, is 12 bucks 50 per month and per seat.

b)

It seems evident to me that this way, U.S. authorities (= NSA, etc., or more precisely, the NSA plus all the authorities and big (or specialised!) corporations that regularly get "their" data from them) will now have constant, regular access to any data processed by MS Office 2013 and onwards. Perhaps there is some automatic simili-"encryption" in order to make the technical aspect of the out- and inbound transfer "safe" against your possible competitors and third parties, but I assume that in your comp, and on arrival on the MS cloud servers, the "real" data is processed, and not something encrypted that MS "synch" sw cannot read and "understand".

This means I assume they've now "found" a way, by offering the cloud storage AND REAL TIME SYNCHING "themselves" (= MS plus NSA behind them), to prevent "you" (= professionals and corporations small or big) from effectively encrypting data you store within the cloud, offering perhaps some "protection" against third parties, but no protection whatsoever against the GLOBAL EVIL BODY.

Or am I mistaken here? Could you use this MS Office 2013 cloud synching system with data encrypted by your own encryption sw (and then hopefully with strong encryption keys)?
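
For what "your own encryption sw" could look like in principle: a minimal sketch in Python, using the third-party cryptography package (pip install cryptography), with all file and folder names invented. It only demonstrates encrypt-before-sync with a key that never leaves your machine; whether Office 2013's synching layer would accept, or quietly bypass, such opaque blobs is exactly the open question above.

    # Client-side encryption before a file ever reaches a sync folder.
    # Assumes: pip install cryptography; paths below are invented examples.
    from pathlib import Path
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this key OFF the cloud, e.g. on a USB stick
    Path("office.key").write_bytes(key)

    cipher = Fernet(key)
    plaintext = Path("contract.docx").read_bytes()

    Path("SyncFolder").mkdir(exist_ok=True)
    Path("SyncFolder/contract.docx.enc").write_bytes(cipher.encrypt(plaintext))

    # Round trip: only someone holding the key can recover the document.
    recovered = cipher.decrypt(Path("SyncFolder/contract.docx.enc").read_bytes())
    assert recovered == plaintext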

c)

As said, I doubt this system would "take" such data - and even if it pretends to, why not assume this will double your data flow then: One flow of your encrypted data, forth and back, in order to reassure you, and then the real data, in real time, your "Word" or "Excel" or "Access" files, etc., perhaps "encrypted" the MS / NSA way, ready to be processed by these bodies.

Why is this so harmful?

d)

Even with their "patent frenzy" (cf. their allowing "patents" for things not new at all, granted just because you have the necessary money to pay for the "patent" of processes et al. perhaps known for years; or have a look at U.S. sw "patents", which cause scandal world-wide in the "industry"), the U.S. does invent less and less, and with every year, this becomes more apparent. Thus, the U.S. government is highly interested in "providing" its big corporations of "national interest" with new info about what would be suitable to develop further (= forking findings of third parties), or simply, about what U.S. corps could simply steal: Whilst a European corp is in the final stages of preparing patents, those are then introduced by their U.S. competitors just days before the real inventors would have done it.

e)

It's not only the Europeans who are harmed: Whilst the Japanese aren't as strong anymore as they once were, it's the Chinese who steal less and less from others, who invent more and more on their own, and who risk leaving the U.S. industry trailing anytime soon.

f)

I spoke about passion in general and in programming in the forked thread, and I saw that indeed, individual passion for sw excellence is dead; I was speaking of a possible win-win situation where the developer gets the satisfaction of producing a work of art, thus providing functionality excellence (in the meaning of "functionality in the workflow of the user", not sterile technical functionality within the sw itself) for his customer.

On the other hand, it seems that more and more developers (= individuals, as in the case of Surfulater and many other sw's, and sw houses with thousands of programmers and sw "architects") strive to attain excellence in features WORKING AGAINST THE CUSTOMER: No win-win situation anymore, but taking the "man who pays for it all" for a ride.

g)

And it's not just inventors, etc., abroad that are at risk: It's perfectly plausible that some innovative, small U.S. companies are spied upon for the benefit of big U.S. companies, be it simply for stealing their ideas alone, and / or for facilitating their being taken over for cheap.

h)

Of course, it's not only and all about inventions, it's also about contracting (Siemens in South Africa? Why shouldn't these same contracts go to General Electric, by using core info? I just made up this example, and I'm not insinuating that GE might want to "steal" from Siemens, but yes, I'm insinuating that some people might be interested in "helping" them to do so.)

i)

I don't know collaboration sw / groupware ("IBM Lotus Notes", etc.) well enough, but I suppose you'll get similar probs here, and as we see by now, even sw for individuals, like Surfulater, tries to excel with "features" that could possibly harm the users' interests.

j)

So it might be time, about 30 years after Orwell's "1984", to store "old" comps (Win 8 and Office 2003, anyone? har, har!), "old" sw, and to divide your work between comps that are connected to the net, and those that are not, and to transfer data between them with secure USB sticks, in readable, "open" (and not proprietary) data formats (perhaps XML instead of "Word", etc.), in the end.

I suppose that using Win 7 and Word / Excel / Access / PowerPoint 2010, on non-cloud-connected pc's, could be a viable intermediate solution for the years to come. (In a corporation, you could even install "parallel networks", i.e. many such pc's but of which none is connected to the outside world.)
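
On the "readable, open data formats" point, one mitigating detail: the post-2007 Office formats (.docx etc.) are, whatever else one thinks of them, just ZIP containers of XML, so the text survives without MS software at all and can be carried across the air gap with standard tooling. A minimal sketch in Python (the file name is invented for illustration):

    # Extract plain text from a .docx: the file is a ZIP archive whose
    # body sits in word/document.xml. "report.docx" is an invented example.
    import zipfile
    import xml.etree.ElementTree as ET

    W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    def docx_to_text(path):
        with zipfile.ZipFile(path) as z:
            root = ET.fromstring(z.read("word/document.xml"))
        # w:p elements are paragraphs; w:t elements carry the visible text runs
        return "\n".join(
            "".join(t.text or "" for t in p.iter(W + "t"))
            for p in root.iter(W + "p")
        )

    print(docx_to_text("report.docx"))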

k)

The purpose of this post is to show that I'm not speaking out of paranoia, but that reality (here: the brand-new MS Office 2013, probably the biggest impact in sw for the coming years, operating systems excepted) outstrips fears by far.

You see, when it's the passion of torturers we have to speak of, instead of the passion of people wanting good for the community, we're in trouble, and this point in time seems to have been reached, sw-wise.

tomos

  • Charter Member
  • Joined in 2006
  • ***
  • Posts: 11,958
    • View Profile
    • Donate to Member
Today, in German papers, they speak about MS Office 2013.

the competition has some ideas :) (this is mainly about price and number of licenses):
http://softmaker.com/english/blog/?p=435
Tom

dr_andus

  • Supporting Member
  • Joined in 2012
  • **
  • Posts: 851
    • View Profile
    • Dr Andus's toolbox
    • Donate to Member
the competition has some ideas :) (this is mainly about price and number of licenses):
http://softmaker.com/english/blog/?p=435

The single-machine license for Office 2013 is another desperate attempt on MS's part to squeeze its lemon cash cow (sorry for the mixed metaphors) and push people towards its subscription-based cloud model. I can't see how it can possibly be a good thing for them to do. Office apps are already becoming less relevant, competition such as the FREE LibreOffice is getting better all the time, and why would you want to upgrade if you already have Office 2010? Giving students a single-machine license will just push them (the future generation of office workers!) not to bother with MS Office at all. There are increasingly fewer reasons why they should need MS Office anyway.

cyberdiva

  • Supporting Member
  • Joined in 2006
  • **
  • Posts: 1,041
    • View Profile
    • Donate to Member
Giving students a single-machine license will just push them (the future generation of office workers!) not to bother with MS Office at all. There are increasingly fewer reasons why they should need MS Office anyway.

+1.  I was frankly astonished at what seems to me a very wrongheaded move on MS's part.  Then again, I decided years ago that I didn't need MS Office and moved to SoftMaker Office.  I've been very pleased.

Dormouse

  • Supporting Member
  • Joined in 2007
  • **
  • Posts: 1,952
    • View Profile
    • Donate to Member
The single-machine license for Office 2013 is another desperate attempt on MS's part to squeeze its lemon cash cow (sorry for the mixed metaphors) and push people towards its subscription-based cloud model. I can't see how it can possibly be a good thing for them to do.

+1.  I was frankly astonished at what seems to me a very wrongheaded move on MS's part.  

+2.
Seems as if the proportion of totally wrongheaded moves by MS has increased hugely since BG shifted his attention to his charitable work.