

On data storage and applications going cloud (Surfulater, Mindjet et al.)


helmut85:
I

1. Just today, the community received news of its loss, two days ago, of a prominent data-availability activist, Aaron Swartz. Interesting here: the criminal prosecution authorities seem to have been much more motivated to treat him as a big criminal than even his alleged victim, MIT. (Edit: Well, that doesn't seem to be entirely true: JStor is said not to have insisted on his being prosecuted, but M.I.T. wanted him to be made to "pay" - so without them, he'd probably be alive. And it's ironic that M.I.T., itself a VICTIM of this "resell them their own academic papers, and at outrageous prices" scheme, made themselves prosecutors of Aaron, instead of saying: we're not happy about what he tried to do, but he tried it for all of us. M.I.T. as another Fifth Column representative, see below.) So there is cloud for paid content, and getting access to it without paying, big style, then perhaps even re-uploading it, gets you 20 or 30 years in jail if the prosecutors have their way, and that's where it goes (cf. Albert Gonzalez).

(Edit: First he made available about 20 percent of a court-rulings database, absolutely needed for preparing lawsuits in a common-law legal system based on the rulings of previous, similar cases (they charge 8 cents a page, which doesn't seem like much, but expect to download (and pay for) perhaps 3,000 pages in order to get the 20 or 30 that will be of relevance). Then he tried to download academic journal articles from JStor, the irony here being that the academic world is paid by the general public (and not by academic publishers or JStor) for writing these articles, then pays high prices for them via the universities, so again the general public pays at the end of the day (university staff and overall costs being a public service on the Continent anyway, whilst in the U.K. and the U.S., it's the students who finance all this, and who then charge 500 bucks an hour in order to recoup this investment afterwards). The prosecutor asked for 35 years of imprisonment, so Swartz would even have had to be called "lucky" had the sentence stayed under 25 years. (From a competitor of JStor, I was just "offered" a 3-page askSam review from 1992 or so, as a .pdf, for 25 euro plus VAT, if I remember well.))

(Edit - Sideline: There is not only the immoral aspect of making the general public pay a second time for material it already owns, morally and from having financed it to begin with; there is also a very ironic accessibility problem that becomes more and more virulent: Whilst universities used to make academic journals available in their reading rooms not only to their staff and their students, but also to (mildly) paying academics from the outside, today's electronic-only papers, instead of being ubiquitously accessible now, are in most universities not even available anymore to such third parties, or not even bits of those texts can be copied and pasted by them. So in 2013, non-university academics sit before screens and are lucky to scribble down the most needed excerpts from the screen, by hand. The electronic "revolution" thus makes more and more people long for the Seventies' university revolution, the advent of photocopiers - which for new material, in most cases, isn't available anymore: Thanks to the greediness of traders like JStor et al., we're back to handwriting, or there is no access at all, or else it's 40 bucks for 3 pages.)

2. On the other hand, there's cloud-as-storage-repository, for individuals as for corporations. Now this is not my personal assertion but common sense here in Europe, i.e. the (mainstream) press here regularly publishes articles agreeing that the U.S. NSA (Edit: here and afterwards, it's NSA and not NAS, of course) has its regular (and, by U.S. law, legal) look into any cloud material stored on U.S. servers - hence the press's warning that European corporations should at least choose European servers for their data, whilst of course most such offerings come from the U.S. (Edit: And yes, you could consider developers of cloud software and/or storage as a sort of Fifth Column, i.e. people who get us to give away our data into the hands of what should be the common enemy.)

3. Then there is encryption, of course (cf. 4), but the experts / journalists agree that most encryption does not constitute any problem for the NSA - very high-level encryption probably would, but it is not regularly used for cloud applications, so they assume that most data finally reaches the NSA in readable form. There are reports - or are they speculations? - that the NSA provides big U.S. companies with data coming from European corporations, in order to help them save costs on research and development. And it seems that even corporations that take care over rather good encryption of their data files don't apply the same security standards to their e-mails, so in the end a lot of data is available to the NSA. (Just some days ago, there was another big article on this in Der Spiegel, Europe's biggest news magazine, but it was only another one in a long succession of such articles.) (Edit: This time, it's the European Parliament (!) that warns: http://www.spiegel.de/netzwelt/netzpolitik/cloud-computing-eu-bericht-warnt-vor-ueberwachung-durch-die-usa-a-876789.html - of course, it's debatable whether anybody should then trust European authorities more, but it's undebatable that U.S. patent law grants patents to the first who comes and brings the money in order to patent almost anything, independently of any previous existence of the - stolen - ideas behind the patent; i.e. even if you can prove you've been using something for years, the patent goes to the idea-stealing corporation that offers the money to the patent office, and henceforward you'll pay for further use of your own ideas and procedures, cf. the Edit of number 1 here - this for the people who might eagerly assume that "who's got nothing to hide shouldn't worry".)

4. It goes without saying that those who say, if you use such cloud services, at least use European servers, get asked what about European secret services then doing similar scraping, perhaps even for non-European countries (meaning, from GB, etc., straight to the U.S. again), for one; and second, in some European countries it's now ILLEGAL to encrypt data, and this is a wonderful world for such secret services: Either they get your data in full, or they even criminalize you or the responsible staff in your corporation. (Edit: France's legislation seems to have been somewhat lightened instead of being further tightened as they had intended by 2011. Cf. http://rechten.uvt.nl/koops/cryptolaw/cls2.htm#fr )

5. Then there are accessibility problems, attenuated by multi-storage measures, and the provider closing down the storage, by going bankrupt or by sheer commercial evil: It seems there are people out there who definitively lost data with some Apple cloud services. (Other Apple users seem to have lost bits of songs bought from Apple, i.e. Apple, after the sale, seems to censor unwanted wording within such songs - I cannot say for sure, but I read some magazine articles about such proceedings. Of course, this has only picturesque value in comparison with "real data", hence the parentheses, but it seems to show that "they" believe themselves to be the masters of your data, big-style AND for the little, irrelevant things - it seems to indicate their philosophy.)

(Edit: Another irony here: Our own data is generally deemed worthless, both by "them" and by some users (a fellow here, just weeks ago: "It's the junk collecting hobby of personal data."), whilst anything you want or need access to (even a 20-year-old article on askSam), deemed "their data", is considered pure gold, 3 pages for 40 bucks - so not only do they sell, instead of just making available to the general public its own property, but on top of that, those prices are incredibly inflated.

But here's a real gem. Some of you will have heard of the late French film auteur Eric Rohmer, perhaps in connection with his most prominent film, Pauline at the Beach. In 1987, he made an episodic film, 4 aventures de Reinette et Mirabelle, from which I recommend the fourth and last episode, which on YT is in three parts, in atrocious quality but with English subtitles; just look for "Eric Rohmer Selling the Painting": It's a masterpiece of French comedy, and do not miss the very last line! (For people unwilling to see even some minutes of any French film: you'd have learned here the perfect relativity of the term "value" - it's all about who's the owner of the object in question at any given moment.) If you like the actor, you might want to meet him again in the 1990 masterpiece La discrète. (There's a Washington Post review in case you might want to countercheck my opinion first. And yes, of course, there's some remembrance of the very first part of Kierkegaard's Enten-Eller to be found here.))

II

6. There is the collaboration argument, and there is access-your-data-from-everywhere, without juggling with USB sticks, external hard disks, and applications like GoodSync Portable - and, I'm trying to be objective, there is a data-loss problem here too, and thus a what-degree-of-encryption-is-needed problem: Your notebook can be lost or stolen, and the same goes for these external storage devices. But let's assume the "finder" / thief here will not be the NSA and, in most cases, not even your competitor, but just some anonymous person who dumps your data the moment it's not immediately accessible; i.e. here, except for special cases, even rudimentary encryption will do.

7. I understand both arguments under 6, and I acknowledge that cloud services offer much better solutions for both tasks than you can obtain without them. On the other hand, have a look at Mindjet (ex-MindManager): It seems to me that even within a traditional workgroup, i.e. collaborators physically present in the same office, perhaps in the same room, collaboration is mandatorily done via cloud services and can't be done just by local workgroup means / cables - if this is correct (I'm not certain here), it is overly ridiculous or worse, highly manipulative on the part of the supplier.

8. Whenever traditional desktop applications "go cloud", they tend to lose much of their previous functionality in the process (Evercrap is but ONE such, very prominent, example; there are hundreds), with arguments like "we like to keep it simple" and similarly idiotic excuses. And even when there's a highly professional developer, as in this case with Neville, it seems that the programming effort for the cloud functionality at least heavily slows down any traditional "enhancement" programming, or even the mere transposition of the functionality there has been - of course, how much transposition is needed depends on how much cloud will be in the future of that particular software. As a general rule, though, users of traditional software going cloud lose a lot of functionality and/or have to wait years for their software to recover afterwards from this more or less complete stalling of non-cloud functionality. (Hence the "originality" of Adobe's CS "cloud" philosophy, where the complete desktop functionality is preserved, the program continuing to work desktop-based, with only the subscription functionality (or some additional collaborative features too, hopefully?) handled cloud-wise.)

III

9. Surfulater is one of the two widely known "site-dumping" specialist applications out there, together with the German WebResearch, the latter being reputed "better" in the sense that it's even more faithful to the original for many stored pages, i.e. "difficult" pages are rendered better, and in the sense that it's quicker (perhaps especially with such "difficult" pages), whilst the former is reputed to be more user-friendly in the everyday handling of the program: sorting, accessing, searching... whatever. I don't have real experience (i.e. beyond short trials) with either program, so the term here is "reputation", not "the facts are...". It seems to be common knowledge, though, that both programs do this web page dumping much better than even the heavyweights of the traditional PIM world, like Ultra Recall, MyInfo, MyBase, etc.

10. Whilst very few people use WebResearch as a PIM (but there are some), many people use Surfulater as a general PIM - and even more people complain about regular PIMs not being as good for web page dumps as these two specialists are. For Surfulater, there's been that extensive discussion going on at the developer's site mentioned above, and it seems it's especially those people who use the program as a general PIM who are most affected by their PIM threatening to go cloud, more or less, since it's they who would suffer most from the losing-functionality-by-going-cloud phenomenon described in number 8. Neville seems to reassure them that data will be available locally and via the cloud, which is very ok. But then, even Surfulater as it is today is missing much functionality that would be needed to make it a real competitor within the regular PIM range, and you'll be safe in betting on these missing features not being added at high speed anytime soon: the second number-8 phenomenon (= stalling, if not losing).

11. So my personal opinion on Surfulater and WebResearch is: why not have a traditional PIM, with most web content in streamlined form? I.e., no more systematic dumping of web pages into your PIM or into these two specialist tools, but selecting the relevant text, together with the URL and the download date/time "stamp", and pasting these into your PIM as plain text, which you then format according to your needs; meaning, right after pasting, you bold those passages that motivated you to download the text to begin with. This way, instead of having your PIM, over the years, collect an incredible amount of mostly crap data, you'll build yourself a valid repository of neat data really relevant to your tasks. Meaning, you do a first focussing / condensing of the data right on import.

12. If your spontaneous reaction to this suggestion is, "but I don't have time to do this", ask yourself whether you've been collecting mostly crap so far: If you don't have 45 or 70 seconds for bolding the passages that make the data relevant to you (the pasting of all this together should take some 3 seconds with an AHK macro if really needed - see the sketch right below - or better, via an internal function of your PIM, which could even present you with a pre-filled entry dialog to properly name your new item), you probably shouldn't dump this content into your PIM to begin with. Btw, AHK allows for dumping even pictures (photos, graphics) from the web site into your PIM, 1-key-only (i.e. your mouse cursor would be anywhere within the picture, and then it'd be one key, e.g. a key combination assigned to a mouse button), and eventually your PIM should do the same. Of course, you should dump as few such pictures into your PIM as is absolutely necessary, but technically (and I know that in special cases this would be almost indispensable, but in special cases only), it's possible to have another AHK macro for this, and your PIM could also easily implement such functionality.
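
A minimal AutoHotkey v1 sketch of such a clip macro, to make the idea concrete - the hotkey, the Ctrl+L address-bar trick, and the "switch to your PIM and paste" step are my assumptions, not a finished tool:

    ; Hypothetical clip macro (AutoHotkey v1): copy the selected passage,
    ; fetch the page URL from the address bar, add a date/time "stamp",
    ; and leave the combined plain text on the clipboard for the PIM.
    ^!x::
        Clipboard :=                     ; clear, so ClipWait sees fresh content
        Send, ^c                         ; copy the selected text
        ClipWait, 1
        selectedText := Clipboard
        Clipboard :=
        Send, ^l                         ; focus the address bar (most browsers)
        Send, ^c                         ; copy the URL
        ClipWait, 1
        pageUrl := Clipboard
        Send, {Esc}                      ; leave the address bar again
        FormatTime, stamp,, yyyy-MM-dd HH:mm
        ; plain text: the selection, then an URL + download-time line
        Clipboard := selectedText . "`r`n`r`n[" . pageUrl . " - " . stamp . "]"
    return

From here, Ctrl+V into the PIM entry; the bolding of the relevant passages then remains the only manual step.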

13. This resets these two specialists to their specialist role: dumping complete web pages in those rare cases where this might be necessary, e.g. for mathematicians and the like who regularly download web pages with lots of formulae, i.e. text and multiple pictures spread all over the text - but then, there are fewer and fewer such web pages today, since for such content most of them link to PDFs instead, and of course, your PIM should be able to index PDFs you link to from within it (i.e. it should not force you to "embed" / "import" them to this end). Also, there should be a function (if necessary, i.e. if absent from your PIM, by AHK - see the second sketch below) that does this downloading of the PDF and then links to it from within your PIM, 1-key style, i.e. sparing you the effort of first downloading the PDF and then doing the linking / indexing within your PIM, which is not only an unnecessary step but will also create "orphaned" PDFs that are not referenced from within your PIM.
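
And a similarly hypothetical AHK v1 sketch for that 1-key PDF step - the storage folder, and the assumption that the PDF's URL has just been copied (e.g. via the browser's "Copy link address"), are mine:

    ; Hypothetical macro (AutoHotkey v1): download the PDF whose URL is on
    ; the clipboard into a fixed folder, then put the local path on the
    ; clipboard, ready to be pasted into the PIM as the link target.
    ^!p::
        pdfUrl := Clipboard                     ; assumes the PDF URL was copied
        SplitPath, pdfUrl, pdfName              ; file-name part of the URL
        FileCreateDir, C:\PIM\pdf               ; assumed storage folder
        localPath := "C:\PIM\pdf\" . pdfName
        URLDownloadToFile, %pdfUrl%, %localPath%
        if ErrorLevel
            MsgBox, Download failed: %pdfUrl%
        else
            Clipboard := localPath              ; paste this as the PIM link
    return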

IV

14. This means we need better PIMs / personal and small-workgroup IMS / information management systems, but not in the sense of "do better web page import", rather in the sense of "enhance the overall IM functionality, incl. PM and incl. half-automated web page CONTENT import (incl. PDF processing)". Please note here that whilst a "better" import of web-pages-as-a-whole is an endless tilting at windmills that blocks the programming capacity of any PIM developer to an incredible degree, every one of them, and worse today than it ever did, such half-automation of content dumping / processing is extremely simple to implement on the technical level. This way, PIM developers wouldn't be blocked anymore by such never-ending demands (never-ending almost independently of their respective efforts to fulfil them) to better reproduce imported web pages (and to import them quicker), but could resume their initial task, which is to conceive and code the very best IMS possible.

15. Thus, all these concerns about Surfulater "going cloud", and then how much, are of the highest academic interest, but of academic interest only: In your everyday life, you should a) adopt better PIMs than Surfulater is (as a PIM), and then enhance, by AHK, core data import (and processing) there, and then b) ask for the integration of such features into the PIM in question, in order to make this core data processing smoother. Surfulater and WebResearch have their place in some special workflows, but in those only, and for most users it's certainly not a good idea to build up web page collections, be it within their PIM, or be it as presumably better-shaped collections within specialized web page dumpers like Surfulater and WebResearch, whose role should be confined to special needs.

V

16. And of course, with all these (not propaganda-only, but then, valid) arguments about cloud for collaboration and easy access (point 6), there's always the aspect that "going cloud" (be it a little bit, be it straightforwardly) enables the developer to introduce, and enforce, the subscription scheme he yearns for so much, much better than would ever be possible for any desktop application, whether it offers some synch feature on top of being desktop or not. Have a look at yesterday's and today's Ultra Recall offer on Bits (i.e. BitsDuJour): Even for normal updates, they now have to go the Bits way, since there's not enough new functionality even in a major upgrade, so loyal, long-term users of that program mostly refuse to update at half price (UR Prof 99 bucks, update 49 bucks, of which, after the payment processor's share, about 48.50 bucks should go to the developer), and so some of them at least "upgrade" via Bits, meaning a prof. version at a starting price of 39 bucks, of which 19.50 go to Bits and 50 cents or so to the payment processor, leaving about 19.00 bucks for the developer. It's evident that with proper development of the core functionality (and without having to cope with constant complaints about the web page dump quality of his program (Edit: prices corrected)), the developer could easily get 50 bucks for major upgrades of his application, instead of dumping them for 19 bucks: And that would mean much more money for the developer, hence much better development quality, meaning more sales / returns, hence even better development...

17. As you see here, it's easy to get into a downward spiral, but it's also easy to create a quality spiral upwards: It's all just a question of properly conceiving your policy. And on the users' side, a rethinking of traditional web page dumps is needed, imo. It would then be a faux problem to muse about how to integrate a specialist dumper into your workflow: rethink your workflow instead. Three integral dumps a year don't ask for integration, but for a link to an .mht file.

18. And yes, I get the irony in downloading only to then upload again, but then, I see that's for preserving current states, whilst the dumped page may change its content or even go offline. But this aspect, over the policy of "just dump the content of real interest, make a first selection of what you'll really need here", could in most conceivable cases only prevail for legal reasons, and in those special cases, neither Surfulater nor WebResearch is the tool you'd need.

VI

19. As said, the question of "how Neville will do it", i.e. the distribution between desktop (data, data processing) and cloud (data, data processing again), and all the interaction needed and/or provided, will be of high academic interest, since he's out to do something special and particular. But then, there's a real, big problem: We get non-standardization here, again. Remember DOS printer drivers, as just one everyday example of the annoyances of non-standardization? Remember those claims, for this program and that, "it's HP xyz compliant"? Then came Windows, an incredible relief from all these pains. On the other hand, this buried some fine DOS programs, since soon there were no more drivers for then-current printers, among other problems; just one example is Framework, a software masterpiece by Robert Carr et al. (the irony here being that Carr is in cloud services today).

20. Now, with the introduction of proprietary cloud functionality, different for each of the many such applications going cloud today, we're served the pre-Windows incompatibility chaos again, instead of being provided with more and more integration of our different applications (and in a traditional workgroup, you at least had common directories for linked- and referenced-to shared files). That's "good" (short-term only, of course) for the respective developers, in view of the subscription advantage for them (point 16), but it's very bad for the speed of putting into place a really integrated workflow, for all of us; i.e. instead of soon being provided a much better framework for our multiplied and necessarily more and more interwoven tasks (incl. collaboration and access-from-everywhere, but not particular to this application or that), for which, from a technical pov, "the time is ripe", we have to cope with increasing fractionization into whatever particular offerings developers in search of a steady flow of income (cf. the counter-example in point 16, and then cf. Mindjet) think are "beneficial" for us.

All the more reason to discard any such proprietary "solution" from your workflow when it's not necessarily an integral part of it. Don't let them make you use 5 "collaborative" applications in parallel just because there are 5 developers in need of your subscription fee.

Paul Keith:
Point 11 can be difficult, because that is precisely what many people do and did, but also what Evernote and Surfulator (my apologies - I know you hate that spelling, but I gave my explanation in the previous thread for why) market and position as something you do not need to do.

The question is not "why not", but whether Neville will see profit in doing this, when Pocket is a much better-known service than Thinkery.me.

These have no desktop equivalents, so I hope you do not see this as a direct overall comparison to Surfulator, but more as a way for you to realize that what you are talking about has already happened, and that it has not only failed, but that Neville's current development, like most current software development processes, simply does not work on a genie mentality, nor even an agile mentality, but on a tracker mentality.

Quick links:

https://getpocket.com/

http://thinkery.me/

(As you can see above, thinkery.me is even superior at providing a free, no-registration demo, but Pocket is highly advertised even outside of the blogging community.)

Beyond these, you can also check out these two Firefox add-ons:

https://addons.mozilla.org/en-us/firefox/addon/scrapbook-plus/

https://addons.mozilla.org/en-us/firefox/addon/grabmybooks/

They exist. They are praised by people who try them.

The problem is what product developers fail to realize: you need marketers to make people care more about your products than about the competition - in favor of both the customers and the devs themselves, whether they realize it or not.

In the heyday of free software, this simply meant: "you need to share more with the public, so that they realize the capability exists, that they want that capability, and hence that they want not only that software, but also to feel they own a piece of a one-of-a-kind software that only a few others know of and benefit from".

Unfortunately, that's how the software-providing industry has been trapped ever since.

While Evercrap, as you say, is smart at marketing and at partnering with others, every other software developer tries to ride on the coat-tails of the desktop... then later the cloud... then later the tablet market.

Even those who do marketing ride on the coat-tails first of blogs... then of fad blog words... then of scam-site blog concepts like product launches... then of... nothing. Just more Facebook, just more tweets, just more content marketing, just more stuff inferior to Evercrap, while Evercrap improves by partnering and by riding on the coat-tails of Moleskine, Samsung Galaxy Notes, printers, bloggers, etc.

It may seem like I'm jumping off-topic, but this is the great dilemma marketers have in delivering marketing. It's simply easier to trick customers and clients than it is to tell customers: what you want is in this name and not in that name.

It's also easier to tell a dev "I want this feature" than for the dev to say: I want to hire a system or a person that will open up this feedback - not just present it as a list of features, nor be the person that makes my decisions for me, but someone who can transform it and make me not only want to do this, but make the customer feel as if I've offered something that is not only exactly what they want, but even better than what they want.

That's the first part of the overall reason why it's not done this way; the second part is that most individually skilled devs with their own businesses simply do not realize how much spec-writing management matters for the sequel of the software, including software updates, and also how much they could boost their customer base if they priced, sold, and cliche-market-tactic'd their pitch not at the product, but at the SCRUM-based spec development stage of their pitch.

Quick link:

http://programmers.stackexchange.com/questions/40536/spec-writing-management#40538

I do want to add an important disclaimer, though: I am not a marketer, nor am I a coder, nor am I passing down expert knowledge.

I am not saying I can do better than developers; I am simply saying developers tend not to want to do better than the Evercrap developers, when they could be far superior. Developers are developers often because they simply want to develop, and they treat everything else as a second-class citizen, especially when they can already profit well from a piece of software, and especially when they can already satisfy their needs with the default software and are simply improving the model as an upgrade.

The truth is, they simply do not care about the concept, unless you can prove to them that you are a gazillionaire who will pay them this amount of money if they prioritize this feature, while they let all the other stuff, like sales, website design, etc., flow towards someone who's a better salesperson than the dev is.

This creates two negative blockades against following through with your point:

1) People are raised to believe the fluffiest part of marketing, and so they create a self-fulfilling magnifying lens: they not only give more respect to the fluffy, trendy competitor, but even in cases where they slow down and claim to do the right, slow aspect of marketing, they simply connect the finished product's features with the accounting aspect of marketing - which is its weakest part, and something accountants or even number-crunchers can do, so long as the product is already great and functioning. That, of course, in the hands of a great developer, guarantees that the customer will want the product, and all the marketing has to do is the sale itself; the sales numbers then self-fulfill the direction the dev wants, in the form of the customers who have come forth - which, over time, will be the sound the dev hears most, rather than the wider, unheard opportunities provided by the free software that also gains some rumblings but, due to a missing part or a lack of speed in updating, could not yet gain a much larger piece of the pie and become a notable competitor.

2) The second blockade is that even with a constantly replying dev like Neville, you create a negative customer base, who, having been slighted by other products, view Neville's transparent attitude as a godsend and not as a base requirement.

This is good for the product in the short and mid term, and even in the long term provided the profits keep coming in, but over the long term it does not build a slope towards the concept; it builds the concept towards the sale. When things build up towards that paradigm, feedback goes through a natural process of becoming more about customer service and dev listening than about concept manufacturing.

What happens then becomes a case of eating one's own tail, where the exceptions who do get this point excel, and, through their success stories, the idea then becomes some form of elite marketing rather than regular marketing. With this comes a changed baseline that accommodates not only the inferior marketing, but also the inferior success of the inferior products (or of the superior but more heartless products).

When you add that certain people just do not get the concept of the internet, much less the structure behind web pages to begin with, something like "why does Dropbox succeed over their competitors" - which can be obvious to both a dev and a non-dev - becomes a sort of secret recipe to both groups, as opposed to a clear observation of how they simply focused on delivering a concept rather than delivering a feature; and that concept is what keeps them ahead, not just in delivering and marketing the same features, but also in pricing their services the way they do, due to exclusivity.

It's not even a concept that started with marketing. It's a concept that goes back to why competition is good for improving things. When a person is way ahead in the race, they can slow down and even score a few extra naps, or in this case bucks. When a race is close, the temptation shifts from sales to getting ahead.

The only difference with software development is that, first, devs do not like to compete in concept development. Their race is found in delivering the most unique or the most useful features, rather than in building the Porsche of programs.

Add the complexity of software development, along with the basic tutorial advice for getting a piece of software off the ground as a beginner (the whole "start small, abuse plugins, don't reinvent the wheel" thing), and even those who work on a Porsche and finish it don't focus on the Porsche. They tend to focus on the theme of the software, which is why it leads to an EverCrap.

Since the line has been moved to accommodate these lower expectations, and since the line for successful reception has been raised due to the rapid rise of technology, it's no surprise not just that software ends up getting ahead, but that software ends up getting ahead through the formation of EverCrap rather than through the delivery of concepts truer to the heart.

This is further compounded by the fact that for software, simplicity is good. And if simplicity is good, then what incentive does a developer have to work towards usability, if good usability can simply mean offering customers the latest technological fads: tags, GTD-specializing needs, web clipping with buttons for sharing, clones of common cloud-style interface toolbars? Even devs who want to buck the trend don't fully realize their own irrationality about these concepts - first hating the Ribbon, and then later not just liking but advertising the Ribbon through word of mouth while thinking, inside, that they're just sharing their viewpoint.

It's so easy, so why would they care about the concept at all, especially if it's not their concept but your concept?

Again, it's not just "why would they care about losing a buyer", but rather "why would they care, when you'll buy the software you are offering concepts on anyway?"

People hated Vista; a name and skin change later, with some predictable maintenance, even smart techies not only don't hate Vista anymore, they love Windows 7. And you're a dinosaur for sticking with WinXP.

This is not to say all devs think like a huge corporation like Microsoft, but all devs have this in the back of their minds whenever they're upgrading features of a piece of software and whenever they are marketing it. It's just too easy to fall into.

You're not a developer, or not a good enough developer, to prove their profitable idea wrong; so to you the concept seems like a "why not", while to them the concept seems like a huge time sink, when what they are selling is a web clipper.

...and mind you: neither I myself nor you yourself would view these concepts as a "why not" once we actually had to implement the feature. It's easy for you to type "PIM", and even if we say it's easy for them to vomit up a PIM at a thought's notice: what interface do you like? What do you yourself actually want to have, and what are you willing to waste years of your own life creating?

Again, this is not "pity the developer". This is "experience the pain of the developer".

When you can experience it, point 12 is not only less applicable from a manufacturer's mindset; it weighs on you hour after hour until it no longer feels notable. You're no longer thinking "AHK macro"; you're thinking "how do these dumps translate into the concept", and as you grow more tired, you stop thinking of it altogether, and the feature becomes a simple plug-in dump. You say it's so easy to do with AHK - then AHK it.

The problem is, this is where the heart of the developer is more important than the heart of the remarker. This is the meta-reason why customers tend to be wrong and are not meant to be listened to.

Nowadays, popular web articles like to excuse this by saying that customers who become complainers are a poor metric, because the ones who like the product tend to be silent until a problem arises - but in truth it does not matter.

The problem is that even those who can empathize with the actual dev tend to throw out remarks, and with the ease of online communication, remarks can seem more profound than they truly are.

At the first bridge, you might not be listened to because the developer might view your suggestion as a remark rather than a suggestion; at the second stage, however, it is more often the one suggesting - i.e., your AHK example - who is least interested in improving the concept and more interested in remarking on a feature, because you just want it released, whereas the devs don't want it to be this way, or else they would simply have allowed a plugin to do it.

This is the paradox of point 13, and it's often why, over time, internal and online feedback tend to fall apart.

You started out wanting to improve the concept; you ended up resetting the concept.

It's not so bad now, because it's just a post, but days and weeks and stress pass by, and you won't even have much desire left for delivering the concept. You just want to deliver some feature; and then, if that fails, you reset the feature to an inferior form; and then, when that fails, point 13 is no longer even concerned with chapter I.

It does not matter whether you even intended them to be connected; the issue is rather that there's only one piece of software, but you want this one piece of software to have 3-4 different stories, and yet you want it to stay consistent with all those stories. If you actually have to develop these 3-4 different stories, that's when point 13 starts to connect to chapter I and hurt both, because the inconsistency of point 13 with chapter I's goals ends up morphing point 13 into a virus against chapter I, even if they were not meant to deal with the same subjects. Again: because there's only one piece of software.

As you are not yet being broken down by the demands of the hourly development stage, it's easy to make point 14 seem like a reasonable conclusion, when in fact you are essentially sending the message that Surfulator's task is not even to build upon Surfulator's previous mechanics, but instead: "let's just randomly add all these different dynamics without specifying them".

By specifying them, I mean narrowing them down to a site and a certain concrete feature, so that the developer no longer has to guess what you mean and can compare it with what others want - because right now, even as a code-ignorant person, I understand what you're getting at, but I'm also reading "screw my needs, just follow your needs".

It's not wrong in that everyone thinks like that; it just does not scale to the concept. You are essentially writing a long post in which you think you are saying you are concerned for the concept, but when it's time to develop the concept, it all reads: you want people to follow only your feature.

This is not to say you are being rude, but for a person who knows how to code and understands AHK, you are offering a dev-ignorant suggestion with some merit, when, judging from the tone and effort of your post, what you wanted was to offer a suggestion full of merit through your own knowledge of development.

This is also not to say Neville won't consider your suggestion, but the question is: will he consider the dilemma of others who hold the same suggestion as you do, but do not want the process to be the same as you do?

This also does not mean Neville can provide exactly what would convince you to acquire and support the product for eternity, even as it increases in price, simply because you're offering a mock-up problem without a mock-up - so even the button and the hotkey are essentially guesswork, unless you happen to also be working in Neville's business, or you had sex with him and he wants to develop solely for your needs without considering the needs of others... including what specific color you want Surfulator to be by default.

Point 15 is the same as point 14, only you are essentially saying Surfulator is a bad product, and that users who think Surfulator is a good product should not be convinced it is a good product, because such products belong only in a specific workflow - when in fact it's the opposite.

Other clippers like Evernote belong in a specific workflow, because I just can't be sure they clip the web well; what Evernote creates is still some copy that is essentially a bastardized save-as-mhtml plus auto-Dropbox.

Surfulator just clips. You'd be surprised how many people secretly just want that - including you.

Unfortunately, these programs don't quite "just clip", so we have this gamut of concepts being thrown around.

This is not so much a problem as it is a non sequitur, and a sadly timed one, as no one wants to read your post only to get this near the end.

IMO you're better off deleting it.

Data storage is data storage, and the cloud is the cloud, and the thread title is "On (not off) applications going cloud and on (not off) data storage going cloud".

I don't mean this in any antagonistic way, despite how it sounds. Simply that you can't be anti-the-concept if you are for the concept. It would actually keep you from presenting your concept.

As a horrible communicator myself, believe me, it can be tough to know the difference.

It's like offering everything you think you can to a reader, and the only thing the readers can see is that it has no pictures and that it's too long, before they go jumping jack on what you say.

It's tough not only because you have a hard time knowing it, but because sometimes, even if you know and did provide the difference, you find out that by adding images, not only do people sometimes feel you now have to add something else besides that, but you recreate your message into something less than what you intended to write.

In this case the problem is that it's holding you back from presenting your case.

Point 15 has substance, and its substance is built from the previous points, but by weighing it down with who's an academic and who's not, and which entity is which and which entity wants what, you kept your own concept from growing.

People who have read your post to this point don't want "of course". If it's "of course" to you, then it's "of course" to them at this point. They want your opinion on data storage and applications going cloud. Save points 15-20 for a thread or a section called "The Dark Side of Desktop Web Clippers Going Cloud". The reader still hasn't gotten to the center of your previous points, nor of your entire thread. Give them that. Unleash the content if you are going to write this much.


helmut85:
ad 11 / 12 supra

Sometimes, some things are so natural to me that I inadvertently omit mentioning them. In the points above, I presented my very exotic concept of stripping web pages that most people would download whole instead (hence the quality problems they face in non-specialised PIM software, i.e. anything other than WebResearch or Surfulater or similar).

Above, I spoke of condensing, by making a first choice of what you clip to your PIM and what you omit. I also spoke of relevance, of bolding, and of underlining, i.e. of bolding important passages within the unformatted text, then underlining the even more important passages within these bolded passages; and of course, in rare cases, you could even use a yellow background color and such, in order to highlight the most important passages within these bolded-underlined parts of your text.

I should have mentioned here that this "first selection" almost never leads to "passages that are not there": in years, I never had a situation where I would have wondered "wasn't there something more?" and would have had to go back to the original web (or other) page in order to check, and download further, hoping it was still there. So this is a rather theoretical situation, not really to be feared.

Of course, whenever in doubt, I download the whole text; hence the big utility of then bolding passages and perhaps underlining the most important keywords there.

But there is another aspect of my concept which I have neglected to communicate: annotations in general. For PDFs, many people don't use the ubiquitous Acrobat Reader but (free or paid) alternative PDF readers / editors that allow for annotations, very simple ones or more sophisticated ones, according to their needs.

But what about downloaded, original web pages, then?

Not only do you download crap (alleviated perhaps by ad blockers) around the "real stuff" there, but this external content also stays in its original form, meaning that whenever you re-read these pages, you have to go through their text in full in order to re-memorize, more or less, the important passages of the content - let alone annotations, which in my system are also very easy: I enclose them in "[ ]" within the original text, sometimes in regular, sometimes in bold type.

So my system is about "neatness", "standardization", "visual relief", but its main advantage is that it allows me to re-read just the formatted passages when re-reading these web pages in a work context, just as many people do with their downloaded PDFs. Now compare that with downloaded whole web pages: it's simply totally uneconomical, and the fact that, out of 20 people, perhaps 19 do it this way doesn't change this truth.

So, "downloading web pages" should not just be about "preserve, since the original content could change / vanish", but it's even more about facilitating the re-reading. (Of course, this doesn't apply to people who just download "anything", "in case of", and who then almost never ever re-read these downloaded contents, let alone work with these.)

Hence my assertion that software like Surfulater et al. is for "web page collectors", but of not much help in real work. I say this under the provision that these programs, just like PIMs, don't have special annotation functionality for the web pages they store; if I'm mistaken about this, I'd be happy to be informed, in order to then partially revise my stance - partially, because the problem of lacking neatness would probably persist even with such PDF-editor-like annotation functionality.

And finally, I should have added that I download tables as rectangular screenshots, and whenever I think I'll need the numbers in some text afterwards, I also download the chaotic code of the table in order to have these numbers ready - in most cases, I just need 2, 3, 4 numbers there later on, and then copying them by hand from the screenshot is the far easier way to get them into my text. (For pages with lots of such data, I do an .mht, just in case. We all know that "downloading tables" from HTML is a pain anyway if you ever need lots of the original data, but if you do, and frequently, there is special software available for this task.)

Ath:
TL;DR;

Shouldn't you guys be writing a blog (or 2) or something? :huh:

helmut85:
Radio Erivan to Ath: You'd be right, at the end of the day. But these are only the appetizers.

Or, more seriously: I'm always hoping for good info and good counter-arguments, both for what's expressed in the saying "Defend your arguments in order to armour them", and for finding better variants; and there are better chances of this happening in a well-frequented forum than in a lone blog lost out there in the infinite silent web space (Kubrick's 2001: A Space Odyssey, of course).

In the end, it's a win-win situation, I hope - or else my arguments must really be as worthless as some people say.

Since nobody here's interested in French auteur cinema, something else: Today they announced the death of the great Michael Winner, of "Death Wish" fame, and somewhere I read that back in 1974, advertising for this classic included, among other lines,

"When Self-Defence Becomes a Pleasure."

Can anybody confirm this? (It's from Der Spiegel, again, in German: "Wenn Notwehr zum Vergnügen wird." - so perhaps they had this one-liner only in Germany?) Anyway, I have to admit I couldn't stop laughing about it, and it pleases me so much that it'll be my motto from this day on.
