Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - ital2

76
General Software Discussion / Re: On software pricing
« on: May 13, 2017, 03:47 AM »
(second of 2 posts immediately following each other)

InfoSelect would have been a good example, though, since it was one of the very first full-fledged text databases, and thus, in its time, it must have been something quite special ("empowerment" and all): it was there early, with little competition. Similar for askSam, which targeted more or less the same market, perhaps with a little more weight on the "archiving" side (imported documents; it was also the perfect CRM database for people who didn't want to invest in database programming: "fields"), while InfoSelect was marketed more as a personal information manager.

Both programs came with high prices at the time, with askSam within the then-usual 1,000$ price range (see above), and while I don't know the InfoSelect price of the time, it's still around 300$ now (or is it down to 249?), which doesn't convey any exclusivity anymore but just makes many people shake their heads in disbelief (askSam is defunct): the degree of technical superiority hasn't been upheld accordingly.

Also coming to mind, though I may be mistaken here: dBase, the expensive program for "experts", vs Paradox, the "cheap" program for people who were less so?

Anyway, it's evident that today, TheBrain plays with exclusivity, the notion of "not being for everyone, but for sophisticated customers", which, when successfully communicated, allows for higher pricing.

As said, the promise is about more complete and more immediate access to all your stuff, via allegedly plastic/flexible access and display, with free, immediate interlinking, while the cheaper "competitors" (which all compete more within their group than with TheBrain: this is a core aspect of the exclusivity concept) are built on trees, more or less like concrete, and then the better ones must superimpose the concept of "cloning", which makes those trees a little more flexible.

Another way developers try to break up the inflexibility inherent in trees is to replace the tree with tagging (CintaNotes, Clibu), where then, quite ironically, and in order to preserve the accessibility of the material as the element count rises, a tag tree is put up; but this thread is not for prematurely ridiculing tag trees as a possible dead end. CintaNotes Professional now allows for inspecting how such a tag tree (not only for sub-tags (tag hierarchy), but also and in particular for tag combinations) works in practice, and the developer of Clibu delves deep into the arising problems, too, from beta to beta.

But there is a problem with trees indeed, and I think the solution comes from avoiding trees that are not plastic, not just virtual and ephemeral, altogether, i.e. from a rethink of the link paradigm. A traditional tree is nothing but ONE basic form of link: unidirectional links meaning parent-child ((possibly multiple and equal) subordination links in each parent: "has as immediate child"). Even "siblings" are nothing but elements which have the same ancestors, in exactly the same lineage (path identity), and their order in the list (which is also present in mindmaps and in TheBrain, just not so prominently displayed) is then determined by the order in which their links appear in their parent. If you had unidirectional links in the other direction ("has as parent"), the order would be lost, and you would have a classic tag tree; you can of course introduce more metadata from which the order can then be re-established.
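
To make that link-direction point a little more concrete, here is a minimal sketch in Python; the item names and structures are invented for illustration, not taken from any of the programs discussed:

[code]
# A classic tree is just ordered "has as immediate child" links, while the
# reverse direction, "has as parent" (a tag/category assignment), carries no
# sibling order of its own; extra metadata (an explicit position) has to
# re-establish it. Names are illustrative only.

from collections import defaultdict

# Parent -> ordered list of children: sibling order is implicit in each list.
children = {
    "root": ["Projects", "Notes"],
    "Projects": ["2016", "2017"],
}

# The same structure expressed only as "has as parent" links (a tag tree):
parent = {"Notes": "root", "Projects": "root", "2017": "Projects", "2016": "Projects"}

rebuilt = defaultdict(list)
for child, p in parent.items():
    rebuilt[p].append(child)      # nothing here records the original sibling order

# Additional metadata, e.g. a stored position per link, re-establishes it:
position = {"Projects": 0, "Notes": 1, "2016": 0, "2017": 1}
for p in rebuilt:
    rebuilt[p].sort(key=lambda c: position[c])

print(dict(rebuilt))   # {'root': ['Projects', 'Notes'], 'Projects': ['2016', '2017']}
[/code]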

But this is a thread about software pricing and not about links and trees being too basic; just let me say here that trees should be a display, not a storage format, for knowledge bases (I'm speaking of the metadata concepts within the underlying relational database here; I know that technically they aren't tree databases anymore), all the less so since the full tree will rarely be of use, so it's a conceptual error to build it up to begin with. But that's another subject, also with regards to current file systems on the Windows front.
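
And a hedged sketch of what "tree as a display, not a storage format" could look like on top of a relational store; this is not any particular product's schema, just an illustration with invented table and item names:

[code]
# Items and links are plain rows; a "tree" is only materialized on demand,
# and only for the branch you actually look at.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE item (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE link (
        parent INTEGER, child INTEGER, pos INTEGER,   -- pos keeps sibling order
        PRIMARY KEY (parent, child)
    );
""")
con.executemany("INSERT INTO item VALUES (?, ?)",
                [(1, "root"), (2, "Projects"), (3, "Notes"), (4, "2017")])
con.executemany("INSERT INTO link VALUES (?, ?, ?)",
                [(1, 2, 0), (1, 3, 1), (2, 4, 0)])

def branch(item_id, depth=0):
    """Render one branch as an indented list; the 'tree' exists only here."""
    title, = con.execute("SELECT title FROM item WHERE id = ?", (item_id,)).fetchone()
    print("  " * depth + title)
    for (child,) in con.execute(
            "SELECT child FROM link WHERE parent = ? ORDER BY pos", (item_id,)):
        branch(child, depth + 1)

branch(1)   # only this branch is built; the full tree never needs to exist
[/code]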

Anyway, TheBrain conceptually already does a little more than some traditional text databases do and is thus able, for the time being, to sell for 250 to 500 per cent of their prices, though I cannot identify the relative weights of this factor and the "it's an alternative, graphic display here" one, the latter also being sort of evidence for the former and thus reassuring the customer: it's not only "pretty", but it's also "proof" of alleged superiority, looks aside.

For DO, my other example of choice here, it's quite similar: there, too, much visual plasticity is integrated which is not available (to that degree at least) from its competitors, and here again, that's a promise of both technical superiority and better, more complete, easier, more immediate access, in short:

A promise of being in better control - but without it becoming too demanding for that. I think herein lies the secret, or at least this should be the main element of several playing together.


See the original thread https://www.donation....msg408619#msg408619 (Navicat Review) from which this is a spin-off, and the spin-off of this thread here, https://www.donation....msg409068#msg409068 (How NOT to conceive trials (and some new ideas about them)).


EDIT June 10, 2017: Too many giveaways for a given software
In the "Trials" thread, I also spoke of the combination paid software vs freeware versions of the same software with the respect to trial design. Here with regards to software pricing, freeware and giveaways (which are not the same marketing means of course but which sometimes go hand in hand), it's again worth discussing.

My example today is Zoom Player, which sells for 40$ (or with lifetime updates for 100$); it's regularly on "sale" sites for much less, it has a freeware version which is available all year round, and in particular, its paid version is regularly on giveaway sites, you guessed it, for free, sometimes limited to 100 licenses, and most of the time without such restrictions. So I now proudly own a permanent license, while the last time, the time before and the dozens of times I could have downloaded it for free from somewhere, I didn't even bother to do so: the frenzy with which it had been thrown at anyone who wanted it had sufficiently devalued it in my eyes; one day a year is probably ok, but every three weeks or so, come on!

But then, this software has deep problems, even independently of its kamikaze marketing: somewhere, they say, "Its GUI has been developed with the non-techy user in mind." (citation from memory), and indeed, its GUI is quite terrible, not only in the free but also in the paid version, and 40$ is not nothing, so it had better offer the standard functionality the competition has got, too.

Have a look at the free vs paid versions comparison table: http://www.inmatrix....layer_download.shtml - wow: That's a lot of functionality, on paper, or on screen!

Also, in the settings menu, there is some "advanced" option, and then, the same settings menu gets "on steroids", to employ that terrible expression you encounter almost daily with respect to software nowadays.

So we've got some contradiction between the 13-year-old's GUI and the "hidden powers", on top of the fact that there are free alternatives, like VLC, and there are more.

And, as far as I have tried it out, Zoom Player (paid) comes with a Trump mode: it doesn't deliver ("Trump mode" isn't my coinage, I just like it so much). For example, for DVDs, in the "advanced mode" of the settings, you can opt for a preferred language, English (or some other), for the sound track, the subtitles, the DVD menus, but:

- just 1 choice; NOT "original language if English or Norwegian or Italian" or whatever, NO second choice for an original soundtrack in Norwegian or Italian (or whatever), NO second choice for subtitles (first choice English, if not available: Norwegian, let alone some third choice: if not available: Italian) or whatever: so, if you regularly watch films from 2 or more countries, there is no way of presetting the original languages and preferred subtitles in order of precedence;

- it doesn't even work (Trump mode); instead of English (which is available on these DVDs), it falls back to some other language, so it's really, really bad.

There are NO per-DVD language settings in this program, as there are in ANY real competitor, i.e. in paid video players; in WinDVD, for example, I press a for the sound track and s for the subtitles.

Oh, but there probably are, if you download and install some additional filters; just look into their forum, from 2004 on. But sorry, I'm too dumb to install all this, and then I don't know how to choose the language for a given DVD: I want to see the film, not fiddle with settings for 15 minutes every time, and I didn't find the commands for variable fast forward and all that either; the GUI's just too primitive.

So Zoom Player (paid) probably is a product with no market, since as soon as you pay, there are probably much better tools around, and its special functionality is VERY special - it has got an API, but that probably doesn't mean you just pay the developer 40$ and can then distribute the embedded player in your own software; of course, some people who make use of its special capabilities will buy, probably pay 100 and are done with it for all time.

And other people will continue to use the freeware version or get the paid version on those roughly 12 occasions per year when it's free.

So my guess is, you only give away paid software for free with such regularity if you're really desperate and have built a piece of software which is not coherent at all. To the developer: turn "free vs 40$ but free 12 times a year" into "free vs 20$ year-round" and discover that you'll get much better results, and rename it "Quirky Player" - no, no, the latter suggestion's just a joke.

Yes, I know, "Americans don't need other languages" - but is that correct?

When other video players do the settings per DVD and you do it within your general settings, why don't you do it in a meaningful manner, as described above? Even in the U.S., a choice of English or Spanish first, then English OR Spanish subs and menus, would be helpful, let alone in Canada with French (3 choices, in order of precedence), and not speaking of the rest of the world.

Thus: whenever you do pricing, discover your market(s) first, and think about your software and whether it appeals to its possible market(s) in its current state; if not, amend your software. (What I would do: I'd downright split Zoom Player into two different programs, with quite a different GUI for the "professional" version (assuming here that the advanced features it must have, and which I was unable to discover, are of real use for some, that is) and some 20$ enhanced version.) And: don't give away your software so often (if at all) that anybody remotely interested in it will have plenty of occasions to get it for free. Well, that's so basic I'm almost ashamed of putting it down, but then, it's as obvious as them doing a 10-day trial for software that can seriously be trialed only after many weeks of basic (free) use.

EDIT June 11, 2017: As for ridiculous pricing, see my today's add-on over at "Software Trials" (link above): the player software "PowerDVD" is currently available at half price again, but according to my observations - or should I rather say impressions, since admittedly I don't check daily? - that's the case about 3/4 of the year (with that one and its siblings, link over there in the thread linked here - it's just the percentages that vary a little bit, here and there), and those few people who really buy at full list price, not knowing any better, get all my sympathy. It's like those Persian rugs at "85 p.c. off", where the "85 p.c. off" price is the expected price and probably three times too much paid, but what do I know.


EDIT June 13, 2017: TheBrain
More on TheBrain (in general and on its pricing) over there in the "Trial" thread (incl. external links); also, my stance on TB expressed there is more balanced and more detailed; my wording above about it "not being functional" was way too sloppy and not correct in its acrimony.

77
General Software Discussion / Re: On software pricing
« on: May 12, 2017, 04:41 PM »
tomos, no pun was intended, and you're right about the knot. Again, I'm thankful for your personal motivational explanations; motivations are almost always entangled, not pure. No pun intended either with the following, which probably also plays into your persevering choice of DO over its competitors; and yes, people have the right to choose the software of their choice, I'm just interested in the possible underlying motivations, since I think coders could write better software if they fully understood them. So:


Is it possible to sell software by cool?

"Selling" meaning here selling more and/or selling at higher than regular prices; "cool" meaning by coolness, by a high quality image, by an image of sheer brilliance, of elegance, which of course should be conveyed by optimized user experience (interaction with the gui), but probably will be communicated by the graphical design.

I also think this was easier in the early days of Windows, since back then some applications came with graphic layouts which simply weren't available from the competition, for some time at least; for example, both Word and WordPerfect had lots of prestige over WordStar: at the time, they both came with so-called proportional fonts and formatting, when WordStar had no formatting yet and only a monospace font, on screen and on paper, which was even a big double step back behind what the dedicated text processors of the day were able to do (by the time WordStar caught up technically, it was far too late; that was another example of user experience, "prettiness", as a very big sales advantage); today I don't see many such applications with a superior coolness factor, nor do I see that it brings much money.

Again, DO comes to mind (price range above), and then again TheBrain, which is a graphical database and thus asks for 249$, while its competitors, which do without graphic representation but are more functional, sell in the range of 90-100$ or even less; another try at this had been some Brazilian wiki which sold for 120 or 130$, with some graphic representation; it's down to 40 now. It becomes evident that coolness is not a function of superior functionality alone and cannot even be reached by it if other functionality is under-developed, so that the application isn't that useful in the end.

TheBrain is not that bad an example for a certain degree of coolness; in fact its graphical representation promises easy interlinking even of remote elements, and there's a strong promise of usefulness in this, when in fact this application does not make it as easy as it promises, though it's certainly easier done here than in its list-based competitors; it's evident that they should allow for easy linking of the current element to any element in a list of search results, to ease this up considerably.

Anyway, I think the notion of a [b]promise to do something more[/b] with it than currently is a strong factor in cool: a promise of widening your current capabilities, even if that promise is not fulfilled by the design of the software since it then gets too complicated, and the availability of the important elements becomes an issue again. I repeat myself here by referring to my "Pulse" concept in my Navicat Review (link above), but I'm sure indeed that metadata, not only (relatively) stable metadata but also a sort of "semi-plastic" metadata-on-the-run (which means immediate on/off, and with the possibility to store, also in combinations), should play a prominent role in displaying data/elements.

In other words, it's the notion of power to / empowerment of the user which comes into play for a prospect to shell out considerably more money than for similar applications: the user pays more for the feeling of being in charge; things becoming too complicated will dampen this feeling, but it may be too late by then.

Thus, it's important for software to bring feelings of power-over-the-data quite early on in the trial period, in order for the prospect to buy the thing, even if later on they're let down by over-complication.

Two - quite different - to-do applications sell for more than some others, Swift To-Do List and MLO; it's not about more functionality, it's about the user feeling in control of their things when using either of them. Compare with the equally and in part even more powerful TaskMerlin, which in practical use is a complete mess; sorry to say this, but I tried them, and I fully understand why the other two are so much more successful, at least that's what I suppose from their apparently large user bases mentioning them quite often.

So it's about feeling good again, but here again, it's Microsoft who turn the market upside down, with more and more people now using OneNote (which is available for free) and which has got some additional functionality that comes in very handy, so I'm not sure about the future of TheBrain and other text databases, and of its future as a premium offer (249$), all the less so when I think about how it will conveniently display data on a smartphone (or a tiny tablet); "lists are more mobile", if I may say so.

In the early days of mindmap programs, mindmap programs in themselves were something prestigious; the same was true for flowcharters. The latter are an almost defunct software category by now, and mindmaps don't live out their old age really well either. Here again, Microsoft absorbed most of these programs' remaining use cases with their ubiquitous presentation software.

At this point in time, it's doubtful whether any desktop application can build up prestige, since the sheer fact that it's not available out of the office destroys any prestige it could have striven for by other means. Also, even if you do not need the data (to that extent) on the road, the user's sense of "control" is damaged if they technically cannot access their material from mobile devices: the absence of the possibility, should it ever be needed, devalues the software, since that absence limits and thus devalues the user. (It's only in the rare cases where the data is simply never used on mobile devices that this does not apply: here again, Adobe are very lucky, since they would have a big problem did their customers wish to make mobile use of their applications.)

Also, I'm positive that in the end, any attempt to choose which data you will need to access from mobile devices and which data can stay in the office will fail, so cloud storage, and where needed, private cloud storage, will eventually take over, and this brings a new aspect to the notion of "elegance": elegance of access: speed, completeness, economy within the limits of the lesser mobile devices (less speed, less screen estate, less keyboard use), in a word, optimization within scarcity.

This possibly includes displaying data differently than on the screens in the office while nevertheless "having it all there", and without the user having to adapt too heavily; that will be the developers' challenge.

Did I mention that SalesForce charges around 20$ per mobile device per month for very basic functionality, and that their prices reach a whopping vicinity of 200$ per mobile device per month for the full functionality set? I cannot judge the price/value of this, but it becomes evident from this example that some software makers now try to build up price value from a notion of "complete access", while it should go without saying that in 2017 and beyond, complete access must become the common ground, the condition every modern software must fulfill; only then can exceptional user experience, through ease of use, justify some higher price. Prices nearing 200$ a month per mobile device seem outrageous, and it's then of interest whether such corporations try to hold data hostage, for example via weird formats, since it's obvious this market of the future holds a lot of opportunities for being cheaper.

This is not a contradiction to what I've said above: when neither price does real harm, being cheaper in itself is of no value, but if the market leaders practice prices out of reach, there's plenty of room to entice customers away, where that's technically feasible, for those customers that is.

From what I see, PC software has never really been cool then, and few applications for the general public have succeeded, and then only in a quite limited way, in positioning themselves above their competition. I'm sure this will change for web applications; it's quite a difference whether the price gap between two competing applications is 50$ every two years (DO vs its competitors) or 50$ per user and month.


It's ironic that while the cool factor never entered much into play for software, it's quite different for operating systems. I don't really know MacOS, but it has always been acclaimed as the far superior and "cool" system (it seems there's some "Finder", perhaps there are other superior things); Windows has always been functional at best. The same is true for iOS vs Android: the former is seen as cool, the latter as functional... at best.

This is of high relevance since Apple prices are not only a function of their beautifying the hardware, which they decidedly do, but also of the fact that in order to get the operating system, you had to, and still have to, buy the hardware from them, and this factor should not be undervalued.

It's ironic that Apple's predominance among quality/prestige/elegance seekers (these terms are not synonyms, even though with regards to Apple products some observers take them for that) will probably come to an end with the full advent of software as a service. (If I were them, I'd develop a totally superior browser into which web services could then hook like never before.)

Remember that the web browser was the first real attempt to standardize the GUI (I'm not speaking of the standardization of GUI elements in dedicated applications), and in 2017 it's said that iOS applications are better than their functional browser counterparts, but this is too inefficient in the long run and will come to an end. Thus there's room for some super-browser which gives superior quality (incl. speed and all) to some web applications, which will then function much better than in vanilla browsers, and I'm sure those web applications will be the future; but it's not possible to make them as high-brow when at the same time they are expected to function just as well in a whole bunch of disparate run-of-the-mill browsers. Some of the big shots will grasp this opportunity, and application development will benefit tremendously from it.

78
This is a triple spin-off from https://www.donation...?topic=43711.new#new (CintaNotes Pro with 50% discount), from https://www.donation....msg408619#msg408619 (Navicat Review) and from https://www.donation...ex.php?topic=43805.0 (On Software Pricing).

So now the CintaNotes Pro freebies are gone - the immediately following posts were in answer to a placeholder - and those who had the chance to read my teaser yesterday, in time, are able to try out its tag management to its fullest. (Did you know there is such a thing as a Google Tag Manager? Neither did I, up to this morning, but then, it's not for our file system...) As said in the teaser, what immediately follows may be obvious, but the current state of affairs seems to prove that even the bloody obvious is sometimes helpful to write down.


First. Given the marketplace for notetaking apps and what CN currently (! this may quickly change though) has on offer, it appears that it's not the current prices that are wrong, but that the original prices were too low, and it's at those prices of yesterday that most people you could consider heavy users bought their lifetime license, or, when lifetime was not included anymore, at least some Pro, which they then sometimes "update" by buying fresh on bitsdujour, for example, when Pro is for sale there, and from which the developers only get a pittance anyway. It goes without saying that this mistake of those early years cannot be amended, but there's a second, ongoing mistake which I think currently costs CintaNotes thousands in missed opportunities.

Second. In general: 10-day trials are too short, since most people just don't have the time to use such a short window thoroughly: they have their professional life, their family life; brutally put, they have other, "better" things to do than to spend that short period trialing software. Let's learn from Directory Opus, but it's not only their 60 days as a timeframe, it's another aspect, too - and of course, the longer the timeframe, the higher the risk for bad software that prospects discover it's not for them; I do NOT suggest this is the case with CintaNotes and wouldn't take the time to write down these suggestions if I thought otherwise; no, in CintaNotes the current weaknesses are evident from the first try, and in order to discover its strengths, you need much more time.

In general, always, and especially with CintaNotes' tagging system: considering the above, coupling a 10-day trial that starts immediately with the free version (!) is suicidal and unheard-of; there are cases, though, where developers provide a 30-day trial which only then reverts to some free version. This often makes sense, since in 30 days users will have become more or less accustomed to some of the more sophisticated features, and when, after a month, they don't get them anymore, they may be willing to pay. Or they discard the whole thing if they don't want to buy; it's simply too frustrating in most cases to continue with the free version after those days or weeks in which they decide to purchase or abandon, depending, of course, on their impressions of usefulness ("do I need such software?") and quality ("how does it compare with similar software?"). But for trial users / prospects to judge that potential usefulness for themselves, they must have had the chance to build something in that software from which they can then appreciate whether it's useful or not; for CintaNotes especially, after 10 days that's not the case.

As for DO's 60 days: depending on the software, and certainly with a sophisticated file manager, in 2 months people will have done lots of tweaking and personalizing, so that after 60 days, buying becomes almost mandatory in order not to lose too much investment in time and effort. (Abandonments of DO will occur within the first week or so, but certainly not after an extensive trial, I suppose.) So in the case of DO, it's not about building up raw material, but about having built up the tweaks and habits ON the raw material, the files (and the time investment made in constructing them / "putting them together"), which for a new file manager then constitutes much of the "aggregated worth" of the thing for you.


A psychological AND practical thing: Many developers provide 2 versions, a free one and a trial one, and this concept comes with 2 advantages:

The psychological aspect: frustration of the users would be less, since they would not know the paid-for features, in detail at least, so the inclination to use the free version is much higher than in the aforementioned slap-in-the-face case; and so either the developer will never sell, or he will be able to influence his freeware users over a very long time to try out, and to buy, the paid version. This requires frequent updates of the freeware, too, with no further crippling of the freeware by such updates - a slap in the face: even if it's then in the interest of the user to buy, they will not do it, for fear of such a dishonest developer getting their money - and with quite some advertising for the brilliant features of the paid version, advertising available without additional effort from the user, so the info should be integrated into the free version. I know users complain about menu entries that just lead to advertising, the giveaways by Swift-To-Do-List being not-so-convincing examples, but I could imagine much better teasing during long-term freeware use than that applied by Dextronet. Taking away functionality from the freeware is a problem neither in CintaNotes nor in Swift, I just mention it here in general; on the contrary, CintaNotes regularly enriches the free version, too.

The practical aspect: with 2 different files for 2 different versions, free and trial, the user can get acquainted with the program whenever they please but start the trial whenever they have the time and/or the need for it. There is a real demand for "playing around" with some software, in order to see if you like the way it handles things - there are big differences in "handling stuff" between programs doing more or less the same thing - and then to really trial and decide upon buying when the need arises, which may be some weeks or months later. In other words, you have two possible discard decisions here: first, you "trial" the freeware version, in order to have a look (and without a time counter running), to sense if you like the style; then you either discard the freeware or begin the trial, almost immediately or at a later time; for the latter, you decide "is it worth the price?" and/or "does it meet special requirements of mine?".

I understand a developer doesn't want to make their freeware too powerful, the distance between free and paid versions must remain considerable, but if the freeware is too basic, it'll probably be discarded too early, that is, before the user is ready to make a trial and/or buying decision based on their extended use of the freeware. Now CintaNotes is a very bad example for this problem, though, since here again it's obvious that the freeware version is a very, very good and functional one, so that Alex Jenter, the developer, from this aspect at least, has got all the chances on his side to finally "seduce" the user into buying.

But this incredible chance for Alex to have the user build up a functional, extended notes repository in CintaNotes (which should remain perfectly stable, since it builds upon the usual SQLite database engine), then come to the conclusion that they need some better tag management - which the Pro version provides - and then extensively trial the Pro version, falls short, since those 10 trial days will by then be long gone. Instead, the user should first have built up the necessary material, which then needs sorting out, and be given some "room", some time, in order to do so - bear in mind the user doesn't yet know, at that point in time, HOW to build up such a tag tree, so they will need some playing around, so that even then 10 days would be way too short.

It's obvious that forcing an immediately starting 10-day (real) trial upon somebody who just wants to get a rudimentary idea, a "feel", at that moment in time, is not a wise decision in general, but as said, for a program that will become useful to its fullest extent AFTER some time of gathering material in it, a sequence of "install > immediate 10-day trial > then good freeware, but without the chance to sort the material into something manageable except by buying without trying" is suicidal.

A note program needs notes. These notes will come from here and there; it takes some time for them to gather in considerable number. This is different for people like me who gather or write dozens of notes each day, but the general public will need some time to get together some hundreds of notes, at which point they'll feel the need to re-organize, to really organize, them. While you don't have such a number of notes, what use is a tag tree (which is the main sales argument for Pro here)? You could play around a little, gather some 20, 25 notes, then hasten to see how to best organize those 20, 25 notes in CintaNotes' tag tree, which could probably handle thousands of notes very well, but how could a user discover such organizational strengths from playing around with a few dozen notes? That would not be very natural to begin with, right? Whether the tag tree, the organization of tag combinations, in various combinations, is done well, I mean whether it's functional in organizing many, many notes, you cannot reasonably discover with some dummy data, with just a few notes upon which you force arbitrary combinations in order to see what they would look like.

No, it's after some weeks that you'll have gathered a body of sufficient size, with distinct and in themselves quite coherent groups, so that it'll make sense to now try out hierarchical tags, or even more to the point: then you will even NEED to try them out - if you're more into organization than into searching, at least. That'll be weeks, months after your 10-day trial ran out, so now you'll either buy the (by now) expensive paid version without the chance to try it out first, or you export your stuff into something else, abandoning this software, not taking the risk (that had been my reaction at the time), or you do what many people do: you just hold "some" stuff in it, which then grows old in there - that spares you the effort of exporting what you've put in - and you will probably never consider buying, without ever knowing how well it could have organized it all.


From the above, it becomes evident that even a 30-day trial for CintaNotes (and similar organizational software which first needs the stuff to organize; DO doesn't have this problem, since most new PC users will first use the built-in file manager, then switch to something better when the Microsoft thing isn't able anymore to correctly organize it all) would not be ideal. It's also true that a developer makes their free version available in order to incite as many freeware users as possible to buy the paid version, so it's in the developer's interest to remind their users of buying, but not by nagging - which, most of the time, will result in the abandonment of the freeware - but rather by proving how useful the paid version could be for them NOW. So it seems that a dedicated free version and a distinct trial isn't the ideal solution either, since it doesn't take into account the fact that the freeware's justification for existing is the developer's interest in selling the full version, and most freeware users will not additionally install and trigger the trial, since it's simply too much fuss for them.

Thus, a combined version indeed, but 10 days at the beginning, then another 10 days after 30 days, or whenever the user switches to it? As said, there is some learning involved on the user's side, so 10 days is not sufficient, but we're speaking here of software which is regularly updated anyway, and that brings a big chance for renewed trials. Also, there is the question of which "results" of a trial period remain available to the user after a possible reversion to free. If you provide repeated trials, it's evident that the user, within such a period, should be able to create as many additional categories as they wish, but if afterwards you allow adding new notes into such additional categories, users could find a way to create the necessary categories within the "trials" and then use the software as a quasi-full version in the meantime.

Some applications allow free switching back and forth between "trial mode" and "free mode", without a time limit for the former, but it's evident that in order to do so, AND to prevent free use of that combo as a fully-fledged paid program, they have to cripple their "trial mode" in a way that the user will never get the full "user experience" the paid version could provide. So what about a full 30 consecutive days of trial whenever the user is ready to switch to trial, BUT with a warning dialog if they haven't gathered much "material" up to then (too few items and/or too few tags): "Wouldn't you rather gather some more material before starting your fully-functional trial, or do you prefer just 10 days of trial now, for just playing around with the full power of CintaNotes? The remaining 20 days of trial you can then turn on anytime you want to seriously bring order to all your stuff! - 30 days now - 10 days now - Buy now - Escape (must think about it before deciding*) (You will not lose any of these alternatives by escaping now)".

Any serious prospect (which means users who would also happily buy if convinced) will either choose 10 days or escape (abort would be the ugly, technical term here), and with every new update (or with an annual major update) you could make the full functionality available for another 10 days (saying so explicitly), but firmly withholding, after the first 30 days (10 plus 20, or 30 in one go), any re-organization capabilities. Whatever they formatted during those trials will stay formatted (to mention another feature of the full version; never ever take anything away from the user), but those repeated trials will bring no further chance to sort it all out for free, while on the other hand the user will already have gathered so much material that "Pro" functionality is really needed, while abandoning the program is out of the question by now!
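
To make the proposed scheme a little more tangible, here is a minimal sketch of the decision logic; all thresholds, version numbers and dialog options are invented for illustration and not taken from CintaNotes or any other product:

[code]
# A combined free/trial build: the full 30 days are offered only once, a
# warning is shown when the user hasn't gathered enough material yet, and
# later major versions re-open short 10-day trials (re-organization features
# would stay withheld in those re-trials; not modelled here).

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class TrialState:
    notes: int = 0
    tags: int = 0
    days_used: int = 0                            # full-trial days already consumed (max 30)
    retrials: set = field(default_factory=set)    # major versions already re-trialed

    def offer(self, today: date, major_version: int):
        if self.days_used == 0:
            if self.notes < 100 or self.tags < 10:        # illustrative thresholds
                return ("warn",                           # "gather more material first?"
                        ["30 days now", "10 days now", "Buy now", "Decide later"])
            return ("full", today + timedelta(days=30))
        if self.days_used < 30:
            return ("remaining", today + timedelta(days=30 - self.days_used))
        if major_version not in self.retrials:
            self.retrials.add(major_version)
            return ("retrial", today + timedelta(days=10))   # once per major update
        return ("free-only", None)

state = TrialState(notes=20, tags=3)
print(state.offer(date(2017, 6, 13), major_version=3))   # -> the warning dialog
[/code]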

Similar renewed short-time trials could well help developers of other kinds of applications, even when no free version is available in parallel, since normally the user will trial and then either buy or discard (or even de-install), and then, in most cases, never ever trial again, all the less so since in most cases it will be technically impossible, the trial residuals blocking any new trial installation (or it will say "trial period is over"), while SHORT trials should be possible after every major update at least: 5 days every year, even with actively inviting the user to trial anew (and touting the major new functions) - but with an opt-out, of course. Rare will be the users who will be able to fully take advantage of such an application this way, and it's 100 p.c. sure those will also know perfectly well all the other ways needed to take advantage of trials for as long as they need the application in question.

So, it's about giving the user the chance to really (!), effectively trial your application, and even when they missed that first time around for personal reasons, there should be second chances (and those users should know about them*), and if you do a free version, there should be repeated chances to get another quick but complete look, another 10 days with limitations, or another 5 days without any limitations (but then only once a year, not for minor updates).

*: For example, upon de-installation, not only the usual link to the developer's web site is possible, but also a dialog based on some (quantity-only) analysis of what the user has done with the application up to that moment (such quantity-only analysis - which, in the dialog, should be communicated as quantity-only, in order not to enrage the user who wants to leave and is quite surprised anyway - is possible for any application, for example also by timing the time spent within the application). Based on such a quantitative analysis of real use of the application, the dialog could say: "You did not use this program much, just for creating and/or modifying 3 files; instead of de-installing this program completely, why not leave it there for the time being, and have another trial [it's not necessary to mention here already that it'll be a rather short one] after the next major update? For that, this program will just check once a month (!) for the existence of an update (which you can then install or refuse, and you will also be able to de-select further such checks); except for this monthly check, the program will do nothing else! > OK for now - No, get rid of it, I'll never want this crap again! - Esc (I'll perhaps have another look but don't want to decide now, in any case)".
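
A minimal sketch of what such a quantity-only de-install check could look like; the thresholds and the dialog wording are invented for illustration:

[code]
# Count only what the user measurably did (no content inspection), and offer
# the "keep it dormant for a later re-trial" dialog only when usage was negligible.

def uninstall_dialog(files_touched: int, minutes_in_app: int) -> str:
    barely_used = files_touched < 5 and minutes_in_app < 60    # illustrative thresholds
    if not barely_used:
        return "proceed"                  # real use: just uninstall quietly
    return (
        f"You only created or modified {files_touched} file(s) in this program. "
        "Instead of removing it completely, leave it installed and have another "
        "short trial after the next major update? It will only check for updates "
        "once a month and do nothing else.\n"
        "[OK for now]  [No, remove it]  [Esc: maybe later]"
    )

print(uninstall_dialog(files_touched=3, minutes_in_app=25))
[/code]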

There are many possible variants within such a strategy, but any of them should take care to 1) never let go of a prospect before they clearly say so, 2) not have them say so except when they really hate you (which means: make offers they very probably won't refuse, so as not to break the dormant relationship before it's really ice-cold), and 3) facilitate your prospects taking additional chances to get acquainted with your application afterwards, be it their "fault" last time around or be it that your application really wasn't good enough last time, so that you wouldn't have bought either, had you been in their shoes.

And forget my 5 days above. Make it 10 days each time around, fully functional, but not for minor updates, so that's another 10 days once a year, and if you win your prospects' gratuitous "loyalty" with a free version, you don't even have to "sell" another trial: your customers-in-waiting, once a year, are waiting for it, and if they really only buy after 4 years, discovering and experiencing that ace functionality which finally makes it worth it for them to pay, that's so much better than having had them turn their back years ago.

Btw, the same is true for paid updates: make them available for 30 days in a row, and if your customers really don't want to pay then, re-activate their previous version again. As it is, too many applications "sell" their paid updates from the feature list only, making it unnecessarily complicated for the user to go back if needed.

It's about experiencing the usefulness of the full, of the updated version. This cannot be achieved by playing around with dummy data, nor without very clearly communicating, or even actively inviting, that the new version is ready for trial, even when previous trials did not fully convince the prospect.

And yes, most of the time it's for lack of real data that web service trials fail. Are those web service vendors megalomaniac? Do they really think you will leave your life's data behind and begin some new service, one out of twenty or so of the same kind and thus with no assurance you'll stay with their service? And what about your data, which in the meantime has NOT been correctly entered into your live system?

It's one of the strengths of an application like CintaNotes that prospects are willing to enter some "additional" data into it, data which up to then they probably would not have stored at all, for lack of a quick, efficient way to do so. In order to sell the "Pro", make them dependent on it, and then have the "Pro" demonstrate how well it can handle it all.

I don't know how specific web services do this, but for example, when you go from Evernote paid back to free, they say you don't lose data gathered with the paid version, for example OCR. But the subscription model, when there is no corresponding free model (anymore), brings the problem of export, and of exporting into some format which will be acceptable to you henceforth, or let's put it bluntly: when EN becomes too expensive, people go to OneNote, since that transfer is technically possible and convenient.

But it's very ironic that my model described above - multiple, fully-functional, time-limited trials in order to turn free users into paying ones or non-customers into customers - is so much easier to implement technically in web services, while their model almost invariably is: one trial, then pay; or even: free with poor functionality, or pay for a year or so, then you can probably go back (if our free model still exists by then). There could be much more flexibility, in order to push sales... or, in this case, service rents.

Btw, web space is rented, but web applications don't necessarily have to be: it's perfectly possible, technically, to buy your own web service which you then install, say, on some Amazon server; in other words, you'd not be dependent on some service provider, you would own your data and could shift it, together with your web application, to some other space provider, or even to your own home (well, let's be realistic: office) server. The current situation is a transient one, where most web application developers see themselves as web service developers, alleged one-stop shops which in fact rent the web space they then rent out to you, and their coupling of data with the not-making-available of their (for that: multi-customer, but wouldn't it be multi-user most of the time anyway?) software is just for revenue-maximization reasons, so this should not hold for very long, corporate needs being different, and the needs of small businesses, too. It's just that today's desktop software will go mobile, but its current replacement by web "services" will be ephemeral; it's just too much loss of control, except for consumers.


Edit May 19:

The original short post was clearly worded as a placeholder AND was put here because I wanted to give readers here the possibility (thus the original title with "read this today Friday"), even if they don't check the usual freebie sites daily, to get the main example application in question for free, in a situation where my musings on the final subject weren't ready yet; at the same time I promised them for the following day (which for the freebie would have been too late), and I replaced the placeholder/freebie note that following day. (Another lead, from somebody else, in some other forum, was posted hours later than mine here; it was followed by a thank-you; the reactions here were quite different, weren't they?)

As for the "triple spin-off", I not only gave the abbreviated links and which do not contain the titles, but I also put the respective titles in parentheses, so that nobody, not being interested in reading the sources, was lent into following those links, in order to check them, since I made that check possible by reading the respective thread titles here. (Also, I put follow-up links into those sources, and in a similar non-obtrusive way, not as new posts over there which would have appeared in the thread list as such and would thus have incited readers to gratuitously open those threads, but as edits; with the exception of course of the main originating thread, in order for the main example application developer to easily find the link to my suggestions:

Since that developer monitors that originating thread, let's see if the 10-days-from-start-on scheme will be changed to something else; for example, very simply, to something like 60 days from the start, as in DO. It's true that DO has the technical means available, and uses them, to prevent multiple installs on the same hardware, and without those there is a certain risk for the program to get unwanted free users, but those will be very few in number: the under-18 bunch who want to get anything for free no matter the effort probably don't have much use for a tool facilitating serious work, so there would be no real sales lost but many to gain, and if they really use it, not only "own" it, even those "all mine!" kids will end up buying.)

Some other little things I don't want to bother anyone with: re Apple's Mac generations, it seems that both the F-key and the touchbar versions are from October 2016 (with the said price difference of 300$/€), and that thus, for some time at least, both versions will be available concurrently; also, the traditional wording for context-sensitive F-keys seems to be "soft keys". - And last but not least, re software pricing: it appears that a higher price is also needed for status within a competitive environment, the proper term being "positioning", and then not so much more functionality is needed in the meantime: the higher price is not only accepted but, conversely, helps (!) with the appreciation of the software/product as "superior". I think DO does this extremely well, also since the premium (as put into perspective in the relevant thread interlinked and identified in this post) is very reasonable... while the surcharge for TB (ditto) is very considerable, but may also be reasonable, considering the very different respective user scopes (number of possible users) of a) a slightly higher-priced, very functional file manager vs other quite functional file managers (a light premium is not off-putting), and b) a strongly surcharged data repository with graphical representation of items and links vs traditional data repositories (lists, trees) probably more convenient for the everyday use of many users (here the premium is not off-putting either (very strong "exclusivity" factor), as soon as the alternative content rendering isn't off-putting anymore: if the main aspect is attractive only to a minority ("select group"), then those few will be inclined to pay even much higher prices, and instead of those prices harming volumes, they even facilitate the purchase decision: "club" effect).


Edit 2
Add-on May 19 - The Reverse Strategy: Hiding probable foibles from the trial

In my article "On software pricing" and here, too, I spoke of TheBrain (TB) and its pricing; above, I said that neither CN nor DO have got any reason to fear an extended trial period; nor have many other applications btw.

But TB has, in a way. Some time ago, I had been surprised by the very poor import facilities of TB; maybe they are better now, but I doubt it. My research found that TB staff were not interested in resolving this "problem" - at least, at the time, I had naively thought this would be a problem for that application - and also, some user had written an import script, for some import format I don't remember, but after having unsuccessfully tried to sell the script to the TB developers, instead of making it generally available to TB users, he offered to sell it to them, one by one, at individual - maximized - prices. So that was then.

Now, in light of what I said above, and in light of what I know about TB, I see the whole thing very differently. I said above: give prospects a chance to trial your application in real-life circumstances, and thus after they have had a chance to gather the necessary material in order to discover the strengths of your software. I now think that for TB, while their trial period is the usual 30 days, it's not in their interest that prospects trial their application with large datasets, and within 30 days, those would either have to come from import or, in most cases, simply not (yet) be there.

Don't take my words wrong, I'm not implying TB isn't worth anything, I just think it's quite a valuable piece of software for strategy, analysis and other tasks at hand, in the way of a spiced-up mindmapper. But those monster "plexes" they show you on YT and elsewhere look brilliant and evolve the way they want you to see it, yet you don't get a chance to WORK with those monster files with a maximum of items and interlinking; you just get the graphics' awe, but you don't get any feeling for what it would be like to enter new elements into, or retrieve existing elements of YOUR choice out of, such TB monster files: you'd risk discovering in those processes that the clarity suddenly isn't there anymore.

Now, by deliberately taking away from most prospects the chance to import their existing text / text-plus-photo databases, they limit the risk that prospects discover that TB monster "plexes" are very probably far less manageable than their video presentations try to convey, while on the other hand the quite little "plexes"/databases prospects will have the chance to build up from scratch within 30 days will stay quite functional and quite pretty, all the more so since trial users, because most of them will have to do it all from scratch, will be inclined to create not one quite extensive "plex" but several quite tiny ones which will remain perfectly lucid, for example for strategy, planning, different aspects of one thing - areas in which TB probably even excels.

This way, TB effectively optimizes (by intent - as suspected but not proven by me - or not, but at any rate by its outcome) the chances that trial users will discover the strengths of TB while missing its probable foibles before buying. And since it's a little bit on the expensive side, and since those users, after buying and after discovering those possible problems, will probably say to themselves, "Oh, I could have discovered this in time though!", many of them will then add, "Ok, so now I have to negate those problems", in order to maintain their self-concept. A better solution to this dilemma would be to offer some 60-or-90-or-even-180-day money-back guarantee ("no questions asked!"), and indeed, many applications come with a trial period AND such a refund policy - which, btw, is even another way of selling good software more quickly (but is often hampered by buyers not trusting such a guarantee from developers without sufficient status in the market) - but TB does not, to my knowledge; at least I searched Google and their store FAQ in vain for it, and indeed, they would be badly advised to offer it (it's not specialized strategy-and-similar software). (Btw, the current price is not 249 but 219, or the full monty for 299, 159 for subsequent years.)

As always, the example, here TB, stands in for the strategy it possibly follows or which can be applied to it, and the ideas described can thus be deployed to other use cases, even in dissimilar software or outside the industry. Regularly purging your forum of disturbing posts prospects may stumble upon is another successful element in any sales strategy and is of course applied by TB.


Add-on May 25, 2017:
Another variant in inefficient trials: Trial too short to appreciate probable strengths, here not by lack of material but by lack of user experience

In the Navicat thread (link above) I probably spoke of its short 14-day trial. What I didn't mention over there, though, was the fact that I had installed and de-installed Navicat (not the free design version but the trial SQLite version) several times, always de-installing the same day as my install, but all of that within those 14 days, so I didn't become aware of the fact that the trial didn't count my days of use (about 2, 3 or 4 within the trial period) but had, upon the very first install, set a final date 14 days in the future.

Today, I tried another re-install, which worked, and then, upon opening the program, I was told to buy the program, and the dialog told me my trial had ended on day x, some months ago.

Since that info is stored in some encrypted format anyway, somewhere on my computer, it would have been easy for Navicat to also record my de-installs, i.e. to store the respective lengths of the installations, not in hours but in calendar days, with a de-install on the same day counting as one day of installation; this way I would now have about 10 or 11 of the 14 days left.
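
A hedged sketch of that alternative accounting (not Navicat's actual mechanism, just an illustration): count only the calendar days on which the program was actually installed, a same-day install/de-install counting as one day.

[code]
from datetime import date, timedelta

TRIAL_DAYS = 14

def days_left(install_periods, today):
    """install_periods: list of (install_date, uninstall_date or None if still installed)."""
    used = set()
    for start, end in install_periods:
        d, end = start, (end or today)
        while d <= end:
            used.add(d)                     # each installed calendar day counts once
            d += timedelta(days=1)
    return max(0, TRIAL_DAYS - len(used))

periods = [(date(2017, 2, 1), date(2017, 2, 1)),    # installed and removed the same day
           (date(2017, 2, 20), date(2017, 2, 21)),  # two days
           (date(2017, 5, 25), None)]               # current re-install
print(days_left(periods, today=date(2017, 5, 25)))  # -> 10 of 14 days left
[/code]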


Why would that have been important? Because some months ago, I was a bloody beginner with SQLite and just trialed the program by playing around; as explained elsewhere in this forum, I then went to SQLite Expert, because of several things in Navicat for SQLite I hadn't been happy with, among them at least one replicable, big bug when designing the database. Thus, at the time, and because of that, I hadn't been that interested anymore in the program's everyday capabilities for browsing databases and editing records once the database design was done.

Unfortunately, most SQLite database browsers are really, really bad, be they paid or not, and that's because most of them don't offer word wrap in grid view, or even at all. Thus, when the text in some field is too long for the field's display, you must resort to horizontal scrolling within that field, which can become absolutely awful if the text is not just a little bit longer than the field's width but would need 3, 4 or more times its length.
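
For illustration, a minimal sketch of the missing feature: wrapping long text fields when displaying SQLite query results instead of forcing horizontal scrolling. The table and column names are invented for this example:

[code]
import sqlite3
import textwrap

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE note (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO note VALUES (1, ?)",
            ("A field whose text is three or four times wider than the grid "
             "column becomes unreadable without word wrap, which is the whole "
             "point of this little demonstration.",))

COL_WIDTH = 40
for note_id, body in con.execute("SELECT id, body FROM note"):
    wrapped = textwrap.wrap(body, width=COL_WIDTH)
    print(f"{note_id:>4} | {wrapped[0]}")
    for line in wrapped[1:]:               # continuation lines stay in the column
        print(f"{'':>4} | {line}")
[/code]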

As I probably said in the SQLite / SQLite Expert thread ( https://www.donation....msg408674#msg408674 ), SQLite Expert offers word wrap; at the time, I mistakenly thought this was standard for paid database browsers/managers (and as said, SQLite Expert even has got it in its free version); I could not have been more wrong.

With SQLite Expert, I'm not that happy either now, since whenever you don't only browse records ("select * from ... where ...") but then want to edit some hit, you will quickly discover that SQLite Expert offers word wrap for display but not for editing, and thus every little change is quite demanding if it's a real change, not just an addition at the beginning or end of the record.

So now I've been trying to find a better alternative, not for the design (which, as said, I did with the free SQLite Browser / DB Browser for SQLite), but for retrieval and changes, and now, as said, I had to discover that even for display, most paid SQLite browsers don't offer word wrap either - for example SQLiteManager (3.9.5, 49$; I could not trial version 4, being on XP). (Some offer more complete text display in an additional blob pane, but only for texts in blob format, not for text in text fields, and AnySQL Maestro (free) has got an additional multi-line field, but always just says "n rows fetched" after a query, even when you then select some field within the results and expect the additional field to change to the full text of that field; since that is so, and since I had trialed SQLite Maestro (99$) some months ago and now probably cannot trial it anymore, I suppose that in that paid program it's that way too, but cannot say for sure.)

Rare are the SQLite browsers which at least have got some "memo" pane, meaning that the content of the currently active field is also displayed, more completely, in an additional field; editing is then possible there as well as in the original field. But then, some of the browsers didn't even allow editing at all in the grid showing query results (with or without "F2" or other means); instead, editing records was some extra function which needed the display of another part of the program in which, you bet, the search results of the query were lost, and where you then had to search for the record(s) to edit with some "find" function (for example SharpPlus SQLite Developer, 49$).

Also, at least for editing, you would expect a no-word-wrap browser to then show a better, multi-line "edit field"; SQLite Expert, for example, has got such a field within an additional pane for when you disable inline editing, but within that pane all these editing fields are of equal size, which means that three quarters of the space in that pane is sacrificed to big fields without any content worth mentioning (space for 300 characters or so for a field containing 8 or 10 characters), while for the field you need to edit, you first must scroll down within the pane in order to even see it, and then it's too short for its content, so you must again scroll down within the field - so much for coders designing GUIs.

Edit May 28: SQLiteSpy: No field editing possible even when the "no edit toggle" is set to "no", which should allow editing. F2 doesn't work, double-clicking a field doesn't do anything, Del in the memo field does not work, just Backspace and inserting, but the "edits" you do there aren't replicated to the cell, and the menu command "Edit cell" is greyed out. Judging from its name, editing isn't included, so the edit commands that do appear are probably meant for future development; no way to know, since there is no help file. And so, [End of edit]


from these experiences, you will understand that I had now become interested in trialing Navicat (89$) and SQL Maestro (99$) again, which for Navicat, as described, was impossible, and which for SQL Maestro would very probably have been impossible too, had I tried against all odds.

Application developers, be their trials a laughable 14 days or the usual 30 days, almost all start from the triple premise that their programs are only trialed by users who

- have the time ready in order to fully trial
- have got the material ready to really trial (see above), and
- have the necessary experience ready in order to know HOW to "correctly" trial.

It's evident that only in rare cases are all three conditions met at the same time; for example, even a very experienced user - "experience" here meaning experience with that particular kind of application AND with the tasks at hand within the context of its use - could get new deadlines within the time frame in which they had intended to trial the program, and thus, after technically having begun the trial, would have to postpone it to some later time. For most trials, even an immediate de-install would probably not help; see how it's done by Navicat and probably most others.

It's evident that my observations only apply to time-limited trials, while there are other ways; but it's also evident that if the developer cripples the functionality of the trial, in many cases the user will either buy on assumptions - from reading the help file, from imagining the functionality missing from the trial differently from how it actually works - or they will refrain from buying, precisely out of fear of making such mistakes, after some bad experiences of that sort in the past. The latter is my reaction to crippled trials; but combined with a money-back guarantee, AND if I had some expectation that in that case I would actually get my money back, a trial could be made. (On the web, reports abound about the applications of some big Chinese consumer graphics vendor who systematically refuses refunds while stressing the guarantee in their advertising.)

The only notable exception among time-limited trials that I know of is Beyond Compare: its trial is 30 non-consecutive days of use - the thing I had tried to get out of Navicat - and without the need to de-install the program in between.

I hadn't had that program in mind when writing my original post because, in the context of a note-taking program, it's evident that the use of such a program would be daily... but the re-arranging of the notes (tag-tree management) would not be. Also, making the note-taking possible on any day while limiting the note management to 30 days would be possible, as I see it now, and perhaps 30 days for that would be a little long IF there is no limit to how those 30 "special" days are spread over time, but you can clearly see the possibilities here.

It seems the developers of Beyond Compare are the only ones, up to now, who have understood - but without communicating their find to the industry except by implying it by how they realized their trial - that users, in order to really trial, must have the time, the (real-life) material, and the experience to do so - their trial meets all the requirements, for their program in question. (What they haven't understood yet is the need to do file compare incl. moved blocks; but that's another discussion, which btw has been done in their forum and in this forum here, years ago and without results up to now.)

It's evident that 30 non-consecutive trial days is very lavish and would probably not meet the requirements of most developers, but some non-consecutive trial periods with the same program on the same computer should definitely be possible, without the need to de-install in between; and it goes without saying that developers should, as I described above, communicate the possibilities to their trial users, AND should tell them how to best take advantage of the trial set-up in question in order to discover the strengths of the program - all that within a framework that prevents the user from "using" the program for free. As explained above, smart (!) time limitations can do that without hindering the trial user from building up the material needed to much better appreciate the strengths of the program.

Btw, Navicat Lite only handles the very first 1,000 records of any database, not only when displaying query results but for the retrieval of any query result, too, so it's completely worthless; had I known this before, I would never have mentioned its one remaining download link here; I had thought it was helpful, but no free Navicat product ever is, as we have thus seen.


Again May 25: Allow (time) for comparisons!

In the above, I missed one simple aspect which does not even have to do with the need of first gaining some experience with that kind of software: the developer of good software should accept the fact that a trial user will want (and has the "right") to trial several competing applications. He should accept that a trial user may even choose to discard his* software, for one aspect or another, and then want to trial it again, since in the meantime they (the user in question) have become aware of the competitors' foibles, so that now they would like to check whether they'd rather live with those of the provisionally discarded software (as I had, unsuccessfully, tried to do with Navicat, by de-installing it several times after just hours of trial each time).

It's evident that within even 30 days, let alone Navicat's ridiculous 14, such a "going back and checking again" is not possible, and even 60 days will not be a sufficient time frame whenever, for whatever reason, the trial user has discarded some software in favor of some other (for example in favor of freeware, as in my case here); or when a user goes back to a free file manager but, with more specific requirements now vis-à-vis this kind of application, wishes to trial, let's say, DO again 4 months later - the apparently generous 60-day trial period will be gone by then, too.

Thus, if you try to consolidate, to synthesize, all of the above, it becomes evident that any rather good software which doesn't have to fear comparison (or at least any application which has got chances along the lines of "in the land of the blind...") should make such a new trial possible, even on a second or third try, from a new perspective - by now it has become a real "comparative trial".

Thus, whenever possible, the developer should communicate his trial set-up and clearly state that it's in the users' interest NOT to trial every day but only for trial purposes, and that this is then possible even over a very long period of time, the application not storing private info but storing trial days, AND communicating how many are left. Also, this info should be stored whether or not the user de-installs the program, so that even a previous de-install will preserve the remaining trial days. It's then up to the developer to prevent trial users from using the trial in lieu of the paid program, by combining this with a smartly devised split between full and restricted functionality.

For note-taking programs, I gave an example above (continuous note-taking, but management of notes only on special days, and certainly not 30 such days spread over a very long time), and for a database viewer or a file manager, it's evident that further trialing would not necessarily include the saving of changes (IF this lack of functionality is clearly communicated): bulk rename's preview without the actual rename, copies/moves intercepted by a dialogue "n files would have been moved now", and so on; in good code, that would be a matter of just a few minutes for every such functionality withheld from completion (a small sketch of such interception follows below). It's just that all the functionality should be visible in demo mode (what does it do, how does it do it, by which (necessary) steps, by which GUI interactions):

It's about re-checking if you're willing to live with sub-optimal software, now knowing more or less intimately about the sub-optimality of its competitors.
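To show how cheap such interception is - a minimal sketch with invented function names and an invented trial flag, not any particular vendor's code - a demo mode can simply divert the destructive step into a report:

```python
import shutil
from pathlib import Path

TRIAL_MODE = True  # in a real product this would come from the license/trial check

def move_files(files: list[Path], target: Path) -> None:
    """Move files, or - in trial/demo mode - only report what would have happened."""
    if TRIAL_MODE:
        # The whole workflow stays visible (selection, target, conflicts);
        # only the final write is withheld, as suggested above.
        print(f"{len(files)} file(s) would have been moved to {target} now.")
        return
    target.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.move(str(f), str(target / f.name))

# hypothetical usage:
# move_files([Path("a.txt"), Path("b.txt")], Path("archive/2017"))
```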

*: I say "he" for "developer" since probably about 1 out of 1,000 developers isn't male (even Judy's Tenkey is (now) programmed by a man).

P.S.: I know about technical means like virtualization, restore points and so on. I think most software is for the general public, and the better part of that general public should not have to be bothered with considerations like, "should we set a restore point, then trial some applications we've been eager to trial for some weeks/months by now, then go back to that point and have Windows updates reload for hours, not to mention problems with mail and such from the meantime, and not even thinking of not being allowed to change any settings in our regular programs from now on, for weeks?"

That's all ridiculous: Make your application available to users; don't have them resort to convoluted stratagems in order to overcome fears of even some little "looking into it" making it unavailable for them for all future, short of buying (almost or completely) blind.


EDIT June 10, 2017: Make the trial available without asking for too much information
The (immediately) following isn't a new idea at all, but it hadn't been mentioned here yet: it's common understanding that by putting up too many hurdles before a trial can even begin, developers harm their business.

Just recently, I would have liked to trial WinSQL (free, 99$, 249$, from Synametrics), since the "Prof." and most expensive version looked appealing to me. They've got a trial in the usual form, 30 days for "Prof.", which then reverts to "Free". Unfortunately, they only hand out this trial by
- asking for full disclosure, incl. street address, telephone number, etc., AND
- telling you that you'll receive the trial link by mail,

which in combination, in most cases, means that if you fill dummy data into their application form, you won't receive any trial; instead they will first try to reach you by telephone, during their business hours. This is an incredible nuisance; it's similar to only getting prices for some car or other insurance by giving them all your personal info, after which you'll be flooded with mails and letters (even once you will have unsubscribed from their e-mail list), and they always speak of your "application" when all you ever wanted was a price.

It's not identical, since Synametrics DO give a price, but for all the rest, they act exactly like those developers for whom a price must be a quote, meaning they try to get the maximum price out of anybody, first googling their name, corporation and all that, and only then thinking about the price they will offer you... all this when you don't even know their software except from their marketing speak and, perhaps, some screenshots.

Ok, ok, cynics will now say that the fact that probably 90 p.c. of all non-corporate users will back away from ever trying it is WANTED by those developers: they simply aren't interested in your (here:) 99 or 249$; they only want to sell in volume*.

To me, that appears to be exactly the opposite of what some other developers do: they (almost or really) give away their software to students, in the hope that those students, later on in some corporation, will trigger licenses in numbers; this latter strategy can be VERY worthwhile, I think (i.e. if the software is of use in corporations, AND if it's distinctive and strong enough not to be overwhelmed by some other, competing software which isn't really a competitor but simply owns almost all the market), while the strategy of "not interested in your bucks, and we let you know by pestering you" is just dumb**.

*: If they don't even do it for that effect but because they're just dumb, it's even worse, since, as I said above, there's really nothing new here so they should know better.

**: Of course, they hope they will have less customer service to do (10 licenses for the price of 8 but only 4 times the effort), but then, that's another misconception: developers should write MUCH better help files AND charge for answering questions which are clearly (!) answered in those help files ("clearly" also implying "easy to find"). Clarifications needed because of a bad help file are not customer service but just product development, and customer service should be paid for - when developers complain that users don't read their help files, I'm sure they, the developers, are doing it wrong on BOTH ends here.

Some developers install a user forum, also in the hope that users will answer questions among them. This works, to a degree; in reality, the developer will, in most cases, either have to answer the question himself, or at least intervene after partial/wrong answers from fellow users, and this again and again, since the clarifications are somewhere in the depths of his forum, instead of being added to the help file, with just a short link to the user forum - the first time the question comes up; after that, any "short link" would trigger almost the same effort, and even checking the questions would cost time.

So you can see that a traditional user forum is to be avoided: much better is a double help file, local and on the web, with monthly updates of the local one from the web one (not scrambled, no effort), and with ample links from the former to the latter (or even automatic updates upon every local consultation).

Then, users/buyers would put their questions into some field "in" the topic or near/"above" the topic (within their local help file, i.e. with user identification, and after the update check), i.e. into the field of a possible parent topic***, and they would either get an invoice (10$) or a "thank you" plus the link to the updated/newly created topic; in borderline cases, they would get a link, no invoice and no thank you either (and the developer should think about some additional clarification).****

And after some years, that software would have got a perfect help file and a very pleased, disciplined user base, instead of some inscrutable forum and an overworked developer with no time left for real development.

***: It goes without saying that today, it's so simple to put one (sub-)topic into any context where it's needed ("cloning").

****: Wishes for the software would be handled the same way: they should be put into some inbox or into "related" subjects, and then be put by the developer, together with his opinion, onto special help file pages (what about a different background color, chamois instead of white?) which sit there at their systematically correct position: "Function xy? No." (followed by the developer's argument for refusing it); "Function yz? Not yet./Will come soon./..." (plus the workaround for the time being; example for missing OCR in some information management package: how to use basic OneNote/Evernote for that while waiting).

This would build up strong customer loyalty and strong expectation, ensuring that users regularly "go with" paid updates, and such a system could even become a reminder system for users who haven't updated: they would not only get the help pages for their current problem within their current version, but they would also get all the NEW pages, in pink, in order to see all they're missing!

Btw, that's also the perfect system for getting rid of "old" help pages within the online help system: as it stands now, almost any software with a forum has got 10- or 15-year-old help questions or bug reports which have not been relevant for 10 years or so... but SOME of them still are, concerning problems which have never been resolved, so any non-expert user, let alone any prospect, is LOST in those forums, not knowing which topics are of relevance now.

Ditto for bug reports: put them just on the help pages to which they are relevant, but with an orange background, and any update will delete the pages bearing resolved issues, or will update those which have not been resolved ("we didn't find the cause but continue to search for it").

In development, well-organized developers all have got some "table", some database for follow-up of issues, but after release, they are willing to live with forums where about 80 or more p.c. of the messages either have become irrelevant or are (partly) wrong/misleading now.

Why do software users have to live with such a mess, which then prompts 2/3 of their new questions, btw? Have your state of affairs online in real time, and users and prospects will extrapolate from this superb organization (which, as implied, demands far less effort than the traditional ways) onto your software, will happily cling to it, will happily buy it, some "not resolved/possible/available yet, but we're working on it" issues notwithstanding.

Demonstrate that your program can be put into an up-to-date, systematized knowledge base; don't let users who, in 2017, want to know how to set the sound track and subtitles of some DVD in Zoom Player be shown, on page 1 of Google, tips from 2004 saying that (but not even how) they must install additional "filters" from somewhere, with no way of knowing whether in the meanwhile (13 years!) they COULD find the individual language settings somewhere in the program! (See my Zoom Player add-on in the Software Pricing thread about ruining software by withholding base functionality.) Btw, it's also a sign of respect not to steal hours of wading through some forum with thousands of posts and almost no info about the current state of the issues discussed there.

In a word: Make your help file interactive. (I've never seen this done; if you have, please share the link(s); and I changed the title from "about" to "around", in order to align it to my add-ons.)


EDIT June 11, 2017: VIP Customer Service (incl. No-Reply) at Cyberlink
Yesterday, I missed relating a really nice little story perfectly illustrating how dumb people can be when they try to coerce you into their product by all means; this as an add-on to my WinSQL story, where my interest in some software product had also been killed by too much zeal on the developers' side.

Some time ago, I had a question about PowerDVD, clearly stating I would buy immediately if the answer was yes, and that was indeed my intention.

First reaction from Cyberlink: the usual automated receipt, nothing to say against this, except perhaps for the more than ridiculous "VIP" name, but perhaps that appeals to 13-year-olds; on the other hand, pricing's a little on the steep side for little children**, but it would have been ok with me. So:

From: CyberLink Customer Support [email protected]
Customer Support E-Mail Response
Dear ...,
Thank you for contacting CyberLink Online Support.
We are handling your question and will reply to you shortly.
Please do not reply to this mail. It is an automatic response and has been sent to acknowledge that we have received your submission.
If you have further questions at this point, follow the link below to edit and resubmit your question.
http://www.cyberlink...es.do?isNewQuestion=...

Then I waited a few days, and in came the second reaction from cyberlink, and again I got the real VIP treatment:

From: CyberLink Customer Support no-reply@vipmail.cyberlink.com
Customer Support E-Mail Response
Dear ...,
We would like to inform you that a response to your inquiry was posted at the URL. Please visit this address to view the response.
Note: For your personal privacy, a CyberLink account is required to view the response and keep an inquiry history. Get a CyberLink account for free right now!
URL: http://www.cyberlink...esponse-page.do?pId=...

Isn't that lovely as a treatment for eager would-be customers? Needless to say I've never created my Cyberlink "account" and have continued to use WinDVD instead, and I'll never know if the answer was yes or no:

If it was no, their way of treating my request would have been particularly nasty, stealing an additional 10 minutes of my time for the "account" creation; and if it was yes, their attempt to manipulate me was a hilarious failure, creating lose-lose instead of win-win. They should have foreseen that, in order not to get treated like sh** in case of a No, I couldn't create the - then totally useless - "account"; a smart correspondent would have said, "It's my pleasure to tell you the answer to your question is yes. Please create an account with us in order to get the details of how we'll do it!", and I'd have been happy to go to that effort; this just as advice on how smart customer service staff could at least individually overcome the blatant dumbness of their superiors.

AND of course, there are some - rare - developers who not only prevent prospects from writing in their forum, but even prevent them from reading there; even though I can't remember the name of the product(s) right now, I've seen this at least once, quite recently - so much for sheer idiocy for today?

NO: I changed this thread's title again, why? Because, together with the term "conceive", "trial" will get you to artificial insemination on google, and almost exclusively so, and that's why I think the additional term "software" may do no harm here.

AND: The above Cyberlink "links" ain't links, but they remind me of an artificially created problem: who invented that idiotic idea of abbreviating links in web pages in the first place? Weren't they aware that there's no link left when the reader copies the text which contains them to some zettelkasten*? (It's said there's a special FF add-on which then fetches the original links from the source code and replaces the abbreviated ones in your clipboard with them; I should try that, but it would be so much less fuss to have correct links in DonationCoder, for example.)

*: Oh, one more thing this reminds me of. In spite of a 10-minute search, it was impossible for me to find Tietze's own zettelkasten product, either on his site christiantietze.de or on his site zettelkasten.de - just other products, and third-party zettelkasten software he recommends for Windows/Apple; probably it's a language problem (it's all in German over there (?)). But then, I would have liked to at least find some note with regard to that - defunct? - product, so we can note here: communicate about things people MIGHT (still) search your site for, if only because that reflects - both ways, positively or negatively - upon the other products you still sell.

**: Oh, what did I say? They ARE still real links, and they'll inform you: PowerDVD 50 p.c. off, just 85 bucks now, but just for a few days, so hurry up - and this will put their VIPs into ecstasy: minus 70% (as quite often during the year, see my remarks over in "Software Pricing", which apply here), and who wouldn't have wanted to be a PowerDirector for cheap at the age of thirteen? (Just 80 bucks; alternatively, there's a vacancy for a PhotoDirector if you prefer stills, just 50, down from 169.94. Oh my God, what a treat.)

79
Being a cat lover myself, I didn't know about the NyanCat yet, thank you, f0dder!

You are right, the Escape key is now incorporated into the Touch-Bar, I hadn't paid attention.

I feel for Apple users; Apple has a tendency not to ask their (high-paying) customers but to decide for them, treating them like children, and that's certainly very upsetting (the mouse comes to mind, which I had mentioned above, then the little things around the iPhone/iPad: wasn't it them who replaced the battery with an internal one? Then the absence of possible external memory (memory card, USB stick), then the earphones connector; did I leave anything out?).

As I explained above, I welcome the idea of the re-introduction of the context-sensitive F-key - by taking away the physical F-keys, they now enforce the development of this concept.

I understand that many users aren't happy with it but this could bring real progress.

I very much hope, for Mac users, that the developers will be smart enough to adapt the concept to "older" Macs (incl. the 2016 generation), with physical F-keys (I showed above that the Touch-Bar is not necessary for it, so "Touch-Bar readiness" could perfectly work on the F-key Macs, too), and I also hope that they do it 2-ways:

Leave it all as it is now, and just display, among other things, 12 F-keys in the Touch-Bar which function exactly as do the traditional, physical ones up to now, AND do some real research into context-sensitivity and offer that, by application-wide option/toggle, also both for F-key Macs and for the Touch-Bar variety.

Thus, for both hardware variants, there would be both function-trigger paradigms, and users could choose the concept they prefer - this time, what Apple has done is NOT enforcing context-sensitivity, they just took away the physical keys, but they cannot prevent developers from also offering the traditional F-key operation.

My guess, though, would be that very soon, developers will excel in smart "contexting", and users will be quite happy about it - thus my complaint above that, again, Windows users will be left behind.


Not necessarily, Tuxman, as far as the hardware side is concerned; as for software updates, yes, but more and more software is available upon subscription only anyway, isn't it?

80
General Software Discussion / Re: On software pricing
« on: May 10, 2017, 01:11 PM »
(Second of two posts of mine immediately following each other.)

User experience can overcome competition, as can really useful ("really" meaning "really useful for a LOT of people"!) superior functionality; for "user experience", there is a restriction, too: the more complicated the software's functionality (example: a fully-featured file manager), the more "user experience" (or the lack of it) comes into play. We've had the (positive) DO example above, which also shows that with some additional "user experience", you can overcome the competition even price-wise, meaning you can even enforce considerably higher prices (if you count the updates, they are even multiplied), without cutting (too much) into the market share you would perhaps have had with "competitive" prices; of course this latter assertion is subject to some doubt, since they never tried out whether they would have got 80 p.c. of the paid-file-manager market had they priced DO "competitively"; anyway, the market for file managers is fragile anyway, since at any time MS could incorporate much more file management functionality into their operating system.

"User experience" in very simple tools (in which your working time is very limited and where the notion of "fun" could not really apply) would be more reduced, both in scope and in importance, to something of minimisation of fuss, minimisation of necessary user interaction (steps) in order to get to the result, incl. for example variants management, the tool not asking each time for all the single settings but providing stored and named variants from which to choose and coming with their settings stored by the user beforehand ("once and for all", in fact for as long as the user will not change the variants' details); you could call such variants "presets". (These are general considerations.)


Now to crazy pricing (and naming becoming inadequate over time). Today, PDF Writer is 85 p.c. off on bitsdujour, which means 9$ instead of 60.

It's a pdf printer driver, meaning you install it as a printer driver for it to produce a pdf. I didn't try this product and don't need it at all, so this "review" could contain mistakes; my point here is pricing, not a review of this program. bitsdujour says: "PDF Writer lets you create PDF documents from any Windows program that has a print function."

There are some free and paid pdf printer drivers; one well-known free one is said to install malware, so perhaps there is indeed a good reason to install one of those in the 10-12$ range if you really need such a "pdf printer". Many will not need one, since they will have installed one of the quite numerous applications which come with such a pdf printer, which more often than not is then also available as a printer from other applications (similar to fonts installed by one application and then available to anything on the system). I even remember additional pdf printer drivers being installed by some trial software, and the pdf printer driver being left behind when I de-installed the trial; currently, I've got several pdf printers installed on my system without even knowing which one comes from which application.

But PDF Writer has some goodies, too; bitsdujour: "Please note folks, the application can merge and split PDF files and it can also add text or images as watermarks." So it comes with what you could call a fully-featured GUI, while most such pdf printer drivers are very rudimentary. Now I don't know for the heck of me why someone who does NOT have a fully-featured pdf editor (30-40$) would need images as watermarks in their pdfs, but clearly, the merge and split functions are often needed, and dedicated programs for this are, for some, overpriced (40$, the full version now "on offer" for 20$ instead; they're obviously checking whether that new price more than doubles sales) and not needed, since here again, for the same price, you get fully-featured pdf editors which include that functionality; also, there are free programs for it (Icecream and others), without limitations, or the free version of one of the paid ones, which can handle rather big documents for splitting and which also works fine for merging most documents, and for all of them with a little fuss (ex Adolix Split & Merge, now 7-PDF Split & Merge from another entity ( http://www.7-pdf.de/.../pdf-split-and-merge ), which seems to be the successor of the former, which was halted, or is at least very, very similar to it): it merges up to 5 documents, which can have any number of pages each, and you can use the same tool, if really needed, twice or several times in a row.
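Just to underline how thin this core merge/split functionality is (and thus why 60$ for it feels out of range) - a minimal sketch using the pypdf library, which, to be clear, none of the products above uses, with invented file names:

```python
from pypdf import PdfReader, PdfWriter

def merge(paths: list[str], out_path: str) -> None:
    """Concatenate several PDFs into one."""
    writer = PdfWriter()
    for path in paths:
        for page in PdfReader(path).pages:
            writer.add_page(page)
    with open(out_path, "wb") as f:
        writer.write(f)

def split(path: str, prefix: str) -> None:
    """Write every page of a PDF to its own file."""
    for i, page in enumerate(PdfReader(path).pages, start=1):
        writer = PdfWriter()
        writer.add_page(page)
        with open(f"{prefix}_{i:03}.pdf", "wb") as f:
            writer.write(f)

# hypothetical usage:
# merge(["invoice1.pdf", "invoice2.pdf"], "merged.pdf")
# split("merged.pdf", "page")
```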

If you have 60$ to spend (the regular price of PDF Writer), you'll get quite advanced pdf editors (or two of the more basic ones), and I suppose you'll have all the functionality of PDF Writer included, with much more; again, this is not a review of PDF Writer but a consideration of prices for dedicated tools vs competing tools and vs more complete applications which include the tools' functionality.

From the above - again, I could be partly mistaken - it seems evident to me that an adequate price for PDF Writer would be 12, 15 or 20$, with 20$ certainly a "correct" one, but at 15 (14.95: sales doubled?) or especially 12$, PDF Writer would certainly become a so-called "no-brainer", with sales probably 3 or 4 times those at 20$. Also, the naming should be revised: "PDF Writer" doesn't hint at the, as said, highly useful split-and-merge functionality, and something like "PDF Tool" would be less limiting for the appreciation of potential buyers who don't know the tool yet.

As you can see from my 12$ suggestion, it's not my intention to denigrate this program (it's sold at 9$ on bitsdujour, of which just 4$ go to the developer), but it serves here as an example of pricing which, as far as I can see, is totally out of range considering the competition (from other dedicated tools, from more complete applications, and even from tools which are there "by chance", installed alongside third-party software), and which, ironically, could probably get a very high share of the special market for such tools if priced smartly (here: pdf printer / merger / splitter). In any case, I firmly believe that this tool, priced at a fifth of its current price, could generate revenue tenfold or more.

Btw, I've quite often seen applications, or tools in particular, named inadequately, with further development having pushed their functionality well beyond their original naming, which then seriously hampers their sales, since most prospects take the name's limitation for granted without looking into possible extra functionality.


Also, there is the notion of convenience: if some dedicated tool (here: pdf split/merge) is "cheap enough" (12$, and with full functionality, not as a "lite" version which forces you to think first whether you'll stay within its limits or not), it may become of interest even though you have the same functionality available within some bigger "package", for example because of very different loading times and/or because the functionality is readily and very easily available within the dedicated tool (immediate availability), while in your more complete application it's more or less "hidden" within menus/ribbons and/or introduces other complications which are not needed in most use cases. Of course, this aspect works best for tools which provide the functionality as quickly and easily as possible, and when it's functionality which is very often needed outside a more complete functional framework; both pdf merge and pdf split are core examples of functionality needed without anything else around it.

(The more complete Abbyy pdf editor has got a discrete pdf merger tool, so it's distinct from (but bought with) the main program anyway, but for the heck of me I never discovered a pdf split tool there, which I need even more often, so I always use the free combined 7-PDF tool, all the more so since I can never remember from one case of need to the next whether the Abbyy tool misses the combining or the splitting.) So my point here is: even if a tool is redundant, if it's "available enough", both by price and then later on by its way of executing frequent tasks, it's successfully marketable.


Another phenomenon is slicing functionality and/or scope down to an extreme and selling each function/scope as a different tool, in a wish to maximize revenue by inviting the same customer to buy 5 or 10 different tools at the same time; this outrageous behavior can be seen from some tool vendors in the database, Outlook (!) or file format/translation sidecar businesses. I suppose that most of them seriously damage their reputation and their possible revenue by this inappropriate attempt to squeeze the purse of their possible customers wherever they try this in fields where the same customers would indeed need (or would like to have) several of their very similar tools at the same time. It's different, of course, for tools where the same customer in most cases would only need one of the very specialized tools from that vendor, but that's very rare, while the policy of slicing tools up into unnaturally tiny functionality and/or scope fractions is quite frequent.

To stay with my pdf-split-merge example, it would be unsuccessful to try to sell TWO such tools, one for merging (12$) and one for splitting (12$): while it's true that you rarely use those functions together, you'll need both of them quite often, at least hypothetically, and even at 6$ apiece, prospects would get the impression that the developer is milking them, since the functionality, while distinct, is conceptually tight-knit; so any attempt to make prospects "choose one or the other or pay double" (even if those prices are very low and doubling them would remain perfectly acceptable) will end, for most prospects, in their not buying either.

81
General Software Discussion / On software pricing
« on: May 10, 2017, 01:09 PM »
tomos, interesting info! In fact, XY... isn't that bad for pic viewing lately (while xplorer2 is bad at it): it has the usual quick file view, but also, since some versions, the "Floating Preview" (best on big screens or a 2-screen setup, similar to FastStone ImageViewer); this being said, I'm not entirely sure I would need any file manager for viewing photos - I use FS ImageViewer for that - but you say it's not pictures in your case, so I cannot say whether (and why, perhaps) FS ImageViewer was less good at it in your case. Also, I understand that DO is said to be so versatile that the use of additional programs (even free ones like FS ImageViewer) is simply not needed. I own XY... (paid, lifetime) AND the FS thing (free), and in spite of knowing how good XY... is with pics, for them I only and always use FS, but cannot say why. I suppose I like to have dedicated programs for distinct uses, but there is no real reason for my choice.

Also, as I've said, in practice I only use FreeCommander as my file manager, while you use DO all day long; for pics I would need to start XY... first, so I can start FS instead, while your DO is already running. This is to say, it's all about convenience when there is NO quality difference, and I suppose there is no quality difference in pic rendering between FS, XY... and DO, but I could be mistaken about DO's capabilities: it is also said to be very good with file preview, and pics in particular; I just want to say that perhaps XY... is now as good as DO here, but for somebody accustomed to DO, that would of course not be a reason to switch horses.

So indeed, we've got another element of pricing, and there is no pun intended when I say that even with a lower price (XY... lifetime license vs regular update costs for DO, so the former is cheaper in the long run) at probably more or less identical software quality (which I just suggest as a possibility), inertia will work against switching - "inertia" being used as a strictly technical term here, without any judgment implied.

Fact is, the more sophisticated software is - and all those file managers, DO, XY..., xplorer2 are -, the lesser the chances for a competing product to replace software a user is quite accustomed to already, for the additional reason of their time / learning / knowledge / know-how investment then being invalidated.

I say "additional reason" because even if technically, the switch was quite easy and would not imply much loss of application-specific know-how, there is always the problem of the financial investment being invalidated by a switch, psychologically at least, but it's a very heavy psychological burden.

Numbers of pure invention: some file manager A (or DO) for 100$, plus 3 updates at 50$ each over the years, makes a total cost of 250$, which psychologically is "lost" when you switch instead of buying the next update, also at a cost of 50$. Switching would cost 100$ for a lifetime license of file manager B (or XY...), so after the next update you are technically even, and afterwards you are on the plus side; but the 250$ "lost" weighs enormously, so that few people would switch without a good, additional "reason". (Technically, with application A, 200$ of the 250$ are already "lost" in both cases, switch and non-switch: not switching brings an immediate "gain" of 50$ since you "just" pay the update (50$), not the full price for either A (100$) or B (100$).)

This teaches us that for competing software, there are two paths to success: trigger a switch, or trigger an additional buy - the user keeping (and hopefully using) both programs in parallel from now on - and that in both cases, just being cheaper, or even being cheap, is NOT a valid argument; only additional, very important "added value" is: (for a certain time frame) unrivaled, really useful functionality, which can be communicated as such or, better, as highly desirable.

Btw, every day, countless businesses go bust trying to "be cheaper" than the competition, so this is not specific to software, where, as said, the additional factor of the application-specific knowledge the user will have built up comes into play.

But there is the more general concept of convenience which is one of my next subjects.

Regarding Navicat Modeler, I just played around with it a little bit, but that gave me the ideas for something like the software I describe there and which could be applied to other things than databases, it's about a general, pulsing software concept. (Major "Pulse" domains are all taken, of course...)

(XY... and DO just serve as examples here, that's why I abbreviate them; my point is not a comparison of file managers, all the less so since I only know one of the two.)

82
General Software Discussion / Oh, it's 12 now?
« on: May 08, 2017, 04:15 PM »
Yes, my theory is that software tends to lose update sales when users lose functionality, either by their own "fault" (no pun intended) or by crazy decisions of the developer (I don't know InfoSelect, but have read some things about it); but then, I also said that fun, user experience*, is THE sales factor of them all when absolute necessity is not, and that visually, DO is quite pleasant, so I fully understand you! ;-)

*: In fact, I had managed to write so much about user experience without ever mentioning the term; I just missed the core expression of my subject, don't know why, but yesterday after posting, I noticed it and promised myself to finally say it out loud: user experience! user experience! (It's all about that indeed.)


As for the ribbon, just yesterday, when trying to install "FolderViewer" which had been gratis somewhere - didn't work for XP, though, in spite of the freebie site saying it would -, I discovered the culprit for the ribbon, or at least they pretend to be, eternal shame on them!

MatirSoft.com (homepage):

Do not feel overwhelmed by the amount of features. MatirSoft is the inventor of the Ribbon, first released in our program Winuscon® (©2001). FolderViewer is neatly organized using the familiar Tabs.

Btw, do you experience the roller coaster effect? In a line and a half they've built up a roller coaster all by itself - say it aloud, this bunch of 3 short sentences is pure cabaret, even by just reading them I cannot stop laughing.

Ribbon Robbers, them!

(Space, time, mobility of your right arm in the long run, if they weren't robbers, I wouldn't say so, but then, had they known it'd spread and literally take over in such a way, they'd have patented their invention, so they are perpetrators, and victims, too.)

Oh, and for convenience reasons, I'd like to add the "Navicat Warning" link, and I rename it to "Navicat Review" (click bait! Also, it hadn't been a real "warning", but I was upset by them automatically refusing my suggestions): https://www.donation....msg408488#msg408488

83
but left it out for 2 reasons: it may be taken for even more Apple hating, and I must say I have been quite impressed by the screen reactivity of the iPads, while I regularly have problems with the screen reactivity of other touchscreens (photocopiers and so on). I do not like touchscreens at all, but nowadays, for tiny devices, they have become unavoidable - I said it here, I'd much prefer that HP mini pc of some years ago, which now can only be had, used, for outrageous prices - but if touchscreen, then the variety on modern iPads (and, I suppose, iPhones) is very, very good (real typing is possible without frequent typos / the need to type a character twice).

When I played around with those Macs and the touch-bar, the main problem I immediately felt was my finger hiding the symbol/lettering, and thus the perceived need to "first read, then move the finger", while my impulse was not to do both at the same time (it was all new to me) but nevertheless to do it with some overlap: begin the finger movement shortly after beginning to read - and that clearly wasn't possible.

And I now remember a very strong point against the touch-bar which I had missed above:

In order to read the symbols/lettering, I had to bow my head; that's probably not an additional problem for people who type with 2 fingers; I type with 10 and stare at the screen. So a symbol/lettering list at the bottom of the screen would, for me, come with no or just very slight bowing of my head, while here, with the info at the height of the keys, I had to bow my head quite noticeably each time, which was very inconvenient, and time-consuming, too.

Also, the touch-bar isn't tilted in my direction (45, 30 or just 20 degrees) but totally flat, and that was very unpleasant for reading; for tapping, I would have preferred a tilt, too (less reaching out for the fingers, over the number keys in between).

Technically, it's possible to read from the screen and then from the touch-bar, but rolling my eyes down that far was just even more unpleasant, hence the bowing of my head.

Btw, it's interesting that Apple didn't implement the touch-bar additionally, for more rarely-used commands, above a line of traditional F-keys, since it's for more rarely-used (context-sensitive or not) commands that symbols/lettering are needed so much more. Of course, that would have put the touch-bar out of immediate finger reach for (frequent) text-expansion use, which obviously was the reason why they did away with the F-keys.

I'd prefer TWO rows of F-keys, but that's because my F-keys aren't context-sensitive in any application, and thus I'd need so many more of them. And it's probably also true that if applications had smartly devised context-sensitive F-keys, 12 of them would be amply enough most of the time.

Whatever - it's Apple again where the "research" into context-sensitivity is now being done, by trial and error of all the application developers trying to make their software "touch-bar ready", and again the Windows world is left behind, which is annoying, all the more so since it's a déjà vu we've had on multiple occasions.

It's them again who now get the most out of, fully develop and optimize a 30-or-more-year-old DOS invention, while, as described above, every Windows computer could do it as well as, and better than, MacBooks (since those come without F-keys now).

The touch-bar being flat, they can and probably will change that; also, it's in color - but modern screens are in color too, and both the Apple touch-bar and general/Windows on-screen symbols/lettering for the current F-key assignment could make big use of this: smart coloring of it all (which is different from coloring optimized for "prettiness" or something) could enormously help with scope/context and kind of function, and thus with immediate, intuitive recognition, speeding up F-key pressing without the need to read / consciously check.

84
Recently, I had the occasion to admire the new Apple touch-bar; new MacBook Pros had it and started at about 2,000€, while the 2016 models came without it (but with F-keys instead) and started around 300€ lower.

Apple has certainly had it patented, but they are re-inventing the wheel again (they did it with the iPad - there had been a mobile touch-screen device from Microsoft before, but it was too bad, too heavy and so on), and somewhere I indeed read "this software is touch-bar ready", while in fact there had been DOS programs with context-sensitive F-key assignments - in short, context-sensitive F-keys.

It's very difficult to find such context-sensitive F-keys in today's Windows software - I cannot think of a single example at this moment - and I think I've read somewhere some discussion of it getting in the way of the user, being unspecific, being error-prone and all that; I doubt this, but cannot speak from experience. It's very interesting that Apple now does exactly that thing, and I suppose that now that it comes from Apple, the old criticism will be very subdued, since openly hating it would be "Apple-hating" this time; as said, I'm in favor of it, I'm just hoping that it makes its way into Windows programs, too!

I say it's not different from the old thing, you will answer that's not true. So to start, here's a good introduction: http://appleinsider....h-id-for-macbook-pro

First, it replaces the F-keys, it doesn't come on top of them, but even if it did, it wouldn't make any difference. The current assignment of the (virtual) "key" (a tap on the touch-bar) is indicated by the changing lettering there, but this means - within the frame of the criticism that it's ambiguous and error-prone - that you first must read what's available, or at least check that the function you expect is indeed there, and only then can you move your finger there to activate it, since beforehand your finger would cover the lettering; this takes a moment of time.

The touch-bar isn't only for traditional functions but also for text expansion, which is probably a very good thing; since the suggestions are of different lengths, though, I suppose this means you cannot count on suggestion 1 being in a certain place on the touch-bar, suggestion 2 in a certain other, defined place, and so on, but that you first must read what is where and then tap there, so the moment of time referred to above, needed for reading before tapping, probably cannot be shortened or avoided.

How did those DOS programs convey the info? By using the bottom "line" of the screen to display 3x4 F-key symbols there, together with their current meaning; here's an example from Wikipedia: https://en.wikipedia...le:GW-BASIC_3.23.png - note that the symbols are of different lengths and thus could have contained text expansion suggestions, had that concept already been invented at the time. Here's another example, even more basic, where they do without even the symbols and just show a list: http://ece.wpi.edu/~...es/EE2801/Labs/tasm/ (both screenshots, found by scrolling down).

Note that those on-screen symbols/texts are readable from the moment the function is available (as is touch-bar lettering), and then right up to the moment you press the respective F-key, so there is available (but not forced upon you) a possible and welcome overlap between checking time ("yes, that's indeed the function I expect") and the time for moving your finger to the F-key in question; so at least for functions where you just check and don't really need to inform yourself anymore (learning phase), it's bound to be speedier than the touch-bar variant.

It's evident that in order to be speedy, the F-key labels must be grouped on the screen (3x4), which in my 2 DOS examples above they were not, but that was 35 years ago (and they were laid out not for 3x4 but for 10 F-keys in 2x5 rows to the left of the keyboard); also, it's understood that the touch-bar has quite a high resolution, and that your screen should also have quite a high resolution in order to brilliantly display 12 different texts in 3 groups on one single line. But whenever that condition is given, the F-key-plus-screen-display should be speedier than Apple's touch-bar, at the very least for often-used functions, since F-keys are always in the same position, while the relevant function on the touch-bar is not, necessarily, or at least the boundaries of the functions are not as distinct as with physical F-keys, so at least some visual check before moving your finger is needed for the touch-bar command, while for F-keys it is not.

So it seems that the touch-bar is just another eye-catcher - yes, it's cute when you look at it in the store - but its full functionality should be replicated, both on the Mac and on Windows, by physical F-keys plus visual indicators on the screen; 3x4 groups give an immediate indication of which F-key to press, even without looking for their respective number, "counting" them or otherwise. Also, I doubt very much that the touch-bar of a tiny-and-cute MacBook Pro will present more than 12 different functions at the same time; if it really does, this will sharply raise the time for reading/identifying the correct function, so that could not be regarded as an advantage at all - the same is true for big screens, where readability is much better but "findability" will not rise accordingly.

As for the old criticism that it's not explicit: first, it's now Apple which re-introduces the system, so it's above "hating" and has to be accepted like anything else that Apple pushes into the market. Second, bear in mind that it spares you, to the extent this system is applied, both having to remember weird key combinations and then having to press them (hoping you press the right one). Third, bear in mind that you always have the "help file" before your eyes, and that even if you lose time by needing to read the lettering, you'll quickly find the correct command, while with the alternative of dozens of multi-key combinations (Shift-Alt-Something and all that) you do not have the help on-screen but have to look up the right key combination elsewhere, in some file or brochure.

Fourth, bear in mind that it's perfectly possible to allocate standard functions (F3 = search again) to their standard keys (F3 here), and that there will not necessarily be a mix-up of it all; this will depend on the courtesy of the developers, and in order to have users accept their software, they will have a big interest in observing standards, just as they now have in observing menu or ribbon standards; we all tend to discard software, wherever possible, when it doesn't observe standards. Also, it's possible, for example, to reserve some 4 keys, F1-F4, for functions which are available from everywhere, while only F5-F12 are context-sensitive.
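As a minimal sketch of that split - invented contexts and labels, not any real application's command set - the "context-sensitivity" amounts to little more than a layered lookup table plus a bottom screen line built from it:

```python
# Global assignments: always valid, as suggested above for F1-F4 (invented examples).
GLOBAL_KEYS = {"F1": "Help", "F2": "Rename", "F3": "Search again", "F4": "Close"}

# Context-sensitive assignments for the remaining keys, per application state (also invented).
CONTEXT_KEYS = {
    "file list":   {"F5": "Copy", "F6": "Move", "F7": "New folder", "F8": "Delete"},
    "text editor": {"F5": "Bold", "F6": "Italic", "F7": "Spell check", "F8": "Word count"},
}

def fkey_labels(context: str) -> dict[str, str]:
    """Merge global and context-specific assignments for the current state."""
    labels = dict(GLOBAL_KEYS)
    labels.update(CONTEXT_KEYS.get(context, {}))
    return labels

def render_bottom_line(context: str) -> str:
    """Build the DOS-style one-line display of the current F-key meanings."""
    ordered = sorted(fkey_labels(context).items(), key=lambda kv: int(kv[0][1:]))
    return "  ".join(f"{k}:{v}" for k, v in ordered)

print(render_bottom_line("file list"))
print(render_bottom_line("text editor"))
```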

Whatever you think of my endorsement of that Apple re-invention, it's obvious that its functionally better variant, F-keys plus 3x4-groups in bottom screen "line", should be made available in general, for Mac*, Windows, Linux.

*: The irony is, Mac developers who sell their software as "touch-bar ready" will probably not adapt it to F-keys, since that would require some hours of work, and "modern" Macs, as said, don't have F-keys anymore (just as, they told me, Macs do without any mouse buttons except one) - but that's no reason for not making the context-sensitivity paradigm available again for the pc and elsewhere, where Apple cannot discard the F-keys. Since it has always been there, even dormant, I doubt Apple got the whole concept patented (perhaps for text expansion? but even that should be available on F-keys, Apple re-inventions notwithstanding).


EDIT:

And bear in mind that the traditional key combinations (Alt-F4 for example) would remain available, and, depending on the agenda of the developer in question, even ALL possible key combinations could remain available, even re-assignable by the user, as we know it from many a program today, as alternatives, so a given function would be reachable by some key combination OR by some context-sensitive F-key, at your choice.

You would, in theory only, "lose", in my concept above, 8 F-keys out of 12, BUT which are the functions you currently really need to be available from anywhere in a given application? In reality, those F-keys are dormant most of the time, while you effectively need other commands for which you have to remember weird key combinations; so in practice, do you really need F11 for "maximize" all the time, or could it be Control-F11 instead, from now on, with F11 (like F5 and the following ones) being readily available according to context?

Also, what is "context"? This concept of context could be quite broad, for some keys (F5...F8), and quite narrow for the rest (F9-F12), which means that some keys would be available, for the SAME function, in EVERY situation where their function would be needed, so you would not need to muse, is it the right context here or not, or check visually, but you just press the key, you're certain that it'll work the intended way. While the "upper" F-keys are very specific, and thus have their specific meaning in very specific contexts, so for them, you may check indeed quite often if you don't use them, in that specific context, all the time. Now compare with rarely-used commands with some control-alt-something (which you won't remember from now for the next occasion 6 weeks later), and you see that the context-sensitivity paradigm is superior both for often-used functions and for rarely-used ones.

85
General Software Discussion / Superlauncher
« on: May 08, 2017, 11:30 AM »
Just downloaded and installed Superlauncher, regular 30$, free on giveawayoftheday TODAY ONLY (up to 10 o'clock tomorrow morning, seen from Old Europe).

No help file; preconfigured shortcuts to some system folders and so on; possible to enter files or folders manually (only?) into the list; the list is available from an icon in the notification area (only?); I didn't find a hotkey to launch the launcher, but there very probably is one.

It does not seem to be possible to have it as a vertical ribbon (?). Does anyone use it? Judging from its price, it should be as good as some described in this thread, but it seems to be far behind; then again, I could be mistaken - as said, I did not see any help file for it. Downloading/installing today probably isn't such a bad idea, in case someone comes up with more specific information even later on (installation is needed within the download timeframe, but I suppose you know that).

86
General Software Discussion / tomos,
« on: May 07, 2017, 03:17 PM »
what you say about Softmaker Office reminds me of an aspect I missed, Underdog Assistance. I've written for hours and I'm tired, but this would be worth developing: it's not about being cheaper, being simpler or other "justifications", it's about identification with other people who are NOT in power, while the likes of Microsoft and Adobe clearly are. And it's fascinating because individual developers can try to trigger that Assistance in many a potential buyer, but most of the time they commit mistakes which break the spell.

It would be interesting to hear your "reasons" at the time which made you choose Softmaker Office over MS: Was it early enough that cheap MS Office licenses weren't available yet? Even at similar prices - around 100$ vs around 100$ for an MS "Home & Student" license - it's evident to many people that their money goes to the Underdog, not to the richest man in the world, even when they get much less functionality for the price - "I don't need the overkill", they silence their doubts in those cases: in fact, for political reasons, they then go against their own best interests (old laptops excluded here). No pun intended, it's just human and rather "friendly buying", yet as you tell us, your loyalty was unidirectional: they don't return the attachment of their customers, but just take advantage of it. (This aspect is one of the points I referred to by saying they break the spell by making mistakes; just having found your niche isn't enough, for a developer - they need to secure it, too.)

And what you have experienced with DO (which stands in for other programs here, of course) is another aspect of what I said about not having the time/leisure to dig deep enough (again, in your case); when I had that other, Win10 pc for some days, every use of (my) standard software was horrible: I had done the installations, but without all the thousands (!) of little tweaks - beforehand, I had even taken note of many of them, in order to replicate them, but it just wasn't the real thing, and it would have taken months, and hundreds of hours, to even get near there.

(Besides, I probably over-simplified my description in the other thread; some months ago, I had unsuccessfully tried to buy a new HP workstation with Win10 and a good XEON processor in the range of the i7, but they were already sold out at the very good price (1100€ but without graphics card yet), so now I had bought a used HP workstation, with a processor which is said to be similar to an i5 (and with that 4 GB graphics card, 400€, also with 16 GB server RAM), and that really was a big disappointment, independently of its instability / alleged motherboard fault.)

So going back to my old XP pc has been literally a relief for me, and I would pay a good price for a really good tool which was able to transfer my programs, with settings, to a modern pc, but from what I read about such programs, even from 7 to 7, 7 to 8, or 10 to 10, they are not strong, so taking it all from XP to 10 and hoping to get much help from such a program, would be delusional.

But what you report discloses another very important factor in selling, and in pricing, too: It's quite easy for a developer to guarantee the transfer of all (!) the settings from one pc to another, but almost nobody ever even touches the subject, and it's precisely with software that can be deeply personalized, and which gains almost all its power from such extensive personalization, that such transfer help is of paramount importance; from one software version to the next, this shouldn't even be a subject worth mentioning. But you updated to 11 anyway, in spite of those goodies no longer being available to you? That's what I'd call a very loyal customer!

But then, if you really did, I admit that filing with DO is more fun than with, say, xplorer2, that's for sure, graphically and functionally; then again, the latter's developer is not that open to his customers' suggestions either, he's a very (brilliant) technical guy (see his new search tool, specialized in metadata: wow!).


But now I finally must check the French election results; I don't have the slightest idea yet (about the percentages, that is; I'd guess around 63:37 or even further apart (advantage: High Finance), so I'm not intrigued by the possible outcome - I would have checked already if I had had any doubt); software design is just too fascinating a subject for me, you could even speak of addiction.

EDIT:

Ok, ok, it's 65:35, and nobody here will believe me I didn't know 1 minute ago. That's life. ;-)

87
General Software Discussion / On software pricing (3)
« on: May 07, 2017, 02:24 PM »
timns' mention of clothing price (citation on top of this thread) has been a real eye-opener for me - as said above, similar clothing of similar quality: similar prices: you get what you pay for, more or less; there is a common perception of how much a certain clothing in a certain quality will/should cost -, and thinking about it:

In software, too, there is such a price expectation, but it's entirely dependent on other software prices, even on prices of software which is in a completely different range: It's almost as if clothing prices were heavily influenced by car prices, or vice versa.

This phenomenon has quite broad consequences: The availability of some "super software" for almost nothing devalues other software, even other kinds of software, and invalidates their price demands, except for cases in which some software is needed to earn money AND where all the programs in/for that special business (need) are priced more or less evenly.

From a productivity-worth point of view, prices would have been very different. In the beginnings, for example for a translator, there were typewriters / electric typewriters (several hundred $$), then electronic typewriters (over 1,000 to several thousand $$) - no real gain in productivity so far -, then electronic typewriters with disk storage (and some sort of text display, sometimes just several 80 character lines): These were up to 10,000 $$, and more if they were a little bit sophisticated (for example Wang).

They came with big gains in productivity for translators (or for office personnel having to type very long and quite plastic documents, but also, in another way, for short business letter writing, thanks to "phrases", standard bits of text ready for multi-usage). For translators, though, the gains in productivity were much less than the additional costs (about 8,000 $$ more than their old equipment, which by the purchase of the modern hardware was rendered worthless), BUT they tried to buy it anyway since their files on disk were worth much more to their customers than were the hundreds of printed-out pages of their competitors.

And speaking of printed-out pages, printing them out correctly was very expensive since you had to use highly-expensive, 1-use-only carbon tapes in order to make them readable for the (15,000 $$ and more) reading machines in the publishing houses and other corporate customers. (Those reading machines, even if they were already present there, did not read printed-out texts without inserting new, additional typos, which made files-on-disk even more valuable.)

In other words, there was a time when the cost of equipment was either directly related to the productivity gains coming from it, or when, for most would-be buyers of such equipment, the costs even surpassed the possible gains. (For most translators of the time, it should have been less expensive to correct, for free, the new print-outs produced from the files the reading machines had generated by reading their print-outs, than to buy a Wang - which was new enough not to be available used at a lower cost.)

Then came the introduction of the pc, and shortly afterwards or at the same time, of the earliest Apple machines, and it was the first time that hardware and software were decoupled; the second time, many years later, this occurred with smartphones, which today do all sorts of things for which, some time ago, you would have needed dedicated devices, one by one.

The same phenomenon I describe here for text occurred for crunching numbers, and even, with some delay, for vector and other graphics; the latter is what gave Apple its chance beside the pc - graphics on pc's came only much, much later. Databases ditto: Rolodex and other card systems were the norm for anybody who could not afford something like a mainframe.

In these early years, standard software (text, spreadsheet, database, graphics) was about 1,000 $ each, and it was evident that for anybody making even a little bit of money with any of these software kinds, the price of software, even though it was 1,000 apiece, was extremely acceptable, and affordable, considering their gains, once you had bought the necessary pc or Mac, and, since hardware development was incredibly fast, you were able to buy pc's or Macs used, for much less, very soon, so the necessary upfront cost for then having access to a Wang-in-pc and other production machines (spreadsheets and so on) was not necessarily 5,000 $ or more but just the half of that.

You could call this the democratization of productivity.

(Of course Wang went bust.)

From then on, software prices went down too, by means of individual developers also doing some development, from the availability of "used" software (when people switched from Word to WordPerfect or the other way round, for example), and from software "sales" by producers which went to the wall, or by products which were no longer developed but were sold at prices making them of interest for people not wanting (and not needing) to pay 1,000 $ for each program. It's of interest in this respect that Microsoft (but not WordPerfect) succeeded in establishing their text format as a standard, so that anybody who had to sell their texts had no choice but to buy Word, and in the worst of cases even had to switch to Word from WordPerfect (total investment just for text files: 2,000 $ instead of the 1,000 they had thought would be sufficient). Also of interest here: the inability of WordStar to establish their format as the standard, even though they were first, and much cheaper at that. Perhaps it's because they didn't get formatting until years later, long after both Word and WordPerfect had introduced it.

Whatever, Word and WordPerfect both succeeded in upholding their very comparable prices for some years (in the beginnings, Word's victory was not yet foreseen, so most text buyers had both programs available, leaving the choice of format to their text suppliers, but WordStar, even though cheap in comparison, was never a contender), while, as said, those prices were incredibly cheap, compared with Wang and similar systems just some years ago.

And now you get almost the whole bunch from Microsoft for almost nothing, and almost the whole market is deeply affected by this, except for very special software where (or more precisely: as far as - see Excel) Microsoft software cannot help you: business software (besides text/mail/Excel), scientific software (beyond and besides Excel number crunching), special software for special professions (the translator example: now that they know it's possible, they (and their paying customers) have discovered urgent needs far beyond text processing), and so on.

It's fascinating that Adobe is the exception: They, and they only, have succeeded in preserving a quite "general-public" market with high prices, and those prices are even much higher than in the past. This is due to 2 factors: For one, graphics software obviously asks for extreme coding (most graphics software from other publishers is not as sophisticated, obviously because they don't have the necessary development money), and for two, they did it all right, not having an operating system to rely on too heavily and to blur their minds, as Microsoft did; both tried to extinguish the competition, and both succeeded in that. It has to be said that Microsoft's user and usage scope was much larger (text, even spreadsheets), so that in order to deliver to the general public, they had to deliver products which were then also available, at general-public prices*, to corporate customers, while for many years, Adobe (and their competitors) provided software which wasn't yet asked for by large parts of the general public.

*: It's ironic that MS, in all those years and even now, made and makes their software available to corporations at much LOWER prices than to the general public; it's just that now those corporate licenses, too, by case law, have become available to the general public as well. But the rule of price is: most of Microsoft's corporate software is also very useful for the layman, or more precisely, it's the other way round: most layman's software would totally suffice for corporate needs, or in yet other wording: most of Microsoft's software is just not sophisticated enough to be sold at much higher prices, since then cheap competing software would take over those markets.

This is different for Microsoft Server, Exchange Server, SharePoint: have a look at their prices over there... So the problem obviously is: Whenever there is software which, for the general public and for corporate needs, is almost identical, prices cannot be upheld - even though Adobe partially succeeds in it; it's evident they will have lost lots of their former non-corporate customers through their price strategy, and it will be of the utmost interest to watch what happens when Microsoft buys some Adobe competitors and then injects the necessary money into them. (Ok, for vector graphics, no serious contender is left, but for the rest of Adobe's range of products, that would work quite brilliantly.)

Now compare the whole Microsoft package for less than 10$ with a Wang machine for 10- or 20,000 $ (since their "cheap" machines didn't dig that deep into text processing), dollars of then!

And then try to sell some quite simple piece of software for 100 bucks. Even when we all agree that you get some 20,000 $ worth of development costs for your purchase price of 100$ - and some 20,000$ in 1981 dollars at that: If you don't need it desperately, and if it doesn't provide fun* for the money, you compare it with the 10$ MS Office on your machine, and we know the result.

*: That's why the Chinese do so much media software, and why the most brilliant developers today often code games instead.

If it doesn't produce direct money, and if it doesn't give pleasure, and isn't even outrageously cheap, shelve it; freeware and almost-freeware from Microsoft provides for all they will need: whether it embodies 20,000 or 200,000 $ of development costs, it isn't worth it to you because you don't need it and think you know what nice-to-haves should cost. You think otherwise on the subject of "a decent pair of boots, say. Or a warm jacket.": Even if you don't need them, since you've got more than you can wear out between now and the time of your death, they have agreed-upon standard prices (as software had in 1985), and you either need them indeed, or they are attractive to your eyes - you WANT them, for color, fabric, for feeling good.

While most software nowadays, let alone for the Windows operating system, isn't attractive at all.

You thought TheBrain was functional? Come on! They sell on looks-n-fun. (But since it's not so functional in the end, they don't sell that much of it after all.)

Some days ago, I was in some store where they also sell Apple. Impossible to have another look at the iPads (yes, another look, so it wasn't important: yes, that stuff IS sexy, so I stroll by whenever I'm there for other reasons) - iPads, I say, not iPhones: 4 girls between 11 and 13, and an around-12-year-old boy were having the fun of their life (or day), bent over the pads: from just looking and touching, they were in heaven, so I didn't want to disturb them. Probably about 80 p.c. or more of the smartphone and related businesses are driven by that fun factor, not by business needs, and that's not even counting the non-availability of batteries after some years.

It's by need (and that's a function of the competition, too!), or by real, sheer fun (include dreams, prestige and so on here) that you sell; anything else is always "too expensive" - that's why people complain about prices which, from a matter-of-fact point of view, may even be a steal.

If you want to get an idea of how to inject fun even into software-you-need, re-read my thread "Navicat Warning", more thoroughly this time; also, some Mac software is fun to work with. Most mindmappers (incl. the Buzan thing) don't understand that either: What fun could be derived from software which is so much less fun, and so much more cumbersome, than handicraft?

88
General Software Discussion / On software pricing (2)
« on: May 07, 2017, 12:13 PM »
Talking of roller coasters: Bitcoin as a perfect example. In fact, nobody knows what the real value of Bitcoin is, except probably its creators. And I don't have any idea, but would be thankful for suggestions, why some software developers tout Bitcoin, by lowering their price for Bitcoin payment. Or do they, the touting I mean?

Could be that they muse that Bitcoin users are juvenile types (no pun intended) who otherwise would not buy at all? Or even another, quite exotic effect: They know that sympathy sells, too. So they think that 99 p.c. (be it 95, whatever) of possible buyers will NOT have access to Bitcoin (it doesn't make sense to go into it for a payment of some dollars where you'll spare some dime) BUT will say to themselves, Oh, that's a fine fellow, he even gives out discounts, so IF I had Bitcoin, I could spare a dime. That's really friendly, so I'll buy from him anyway, all the more so since what I lose by not getting the discount isn't that much!

We're speaking of a buyer here who without the discount (which (s)he doesn't even get) would not even have considered buying.

I call this the virtual-discount purchase trigger. (Which can of course come along in other forms, too, but that's another problem: student discounts ain't that attractive to non-students, for example...) - Remember, you just have to give it out to perhaps 3 p.c. of people who would have bought anyway, but it'll bring you possibly 10, 20 or 30 p.c. more customers, all paying full price, who would not have bought without it being virtually there.

But even that can be wrongly applied. When DVD Anywhere changed hands, I decided I wanted to buy at last, but the only ways of payment were Visa or Bitcoin. I only have some MasterCards, no Visa, no Bitcoin, so I never bought, since digging into Bitcoin (or applying for an additional credit card) wasn't worth it for me; and since then, renting films has become free in the public library, so DVD Anywhere won't get my money anytime soon even if they accept MasterCard.

In this special case of media piracy tools, Bitcoin acceptance obviously was offered out of necessity, not many banks wanting to be involved in such business, and so even they offered a discount for paying with Bitcoin, probably in order to induce non-Visa-card-holders to get into the Bitcoin system. (EDIT: In order not to lose the sale, I mean.)

But, frankly, for all that fuss the discount wasn't sharp enough, so I happily do without DVD Anywhere (or was it AnyDVD rather? Anyway, the Elby successors.).


AND OF COURSE, in https://www.donation...ex.php?topic=43777.0 (my second post there) I had described another price problem, which is more a problem for would-have-been buyers than for the seller, except when it gets too generally known: price explosion, but in another variant: not to an unjustified price, but starting from an exceptionally low price level, or simply from zero - longtime freeware getting normally priced at some point in time: this latter phenomenon certainly is a problem for the developer; the better-known the freeware was, the fewer users will buy now.

Let's hope for freeware developers that at some point, the operating systems will make further use of the freeware impossible, AND that they understand that their now paid version must offer BIG advantages over existing functionality. Gratitude ("Now I finally can pay after all!") doesn't work here.

But as said, nothing's worse than missed lifetime updates periods...


I FORGOT:

Some days ago, some mindmap program was offered half-price on bitsdujour, the one that has been endorsed (probably for free, for being the best, or then for something else?) by Tony Buzan, the inventor of the mindmap term.

Mindmaps come from a time when the typical computer was rather a mainframe than a pc, so they were drawn by hand, and indeed, the software that Buzan endorses comes with very handcrafted-looking shapes, which must have pleased him. It's in its version 20 or so now, and comes with lots of graphical bells and whistles, but functionality-wise, it does not seem to be that extraordinary; reviews are so-so. There's a lot of competition in that field in any case.

Bitsdujour customers are called "folks", and those "folks" were warned, even days before the big sale, that only 100 licenses were available - it now occurs to me I should have posted here at that very moment, to share my laugh, and it came as I expected: after day 1, the "folks" were informed that the big day had been extended to 48 hours! And of course, after almost the full 48 hours, Buzan's adopted child was still available for purchase.

At a price of some 117.50$ instead of 235, if I remember well - but where was the catch? For 117.50 plus VAT, it would have been something nice to have, its optical bells and whistles screaming "buy us!".

From the reviews it was evident that the additional functionality (for example Gantt charts) was in quite an early stage of development and needed quite a lot of polishing; on the other hand, "folks" who tried to "help" with their business told the other "folks" that paid updates were very regular, every year, minus 50 p.c. at the time of the update, less so afterwards. So anyone who had bought this as a nice-to-have would have faced life with a program of which major parts were not ready for prime time, AND if they wanted the rather minor increments (as said, the current version was 19 or something around that, so you could have expected sheer brilliance by now, after 18 tries; if it obviously wasn't that, further development was to be expected at the same rather languid pace), they would have had to pay 117.50 or more every year from now on.

That's a big, continuous disappointment behind a big announcement, I'd say, and so it was evident that most nice-to-have users would refrain from such a purchase, while on the other hand, professional users will probably buy something more professional, less cumbersome, less "handcrafted" (see the reviews).

I'm not speaking of "greediness" or anything - 117.50 per year and per seat is nothing to whine about for a productive program in a professional environment - but annual paid updates which, even after 19 years, still don't bring perfect results, in graphics and in functionality, in a market of quite capable contenders: that's simply not good enough as I see it.

And the never-ending succession of updates, of which some should perhaps have been free minor ones, does not produce a "worthy" product which, for the casual user, will then suffice for some years; it's not rounded out enough but screaming for the next update, and the next, and the next, and they're all paid ones, at the (here) original price:

That offer was psychologically very near a subscription, and for the casual user, subscriptions fall flat. (And not even the scarcity incentive here could overcome that.)

89
General Software Discussion / On software pricing
« on: May 06, 2017, 04:36 PM »
(EDIT May 8, 2017: Repair of the mix-up: Put #1 in frame 1 and #2 in frame 2, which should also change the title of this thread.)

Starting point of this thread has been timns'

Personally I have no problem paying a reasonable chunk of change for something that I am going to get a lot of use out of. The example of DOpus is a really good one. God knows how much I have used that software, and how many hours it has saved me both at work and at home. So is 80+ bucks a lot for that?

I think: no. I wouldn't blink at paying that for a decent pair of boots, say. Or a warm jacket. Or even a decent bottle of plonk. Hell, how about filling the car with petrol?

So I still struggle to understand why there's a very strange attitude to paying for software compared to just about anything else.


and several answers over there: superboyac's software pricing table, nudone's answer and wraith888's own pricing table underneath it: https://www.donation...ex.php?topic=25742.0 (all on the first page there). So:


It is certainly correct that any price has to be viewed with respect to the usefulness of the software, which can be divided into several aspects: frequency of use (a quite cheap but not free tool used daily: no brainer), intensity of use (a quite expensive file manager but which you use almost constantly*), returns on the use (either in terms of direct money* or in terms of productivity, in comparison with similar tools*), likeability (you just like the looks of some software while similar software may be cheaper... but the more expensive one may even produce better results, since you really like working in it and thus work better in it, or you just have more fun with it, which is also a kind of return even if it's not a financial one)...

*: Directory Opus has been mentioned in this respect and stands for other, business software in the following respects:

Perhaps it's quite time-consuming to set it all up in order to really take advantage of its special features, and if people don't have/take that time, the price premium may not be worthwhile; this can of course be worsened by regular update prices, or update price premiums (compared with similar software), when the user does not take/find the time/quietness to dig into these optimization matters. The above discussion on DO shows this: Perhaps there are possibilities, but they are not made evident or readily available, and so for those users in question, they are NOT available for the time being, even if they are there. From the developer's point of view, it's obvious that they should try to clearly communicate whatever is possible, and how it is possible, and it also should be made possible in easy, clearly defined steps. The DO help file is not bad at all, but they don't integrate solutions for special wishes into it, so the user has to browse the forum, and more so, has to put together the necessary steps in order to get there, which many users will not have the time to do - ironically, the very users for whom the price is no consideration at all. Simply put: If you just use it as the Windows file explorer but with two panes, a lifetime license of a competitor, at a price less than the initial price of the allegedly superior contender, will do, and will do indeed for most people.

As implied in the paragraph before, an important factor is the existence or non-existence of competition in the field, and here again - but this does not apply to DO and its competition - the ready availability of extra features in that competition. I say this does not apply to file managers since the much cheaper file managers do not make their extras readily available either, so the problem of using them just as an explorer replacement with 2 panes is present there, too, and so it often comes down to some sort of feature-list race, most of these features never being used. It's ironic and very understandable that those who dig deeper then quickly form sort of a select community for and in which the price is perfectly justified, and tout it to people who don't have the leisure to get into it and for whom the price, in view of the cheaper competition, is not justified. Here again, it's about making the extras readily available, with and without the help - I mean via the help file and ideally even without needing it.

For direct monetary surplus returns of some software over its competition, file managers may not be a really good example, but I'm sure much business software can trigger such a direct surplus return over its competition, and then it's up to the developers or resellers to prove this / make it understandable and plausible upfront in order to justify the price premium. End of *.

In general, and except for software which immediately helps with producing monetary returns, the public's appreciation of software prices has changed greatly over the years: with Microsoft products available to mass markets at incredibly low prices, the public does not see the individual programming effort, but only the price in relation to the package, and this has never been as evident as nowadays, since Microsoft Office 2016 is available for UNDER 10$/euro (incl. even VAT) everywhere - a totally incredible package for the price - which thereby not only affects competing products (text, spreadsheet, mail and so on) but also software in other fields, which often appears "poor", too basic in comparison, and spending some 100$ plus 25$ VAT for some simple software becomes indecent (EDIT: becomes "indecent": I wanted to express that in direct comparison, many users will have the feeling that it's "indecent"; the mention of clothes above is of interest here since at similar prices you get similar quality most of the time, while this rule, in software, has indeed been broken by Microsoft; not by Adobe, though, since they charge indecent prices month after month for quality which is not superior in every aspect...), running on a pc on which a complete Office 2016 runs for almost nothing. (On the Apple side (Mac, but not AppStore), this has been a little bit different, so developers like to program for the Mac, and that often shows in better quality of the software.)

I'd like to give a recent example of a price roller coaster. IdeaRover is some academic text program possibly including citation management, but I don't know to what degree (maintenance, formatting...). It does not seem to be an "outliner" integrating resources and texts-to-write-from-those but manages with two different lists for the former and for the latter. (I had wanted to try it but it doesn't work on XP anymore.)

That program was 89$. Then it was 249$, but not for long. Now it's 99$. Which reminds me of an additional criterion which is the presence not only of cheaper competitors, but of free ones, most universities having one such academic writing software as a campus license.

Which reminds me of file managers where I use FreeCommander almost exclusively, holding some paid licenses for competitors, too, but those tools are totally dormant on my pc, so imagine they changed their price policy to a subscription model, ha!

ANOTHER EDIT: Sometimes, the value of the higher-priced software is invalidated by missing or very poorly executed basic functionality. For example, xplorer2 may be superior to FreeCommander, but has not got any "favorites" management worth mentioning; even the Windows Explorer has got a much better one. The one in FreeCommander is not ideal either, but it's functional, as are for example the file rename possibilities, regular and RegEx, too, very intuitive and of big everyday use. So if I had properly appreciated/known these differences, I would never even have bought xplorer2. This is not to criticise xplorer2, which must have many qualities I don't even know, but it's another example of how, when buying software at a certain price, you (sometimes wrongly) expect all the basics to be there, and well implemented, when in fact it's perfectly possible they are (almost) missing, and then in your daily use, you replace paid software with freeware. Such experiences then also have the effect that when in doubt, you don't buy (except perhaps after an extensive trial, which may never take place, so the purchase will not take place either: in this respect, DO's trial of 60 days instead of the usual 30 is a smart move, since it gives you time to discover qualities you may have overlooked in a hurried trial), whereas before, you bought and weren't even in doubt about possibly missing qualities.

90
Now here: https://www.donation...ex.php?topic=43805.0 (second post there)

91
General Software Discussion / Re: SQLite, SQL...
« on: May 01, 2017, 12:44 PM »
Here: The "like" query for strings as a real alternative for not-too-extensive databases

classic sql selection:
select * from tablename where "x" = 'y' -- display all records where column x has got the value y

"like" syntax:
select * from tablename where "x" like '%y%' -- ditto for the string y anywhere within the value in column x

It's known that simple databases - we're not speaking of full-text search here, which is available in SQLite - are not perfectly suited for text search; the "like" clause is said to be very slow. This certainly is true for big databases, but for thousands of records on an old XP pc, even "like" will bring the results in less than a second, so it's perfectly adequate.

To give an example, my list of CDs:

ID (SQLite ID number)

Code/Category

Range (bear in mind CDs are physical stuff, so they only have one place if not copied - no physical CD in more than one physical range - so the "where to find" info is not redundant even though, most of the time (but only most of the time), Category and Range are identical; in pop, categories are by countries and then, where applicable, decades; in jazz, Category is the instrument of the main artist),

Artist (can be several, for jazz in particular; it's just my classical music CDs that I have got in a different list, the main difference being the fact that the composer column is the main field over there, where pop, jazz, and so on are sorted by artist)

Year (not of a single CD but year of birth of the artist or year of creation of the group)

Titles (which means different CDs, not titles on a given CD, that info being on the sleeve; it's just in certain cases that I put exceptional titles in parentheses here, in order to know where to look for them; also the year of the CD is in parentheses if I took the time to look it up from the sleeve or from elsewhere)

From the above it becomes evident that my data isn't in perfect database format since as said, some records can have several artists, and it's similar for the titles: I put some solo artists together with their respective (main) groups, and then, in the titles, I put some additional info; it's evident I need (and have) links.

Instead of lengthy explanations, I just put an example here, with the query and its immediate result; from this it becomes clear that a simple (and not too big) SQLite database, with systematic "like" instead of "=" query, is also suitable for text data which is in a less-than-perfect form, and where to bring it into such perfect form would not make too much sense, because of too much work, for not really much more effect in practice (left out fields without interest here):

select * from cds where "artists" like '%stefani%' or "titles" like '%stefani%'

1)
[Artist:] No Doubt
[Category:] A8 [this means Pop, USA, Eighties]
[Year:] 87
[Titles:] (Gwen Stefani 1968, Eric Stefani) a) Rock Steady, b) Gwen Stefani: The Sweet Escape (2006), c) Gwen Stefani: Love. Angel. Music. Baby. (2004)      

2)
[Artist:] (Gwen Stefani see No Doubt )
0 [default when not applicable]
0 [default when not applicable]
[Titles: empty]

From the above, it becomes evident that it would be very welcome to have some input device (macro tool) which would allow for typing the search string just once and then insert it into both query statements. The same macro would also apply to queries for titles, or, for speed reasons, you would apply a second macro, with just one query statement (the string being searched in titles only).

A macro may be easy on the pc; it should also be available for Android and/or iOS/iPad, I hope. Here again, if you make the slightest typo, most of the time you will just not see the wanted results, without any error dialog - for example if you write "artist" instead of "artists" - so doing this with some macro tool is not only a question of convenience, but of avoiding typos which may leave out wanted records.
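For illustration only, here is a minimal sketch of such a "macro" in Python with the built-in sqlite3 module, using the table and column names from the example above; the database file name "cds.sqlite" is my assumption, adapt as needed. The point is that the search term is typed once and bound to both LIKE clauses, so neither it nor the column names can be mistyped a second time:

import sqlite3

def find_artist(term, db_path="cds.sqlite"):
    # wrap the search term in wildcards once, then bind it to both columns
    pattern = "%" + term + "%"
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            'select * from cds where "artists" like ? or "titles" like ?',
            (pattern, pattern),
        ).fetchall()
    finally:
        con.close()
    return rows

for row in find_artist("stefani"):
    print(row)

A nice side effect: SQLite's LIKE is case-insensitive for plain ASCII, so "stefani" also finds "Stefani", which matches the behaviour described above.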

Also, if I put my links to artists from the "artists" column into the "titles" column (which should be quite easy), a simple query would be sufficient in these searches for an artist also belonging to some group, but it would not cover cases where, as said mainly in jazz, there are main, and "secondary" artists, the "secondary" being just the ones by which the CD in question is not filed. As you can imagine, in jazz, it's not so neat at all, but in my text files, every important artist is either named in the artist or in the titles field, so that with a query over both columns, they will be retrieved reliably.

In other words, in many practical private or office use cases with not-too-extensive data collections, "like" is what makes it feasible to use a light database instead of a text file: on a modern pc, some 20- or 30,000 records should deliver almost instantaneous results, and that without enormous adaptation work beforehand, which often could not be justified in terms of time investment.

P.S.

Tuxman: Any statement is an implicit question, too, for people who'd be able to add significant info/advice.
widgewunner: Thank you!

EDIT:
Just imagine music group names with or without the leading "The": Who; The Who; Who, The? If you wanted to make it all neat instead, that and similar cases would be additional problems; ditto for names in an office environment (where, however, most of the time the databases are very big and thus name search should be made possible by "="):
- prefix before first name
- first name
- prefix before last name
- last name
- suffix after last name
Not to speak of the multiple variants, according to country, and then even according to the existence/absence/combination of these elements, and of how these elements are then to be combined differently (!) in the address field and in the salutation.


Edit June 1, 2017
Writing now with some more experience in transposing text data into database data:

As said, you'll need a RegEx editor, and you'll need to know how to do RegEx replace with it, and then you'll be able to achieve about 80-90 p.c. of the necessary translation work by automated means. It's also necessary to have a RegEx editor which allows for details, for example for only replacing the very first ":" in each line, instead of further ones, too; this avoids a lot of manual work after visual check.

As for the result in the database, you will want the family name of the very FIRST author/artist/etc. (of probably several) as the START of your author(s)/artist(s) field, since you will want, for example for print-outs, to be able to SORT by them ("order by..."); for just FINDING authors/artists/etc., you'll do as explained above ("like" instead of "="). Thus, you will need to do some macroing in order to separate the first from the last names, which means in practice that you clip at the last space (before the first comma, if there is a comma) in the field, except when there is some "di/von/de/of", thereby allowing for more than one first (or middle) name.
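A rough sketch of that clipping heuristic in Python - my own illustration, not a finished rule set; the particle list and the comma handling are assumptions you would adapt to your data:

# particles that belong to the family name, not to the first names (assumed list)
PARTICLES = {"di", "von", "de", "of", "van", "da"}

def split_author(field):
    # only look at the part before the first comma, if any
    head = field.split(",", 1)[0].strip()
    words = head.split()
    if len(words) < 2:
        return "", head                      # single word: treat it all as the family name
    # walk backwards: the family name starts at the last word, extended to the
    # left as long as the preceding word is a particle like "von"/"de"
    i = len(words) - 1
    while i > 0 and words[i - 1].lower() in PARTICLES:
        i -= 1
    return " ".join(words[:i]), " ".join(words[i:])

print(split_author("Johann Wolfgang von Goethe"))   # ('Johann Wolfgang', 'von Goethe')
print(split_author("Gwen Stefani"))                 # ('Gwen', 'Stefani')

As noted right below, institutions as authors will defeat any such rule, so a visual check afterwards remains necessary.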

For example, when there is an institution as the author, this will not work, so you will have to do some visual checking afterwards, for this as for everything else; but here again, it's a very good idea to combine several tools. For example, visual checking is much easier within a csv editor (I use Ron's Editor for that, but do NOT try to SORT with it; it's bug-ridden in this respect and the developer is not interested in de-bugging it, or at least he does not react to mentions of those bugs) or even the target database, but the editing should be done with RegEx, within your editor or the like (2-screen or big-screen setup): At this stage, you'll use the database just as a visual checking tool, but you work on your editor csv file, which means you'll create the same database several times, until it's almost perfect, and only then stop going back to the other tools for editing.

Before the final version, all your fields will be in varchar format and without size limitations, since all too often, depending on your original text file and/or your editing ("|" or something else as field separators) in order to distribute your data into the respective target fields, SOME contents of, for example, fields 6 and 7 will end up within fields 7 and 8 instead, and since such things can quickly be rectified in a csv editor (at least in Ron's Editor), you'll do your editing in some systematic way, sometimes even doing changes in the csv editor, then saving it for further editing in the text editor for RegEx, and so on - mixing things up will end with inadvertently discarding corrections already done.

Instead of buying a csv editor and/or learning RegEx, you probably can do a lot of things within Microsoft's Excel, but you'll probably get a double-quotes problem which can be avoided otherwise; anyway, you should replace all double-quotes with single-quotes in your original file(s) to begin with, before getting to any editing in the copied version of the original file. Btw, since your editing will probably have unwanted effects sometimes, you should do some sort of versioning (between your visual checks in the csv editor or in Excel or in the database), and of course you should store your original source file in a safe place as they say.

As a general rule: If your editing had unwanted effects on SOME records / datasets / text lines, rectify them manually at once, before doing other automated edits; if it had such unwanted effects on a lot of records, though, go back and devise another (RegEx) strategy.

And: Many users don't use text files for data storage but Excel files to begin with; it's evident that they will then get all the advantages of a database (SQL queries, incl. standardized ones and ones for filtering), even out of their flat files (before, or even without ever, normalizing them), without all the semi-manual fuss described above. So storing half-way standardized files in Excel instead of in text files seems to be a very good idea: The field format is common to spreadsheets and databases and thus forces you into much better standardization from the start than the text format does, so you'll get far fewer "micro-architectural" variants than if you use text files, even if you had thought you had entered your data in a standardized format there.

As for "sorting fields", I mean fields FOR sorting here, it's evident you can easily establish a flat tree structure, by (creating, and then sorting by) fields like C1/2/3/4, for Code1..., for categories and possible sub-categories, while the latter can remain empty when not needed.

But as implied above, databases, even "primitive" flat ones (and Excel, for such things), only get "interesting" when you're able to trigger technically complicated sql (or other) selects very easily; in other words, you need 1-key(-combinations') "stored searches", but with variables for the key terms; it's surprising that current database frontends' developers didn't grasp that so that you have to write your own macros for it.


Edit June 2, 2017
Printing ("Reports")

I didn't look into the various "Reports" facilities of paid/trial frontends; from previous experience, I would assume most of them are not very flexible so that you must accept what you get; for example, from Navicat, you can buy some extra "Report Generator" so that it's safe to assume the report functionality of the Navicat frontends is very basic; also, there is some famous generic report generator I don't actually remember the name of and which isn't cheap either.

Thus, printing from some database frontend does not seem to be easy, but it is, with some macroing. First, you must decide how you want to get the records; for example, you want the first field separator ("|" or other) of each record to be translated to ": ", the second one to " - ", the third one to a ":" but with a code for printing bold, the fourth one to be translated to a comma and a code for end of bold, the fifth one should become a code for newline, etc. - it goes without saying that we're speaking of database data and thus of perfect standardization of the records.

I've spoken of this problem before: you need a RegEx implementation which allows for treating the first, second, third... occurrence of your field separator in each line differently. In practice, this means you need an implementation which allows for treating each line individually, and then, instead of trying bombastic macro code, you just set up specific RegEx commands for the specific field separators 1, 2, 3..., and you run the first RegEx against the first occurrence of the "|" (or whatever) in each line, then the second RegEx against the (new) first (!) occurrence... and so on: Since each RegEx replaces the first "|" in each line, the next run will automatically replace the next occurrence.

So you must find a RegEx implementation which allows for treating each line individually, but in bulk of course, and which allows for treating just ONE occurrence in each hit region (here, the lines); this will unfortunately exclude some of the available editors and their RegEx Replace implementations, but then, you will want to have the whole macro available for any printing anyway, instead of manually entering the different RegEx commands.
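If one is willing to step out of the editor into a small script, that per-line, first-occurrence-only logic is easy to get. A sketch in Python, assuming a "|"-separated export; the replacement strings with the rtf-like bold/newline codes, and the file names, are just placeholders mirroring the example above, not anything prescribed:

# replacements for the 1st, 2nd, 3rd... field separator of each line, in that order
REPLACEMENTS = [": ", " - ", ": \\b ", ", \\b0 ", "\\line "]

def format_line(line, sep="|"):
    out = line
    for repl in REPLACEMENTS:
        # replace only the FIRST remaining separator, so each pass
        # handles the next field boundary of the record
        out = out.replace(sep, repl, 1)
    return out

with open("export.csv", encoding="utf-8") as src, \
     open("export_formatted.txt", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(format_line(line.rstrip("\n")) + "\n")

Since the separator is a literal character, a plain string replace with a count of 1 is enough here; re.sub(pattern, replacement, line, count=1) would do the same job if the separator had to be a real pattern.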

After this, you will have your csv file, exported from the database file, and you will have a text file which originally was a copy of said csv file and which is now a plain text file, possibly with markup codes for formatting.


Oh, I forgot: You will need titles and sub-titles for prettier print-outs, and you will also want automated page breaks. You will run an sql select with "group by" (without the further data, of course - in our example above without "N", "FN" and "T" and so on), which will give you a list of the groups, and then you put that list, after the necessary formatting, in bulk into your database (insert into ... (field, field...) values (v, v, v), (v, v, v)...; - you know how it's done): This will give you sort of "empty" records which are nevertheless perfect titles and sub-titles, even for browsing query results on screen (but then without leading blank lines, while for printing, you can easily insert blank lines or page breaks before those titles/sub-titles). In the future, you will not create new categories or sub-categories without first creating the respective "blank" record, the record with just the categories, and that's not too difficult to remember, since:

You shouldn't create new records entirely by hand anyway, but create a macro which allows for creating a new record as a copy of the current record; if you do this systematically, it means you will create new records by first going to the "title" / "sub-title" = the "dummy" category record, triggering the macro, and then just filling in the real data; this way, it's technically impossible to ever forget to create the dummy "title" record for some new category.

Oh, and I forgot: how to then automatically insert the codes for leading blank lines or leading page breaks? You will insert an additional column, name it "pc" for "print codes" or whatever, 1-digit numeric, default 0, blank line before = 1, page break before = 2, or whatever; OR you could write a macro which afterwards checks for blank content fields, but that would be less flexible, since it's not a given that you want a blank line before ANY record where C3 and C4 are blank (lesser group change), and a page break before ANY record where C2, C3 and C4 are blank (and all the rest, of course, since we only consider titles of any level), so an individual setting is probably best (and will "cost" 1 digit per record). (Of course, your macro mentioned above will then also translate the "1" or "2" into the needed formatting code.)
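As an illustration of the "dummy title record" step, a sketch again in Python/sqlite3, using the table and column names from this thread (main, C1...C4, pc); the database file name is assumed, one level of grouping is shown, and for higher-level titles you would repeat the insert with fewer group-by columns (run it only once per level, or you will get duplicate dummies):

import sqlite3

con = sqlite3.connect("data.sqlite")          # assumed file name
with con:                                     # one transaction for all inserts
    groups = con.execute(
        "select C1, C2, C3, C4 from main group by C1, C2, C3, C4"
    ).fetchall()
    for c1, c2, c3, c4 in groups:
        # pc = 2 -> "page break before" for this dummy title record; adjust to taste
        con.execute(
            "insert into main (C1, C2, C3, C4, pc) values (?, ?, ?, ?, ?)",
            (c1, c2, c3, c4, 2),
        )
con.close()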


So, you will have got some select result which is then perfectly pre-formatted by your macro but which looks horrible to the eye. What to do with such a thing? You can put it into InDesign or some other dtp tool, but for most use cases, that would be far too big an effort. It's probably possible to import it into Microsoft Word or such, but the question remains: how much formatting will be preserved, how much of it will be lost? It would be ideal if there was some pdf printer (tool) which could read at least some formatting, but in fact, I didn't even find any which preserves page breaks (ascii 12 for example), let alone one that bolds the content of some field...

Thus, you may think of html, which should be perfectly possible, but then, printing from html is not something I had ever considered doing. On the other hand, it's very easy to extend your macro just that little bit so as to add a little rtf header and the closing brace, and running a macro translating all possible special characters (à, ù and so on) into the respective 4-char rtf codes is easy, since the web abounds with the respective tables for that task.

Then, you open your newly-created rtf file from within an rtf editor (for example WordTabs, free) and trigger "Print" - that's it, or perhaps it's even possible to send an rtf file directly to a printer, the necessary commands (paper size, margins, and so on) should all be available within the rtf language.

I'm just assuming here, but let me guess that the above workflow (which is entirely free) is much more flexible than some expensive database "report" programs are, at least as long as no graphics are involved; I admit that I didn't bother to look up Tibetan-to-rtf, but for the rest (and by printing from WordTabs or Works), it works fine already... to be refined of course, and when I have time, I'd like to get rid of the additional rtf-editor-for-printing step - all of rtf is documented on the web, but it's quite some stuff to wade through, for direct printing.
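A sketch of that rtf wrapping step, once more in Python; the header is a deliberately minimal one and the escaping only covers the basics (Latin-1 range, page break, newline), so treat it as an assumption rather than a full rtf implementation. If you already inserted rtf control words such as \b in the previous step, do this character escaping before inserting them, otherwise their backslashes get escaped away:

RTF_HEADER = r"{\rtf1\ansi\deff0{\fonttbl{\f0 Courier New;}}\fs20 "
RTF_FOOTER = "}"

def rtf_escape(text):
    out = []
    for ch in text:
        if ch in "\\{}":
            out.append("\\" + ch)            # rtf control characters
        elif ord(ch) > 127:
            out.append(r"\'%02x" % ord(ch))  # à -> \'e0 etc. (Latin-1 range only)
        elif ch == "\n":
            out.append(r"\line ")
        elif ch == "\f":
            out.append(r"\page ")            # ascii 12 -> rtf page break
        else:
            out.append(ch)
    return "".join(out)

with open("export_formatted.txt", encoding="utf-8") as src:
    body = rtf_escape(src.read())
with open("print_me.rtf", "w", encoding="ascii") as dst:
    dst.write(RTF_HEADER + body + RTF_FOOTER)

The resulting file opens in any rtf editor (WordTabs, Word, WordPad) for the final "Print" step described above.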


Oh, and for standardization, all the tables in my one-table databases are called "main"; that helps a lot when writing a whole heap of select macros.


Edit June 3, 2017
To clarify why RegEx here.
I'm not speaking of the classic RegEx replace of "somethinga fieldseparator somethingb fieldseparator somethingc" by "somethingb fieldseparator somethingc fieldseparator somethinga", for which RegEx is needed in order to preserve the original somethings to be relocated (you'll work with parentheses for groups and numbered placeholders for them); it's just needed here in order to treat the lines (records) one by one and then apply the "replace once only" (with the specific replacement of the respective step of the replace stack) to every LINE. You can probably do this by other means (for example, have a look at KEdit, whose price has recently come down from 129 to 99$ and which possibly could do it - I did not check), but for me, doing it with RegEx replace has proven simplest.

Instead, you could write a one-line classic RegEx replace "doing" each line/record for everything that is needed, then (with the same command line) the next, and so on, but I suppose that will take a little more execution time (because of the multiple 1-by-1 replaces of the field contents, which are left alone in the strategy described above - and which I prefer for that reason of "minimal invasiveness"). It's correct, though, that it may be simpler to maintain specific 1-line replaces, one for each database select-result file (independently, of course, of whether this result comes from flat tables or from a more or less normalized database), than multi-line replace commands with one line for each field separator in the record. But then, I prefer the latter also because in a multi-line command, you get neat comments at each line, whereas in a 1-line RegEx, you would need to write down the respective comments not right next to the respective command but, for example, as a multi-line block below, each line referencing one group in the 1-line replace command; so in the end, the 1-line flavor is not as elegant anymore once you put down comments, and it's evident that even when you do, changes aren't as quick to make as with multi-line, since you'll always need to check whether your change concerns the right element (and not inadvertently its left or right neighbor), while in the multi-line flavor, with the (right) comment in the same line, it's always evident where you are and where you make your changes. But that's a discussion of RegEx styling; both ways are perfectly possible.


Oh, and I would like to modify my claim above that the described strategy may be "more" (?) flexible than out-of-the-box "report designers". For example, it's easy with the former to automatically insert a leading blank line whenever the first, or even the second character of some field changes; this would be helpful for example for the field "A" when you set changes of the fields "C1" to "C4" to "leading page break", and would not even take much white space in case you print in more than 1 column - but good specimens of the latter variety should do that too, of course, so let's say the way described here is AS flexible as good "report designers" (very probably) are.

You just run another macro element which retrieves the content of that field in every line/record, compares it with the stored content of the previous one, and, if character 1 or 2 isn't the same anymore, changes the content of field "pc" ("print code") of the line from 0 to 1 (only if the one of the previous line was 0, not 1, 2 or other, so this will not affect records right after titles/sub-titles; or you do it by "changes, but no changes from nothing", the titles'/sub-titles' "N" fields being empty in our example above); or you simply bold the content of the "N" field in the output only upon such character changes, instead of bolding it for every record - as you see, the possibilities of revamping core database output are seemingly endless, just by running text macros on the csv, in the right order.
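As a tiny illustration of that "blank line on letter change" idea, a sketch under the same assumptions as before ("|"-separated lines, the name field at a fixed, assumed position); here the blank line is inserted directly into the output list, which is just one of the variants described above:

NAME_FIELD = 4          # assumed 0-based position of the name/"N" column in the export

def mark_letter_changes(lines, sep="|"):
    out, prev = [], None
    for line in lines:
        fields = line.split(sep)
        key = fields[NAME_FIELD][:1].lower() if len(fields) > NAME_FIELD else ""
        # insert a leading blank line when the first character of the name changes;
        # empty name fields (title/dummy records) neither trigger nor reset the comparison
        if prev is not None and key and key != prev:
            out.append("")
        out.append(line)
        if key:
            prev = key
    return out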


As said above, you will want titles/sub-titles on paper, and on screen, and on top of the respective real data records, not below them, so with the above, you will have an order problem:
select * from main order by C1, C2, C3, C4, B, N, FN, T -- I renamed "PC" to "B" for "breaks"
does it wrong,
select * from main order by C1 asc, C2 asc, C3 asc, C4 asc, B desc, N asc, FN asc, T asc
is ugly, so, in order for the real select
select C2, C3, C4, FN, N, T from main order by C1, C2, C3, C4, B, N, FN, T
to work correctly,
you either code line breaks as -1, page breaks as -2, but I don't like negative values when not needed, or you set the "B" column's default value to 3 instead of 0 and set it for line breaks to 2 and for page breaks to 1; this will even leave you the 0 for a possible other code before you'll have to do an "update main set... where...". I concede a default of 3 isn't pretty but then you don't even have to display it (see the last select); for new records, your macro will put it in automatically, and sql lets you sort ("order") by invisible columns.
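As a minimal sketch of that "default 3" trick, here it is with the sqlite3 module that ships with Python (column names as in the example above, everything else invented):

import sqlite3

con = sqlite3.connect("books.db")   # hypothetical database file

# B defaults to 3 for ordinary records; title rows get 1 (page break) and
# sub-title rows 2 (line break), so a plain ascending sort puts them on top
con.execute("""create table if not exists main (
                  id integer primary key autoincrement,
                  C1 text, C2 text, C3 text, C4 text,
                  N text, FN text, T text,
                  B integer not null default 3)""")

rows = con.execute("""select C2, C3, C4, FN, N, T
                      from main
                      order by C1, C2, C3, C4, B, N, FN, T""").fetchall()
con.close()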


We're continuing the "refine it" part here: When a field is empty, you will probably want to get another replace string as the one you want for a non-empty field, for example if you have 4 hierarchical categories C1...C4, you want them "C1 - C2 - C3 - C4: ..." but when subcategory C4 is empty, you want it to be "C1 - C2 - C3: ..." instead. How to do it, since RegEx does not offer some "if it's the full string (with variable part here, that's a given), replace it by (the various part and) "a", and if the variable part is missing, replace it by "b"; look-arounds do not help either if you always replace the very first string of the kind of each line, in every iteration, and a non-replace would create chaos, any un-replaced field separator wrongly becoming the first one then.

So in most cases the replace would be done anyway, which in our example would result in "C1 - C2 - C3 - : ...", and later you simply replace any " - :" or even " -  - :" by a ":". If that's not possible because these strings might occur in the regular text, you simply replace all field separators with the wanted strings, but keep a trace of the original field separator in them, by way of a special character which does not occur anywhere else. For example, if the field separator is a tab, do the replaces as wanted but include an additional "|" where each tab was, which gives, in our example, "C1| - C2| - C3| - [C4 being empty]|: ..." = "C1| - C2| - C3| - |: ...", and then you simply delete (always by macro, since you'll know beforehand which strings can occur this way) the unwanted "|" combinations (here: "| - |"), and in the end all remaining (single) "|". Perl is said to be best at this kind of thing, so perhaps it can be done in simpler ways in that language. Anyway, you clearly see that you are not limited to just using spaces and then deleting the multiple ones; you can do anything you want. Let's hope the commercial offerings go into these depths, but then, 2/3 of their development effort inevitably goes into the GUIs.
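The marker trick from the last paragraph, as a tiny Python sketch (separator and replacement strings invented; the "|" stands in for any character that cannot occur in your data):

line = "C1\tC2\tC3\t\tTitle"            # record whose 4th category (C4) is empty

# keep a "|" marker at every former separator position while inserting the
# wanted strings, then weed out the combination an empty field produces
line = line.replace("\t", "| - ", 3)    # the first three separators
line = line.replace("\t", "|: ", 1)     # the separator before the title
line = line.replace("| - |", "|")       # empty C4: drop the dangling " - "
line = line.replace("|", "")            # finally remove all remaining markers

print(line)                             # -> "C1 - C2 - C3: Title"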


EDIT June 12, 2017: Some (useful and/or very important) Details

Output: RegEx / Replace

To my surprise, I didn't find any RegEx editor which allows for replacing only the very first occurrence of some given (sub-)string in each line, so I had to do it the opposite way described above, by changing every line in one go, the whole line, line after line, and this gave that horrible 1-line, inscrutable RegEx replace code I referred to above. As you know, you must group the sub-strings by (), and in order not to get totally lost, I put a comment line with numbers 1, 2, 3... 25... immediately above the code line, one number for every ()-group and with the necessary spaces before and between the numbers, in order to identify every group by its number - since in the replace part you must refer to those numbers, but they are not written out in the what's-to-be-replaced part of the RegEx code - a RegEx flavor where you can write the group numbers (or names) directly into that "original" code would be very welcome.
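For what it's worth, a few RegEx flavors do offer exactly that; in Python, for instance, the "verbose" flag lets you keep comments (and even group names instead of numbers) inside the pattern itself - a small sketch with invented fields:

import re

# verbose mode: whitespace and "#" comments inside the pattern are ignored,
# and named groups replace the hard-to-track \1, \2, ... numbering
record = re.compile(r"""
    (?P<c1>[^\t]*) \t    # category 1
    (?P<c2>[^\t]*) \t    # category 2
    (?P<n>[^\t]*)  \t    # (family) name
    (?P<t>[^\t]*)        # title
""", re.VERBOSE)

line = "Music\tJazz\tDavis\tKind of Blue"
print(record.sub(r"\g<c1> - \g<c2>: \g<n>, \g<t>", line))
# -> "Music - Jazz: Davis, Kind of Blue"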

The above applies to the regular, the "B=3" lines (and thus the "B=3" is checked within this code line, which leaves alone lines where this condition is not met); you'll write special RegEx replace code for other lines, for example for B=1 or B=2, or for other conditions, and you also won't necessarily replace the original line by the final rtf code: for some formatting code you'll use placeholders (for example little/suspended/superscript 1/2/3/a and such) and do their final replacement later on (incl. extermination of unwanted code combinations, see above). This will greatly enhance the clarity of your code (so you won't get lost in it) AND will greatly help you with formatting changes you will want to implement afterwards, be it because you want some other formatting, or as alternatives for several different output situations.

Also, you must pay attention to a correct, AND to a smart, order in your replace code (which replaces will be made before others): if you do this smartly, you can greatly reduce complications in your code (differentiation of which lines / sub-strings are affected further on): "discard" anything which is easy to "discard" at some stage, then do the "complicated" things, then, if needed, take the "discarded" lines again and do further replacements there. Also, only the core replacements where RegEx is really needed (for groups) are done by RegEx; most other replacements are done with regular replace commands.

In order not to get lost in this bulk of code, read your text body into the clipboard, have your code work on the clipboard, and put a stop code after the first line, then the second one, and so on, and have your code display some of your (changed) text body, to check whether the replacements made up to the stop code are what you expected.

If you do the replace according to these lines of advice, it will all work out to your entire satisfaction, as it did for me, and your replace/rtf-formatting code will differ only in some core parts from one database to another; for almost anything, it'll be perfectly "reusable" - in practice, of course, you'll use the same code block for everything, doing just some "if database = a/b/c..." branchings where the code differs.

(You'll do a lot of replacements in order to refine your code, for example you will print/export only the very first words in bold, up to the first comma if there is any, of the content of the field "N", which means the family name of the FIRST author will be in bold, not further authors (in that book example)*, but if the author is some institution with several words in its name, all of these will be boldened; also, if you will have no first name (field "FN" is null), there will be no space (which would be inserted between a first name and a family name): All these details (fine-tuning) you'll realize by refining your code after the code works well in its core functionality, and placeholder characters greatly help with all this.)

*: Since I now have just one author in the "N" (for "name") field, OR the first author is before the very first comma in there (if there is any comma in that field), and the order within each sub-category (or overall, in order to get an ABC authors' list) is by these "first authors", it would be easy to put further authors (ie anything after a comma there) into additional, referring records, but since I do "like" searches anyway, I don't feel the need to do so yet.

Output: RTF

To my surprise, printing an rtf file isn't done as easily as sending a pdf file to a printer, and I cannot do it myself, there's too much special knowledge involved. So you will need some intermediate program in order to open your rtf file and print it from there. Forget the usual rtf editors: I tried some of them, like "Jarte" and others, and they will indeed open your (correct) rtf file, but then you'll lose much of your formatting; in other words, they aren't able to correctly read the rtf code of the file they have just opened.

Don't laugh and say "then your code must be wrong", since in MS Word it all works out perfectly, incl. for example printing in several columns, with title lines of several indentation levels starting a new section; so that's exactly the application I advise you to use for checking and printing your programmatically-created rtf files: first, Word is of great help to correct and refine your code, by checking what it produces, and then it's the perfect intermediate tool for printing, as long as there is no basic tool available for sending rtf files to the printer.
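If you want to automate even that intermediate step, Word can be driven programmatically; a minimal sketch, assuming MS Word and the pywin32 package are installed (the file path is just an example):

import win32com.client

word = win32com.client.Dispatch("Word.Application")
word.Visible = False
doc = word.Documents.Open(r"C:\export\books.rtf")   # example path
doc.PrintOut(Background=False)                      # wait until spooling is done
doc.Close(False)                                    # close without saving
word.Quit()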

It goes without saying that such "raw-data transformation" of database data into "pre-formatted target data" not only works with rtf code for printing, but also with the respective html/css/js or mark-up codes, for web publishing or for further treatment in InDesign or any other (ex "desktop") publishing tool; even links to graphics or other elements, present in the database, could be integrated as needed into the target text body InDesign or similar will then work on. (Btw, the size of your database files will be comparatively very low, since there is no formatting or other "noise" info within the database body; it is only added programmatically after export into csv/text format. This doesn't apply to BLOBs, of course, but then you will probably prefer links to files in the file system instead of BLOBs within the database anyway.)

To begin with, you will send the correct sql select statement to your database, in order to get the wanted data (only), in the wanted order; then, your text-processing code will do everything SQL isn't, or is less, able to do.

SQLite: rowid / ID / autoincrement / primary key

As for SQL specifics you must know beforehand, there's the SQLite particularity that by default, it re-uses (!!!!!) row IDs that will have become "free", by deletion of the row/record in question; it's obvious this can create total chaos in your database (while for flat book lists and such, that is not really such a problem in reality, you'll want to make even such a basic thing "future-proof"). In the web, there is also WRONG information upon this phenomenon available, for example http://www.sqlitetut...qlite-autoincrement/ pretends/implies by omission of clarification that such rowid re-uses will only occur when your SQLite rowids have reached the stunning number of 9,223,372,036,854,775,807: this is definitely wrong. Fact is, most of the time, SQLite uses new rowid numbers, not numbers of deleted rows/records, but there is no guarantee whatsoever for this, in my tries, when I added (single or multiple) new records, then deleted them, then created new ones, rowid numbers of these recently created-and-deleted were re-used; HOW SQLite decided upon re-use vs new number is beyond my knowledge.

Thus you will need an explicit ID column which is a stand-in for the internal rowid but carries the "autoincrement" setting (which then, but then only (?), also applies to the internal rowid). In other words, I don't see a way to apply "autoincrement" directly to the rowid; you seem to have to declare such an ID column which, while remaining an alias of the rowid, forces that rowid to autoincrement; there does not seem to be a direct setting.
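Concretely, that stand-in is just a column declared integer primary key autoincrement; such a column is an alias for the rowid, and the autoincrement keyword keeps the numbers of deleted records from ever being handed out again. A sketch with Python's sqlite3 module (table and file names invented):

import sqlite3

con = sqlite3.connect("zettel.db")    # hypothetical database file
con.execute("""create table if not exists notes (
                  id  integer primary key autoincrement,  -- alias for the rowid;
                                                          -- deleted ids never come back
                  txt text)""")
con.execute("insert into notes (txt) values ('first note')")
con.commit()
con.close()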

They all say that for performance reasons, you should avoid autoincrement at all cost, but imagine you do some zettelkasten database, with cross-references to other records, and then SQLite re-uses rowids/IDs of deleted records for new ones?!

Ok, that's not a perfect example since when doing cross-referencing, you should prevent deletion of referenced records to begin with, but this implies technical cross-referencing (cross-references either only in special, referenced fields (foreign keys) or cross-references-in-text with additional such referencing and the overhead that comes with that, and which is not possible within a home-made, hobby application) while in real life, you would probably use non-technical-cross-references, ie not-monitored ones, just by copying the target's ID number into your text within the content fields (hopefully with follow-the-link formatting), so that these "inline" references to other records MUST be absolutely stable, which, if you use SQLite along the lines of their common advice (ie avoiding autoincrement IDs), is NOT the case.

I have not yet tried how it works if you create new records only programmatically, ie by executing your own sql, but in practice you will probably create many or even almost all of your new records from within your frontend, and doing so created plenty of re-used rowid numbers, at least in SQLite Expert - though I think this time it's not the fault of that frontend. Of course, there is always the solution of avoiding record deletions altogether, discarding records which aren't wanted anymore into some sort of "archives" by switching a boolean flag in each record (for example a column "a" for "active: yes/no", with or without some "cleaning" of the "active" table and shifting them into some "archives" table(s) every now and then). But it's probably a good idea to have such "archives" indeed, AND to provide for the possibility of definitively eliminating records even from those archives, which would make the problem reappear - and anyway, how would you really prevent ANY possible (and potentially problem-creating) manual record deletion from a hobby application? In other words, you couldn't use any out-of-the-box sql frontend anymore but would have to write your own, deletion-proof frontend. It's evident that corporate applications don't come with such problems, since there, for compliance reasons, it's even necessary that no record is ever deleted instead of being moved into some archives - but then, SQLite isn't the right back-end for any group application to begin with. So we're back to "autoincrement". But:

In the different flavors of SQL, it's more or less difficult to make database design changes after creating a database, or after creating a column, and in some cases your only option is to create a new database with the characteristics you want it to have from the start, and then fill it with the data from your original database (fortunately, this can be done by sql code). In the case of unique IDs in SQLite, that's exactly your problem if you created your database without creating the autoincrementing ID column at creation time: the command alter table main add column ID integer not null primary key autoincrement will NOT work, because of the primary key constraint ( see, among others, http://www.sqlitetut.../sqlite-primary-key/ ). (So when creating your database and your very first (empty) table in SQLite Browser (see above), first create the necessary ID column, and THEN only do the "File - Import - From CSV" command in order to create the (basic) database from your csv data, before "formatting" the (other) fields in your frontend.)
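If the table already exists without such a column, the usual way out is a rebuild rather than an alter; a sketch along the lines of the example above (all names invented):

import sqlite3

con = sqlite3.connect("books.db")
con.executescript("""
    create table main_new (
        id integer primary key autoincrement,
        C1 text, C2 text, C3 text, C4 text,
        N text, FN text, T text,
        B integer not null default 3);
    insert into main_new (C1, C2, C3, C4, N, FN, T, B)
        select C1, C2, C3, C4, N, FN, T, B from main;
    drop table main;
    alter table main_new rename to main;
""")
con.commit()
con.close()

(Obviously this renumbers the records, so it only makes sense before you start referencing IDs anywhere.)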

SQL: Mix-up of nulls and of empty strings

The problem described above did not come from my using two different frontends, alternately, for the same database(s), but from my using Ron's Editor in order to quickly and "manually" shift data from unwanted columns into the correct ones, after sloppy RegExes of my original text data (see above); in Ron's Editor you can manually shift the content of many fields into another column very quickly, so this tool comes in very handy for putting "far-from-being-normalized" text data into a table (csv) format for spreadsheet/database use. But whenever I imported or re-imported my data from that csv editor into my database (ie whenever I rebuilt the database from the updated csv file), I didn't have nulls but empty strings in empty fields, while, as said above, for newly created records from within the database (frontend) itself, fields left blank on creation correctly got null values.

Thus, csv editing of database stuff should come to an end at some point, then you'll replace any empty strings by nulls, and from then on you will have only non-empty strings, and nulls, to cope with in your database and your sql code. (I largely prefer nulls over empty strings, and be it for the only reason that I can simply leave fields blank when creating new records from within the frontend, without creating by this a mix-up of nulls and empty strings*; you'll do how you like to do it; it's just the mix-up that's really unbearable - or you'll get very complicated code if you want to get correct results.)

*: If you programmatically create new records, you could write your code so as to create empty strings for fields "left blank", but if you create them by typing within the frontend's table view, those fields get set to null if you don't manually set them to '' one by one, and voila, you'll need the code for empty-string-nulls-mixup further on, so clearly avoidance of empty strings in favor of nulls is preferable; your mileage may vary.
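The clean-up itself is a one-liner per column; a sketch, again with sqlite3 and the column names from the example above:

import sqlite3

con = sqlite3.connect("books.db")
# one pass per text column; trim() also catches fields containing only blanks,
# and the f-string is safe here because the column names are hard-coded
for col in ("C1", "C2", "C3", "C4", "N", "FN", "T"):
    con.execute(f"update main set {col} = null where trim({col}) = ''")
con.commit()
con.close()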

SQL: Field names

The ID field can get any other name (but no other type than INTEGER), but as a general rule, my uppercase field names above are a big nuisance when entering them into the respective sql commands; I'll change them to (equally short) lowercase names, and you should use those from the start (SQL doesn't distinguish between lower- and uppercase in column names anyway).

92
Then, a word on SQLite Expert Personal

As I said in the "Navicat Warning" thread, I recommend this free software, it's really pretty. It has got some bugs, but not of the data-destroying kind, just minor window problems and such.

I said that I use it on my pc, but I left out an important bit of info: since it's the free version of a paid program, it will not import data, so as it is, it would have been unusable for what I'm trying to do, translating some organized text files (and also csv files from ListPro) into databases / flat database tables (flat to begin with, with an eye to perhaps doing it more by the book - ie more normalized - later on, if I find the time).

So I use SQLite Browser / DB Browser for SQLite (free, they don't know how to name it, anyway, for XP I use version 3 of it since 4 will not work there): Import from csv-file into table, then save the new database. All other things I then do in SQLite Expert Personal. csv means comma-separated, but obviously a comma as field separator isn't the best alternative, so I use tab-separated, but it's with the .csv suffix anyway.

Since it is in SQLite Browser / DB Browser for SQLite that I create the database, for example cds.db (from cds.csv), I can then open my CDs database within SQLite Expert, where I format the columns, and so on.

Now my explanation why I just don't buy SQLite Expert even though I really like it a lot. Well, the psychological loss is too big. Last summer I had already looked around, superficially, for SQLite frontends, since my problem of having data in text files, as explained in another thread, is a persistent one, and I already had the idea of doing something about it.

It was 60$ lifetime then, which was an absolute treat, and I promised myself to buy that fine program as soon as I would need it. Now it's 100$ with 1 year of updates, which means that instead of costing me 60$ plus VAT, this same program (btw, the price rise came for the "old" version, without even a new major version in-between) would cost me hundreds of $$ over the years. Of course it's all my fault, and I don't blame the developer, but since I lost the lifetime option, now paying a price almost double that one, for just the current version, plus update costs again and again - it's unacceptable for me, at least for the time being: SQLite Expert is a fine SQLite data browser (I like it a lot more than SQLite Maestro, same price, so you see where the hike got its idea from), but there is, for example, no graphical designer, let alone that imaginary piece of joy I've just described in the "Navicat Warning" thread, and there are no stored queries yet - which are available in SQLite Maestro, as is a graphical designer (though not a good one), but as said, stored queries in SQLite Maestro cannot be triggered from the tree entries; they must be triggered by an unnecessary intermediate step.

SQLite Expert is in continuous development (it just got another minor upgrade), so it's very evident it will get at least some of its missing functionality later on, and no doubt people who didn't come too late for the lifetime updates, as I did, will have got an absolute treat. (It's been very unfortunate that the price hike and the dismissal of the lifetime license came at the same time - together with the almost 50 p.c. rise of the dollar vs the euro, all that combined is not too bad for a developer who's in the euro zone...)


EDIT May 25, 2017
You will find more info upon SQLite Expert, also in comparison with some competitors, in my May 25, 2017 add-on in my thread about trial conception ( https://www.donation...ex.php?topic=43835.0 ).


EDIT June 1, 2017 - SECOND Edit for the day
Destroy Your Data with SQLite "Expert" (ha, ha)
You will be interested in naming your columns in some standardized way, between your different databases, in order to greatly facilitate your management of pre-defined sql queries, so for example you will use C1...C4 (see my edit of the post below) for (possible) hierarchy, and even when you only got one code for the time being, that column should be called C1, not just C, in order to standardize your sql selects as far as possible. Also with "Titles", "Items" and such, just name it "T".

The same with "Authors" and "Artists" and such bad column names, just call that N for Name, and the Christian/First name should be FN; if you call these columns N1 and N2, you will have the logical problem that when your sort, you'll either sort by N2 first, then by N1, or you will have those columns in the wrong order on your screen (if you want to have the traditional set up with first name first, then family name).

So I recently did a lot of column renames, in order to standardize my tables / databases, and my selects - they all are somewhat different, but they should be identical in all the parts they have in common.

Then, in one of my tables, I made a big mistake, with SQLite "Expert" (ha, ha), by switching the names of two columns, then triggered "Apply": This, as you will already have guessed, immediately deleted thousands of names "N", putting the FN into N, and leaving the FN column blank; of course, this happened without the slightest warning.

My solution: I had the Ron's Editor csv file from yesterday or the day before, so I dumped a copy of that (as text) into Beyond Compare, with the later-deleted column moved to the end of the bunch, in order to be able to compare visually in BC; I also dumped the remaining database into a csv and put that into BC, again as text (BC's data compare may work with additional settings; out of the box it did NOT align the datasets); then, from the differ, I updated the original csv from the partly destroyed newer one (not in BC but in Ron's Editor), and with the updated original csv I built up the new database. Two hours for nothing, but imagine the loss if I hadn't had a recent csv of my data available.

What do we learn from this: Do daily database backups. Do an additional backup before doing any database re-"design". Don't assume that software which calls itself an "expert" is an expert; assume instead that it probably lacks the relevant safety code analyzing the possible interactions of the several steps it lets you "combine" in some task list*, and by which it will possibly destroy your data without even knowing.

You know the principle: switching the names a and b means renaming a to c, then b to a, then c to b (see the sketch after this list); SQLite "Expert" (ha, ha) obviously thought it could make do without the additional step. If I had done the renames "by hand", by sql renames,
- I would have made a backup first, since I would have trusted myself less then I obviously trusted this "Expert" program, and
- I would have made the right steps in the right order since I'm aware of the problem.
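The three-step swap itself, for the record - a sketch which assumes a reasonably recent SQLite (rename column needs version 3.25 or later; older versions have to rebuild the table as shown further up):

import sqlite3

con = sqlite3.connect("books.db")
con.executescript("""
    alter table main rename column N  to tmp_swap;
    alter table main rename column FN to N;
    alter table main rename column tmp_swap to FN;
""")
con.close()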

In SQLite "Expert" (ha, ha) I relied on the software to make those steps, in the right order, and which failed, but I made my choices within some tasklist where you can ask for all sorts of changes, and then the program, upon "Apply", will execute these tasks one after another, it seems, and where the SQLite engine itself will not stop the execution, by its own internal safety routines, it's obvious this program proceeds, without taking account possible unwanted interactions between the various commands in the line: There is no safety procedure implemented BEFORE sending the commands to the engine, like you would do in your head, by doing it manually.

It would be of interest to send such requests from within other SQL frontends, in order to check if they come with such checking routines or don't either. This comparison would only be valid for different frontends but the same database format (here SQLite), since, as said, it's perfectly possible that some SQL engines have some of such routines in-built so that any frontend applied upon them would work fine, either by applying the necessary additional steps or by rejecting the commands it cannot faultlessly process in that context.

And what can developers learn from this? FIRST exterminate such fatal bugs (here: not checking if any rename was in fact a switch: recursion probe: if the commands are sent one by one to the database engine, how could that discover such a problem? So it must be done by the frontend, which also triggers the necessary intermediate steps/commands.), THEN triple/quadruple your price if you really think you must do so (here: from 60$ with lifetime updates to 100$ with 1 year's updates).

93
EDIT June 5, 2017:
Title change and intro
(Previous/original titles: "SQLite, SQL, SQLite Expert", then "SQLite, SQL, SQLite "Expert" (ha, ha)", for its lack of warnings when stacking up database updates which in combination can get dangerous and which you would do one by one if you did them by hand)

This thread does NOT intend to replicate the usual sql intros, but wishes to give additional practical advice for beginners those intros' authors did not think about.


Original post:

First, some quite obscure trap for beginners, stackoverflow helped me to find that out.

For many a command, not using quotes (single or double) will work fine, so you will not discover its faulty syntax.

for example
select * from tablename where somecolumnname = somestring
will work fine as long as there is no column (another column) which is called/homonym to somestring

example:
select * from A where T = V
will work fine and display all records where there is a "V" in column "T"
IF there is not another column which by any chance is called V

IF there is, SQLite will simply not show the resulting records - no warning, no error message, nothing - so when you've got some other columns and your requests work fine in general, it'll probably be a long time before you realize SQLite is not displaying many wanted query results.

You now ask why anybody would call many columns by single characters, and also many string values by single characters, to begin with? Fair enough, but I had just a few 1-character columns, incl. "C" for Code, with 1-character codes in it, for several different todo categories, just 6 or 8 characters in all, all single ones; I wanted to have it neat and without sacrificing space, and so I called that "Code" column "C".

This works fine, except for my code "h" in column "c", since I also had another 1-single column called "h", and without the quotes, SQLite is unable to differentiate column names from values if they are identical, even if you would have thought that the "where c = h", by its syntax, must have told SQLite the first is a column and the second is a value, but no.

Thus, it's absolutely necessary to write in the official syntax

where "columnname" = 'value'
which is double quotes for identifiers and single quotes for strings/literals
(if at least it was the other way round, I could memorize that better...)

in order to avoid lots of problems which, as the above use case has shown, are far from obvious and could have you get incomplete results without even discovering the problem.
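A small, runnable demonstration of the trap, with Python's sqlite3 module (table and values invented):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table A (c text, h text)")
con.execute("insert into A values ('h', 'x')")

# unquoted: h is taken as the OTHER COLUMN, so c is compared with h row by row
print(con.execute("select count(*) from A where c = h").fetchone())        # (0,)

# quoted: "c" is an identifier, 'h' a string literal - this is what was meant
print(con.execute("select count(*) from A where \"c\" = 'h'").fetchone())  # (1,)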


EDIT June 1, 2017: Title change; see next post


EDIT / Add-on June 5, 2017: Null
The web abounds with discussions of whether you should use null values or empty strings; I won't add to that discussion but rather want to point out some more beginners' traps around Null/empty values; btw it's of interest that Oracle's and FileMaker's sql flavors, which both try to do away with the Null problem, both don't succeed at it, so switching over to them will not allow you to ignore what it's all about.

When I imported my csv data, I did that, as I said further down in this thread, in SQLiteBrowser 3/DB Browser for SQLite (they don't know how to name their tool; it's not 2 different tools), then opened my databases in SQLite "Expert" (see next post), since the latter's "Personal" edition doesn't allow for import or export (but you can do some select, then ^a, ^c, and then treat the resulting csv clipboard by macros, for example for printing or for exporting the data into anything else).

So the first problem described here may result from this 2-step setup. Anyway, blank fields from the imported data were zero-length/empty strings, while the same for records I then created in SQLite "Expert" (fields I simply left blank, in newly created records) were/are Nulls. If you don't know databases at all, you will think that's the same thing, database experts will laugh about what I describe, but for database beginners, it's important information.

This also implies two things: You should start your database experiences with data you intimately know, and/or you should always set the option "display Nulls" to "on"; both measures will help you at least become aware that there are problems. Because, and that's the problem, your selects will not display the expected results if you don't always cope with the "where x = (or <>) '' [these are two single quotes, not one double one!] or where x is (not) null" problem you now have.

Also, traditional logic is different from SQL logic, and you must become aware of that phenomenon, too. To start with, we use the (possible) column setting "No Nulls allowed here" to mean "in no record must this field be left empty", but in SQL's three-valued-logic world this just means "absence of Nulls", not of empty strings; similarly for boolean fields (yes, no, null (allowed or not)) and for numeric fields (some number, null, or - very bad in most cases - some "special" value meaning "no number here", from developers wanting to avoid Nulls at all (and in this case very high) cost).

Also, you should google for "not in vs not exists", and even when you think you are aware of the problems and do it all correctly, you probably get wrong query results again because of your having left out the fact that Null is NOT the opposite of "True"/"Yes", and that this logical sql truth extends to any field type in sql.

Many authors insist on the fact that Null stands for "unknown"; since we also and even mainly use it for "no value"/"field empty", and the meaning of "unknown" is rare, we may overlook the fact that as for the sql logic, these authors are perfectly right. And we fall into the trap of writing "where x <> y" instead of writing "where x <> y or x is null": In our logic, Nulls are excluded anyway there, in sql's logic - which we must observe when we want to get correct query results -, it's "who knows about these Nulls?" instead, so they have to be mentioned separately in your query in order to deliver correct results.
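A tiny illustration of that trap (sqlite3 again; one 'a', one 'b' and one Null row):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (x text)")
con.executemany("insert into t values (?)", [("a",), ("b",), (None,)])

# the Null row is silently dropped: <> against Null is neither true nor false
print(con.execute("select count(*) from t where x <> 'a'").fetchone())               # (1,)

# what we usually mean: "everything that is not 'a'", Nulls included
print(con.execute("select count(*) from t where x <> 'a' or x is null").fetchone())  # (2,)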

Somebody in the web wrote something along the lines of, "Which consumers/non-professionals still access data directly, nowadays?" - well, it's the traps in sql which discourage people to delve into sql, since they prevent them from getting the expected results. It's correct that you would have to delve deep into sql in order to gain some real expertise in it, but some advanced facts must be known to the beginner, too, from day one; it's not like playing the piano where you safely attack some piece of Kinderszenen without needing to know anything about the intricacies in Kreisleriana.

94
General Software Discussion / Navicat Review
« on: April 28, 2017, 04:12 PM »
Thank you so much again, mouser, hadn't discovered your message in time, just your post.


Originally an EDIT above:

Thank you, mouser!

And:

What I meant above but had not expressed is that, of course, bolded or otherwise "important/todo/must-be-taken-care-of"-formatted field names/entries in the greyed-out tables (I mean greyed-out on purpose, of course, not the temporarily greyed-out vicinity when some table is enlarged and made prominent, for temporary "activation" (reading or reading/writing) purposes, by mouse-over) should remain readable, just like link sources/targets, and not be minimized like the rest of them. That way, with progressing work on the database, these table representations will first grow, then recede again, up to a point where only link sources/targets are left beneath the table captions; so you see a bulge of problems which have to be addressed, and then, by the disappearance of this bulge, the progress and (provisional) finish of your work.

Also, there could be a toggle for "show all tables in full" vs "only show distinguished field entries" (distinguished in the sense described above: link sources/targets, and manually for "has to be taken care of" and such, or even core columns overall); remember, a "take care of the table" in general (adding further columns and such) would be indicated by a box in the caption; it would even be possible to grey out the captions for "done" tables, while tables which must be worked upon (by additions of problem solving) could be in color - when "greyed-out", in faint/washed-out coloring.

This way, FOCUS, even on groups of tables, is always ensured, without hiding the tasks which have to be accomplished yet, which is all the more important considering the fact that in many cases, those tasks will have some interaction with other things already completed and should thus be borne in mind for that reason already.

Here again, it's evident that such an integrated visual representation can be helpful for lots of other things, blatantly organizational or not that much at first sight (problem solving), beyond database design - see flowcharting software (for example Microsoft Visio) or so-called Concept/Topic Maps: It's certainly a worthwhile paradigm to see the graphical representation of problems grow, taking all possible consequences/emanations and (external as internal) leverages into consideration (this would imply sub-elements, with extension lines to them, which would then disappear when resolved (which is not, or only rarely, the case with database tables), meaning only the respective parent element would stay visible by default, bearing the short description of the problem and how it was/will be taken care of); and then to see that graphic bulge recede again when more and more sub-problems or internal and external interactions have been taken care of, and thus the overall workouts/solution(s) come nearer.

In other words, a graphical representation that is able to de-blur the intermittent clutter as far as technically possible not only helps enormously in speeding up the outcome, but very probably also in finding the best possible outcome there is, since when you clearly and distinctly see sub-elements (which implies there should be technical means, as described above, to visually distinguish them to begin with), there are much better chances you'll keep those elements and their importance for other elements in mind when taking other decisions within that framework which happen to be interconnected in some way. In the use case discussed here, it's certain that with software like the one described above, and in varying parts realized by available software, database design will not only be easier but show better results than if you build tables and their interconnections by just alternating between flat views of the tables in question, one by one, and the same will certainly be true for any not-too-basic problem solving.


And I'd like to add:

As said above, all the relevant database languages support comments on their fields/columns (stored with the schema), while SQLite does not. So for databases other than SQLite, the comment is available for encoding individual field/column formatting in a graphic representation like the one I describe here: the developer of such software could put a leading dot in the comment attribute of the field for just a formatting toggle (regular/bold, where bold would be displayed even when other fields are not), or a leading dot plus some code character for more formatting choices (regular, bold, bold-blue, bold-red...).

This way, these individual formats would persist between programming sessions, without even introducing a tool-specific database, ini file or the like. Here again, this is not possible for SQLite, but then, comments are so useful that some instrument to preserve comments between sessions should be made available by a good SQLite frontend, even if extensive databases typically aren't developed in and for SQLite.

In any case, the user would not have to fiddle around with the field comment for the formatting but would just click, with the tool behind the scenes then switching the code in the comment accordingly, as well as fetching the code from there in order to display the right formatting on load.


EDIT May 8, 2017: More info on software pricing/design here: https://www.donation...?topic=43805.new#new (On Software Pricing) and (EDIT May 13, 2017) here: https://www.donation...ex.php?topic=43835.0 (How NOT to conceive trials (and some new ideas about them))


EDIT May 25, 2017: In the "Trials" thread, you will now also find some info about Navicat's trial policy and why that one is really dumb for software like Navicat.


EDIT June 1, 2017: Today, a very substantial price rise for almost all Navicat products, between 20 and 30 p.c., which makes their products even less appealing. There should be much better products anyway; it has very simply been for the "XP" availability reason that I had been interested in their products. For example, the very best MySQL frontend seems to be Devart Studio for MySQL, which also includes a graphical "design" component, and there are several of these available from other makers; it's just that with XP, I cannot trial them for the moment.

The same is true for database transposition tools, where it's not necessary for most users that they "do it all", as Navicat Premium (now 1,300$) pretends to do, but that they do it, for those languages you need it for, without fault - which, it seems, is not always the case with Navicat translations. Btw, "Languages: English - more to come" - I'd be highly interested in knowing which developers do not read at least some very basic English; this reminds me of the absolutely crazy Microsoft move of localising their VBA, so that their macros don't travel from one country to the next. Whatever, Navicat had better exterminate their bugs and amend their functionality; for 1,300$, you'll get a lot of fine, language-specific database tools, especially if, on top of the 1,300 for Navicat, you then have to buy those additional, dedicated tools anyway, in case Navicat's do-it-all doesn't live up to its promises; you know, some people called that the Trump mode, ha ha ha.

95
General Software Discussion / Make it even more plastic
« on: April 27, 2017, 06:30 PM »
In the last paragraph, I forgot to repeat the demand for the tables and their lettering taking less space, which is an easy and important thing at the same time. But when it returned to mind, it occurred to me that resizing would be a good start, but just that.

Today's graphics cards can serve several big screens, but not everybody wants to place them on their desk in numbers, and even if they are available, with some 200 tables, it can become difficult.

Also, yesterday, I envisioned filtering (by caption colors) only for on-off toggle, but when you have got a good screen setup, there are some alternatives to that.

Imagine the tables of the not-selected colors being greyed out on request, instead of being hidden, with all connectors displayed all the time. Also, there should be two switches for display (or "full display"): a primary color, which means belonging to some functional group (as described above), and then also a secondary color (a dot, a border or such; as with Windows windows, why not put a little box in the top right corner of each table, with a default status and, by clicking/right-clicking, several possible non-default statuses, indicated by different colors?) for todo/status: for "is deemed complete/ready" vs "is incomplete", and "speed problem, must be changed" - and it should be possible to combine these complementary switches.

Of course, there should be a toggle "full hiding" vs "just greying out" for hiding.

Then, even any greyed-out table would become full-shape and responsive by mouse-over (this means it would overlay neighboring tables somewhat, which then would be temporarily greyed-out, even if in the current set-up, they are not greyed-out, so visually there would be no clutter whatsoever), and would get greyed-out again when the mouse leaves it.

What about readability of the tables' lettering? The caption should always be readable, but what about field/column names and attributes when there is not enough space on the canvas, and the table is greyed-out anyway, for example because it's viewed as "complete", for the moment being at least? An automatic receding should be implemented for such field names, and the table with it.

Here again, whenever there is a mouse-over, the table would regain its original size, with all its lettering perfectly readable. And what about the important fields/columns, the ones which are key for a foreign key, or are foreign keys? (Note: Above, for the link drag-and-drop, you will probably have thought that I mixed up source and target, but I consider such a link as unidirectional between the key, which is the source, and the foreign key, which is the target, since the data flow is in that direction, not the reverse one. But I suppose most people see in a link not the data flow but the direction of the data-fetch wish, so for them, source and target are the other way round.) Even when the currently-not-important field names are just a point high, so as to indicate there is some field there at all, it's perfectly possible to leave the link sources/targets in their original, readable size.

As you can see from the intent of the visuals above, it would also be very helpful if you could distinguish field names of fields which still have to be worked upon, by some bolding or such (a toggle, or even several colors, as for the tables/table captions, possibly by context menu), so that they stand out from other fields which do not need any more attention for the time being.

In later stages of development where speed considerations become primordial, why not move fields from one table onto another by drag-and-drop, too, the tool doing the necessary sql commands behind the scenes? (This implies that such a tool should fully work on production databases with real data in them, too.)

It should be noted that such plastic visuals where currently-active elements are enlarged and in lucid colors while adjacent elements are greyed-out can be applied to many use cases, way beyond database design, and they ensure that even on quite regular (large but not monstrous) screens efficient management of a high number of distinct elements and their interaction becomes possible and perfectly feasible.


NO EDIT, had just added some edit which I now put into the next post.

96
When I said registration to their forum took several days, this means a weekend plus several workdays. The message was "Your account has been activated but you are currently in the moderation queue to be added to the forum.". I then finally got a message that I could post, with the result shown above. There's also "Live Support", which is "Offline" most of the time when I look, but it seems they are indeed available between 5 and 7 a.m. European time, so somewhere around midday, Chinese time.

Some 2 years ago, a user (successfully) asked in their help forum: "Is there a difference between the functionality of the data modeler in the various For [DB] products and the Data Modeler product? If so, is there a feature matrix comparing the two?" That was the question I asked myself, too. All (s)he got for an answer was pure marketing speak, which you can also find on their web page:

"Thank you for your inquiry.

Navicat Data Modeler is designed for customer to create data models. If you want to use Navicat to design database, Navicat Data Modeler is the product that you want.

Navicat Preiumm [this typo proves they repeat their advertising instead of answering a good question in earnest, but instead of just copying this crap, they type it anew: bad organization!] is a database administration tool that allows you to simultaneously connect to MySQL, MariaDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application. Inside Navicat Premium, there is a Data Modeling Tool. However, the function is not completely similar as Navicat Data Modeler."

My (perhaps wrong) personal answer to the question is: "Premium" is 1,000$ and does it all while the database-language-specific subsets are around 200$, but without possible translation from one database language into another, even if you own the respective subset for both languages. "Modeler" is, for around 300$, as "Premium" again but without any database contents, this means you can import any (of the allowed database formats that is) database's structure, work on it, and export the changed (or not) structure, in the same language or in another language (example: input SQLite, output MySQL, or any other pairing), but with "Modeler", this would be possible for the structure only, so if you have so-called production databases, this would not be possible and you would need again either the "Navicat for ..." specifics or the "Premium" version if you want to translate.

If I'm correct here, this would mean that the scope of their "Modeler" is quite restrained. Anyway, it's evident that the "Navicat for ..." and the "Modeler" are subsets of the "Premium" version, which means that very probably, bugs in the first or in the second are also in the third one, and vice versa.

When I said I fear that in their "Premium" (do-it-all) product (and their subsets "Navicat for ...", see my observation in their current "Navicat for SQLite"), they didn't go into all the necessary specifics of every language, I had not found this Modeler mini-review https://www.macupdat...navicat-data-modeler yet: "ming-deng
Dec 08, 2014

I tried navicat data modeler to export SQL statements for table creation for PostgreSQL and for SQL server. The output is unusable. For PostgreSQL it creates "Create Index.." which is not supported by PostgreSQL. For SQL server it generates "Drop constraint w/o name" which is going to fail in the SQL server. Also stupid enough it tries to drop all the indices before dropping a table!  When drop a table all indices got drop all together why generate tons of dropping index statements?

The tool has nice GUI [as I had observed, too, and said above] but at this point it seems useless and is full of bugs!" [I cannot say as much but think this observation for Postgres (for the paid or the trial version, since as said above, my free "Essentials" version does not do any export whatsoever, correctly or wrongly, it's just a demo) is of interest. Cannot say, of course, if that bug prevails or has been exterminated in-between.]

They specify their "90 days software maintenance plan" as "complimentary", obviously declaring this pity as a gift aims to silence up any criticism of 90 days not being a standard period for free updates.

As said, the crow's feet and other foreign-key lines are connected anywhere to the table, not to the specific field/column, but you can move them manually to a better connection point; this is not ideal especially when you insert fields/columns afterwards which shifts the position of existing fields. (You can color those lines even though I can't conceive a use case for that.)

You can create foreign keys by drag-and-dropping the target field onto the source; for this, you must select the foreign-key instrument first, but anyway, this is very good functionality of Navicat Modeler. Since all help is online only, and without search functionality, I'd better praise this excellent point here; you'd risk discovering it belatedly.

When I said that on pre-selection of a table, the contour of the table changes its color, it's more correct to say it grows thicker, as indication for the pre-selection, but as said, mouse-over will not show any comments then.

Let me tell in brief why I think field comments are so important. They can contain musings about the format of the contents for this field, of course, but they also can contain To-Dos and other reminders for your construction work: do this, pay attention to that, etc., even for fields not having been created yet, in other words, you use the comment of field x for a reminder for the future field y; also for this reason it's necessary that you have visual comment indicators. You cannot create some fields before having created other fields it seems, so it is reasonable to create reminders for those other fields to be created afterwards. All this is not possible for SQLite, which is why I came up with the idea that an SQLite frontend could do this on its own.

Since I tried on several occasions (unsuccessfully every time) to post my questions / suggestions in their forum, I had even lost my text; the text above is a second version (which I could not post either) in which I only remembered some of my points, but now I have found the original text again, so here are some new/old observations left out above:

From my original text: "I would like to use the Modeler to quickly jump from any column to any column, in any table (which is on the canvas), in order to develop and refine it all iteratively; in the comments I do not only put hints, but also "ToDo's", "Attention" and other "Work to to", and for such things with regards not to some columns, but to a (whole) table, I even create a column "PROBLEM" or such, in order to get a "generic" comment for the whole table.

Those "EXTRA" columns are immediately visible, but of course, regular comments (their content and also their shere existence, to begin with!) are not, and I would greatly appreciate the possibility to see any comments by just hovering over any column ("field name") anywhere, without having to do that "real selection" of the respective table beforehand, all the more so since in most cases, I then do not do something with that table, but just get the information (recall, aid for my memory) and go to other tables, retrieving comment info there.

In particular, I do not create tables as fully as possible, then create others, but just create some core columns, then some columns in some other tables, and so on, "completing the picture" in this process in an often seemingly "chaotic" way. Thus, with the need to "activate / really-select" any table beforehand, before being able to have a glance at its comments, it's an enormous amount of "clicking around", instead of smooth, intuitive working."

So you have also the idea here that you create immediately-visible columns, like "PROBLEM" (text format of course) where you then put some generic comments, and which afterwards you delete again when those problems are resolved / those tasks have been done; this is also possible with additional columns in existing databases already containing data.

And first of all, it's, as said in my text, intuitive, iterative construction all over the canvas, neither top-down nor bottom-up, but as flexible as it gets, which means it's by constructing in little steps here and there that you discover the best distribution of your columns over the tables, including new tables to be created for that, or other tables to be stripped of some of their columns.

It's evident that for such, very natural, work, you need a big screen and quite tiny tables (as said, those in Navicat are too big even on traditional screens, let alone high-dpi screens), so that you can display many tables on the screen concurrently, and it's also evident that such "chaotic", iterative work will need comments and visual comments indicators, and that the comments should be visible by simple mouse-over, not asking for an unnecessary intermediate step of "really activating" the table in question.

It also becomes evident why layers would be very helpful (what "Navicat Modeler" has got instead of layers is almost unusable and very badly executed, it's background graphics with some "glue" which doesn't function properly: it's a lot of fuss for no real outcome), and with one table being able to belong to more than one layer only: In big databases, there are obviously table groups, but those groups are not, for every one table belonging to them, clearly defined, so it often makes sense, I think, that just 1 or 2 tables from one group also are displayed together with another table group to which their columns "belong" in a way, without being incorporated into those tables, since they are either more generic, more special or belong even more strongly to some other data.

Btw, you can colorize the table captions, which is not bad at all, but then, you cannot filter by these colors - which would have been an almost working alternative to layers; anyway, you can only assign one color to these table captions, not two (which technically would be possible, you often see two-colored tabs and such, one color as a triangle in the left "half" and the other color as a triangle in the other), let alone three. For tables belonging to more than one group, you could then have assigned additional colors, for example brown for belonging to yellow AND red, and then, if the filter was "brown PLUS yellow" or "brown PLUS red", that would have worked perfectly instead.

What I also had tried to tell them in my first try: When you select a field/column, an info-box to the right of the canvas will display some data for the table; this is devoid of any sense, but of course I worded that differently. In fact, I suggested that when you select the caption of the table, you get that table info in the info box, and when you select a field/column, you get extended info for that field/column.

As it is, there is some very basic field info within the "field" field itself, but for all the other, often very important, info, you have to double-click, and then you get a "Design Table" window, for the whole table, not just for the field in question (which for both looking up data and for changing data is too much), and a window which you then have to close by alt-f4; so for just looking up relevant field attributes, this is not intuitive at all and takes a lot of time and effort, while all the time, the info window to the right of the canvas shows irrelevant table data.

It's evident that selecting the table's caption should display table info, selecting a field should display all field attributes, and that a double-click into the caption should display the "Design Table" window, for some bulk edits, while double-clicking a field should not even display some "Design Field" window, since it would all be there in the info field anyway, where inline editing should of course be allowed - this is all so simple that I am unable to understand why anybody would do it in any other way, considering the info pane is there anyway.

Of course, if you don't display such a permanent info pane, then you must do it otherwise, for example all the field info (and not only the comment) by mouse-over, and a "Design Field" window by double-clicking the field. That would be a very viable alternative for not sacrificing the screen real estate for the current info pane, but I think a brilliant developer should offer both alternatives: the inline-editable info field for the first period of work, when the user will enter enormous amounts of data (and the mouse-over info anyway, with all data, by option), and the mouse-over display of all data, with a clickable "Design Field" (which in fact would be the inline-editable info pane from the previous alternative, but not beside the canvas: over the canvas, hiding any table there as long as the user enters data; upon "enter" the editable info pane would disappear again) - this alternative for the later stages of work, when there is much less data entering (by data I mean fields and their attributes, not contents) and much more fine-tuning. (Call this info pane "Properties pane", as Navicat does, or "Inspector", or whatever you like. Btw, you can hide it in "Modeler", and since currently it doesn't contain any real information, that's what you should do in order to get a larger canvas.)

From the above it becomes very evident that graphical construction of a complicated database, on a big high-resolution screen and with the right tool, one which has the most important of the functionalities described above (the fact that I possess neither does not invalidate this), is much more straightforward than mechanically filling up tables with columns one by one, each table in its own window that hides the other tables.

P.S. I have left out query building. Obviously "Navicat Premium" and "Navicat for ..." come with it, as does any other frontend; I didn't look for it in "Modeler" but it probably has it, too. You also need named, stored queries, which are available in Navicat but not in every other frontend, or only in an impractical way: in "AnySQL Maestro / ... Maestro" for example, you can store them, but clicking on one doesn't run the query, it just opens another pane in which you must then click a "Run" button or some such.
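
(For completeness: a named, stored query can also live in the database itself rather than in the frontend, as a view, which any frontend can then run like a table. A minimal sketch, with made-up table and column names:

CREATE VIEW history_books AS
SELECT a.name, b.title
FROM author a
JOIN book b ON b.author_id = a.id
WHERE b.topic = 'history';

-- afterwards, any frontend can simply do:
SELECT * FROM history_books;

This does not replace a proper query builder, of course, but it makes the stored query independent of any particular frontend's way of storing it.)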

"Navicat" should do three things:
- implement better organization (see above)
- show real interest in the specifics of the database languages it covers
- introduce into "Modeler" the most important of my suggestions above: functional layers or functional color sorting; comments readable without "real" selection being needed, with visual indicators; all field attributes available by mouse-over or at the very least by a simple click, either in the Properties pane or in a floating, mouse-over pane (not only in the extra, cumbersome "Design Table" window as currently); then, in a second step, the positioning of the link lines (field-to-field instead of table-to-table).
And don't call a demo "Essentials".

97
General Software Discussion / Navicat Review
« on: April 22, 2017, 12:44 PM »
In my tries with SQLite, I played around a little with the "Navicat Data Modeler", which is not cheap; in fact I played around with the free "Essentials" version, which is not a lite version but a demo: you get no data in and no data out, except perhaps with the 14-day trial.

Playing around with it was lots of fun, but there were some points I didn't like at all, so I tried to get their point of view on their forum. Sign-up for that forum does not take several minutes but several days (!), and then I wrote:


Navicat Data Modeler is graphically very pleasing and functional, but I miss some functionality which would greatly enhance my productivity with it:

There are no visual comment indicators for fields (see Microsoft Excel for such indicators). This causes a lot of unnecessary clicking (see next point) and mousing-over just to detect possible comments.

Reading comments by mouse-over requires the respective table to be really selected; mere pre-selection does not display anything. Call it pre-activation and activation, respectively, or something else; anyway, the "pre" state colors a frame around the table on mouse-over but does NOT display any comments; for that you must really activate the table by clicking. This is counter-intuitive, since you cannot freely move the mouse over the whole canvas in order to read a comment here, another there in a second table, and again in a third. It's a lot of unnecessary clicking, on top of having to hunt for possible comments in the first place (see previous point).

Link-lines (foreign keys) are not field-to-field (column-to-column), but just table-to-table.

The field (column) rows are too big, so the tables become too big as well and take up too much room (screen real estate). I've seen flowcharts where such symbols are smaller, so many more table symbols fit on a screen of a given size.

The grouping of tables does not work correctly in all instances; more often than not, some tables do not follow when a group is moved around. Also, I would prefer named layers for table groups, with one table able to belong to several named layers (!) and with a multi-selection layers list, i.e. the user could display just one layer, or two or several of them concurrently. Ideally, link-lines going to or coming from tables that are currently hidden would end in a small endpoint labeled with the name of the hidden table, and ideally even with the target/source field name.

Navicat Data Modeler is ideal for constructing databases with 100 tables and more, but it's precisely with such big projects that the realization of the above wishes would help enormously.


Which gave:

Error
You are not authorized to create this post.

This is different from:

Error
The string you entered for the image verification did not match what was displayed.

Of course, I tried on several occasions, on several days...

For the above, it's important to know that their "Modeler" does of course not add comments to SQLite, even though that would be terrific to have (for example via an SQLite database of Navicat's own that would load the comments for display, and could even write them into the SQLite code, into some block of comment lines); I discovered the joy of field comments only by playing around and selecting "MySQL" instead of "SQLite".
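
(To illustrate the difference, with made-up column names: in MySQL the comment is part of the column definition, while SQLite's dialect has no COMMENT clause at all; if I'm not mistaken, the closest SQLite gets is that the original text of the CREATE TABLE statement, ordinary SQL comments included, is kept in sqlite_master:

-- MySQL: the comment is part of the column definition
CREATE TABLE book (
  id INT PRIMARY KEY,
  title VARCHAR(200) COMMENT 'original-language title, not the translation'
);

-- SQLite: no COMMENT clause; only an SQL comment inside the statement text
CREATE TABLE book (
  id INTEGER PRIMARY KEY,
  title TEXT -- original-language title, not the translation
);

So a "comment lines block" as imagined above would have to be maintained by the frontend, not by SQLite itself.)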

That being said, I also tried their "Navicat for SQLite" and discovered BIG bugs: when I tried to insert a column into an existing table, instead of inserting the column, Navicat wrote the data from another column into the rowid column and so destroyed the whole table, with no "undo". Inserting columns into an existing SQLite table is not that easy, as I then learned from forums, but both SQLite Maestro (trial) and SQLite Expert "Personal" (free and highly recommended) correctly perform the necessary intermediate steps (and in no time) in order to execute this task faultlessly.
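
(Those intermediate steps are, roughly, the workaround the SQLite documentation itself recommends for anything ALTER TABLE cannot do; sketched here for a hypothetical table "book" into which a "year" column is to be inserted between "author" and "title":

BEGIN TRANSACTION;
CREATE TABLE book_new (
  id     INTEGER PRIMARY KEY,
  author TEXT,
  year   INTEGER,          -- the newly inserted column
  title  TEXT
);
INSERT INTO book_new (id, author, title)
  SELECT id, author, title FROM book;
DROP TABLE book;
ALTER TABLE book_new RENAME TO book;
COMMIT;

Indexes, triggers and views on the old table have to be recreated as well - exactly the kind of bookkeeping a frontend should get right, and which Navicat apparently got wrong here.)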

If this 1,000$-plus-VAT program (updates for 3 months included), "Navicat Premium", produces similar mishaps in other databases or when translating databases from one format into another, that'll be fun!

The fact is, building a database from a graphical representation is real fun, but only if you can organize that work as I describe above.

The current "Essential Premium" is 160$ plus VAT (it had been 40$ back when the "Navicat for..." subsets were 10$; they are now 40$ each), but if you're willing to live with a quite ugly, old version instead:

An extensive search turned up the only (?) surviving download link for the last version of Navicat Lite, 10.0.3:
http://www.chip.de/d...7e81fa40c05dc3d1cb76 (download dialog for NavicatLite-10.0.3.exe will appear in about 5 seconds)

Edit May 8, 2017: Title Change

98
Cranioscopical, if I'm not very mistaken, I even trialled that editor, and it does not display all occurrences of a search string at the same time. If I'm mistaken, please correct me.

NoteCase Pro is what they call an outliner, right?

And then a wiki even.

I had not thought of such alternative software categories. Here the task would be "display all items that contain one or several key words/strings" - I currently do not see their advantage over a more traditional database, all the more so since the latter can be queried with simple SQL, and the general translation problem would remain either way.

In fact, I have tried to do some planning for exporting my text files into a database, and I have found that this task is not that easy because, as described, I currently organize my data into "pages", for (3-column) printing and also for searching/looking things up on screen: when I see some entry, it sits in a neighborhood of similar entries, all of which stand below some title or some title/subtitle header hierarchy (1-3, sometimes at most 4, levels).

If I put my data into a database, this titling/subtitling will be lost, or I will have lots of work to do: for every text line/record, I need the hierarchy of its respective titles in additional fields - or spread over several tables, with foreign keys - and if I then want to look at some record together with similar records, I need an SQL query naming the respective titles AND subtitles. Alternatively, I could refine the titles/subtitles so that I need less hierarchy for them; in other words, I could try to replace my title hierarchy with flat tagging, with some tag combinations when necessary, in order to simplify the queries and especially in order to simplify the "typing" when searching for some group of entries.
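
(A minimal sketch of the "additional tables with foreign keys" variant, with invented names: a self-referencing title table holds the hierarchy, and each text line only stores the key of the deepest (sub)title it sits under:

CREATE TABLE title (
  id        INTEGER PRIMARY KEY,
  parent_id INTEGER REFERENCES title(id),  -- NULL for a top-level title
  name      TEXT
);

CREATE TABLE entry (
  id       INTEGER PRIMARY KEY,
  title_id INTEGER REFERENCES title(id),   -- deepest (sub)title of this line
  line     TEXT
);

)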

In other words, I had not been aware before that if I transfer my titling hierarchy into a database, my queries become very wordy, since the subtitles are neither unique nor self-sufficient; the missing info sits in the titles higher up. So I become aware of a difference between the organization of a hierarchical text file and that of a database: the database can select by many more criteria, but the criteria encoded in your subtitles will be lost if you don't recode all the info that lived in your titling hierarchy into a form optimized for database usage - it's simply not realistic to type, and on a mobile device at that, SQL "where" strings that add up to 150 characters, while in title/subtitle combinations that many characters do no harm.
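
(With the title/entry sketch above, and purely invented title wordings, such a query could look like this - and like this it would indeed be unpleasant to type on a mobile device:

SELECT e.line
FROM entry e
JOIN title sub    ON sub.id    = e.title_id
JOIN title parent ON parent.id = sub.parent_id
WHERE parent.name = 'French literature, 19th century'
  AND sub.name    = 'Novels, first and early editions';

-- with short, memorizable codes in a flat "tags" column instead,
-- the same lookup could shrink to something like:
SELECT line FROM entry WHERE tags LIKE '%fr19%' AND tags LIKE '%1ed%';

The "tags" column and the codes "fr19" and "1ed" are of course invented; the point is only the difference in typing effort.)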

So I will have to find very short codes/tags instead, ones I can nevertheless memorize, and/or even reorganize my current titling hierarchy and with it the text lines grouped under it. This is fascinating, but it comes totally unexpectedly.

If I put my data into an outliner or a wiki instead, these titles/(sub)categories would either be lost, or, instead of putting each line into its own outliner/wiki item - as I would put each current line into a distinct database record - I would need to create the titles/subtitles as items and put multiple text lines into those items; in other words, the lines would remain grouped as they are now in the text files. This would be quite messy, as it is now - though with better search, IF the outliner or wiki displays lists of search results - and I would not take advantage of what a database could do additionally.

SQL allows searching/grouping for any records that contain value x in field a, value y or z in field b, and so on, or at the very least value x and/or value y somewhere in the same LINE; RegEx search provides at least the latter for text files, too. But if I put my text files into an outliner or a wiki, I even lose this ability to combine values x and/or y within the same line, i.e. the same record, since the records in an outliner or wiki are not the text lines within an item but the items themselves, and "search for value x and/or y within item a" would NOT display just the matching text lines but any outliner/wiki item in which ANY of the text lines/records contains these values, which is obviously not the needed result.
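
(The "x and/or y in the same line" case, again with invented values, is trivial in SQL as long as the line really is one record:

SELECT line FROM entry
WHERE line LIKE '%Flaubert%' AND line LIKE '%1857%';

An outliner item containing many lines cannot give you this per-line guarantee, which is exactly the problem described above.)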

So outliners/wikis seem to be an alternative, but a lesser one - unless you put every text line into its own item, which technically would be possible, I suppose, but which probably doesn't make much sense, since these instruments seem to have been created for more developed texts, not for single text lines; for processing text lines, editors are a very natural solution.

Of course, there is also the problem of combined info currently sitting in ONE text line; an example from just one of my files is one author followed by several book titles, separated by ";" characters which do not occur otherwise, so technically it should be possible - if not easy - to distribute this info over several text lines, each with its own, repeated author information, there being a ":" after the author. Then, in my example file, there are often several authors, but here again RegEx could probably help, since in the other cases there is no "," before the ":".
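
(If such lines were first imported as-is into a one-column staging table - call it raw, one source line per row - SQLite itself could handle the simple one-author case; a sketch, assuming the "author: title1; title2" format described above:

WITH RECURSIVE split(author, title, rest) AS (
  SELECT trim(substr(line, 1, instr(line, ':') - 1)),
         '',
         trim(substr(line, instr(line, ':') + 1)) || ';'
  FROM raw
  WHERE instr(line, ':') > 0
  UNION ALL
  SELECT author,
         trim(substr(rest, 1, instr(rest, ';') - 1)),
         substr(rest, instr(rest, ';') + 1)
  FROM split
  WHERE rest <> ''
)
SELECT author, title FROM split WHERE title <> '';

The multi-author lines would still need separate treatment, by RegEx in the editor or otherwise, before or after such an import.)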

It's similar for other such files: all of them are sufficiently organized (some with special characters like "[]", for example) for some automatic reorganization to appear possible before the translation into a database.

But it's quite a project.

So in my requirement above for an Android/iPad text editor - just one combined list of all search results - I had mistakenly left out my requirement for Boolean search: there should be "or" and "and" and perhaps "not", and all of that applied to the line, not to the file/grouped item.

It's evident that this is too much to ask of a mobile editor, and it also brings to light the enormous advantages of a database - or of an Excel/spreadsheet file, but to a lesser degree, since, as I said above, a flat database would need a descriptions hierarchy to replace the current titling, while in a proper database you would put the descriptions into additional tables and then just put the keys into the core table. I do not know yet what the creation of such a mobile database would imply, but I am currently playing around a little on my desktop, SQLite and several frontends being available. In fact, it's from trying to plan the database that I discover that my source file is far from database-ready; they are really two very different formats, from the conception on.
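
(For contrast with the title/entry sketch further up, the flat variant would drag the whole titling hierarchy along in every single record, again with invented names:

CREATE TABLE entry_flat (
  title1 TEXT,   -- top-level title, repeated for every line under it
  title2 TEXT,   -- subtitle, repeated as well
  title3 TEXT,   -- sub-subtitle, often empty
  line   TEXT
);

which works, and is close to what a spreadsheet would be, but wastes space and invites inconsistencies.)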

99
Thank you for your forum hint. It's correct that in specialized forums the chances are much higher; before my post here, I had only searched and read, not posted the problem anywhere.

Also, the Elevated-Account hint is a very valuable one. Of course, looking for a separate-account solution would be the wrong approach, since I have to run those tools as part of my regular computing, so switching accounts before and afterwards would be out of the question, and doing everything in an elevated admin account, incl. web browsing, is not recommended by anyone. In the few days I had that PC, I did it all in the regular admin account, since the bulk of my doings were of the administrative kind, but I had decided to settle into a regular user account afterwards.

So for SPECIFIC things which cannot do harm, it should be possible to tweak a regular user account accordingly, even in Win10 Professional, and that is the problem I should try to solve from now on. Their folder permission control seems to be a step in that direction, meaning it's an exception for folders, but one which seems to apply to actions done in/upon those folders, not to actions done by executables run FROM such folders.

So my question has been clarified somewhat better now. Thanks!

100
As I had said, I tried to work with a Windows 10 Professional machine for some days and, probably because of motherboard problems, never got that system stable. But I also had problems with command-line tools which W10 Prof. systematically refused to execute, notwithstanding all my tweaking attempts after hours of reading the respective tips and tricks on web forums. I suppose W10 Home would let those tools execute, but for other reasons (setting up a little LAN mid-term) I would like to get W10 Prof. instead, so I would like to find a solution to these problems.

Those tools are invoked, for example, in the following form (typed into the "Run" window, then Enter/Return):

toolpath\toolname.exe someattributes sourcefilepath\sourcefilename.suffix targetfilepath\targetfilename.suffix

From this command-line input, those tools are then expected to open a command window (the black DOS-style window).

By sourcefile I mean the file the tool will work upon, and by targetfile the file the tool will then create from what it has done with the data from the sourcefile; so in reality, even in case of malfunction, no harm is done to the source file, which is only read, and the newly created file is just a changed copy of the source file, not some ".exe" or other harmful file - but W10 Prof. simply refuses to execute those command lines.

I tried this with an administrator account, without success. I also tried putting the tool into other directories, for example into its own directory

c:\toolname\toolname.exe

instead of

c:\toolname.exe

or c:\programs\toolname\toolname.exe

and I also tried putting the sourcefile and the targetfile into directories other than the ones Windows constantly monitors. As said, I also messed around with the UAC settings and afterwards was not even able to reset them, once all that fiddling according to those hints and tricks had proved unsuccessful.

Also, I did not even create an ordinary user before sending the PC back, just the administrator account, which should be allowed much more, permission-wise, than an additional user account.

It goes without saying that with Windows XP Home all of this works smoothly; also, the tools in question either work with W10 as well or are specific versions made for W10.

So I now suppose that I missed some core concepts of this permission control, since neither directory permissions nor user permissions worked for these command-line tools.

To begin with, when Windows speaks of folder permissions, it's not evident whether that means the folders the tools-to-run are placed in, and/or the folders holding the files to be worked upon / accessed by those tools; and the interplay between folder access in general and account control - what a given account is permitted to access or to do - is not evident to me either.

Also, I do not understand why the administrator - not some additional user - would not have the right to run a tool from the command line, independently of the folder that tool is stored in, when on the other hand any program installed into the program folders - c:\programs (x86)\ and c:\programs\ - executes fine when run from the Start panel, yet a tool placed at c:\programs\toolname\toolname.exe does not run when I try to execute it from the command line, i.e. from the Windows "execute/run" dialog, which is necessary in order to enter the required attributes.

I suppose that any program in c:\programs\specificprogramfolder is allowed to execute when triggered from the system by a sort of "attribute inheritance" from the folder c:\programs, but, as said, when I install those tools into such a folder and then try to run them from the command line, that is refused. So it becomes evident that possibly 3 or more security concepts come into play: folder permissions, account permissions, and then also permissions depending on which part of the system a tool/program has been triggered from, even when folder and/or account are identical, or perhaps also on whether it has been triggered with or without attributes.

No help I found - and I tried hard, having just finished a 1,200-page W10 Prof.(!) book without getting any help on this from there either - specifically treats this run-from-command-line (and/or run-with-attributes) permission problem, which, as said, is probably specific to the Prof. versions of Windows in general and/or to W10 Prof. in particular.

(If I hadn't bought so many Windows programs which I would then have to leave behind, or run from within a virtualization tool, which probably will not be practical, I would jump from XP to iOS rather than from XP to W10; but it's not only about buying everything anew, it's especially about finding, choosing and learning all those new programs, so I'd better learn some Windows internals.)
