Messages - ital2

96
When I said registration to their forum took several days, I meant a weekend plus several workdays. The message was: "Your account has been activated but you are currently in the moderation queue to be added to the forum." I then finally got a message that I could post, with the result shown above. There's also "Live Support", which is "Offline" most of the time when I look, but it seems they are indeed available between 5 and 7 a.m. European time, so this should be between around 17 and 19 o'clock Chinese time.

Some two years ago, a user (successfully) asked in their help forum: "Is there a difference between the functionality of the data modeler in the various For [DB] products and the Data Modeler product? If so, is there a feature matrix comparing the two?" That was the question I had asked myself, too. All (s)he got for an answer was pure marketing speak, which you can also find on their web page:

"Thank you for your inquiry.

Navicat Data Modeler is designed for customer to create data models. If you want to use Navicat to design database, Navicat Data Modeler is the product that you want.

Navicat Preiumm [this typo proves they retype their advertising instead of answering a good question in earnest; instead of just copying this crap, they type it anew: bad organization!] is a database administration tool that allows you to simultaneously connect to MySQL, MariaDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application. Inside Navicat Premium, there is a Data Modeling Tool. However, the function is not completely similar as Navicat Data Modeler."

My (perhaps wrong) personal answer to the question is: "Premium" costs $1,000 and does it all, while the database-specific subsets are around $200 each, but without any possible translation from one database dialect into another, even if you own the respective subsets for both dialects. "Modeler", for around $300, is like "Premium" but without any database contents: you can import the structure of any database (in the supported formats, that is), work on it, and export the changed (or unchanged) structure in the same dialect or in another (example: SQLite in, MySQL out, or any other pairing). But with "Modeler", this is possible for the structure only, so if you have production databases, you would again need either the "Navicat for ..." specifics or the "Premium" version if you want to translate.

If I'm correct here, this would mean that the scope of their "Modeler" is quite limited. Anyway, it's evident that the "Navicat for ..." products and the "Modeler" are subsets of the "Premium" version, which means that very probably, bugs in the first or the second are also in the third, and vice versa.

When I said I fear that in their "Premium" (do-it-all) product (and its subsets "Navicat for ...", see my observation on their current "Navicat for SQLite") they didn't go into all the necessary specifics of every dialect, I had not yet found this Modeler mini-review https://www.macupdate.com/app/mac/43140/navicat-data-modeler : "ming-deng
Dec 08, 2014

I tried navicat data modeler to export SQL statements for table creation for PostgreSQL and for SQL server. The output is unusable. For PostgreSQL it creates "Create Index.." which is not supported by PostgreSQL. For SQL server it generates "Drop constraint w/o name" which is going to fail in the SQL server. Also stupid enough it tries to drop all the indices before dropping a table!  When drop a table all indices got drop all together why generate tons of dropping index statements?

The tool has nice GUI [as I had observed, too, and said above] but at this point it seems useless and is full of bugs!" [I cannot say as much myself, but I think this observation for Postgres (for the paid or the trial version, since, as said above, my free "Essentials" version does not do any export whatsoever, correctly or wrongly; it's just a demo) is of interest. I cannot say, of course, whether that bug persists or has been fixed in the meantime.]

They specify their "90 days software maintenance plan" as "complimentary"; declaring this pittance a gift obviously aims to silence any criticism of 90 days not being a standard period for free updates.

As said, the crow's feet and other foreign-key lines are connected to an arbitrary point on the table, not to the specific field/column, but you can move them manually to a better connection point; this is not ideal, especially when you insert fields/columns afterwards, which shifts the positions of existing fields. (You can color those lines, even though I can't conceive a use case for that.)

You can create foreign keys by dragging and dropping the target field onto the source; for this, you must select the foreign-key instrument first. Anyway, this is very good functionality of Navicat Modeler. Since all help is online only, and without search functionality, I'd better praise this excellent point here; you'd risk discovering it belatedly.

When I said that on pre-selection of a table, the contour of the table changes its color, it's more correct to say it grows thicker, as an indication of the pre-selection; but as said, mouse-over will not show any comments then.

Let me explain briefly why I think field comments are so important. They can contain musings about the format of the contents of the field, of course, but they can also contain to-dos and other reminders for your construction work: do this, pay attention to that, etc., even for fields that have not been created yet; in other words, you use the comment of field x as a reminder for the future field y. Also for this reason it's necessary to have visual comment indicators. It seems you cannot create some fields before having created other fields, so it is reasonable to create reminders for those other fields to be created afterwards. All this is not possible for SQLite, which is why I came up with the idea that an SQLite frontend could do this on its own.

Since I tried on several occasions (unsuccessfully every time) to post my questions/suggestions in their forum, I had even lost my text; the text above is a second version (which I could not post either) in which I only remembered some of my points. But now I have found the original text again, so here are some new/old observations left out above:

From my original text: "I would like to use the Modeler to quickly jump from any column to any column, in any table (which is on the canvas), in order to develop and refine it all iteratively; in the comments I do not only put hints, but also "ToDo's", "Attention" and other "Work to do", and for such things concerning not individual columns but a (whole) table, I even create a column "PROBLEM" or such, in order to get a "generic" comment for the whole table.

Those "EXTRA" columns are immediately visible, but of course, regular comments (their content, and also their sheer existence, to begin with!) are not, and I would greatly appreciate the possibility to see any comments by just hovering over any column ("field name") anywhere, without having to do that "real selection" of the respective table beforehand - all the more so since in most cases, I then do not do anything with that table, but just get the information (recall, an aid for my memory) and go on to other tables, retrieving comment info there.

In particular, I do not create tables as fully as possible and then create others, but just create some core columns, then some columns in some other tables, and so on, "completing the picture" in this process in an often seemingly "chaotic" way. Thus, with the need to "activate / really select" any table beforehand, before being able to have a glance at its comments, it's an enormous amount of "clicking around", instead of smooth, intuitive working."

So you also have the idea here of creating immediately visible columns, like "PROBLEM" (text format, of course), where you then put some generic comments, and which you delete again afterwards when those problems are resolved / those tasks have been done; this is also possible with additional columns in existing databases already containing data.
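In SQLite terms, this "PROBLEM column" trick is just a text column added to a live table and dropped again later. A minimal sketch, with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT)")
conn.execute("INSERT INTO customer VALUES ('Smith')")

# Add a generic, immediately visible "PROBLEM" column as a whole-table
# reminder; this also works on a table that already contains data:
conn.execute("ALTER TABLE customer ADD COLUMN PROBLEM TEXT")
conn.execute("UPDATE customer SET PROBLEM = 'normalize addresses into own table'")
print(conn.execute("SELECT name, PROBLEM FROM customer").fetchall())
# -> [('Smith', 'normalize addresses into own table')]

# Once the task is done, the reminder column is deleted again;
# ALTER TABLE ... DROP COLUMN needs SQLite 3.35+, older versions
# require a table rebuild instead.
```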

And first of all, it's, as said in my text, intuitive, iterative construction all over the canvas, neither top-down nor bottom-up, but as flexible as it gets, which means it's by constructing in little steps here and there that you discover the best distribution of your columns over the tables, including new tables to be created for that, or other tables to be stripped of some of their columns.

It's evident that for such very natural work, you need a big screen and quite small tables (as said, those in Navicat are too big even on traditional screens, let alone high-DPI screens), so that you can display many tables on the screen concurrently; and it's also evident that such "chaotic", iterative work will need comments and visual comment indicators, and that the comments should be visible by simple mouse-over, not requiring an unnecessary intermediate step of "really activating" the table in question.

It also becomes evident why layers would be very helpful (what "Navicat Modeler" has got instead of layers is almost unusable and very badly executed: background graphics with some "glue" which doesn't function properly - a lot of fuss for no real outcome), and with one table able to belong to more than one layer. In big databases, there are obviously table groups, but those groups are not clearly defined for every table belonging to them, so it often makes sense, I think, that just one or two tables from one group are also displayed together with another table group to which their columns "belong" in a way, without being incorporated into those tables, since they are either more generic, more special, or belong even more strongly to some other data.

Btw, you can colorize the table captions, which is not bad at all, but you cannot filter by these colors - which would have been an almost workable alternative to layers. Anyway, you can only assign one color to a table caption, not two (which technically would be possible: you often see two-colored tabs and such, one color as a triangle in the left "half" and the other color as a triangle in the other), let alone three. For tables belonging to more than one group, you could then have assigned additional colors, for example brown for belonging to yellow AND red, and then, if the filter were "brown PLUS yellow" or "brown PLUS red", that would have worked perfectly.

What I also had tried to tell them in my first try: when you select a field/column, an info box to the right of the canvas displays some data for the table; this is devoid of any sense, but of course I worded that differently. In fact, I suggested that when you select the caption of the table, you get that table info in the info box, and when you select a field/column, you get extended info for that field/column.

As it is, there is some very basic field info within the "field" field itself, but for all the other, often very important, info you have to double-click, and then you get a "Design Table" window for the whole table, not for the field in question only (which, both for looking up data and for changing data, is too much), and a window you then have to close by Alt-F4. So, for just looking up relevant field attributes, this is not intuitive at all and takes a lot of time and effort, while all the while the info window to the right of the canvas shows irrelevant table data.

It's evident that selecting the table's caption should display table info, selecting a field should display all field attributes, and that a double click on the caption should display the "Design Table" window for some bulk edits, while double-clicking a field should not even display some "Design Field" window, since it would all be there in the info pane anyway, where of course inline editing should be allowed. This is all so simple that I am unable to understand why anybody would do it in any other way, considering the info pane is there anyway.

Of course, if you don't display such a permanent info pane, then you must do it otherwise: for example, all the field info (and not only the comment) by mouse-over, and a "Design Field" window by double-clicking the field. That would be a very viable alternative for not sacrificing screen estate to the current info pane, but I think a brilliant developer should offer both alternatives. The inline-editable info pane would be for the first period of work, when the user enters enormous amounts of data (with the mouse-over info showing all data too, by option). The mouse-over display of all data, with a clickable "Design Field" window (which in fact would be the inline-editable info pane from the previous alternative, but over the canvas instead of beside it, hiding any table there as long as the user enters data; upon "enter", the editable pane would disappear again), would be for the later stages of work, when there is much less data entering (by data I mean fields and their attributes, not contents) and much more fine-tuning. (Call this info pane "Properties pane", as Navicat does, or "Inspector", or whatever you like. Btw, you can hide it in "Modeler", and since currently it doesn't contain any real information, that's what you should do in order to get a larger canvas.)

From the above, it becomes very evident that graphical construction of a complicated database on a big, high-resolution screen and with the right tool (the fact that I possess neither does not invalidate this), one which has the most important of the functionalities described above, is much more straightforward than mechanically filling up tables one by one with columns, each table separately in its own window, hiding the other tables.

P.S. I have left out query building. It's evident that "Navicat Premium" and the "Navicat for ..." products come with it, as does any other frontend; I didn't look for it in "Modeler", but probably it has got it, too. Also, you need named, stored queries, which are available in Navicat, but not in every other frontend, or only in some not very practical way. In "AnySQL Maestro / ... Maestro", for example, you store these, but clicking on them then doesn't run the query; it just opens another pane in which you must click a "run" button or something.

"Navicat" should do three things:
- implement better organization (see above)
- show real interest in the specifics of the database languages it covers
- introduce into "Modeler" the most important of my suggestions above: functional layers or functional color sorting; making comments available without "real selection" needed, with visual indicators; making all field attributes available by mouse-over, or at the very least by simple click, either in the Properties pane or in a floating, mouse-over pane (not only in the extra, cumbersome "Design Table" window as currently done); then, in a second step, the position of the link lines.
And don't call a demo "Essentials".

97
General Software Discussion / Navicat Review
« on: April 22, 2017, 12:44 PM »
In my tries with SQLite, I played around a little bit with the "Navicat Data Modeler", which is not cheap; in fact, I played around with the free "Essentials" version, which is not a lite version but a demo one: you don't get any data in, nor any data out - for that you'd perhaps need the 14-day trial.

Playing around with it was lots of fun, but there were some points I didn't like at all, so I tried to get their point of view on them in their forum. The sign-up for their forum does not take several minutes, but several days (!), and then I wrote:


Navicat Data Modeler is graphically very pleasing and functional, but I miss some functionality which would greatly enhance my productivity with it:

There are no visual comment indicators for fields (see Microsoft Excel for such indicators). This causes a lot of unnecessary clicking (see next point) and mouse-overs in order to detect possible comments.

Reading comments by mouse-over needs the respective table to be really selected; the pre-selection alone does not display anything. Or call it pre-activation and activation, respectively, or something else; anyway, the "pre" thing will color a frame around the table on mouse-over, but will NOT display any comments; for that, you must really activate the table by clicking. This is counter-intuitive, since you cannot freely move the mouse over the whole canvas in order to read a comment here, then another there, in another table, and again in another one. It's all a lot of unnecessary clicking, and on top of that, searching for possible comments (see previous point).

Link-lines (foreign keys) are not field-to-field (column-to-column), but just table-to-table.

The field (column) boxes are too big, so the tables become too big, too, and take too much room (screen real estate). I've seen flowcharts where these symbols were smaller, so you get many more table symbols on a screen of a given size.

The grouping of tables does not work correctly in all instances; more often than not, some tables will not follow when you move the group around. Also, I would prefer named layers for table groups, with one table able to belong to several named layers (!), and with a multi-selection layers list, i.e. the user could display just one layer, or two or several of them concurrently. Ideally, outgoing or incoming link lines to/from tables not currently visible would end in a sort of end point with the name of the table which currently isn't displayed, and ideally even with the target/source field name.

Navicat Data Modeler is ideal for constructing databases with 100 tables and more, but it's precisely with such big projects that the realization of the above wishes would help enormously.


Which gave:

Error
You are not authorized to create this post.

This is different from:

Error
The string you entered for the image verification did not match what was displayed.

Of course, I tried on several occasions, on several days...

For the above, it's important to know that their "Modeler" does of course not add comments to SQLite, even though that would be terrific to have (for example via an SQLite database in Navicat which would load the comments for display, and could even write them into the SQLite code, into some block of comment lines, for example); I discovered the joy of having field comments by playing around, selecting "MySQL" instead of "SQLite".

That being said, I also tried their "Navicat for SQLite" and discovered BIG bugs. Trying to insert a column into an existing table, instead of inserting the column, Navicat wrote the data from another column into the rowid column and so destroyed the whole table, with no undo. Inserting columns into an existing SQLite database is not that easy, as I then learned from forums, but both SQLite Maestro (trial) and SQLite Expert "Personal" (free and highly recommended) correctly perform the necessary intermediate steps (and in no time) in order to execute this task faultlessly.
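For context: SQLite's ALTER TABLE can only append a column at the end of a table, so to insert a column at a specific position, a frontend has to rebuild the table (new table, copy the data, drop the old one, rename) inside a transaction. A minimal sketch of that rebuild, with hypothetical table and column names - presumably roughly what those tools perform behind the scenes:

```python
import sqlite3

def insert_column(conn, table, new_cols_sql, copy_cols):
    """Rebuild `table` so that a new column appears at a chosen position,
    preserving the existing data (SQLite cannot insert columns in place)."""
    cur = conn.cursor()
    cur.execute("BEGIN")  # all-or-nothing: no destroyed table on failure
    cur.execute(f"CREATE TABLE _rebuild ({new_cols_sql})")
    cur.execute(f"INSERT INTO _rebuild ({copy_cols}) SELECT {copy_cols} FROM {table}")
    cur.execute(f"DROP TABLE {table}")
    cur.execute(f"ALTER TABLE _rebuild RENAME TO {table}")
    cur.execute("COMMIT")

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions ourselves
conn.execute("CREATE TABLE person (first TEXT, last TEXT)")
conn.execute("INSERT INTO person VALUES ('Ada', 'Lovelace')")

# insert a hypothetical `middle` column between `first` and `last`:
insert_column(conn, "person",
              "first TEXT, middle TEXT, last TEXT",
              "first, last")
print(conn.execute("SELECT first, middle, last FROM person").fetchall())
# -> [('Ada', None, 'Lovelace')]
```

Wrapping the whole rebuild in one transaction is the crucial point: if any step fails, a rollback leaves the original table untouched, which is exactly the "no undo" disaster the buggy frontend did not protect against.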

If this $1,000-plus-VAT program (updates for 3 months included), "Navicat Premium", makes similar mishaps in other databases, or when translating databases from one format into another, that'll be fun!

The fact is, building a database from a graphical representation is real fun, but only when you can organize that work according to what I said above.

The current "Essential Premium" is $160 plus VAT (it had been $40 when the "Navicat for..." subsets were $10; they are now $40 each) - but perhaps you're willing to live with a quite ugly, old version instead:

Extensive search had me find the only (?) surviving download link for the last version of Navicat Lite 10.0.3:
http://www.chip.de/downloads/c1_downloads_hs_getfile_v1_70358375.html?t=1492872478&v=3600&s=e4dfd9f627b57e81fa40c05dc3d1cb76 (download dialog for NavicatLite-10.0.3.exe will appear in about 5 seconds)

Edit May 8, 2017: Title Change

98
Cranioscopical, if I'm not very mistaken, I even trialled that editor, and it does not display all occurrences of a search string at the same time. If I'm mistaken, please correct me.

NoteCase Pro is what they call an outliner, right?

And then a wiki even.

I had not thought of such alternative software forms. So here the task would be "display all items which have one or several key words/strings in them" - I currently do not see their advantage over a more traditional database, all the more so since the latter could be queried by simple SQL queries, and a general translation problem would remain.

In fact, I have tried to do some planning for exporting my text files into a database, and I have found that this task is not that easy because, as described, I currently organize my data into "pages", for (3-column) printing and also for searching / looking up on the screen: when I see some entry, it's within a vicinity of similar entries, and all of them are below some title, or some title/subtitle hierarchy (1-3 levels, sometimes 4).

If I put my data into a database, this titling/subtitling will be lost, or I will have lots of work to do: for every text line/record, I need the hierarchy of its respective titles in additional fields - or spread over several tables, with foreign keys - and if I then want to look at some record together with similar records, I would need an SQL query with the respective titles AND subtitles. Alternatively, I could refine the titles/subtitles in a way that needs less hierarchy, or in other words, I could try to replace my title hierarchy by flat tagging, with some tag combinations where necessary, in order to simplify the queries, and especially in order to simplify the typing when searching for some group of entries.

In other words, I had not been aware before that if I want to transfer my titling hierarchy into a database, my queries would become very wordy, since the subtitles are neither unique nor do they carry sufficient info by themselves - that info lives in the titles higher up. In other words, I become aware of a difference in the organization of a hierarchical text file and a database: the database can select by many more criteria, but the criteria in your subtitles will be lost if you don't recode all the info which was in your titling hierarchy, now optimized for database usage. It's simply not realistic to type, and on a mobile device at that, SQL WHERE clauses which add up to 150 characters, while in title/subtitle combinations, that many characters do no harm.

So I will have to find very short codes/tags instead, ones which I can memorize at the same time, and/or even reorganize my current titling hierarchy, and with it the text lines grouped by it. This is fascinating, but it comes totally unexpectedly.

If I put my data into an outliner or a wiki instead, these titles / (sub)categories would either be lost, or else, instead of putting each line into an outliner/wiki item - as I would put each current line into a distinct database record - I would need to create the titles/subtitles as items and put multiple text lines into those items; in other words, they would remain grouped as they are now in the text files. This would be quite messy, as it is now - though with better search, IF the outliner or wiki displays lists of search results - and I would not take advantage of what databases could do additionally.

SQL allows for searching/grouping of any records that contain value x in field a, value y or z in field b, and so on, and at the very least value x and/or value y in LINE a; RegEx search provides the latter functionality at least for text files, too. But if I put my text files into an outliner or a wiki, I even lose this functionality of combining values x and/or y in the same line, meaning the same record, since the records in an outliner or wiki are not the text lines in an item, but the items themselves. "Search for value x and/or y within item a" would then NOT display just the corresponding text lines, but any outliner/wiki item in which ANY of the text lines/records comprise these values, which is obviously not the needed result.
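A small sketch of the difference, with invented data: line-level search returns exactly the matching record, while item-level search only says whether the whole item matches, dragging all its lines along:

```python
# invented data: an outliner "item" grouping several text lines (records)
item = [
    "Goethe: Faust",
    "Fontane: Effi Briest",
    "Fontane: Der Stechlin",
]

# line-level search (an editor with RegEx, or SQL with one record per line):
# only the line containing BOTH values comes back
hits = [line for line in item if "Fontane" in line and "Effi" in line]
print(hits)  # -> ['Fontane: Effi Briest']

# item-level search (outliner/wiki): the whole item matches as soon as each
# value occurs in ANY of its lines, so all three lines would come back
item_matches = (any("Fontane" in line for line in item)
                and any("Effi" in line for line in item))
print(item_matches)  # -> True
```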

So outliners/wikis seem to be an alternative, but a lesser one - or else you put every text line into its own item, which technically would be possible, I suppose, but which probably doesn't make too much sense, since these instruments seem to have been created for more developed texts, not for single text lines; for processing text lines, editors are a very natural solution.

Of course, there is always the problem of currently having combined info in ONE text line; an example from just one of my files is one author for several book titles, the titles being separated by ";", which does not occur otherwise, so technically it should be possible - if not easy - to distribute this info into several text lines, each with its own, repeated author information, there being a ":" after the author. Then, in my example file, there are often several authors, but here again, RegEx can probably help, since in other cases, there is no "," before the ":".
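A sketch of such a split with RegEx, on an invented line of the described format (one author, a ":" after the author, titles separated by ";"):

```python
import re

# hypothetical source line in the described format
line = "Fontane: Effi Briest; Der Stechlin; Frau Jenny Treibel"

m = re.match(r"^(?P<author>[^:]+):\s*(?P<titles>.+)$", line)
author = m.group("author").strip()

# one line per title, each with the author information repeated
result = [f"{author}: {title}"
          for title in re.split(r"\s*;\s*", m.group("titles"))]
for new_line in result:
    print(new_line)
# -> Fontane: Effi Briest
#    Fontane: Der Stechlin
#    Fontane: Frau Jenny Treibel
```

The multiple-authors case mentioned above would need an extra rule (e.g. treating a "," before the ":" differently); the sketch covers only the single-author lines.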

Similarly for other such files: all of them are sufficiently organized (some with special characters like "[]", for example) for some automatic reorganization to appear possible before translation into a database.

But it's quite a project.

So in my requirement above for an Android/iPad text editor - just a list of all the search results together - I had mistakenly left out my requirement for Boolean search: there should be "or" and "and" and perhaps "not", and all of that per line, not per file/grouped item.

It's evident that's too much to ask of a mobile editor, and it also brings to light the enormous advantages of a database - or of an Excel/spreadsheet file, but to a lesser degree - since, as I said above, a flat database would need a descriptions hierarchy to replace the current titling, while in a correct database, you would put the descriptions into additional tables and then just put the keys into the core table. I do not know yet what the creation of such a mobile database would imply, but I am currently playing around a little bit on my desktop, SQLite and several frontends being available. In fact, it's from trying to plan the database that I discover that my source file is far from being database-ready; they are really two very different formats, from the conception on.

99
Thank you for your forum hint. It's correct that in specialized forums, chances are much higher; before my post here, I had just searched and read, not posted the problem.

Also, the elevated-account hint is a very valuable one. Of course, looking for an account-switching solution would be the wrong approach, since I have to run those tools alongside my regular computing, so an account change before and afterwards would be out of the question, and doing it all in an elevated admin account, incl. web browsing, is not recommended by anyone. In the few days I had that PC, I did it all in that regular admin account, since practically all of my doings were of the administrative kind, but I had decided to settle down to a regular user account afterwards.

So for SPECIFIC things which cannot do harm, a regular user account, even in Win10 Professional, should be tweakable that way, and that's the problem I should try to solve from now on. Their folder permission control seems to be a step in that direction, but it seems to be an exception for folders which applies to actions done in/upon those folders, not to actions done by executables FROM such folders.

So my question has been clarified further. Thanks!

100
As I had said, I had tried to work with a Windows 10 Professional machine for some days, and due to probable motherboard problems, I didn't get that system stable. But I also had problems with command line tools which W10 Prof. systematically refused to execute, notwithstanding all my tweaking tries from hours of reading respective tips and tricks on web forums. I suppose W10 Home would accept these tools executing, but for other reasons (establishing a little LAN mid-term), I would like to get W10 Prof. instead, so I would like to find a solution to these problems.

Those tools are invoked, for example, in the form (from the "Run" window, then the following, then Enter/Return):

toolpath\toolname.exe someattributes sourcefilepath\sourcefilename.suffix targetfilepath\targetfilename.suffix

From this input into the command line, those tools are then expected to open a command window (the black DOS-style window).

By sourcefile, I mean the file upon which the tool will work, and by targetfile, I mean the file the tool will then create from what it will have done upon the data from the sourcefile. So in reality, even with a malfunction, there is no harm done to the source file, which is just read, and the newly created file will be some changed copy of the source file, not some ".exe" or other harmful file - but W10 Prof. just refuses to execute those command lines.

I tried this with an administrator account, but not successfully. I tried to put the tool into other directories, for example into its own directory

c:\toolname\toolname.exe

instead of

c:\toolname.exe

or c:\programs\toolname\toolname.exe

and I also tried to put the sourcefile and the targetfile into directories other than the ones that Windows constantly checks. And as said, I messed around with the UAC settings according to those hints and tricks, without success, and was then not even able to reset them after all that messing around.

Also, I did not even create an ordinary user before sending back the PC, but just used the administrator account, which should be allowed much more, permission-wise, than an additional user account.

It goes without saying that with Windows XP Home, all this works smoothly; also, the tools in question either work with W10 as well or are specific versions made for W10.

So I suppose now that I am missing some core concepts of this permission control, since neither directory permissions nor user permissions worked for these command line tools.

To begin with, when Windows speaks of folder permissions, it's not evident whether that means the folders in which the tools-to-run are located, and/or the folders containing the files to be worked upon / accessed by these tools; and the coordination of folder access in general with account control - what some account is permitted to access or do - is not evident to me either.

Also, I do not understand why the administrator - not some additional user - would not have the right to run some tool from the command line, independently of the storage folder of that tool, when, on the other hand, any program installed into the program folders - c:\programs (x86)\ and c:\programs\ - is executed when run from the start panel, but a tool put into a folder c:\programs\toolname\toolname.exe does not run when I try to execute it from the command line, i.e. from the Windows "Run" dialog, which is necessary in order to enter the needed arguments.

I suppose that any program in c:\programs\specificprogramfolder being executed when triggered from the system is a sort of "object attribute inheritance" from the folder c:\programs; but as said, when I install those tools into such a folder and then try to run them from the command line, execution is refused. So it becomes evident that possibly three or more security concepts come into play: folder permissions, account permissions, and then also permissions depending on from which system internals a tool/program has been triggered, even when folder and/or account are identical - or perhaps also whether it has been triggered with or without arguments.

No help I found - and I tried hard; I just finished a 1,200-page W10-Prof.(!) book without getting any help on this from there either - specifically treats this run-from-command-line (and/or with-arguments) permission problem, which, as said, is probably specific to the Prof. versions of Windows in general and/or to the W10 Prof. version in particular.

(If I hadn't bought so many programs for Windows which I then had to leave behind, or would need to run from within a virtualization tool, which probably will not be practical, I would jump from XP to iOS, not from XP to W10; but it's not only about buying anew, it's especially about finding, choosing, and learning all those new programs, so I'd better learn some Windows internals.)
