
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - DK2IT

1
Living Room / Re: If not Cubby, then what?
« on: October 25, 2016, 05:05 PM »
Some alternatives can be:
Hubic (cheap but slow?)
Tresorit (expensive)
PCloud
Resilio Sync

2
Wouldn't the internal html viewer also be less likely to be a malware vector simply because the IE control is a rich target for malware?
Yes - in fact, the option's description says "not susceptible to Internet Explorer vulnerabilities".
I'm using it and it's very good; complex emails with CSS or advanced formatting may not display so well, but if you need that, you can always open the email in your preferred browser.

3
The Bat! has tag support and can use either its internal HTML viewer (faster, but sometimes not so accurate) or the more compatible Windows Internet Explorer control (similar to Outlook).
[Screenshot: TheBatHtmlViewOptions.png]
The Bat! is a powerful email client with many functions and deep customization; it has a plugin system and a template language for automatically creating standard replies.
Maybe the interface is not so "modern", but it's fast and the program does not require much RAM (for me it generally uses between 70 and 130 MB).
It's also crash resistant - sometimes I've had to terminate it and no email was lost - and I have folders with several GB of mail and thousands of emails.
Maybe it has too many functions for basic use, and that can be a little confusing for a newbie, who has to explore many menus and options.

4
Post New Requests Here / Re: IDEA: Hand Tool | Grab and Drag
« on: July 01, 2015, 01:56 PM »
Sorry to resurrect this "old" post; I've only seen it now, and it reminded me of a program I used a long time ago on Windows 98, MouseImp, as mentioned here.
Also try DragToScroll, which seems equally good.

5
Here's a quick plain Windows batch script:
@echo off
setlocal enabledelayedexpansion & set /a total=0
rem Mem Usage is the 5th CSV field ("12,345 K"): rejoin the comma-split pieces, strip quotes and " K", sum in KB
for /f "tokens=5-7 delims=," %%a in ('tasklist /FI "IMAGENAME eq chrome*" /FO CSV /NH') do (
    set mem=%%a%%b%%c
    set mem=!mem:"=!
    set /a total+=!mem: K=!
)
set /a total/=1024
echo %total% MB
Of course, instead of a fixed name like "chrome" you can use %1 to take the process name as a command-line argument.

For Process Hacker, you must hover the mouse pointer over the program's systray icon for a few seconds and the popup window appears; unfortunately, the processes are grouped only in this window.

6
Well, you can try to get something similar with tasklist and an AWK for Windows or the like (or some advanced calculation with the FOR command).
As a task manager, try Process Hacker; it works quite well.

7
The first time I also got several errors with PowerShell and this script  >:(
If I remember well, you must allow script execution (the PowerShell execution policy), and when you launch it from the command line you must enter the FULL path of the script.

8
To get the total memory of a multi-process application you can use a little PowerShell script (taken from here):
# do nothing if no chrome process is running
$chrome = Get-Process chrome -ErrorAction SilentlyContinue
if ($chrome -ne $null)
{
    # sum PM (paged memory) and WS (working set) over all chrome processes
    $m = ps chrome | measure PM -Sum ; ("chrome Physical Memory {0:N2}MB " -f ($m.sum / 1mb))
    $m = ps chrome | measure WS -Sum ; ("chrome Working Set {0:N0}MB " -f ($m.sum / 1mb))
}
I've found that the Working Set seems to give a better indication of the real memory occupation.
Of course you can change the process name and obtain the memory usage of other multi-process applications, like opera, iexplore, etc.
(with this system you can also sum several processes together, like firefox + plugin-container + FlashPlayerPlugin)

Or you can use Process Hacker: the latest release (v2.36) includes a tray popup window (which can be made sticky) that shows the memory/CPU usage of the most active processes,
and for multi-process apps it shows the sum of all the processes (also for CPU!).

9
Where many of the entries are variations on the same base, user01 user02 user1979 user1980 etc..  my last suggestion would be only store the "base" of the dictionary entry and generate the variations.
And that can be an interesting idea  :Thmbsup:
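Just to sketch that idea: if the wordlist were in SQLite, you could store only the bases and expand the numeric variants on the fly with a recursive query. This is only a hypothetical example (it assumes two-digit suffixes; the table and column names are made up):
-- hypothetical: keep only the base words and generate the numeric variants when needed
CREATE TABLE base_words (base TEXT PRIMARY KEY);
INSERT INTO base_words VALUES ('user');
WITH RECURSIVE n(i) AS (SELECT 0 UNION ALL SELECT i + 1 FROM n WHERE i < 99)
SELECT base || printf('%02d', i) AS candidate FROM base_words, n;
-- gives user00, user01, ... user99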

10
Of course this is a quick solution for bhuiraj; it's not optimal, but it doesn't require special software. I've tested 1.5 GB of data with over 227 million words, and the DB is quite big and the searches are not so fast. But, of course, if you need speed you can use a DB like MySQL or Oracle with a fine-tuned configuration (in-memory indexes, query cache, partitioned tables, etc.).
In that case, however, it is possible to create an optimal solution (without the generic DB overhead), but you would need to write specific software to handle a very, very big dictionary.
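Even staying with SQLite, a few pragmas can speed up a one-shot bulk import; this is just a sketch (the values are only examples, I haven't tuned them on this dataset):
-- possible speed-ups for a one-shot bulk import (example values only)
PRAGMA journal_mode = OFF;     -- no rollback journal while loading
PRAGMA synchronous = OFF;      -- don't wait for the disk after every write
PRAGMA cache_size = -200000;   -- about 200 MB of page cache (negative value = KB)
BEGIN;
-- ...all the INSERT INTO wordlist (word) VALUES ('...'); statements...
COMMIT;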

11
How would I use/apply this? :)
Just use some DB manager for SQLite, like SQLite Database Browser, or the command-line version; there are many other programs too.

I don't see what you suggested that I didn't already in this post:
https://www.donation....msg245865#msg245865
Nothing new, just a real implementation, because we didn't know how fast a DB with the keyword as a key would be. And I can say that it is very fast and does not need much RAM, but it does need hard disk space. Maybe an enterprise DB (like Oracle/MySQL/etc.) can handle GBs of data better than SQLite, but the approach is the same.
Of course, you must find the right program to handle it, because some GUI apps (like SQLite DB Browser) load the file into RAM and need over 1 GB for that 100 MB file. The command-line version needs only about 3 MB instead.
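For example, a minimal session with the sqlite3 command-line shell could look like this (wordlist.db, words.txt and the wordlist table are just example names; it assumes the table already exists and the file has one word per line):
sqlite3 wordlist.db
sqlite> .import words.txt wordlist
sqlite> SELECT COUNT(*) FROM wordlist;
sqlite> SELECT word FROM wordlist WHERE word = 'abacus';
sqlite> .quit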

12
Well, I've made a test.
Maybe SQLite is not so optimized in terms of file size, but it's very fast at inserting and finding words.
The file size is about two times the TXT, and with an index added to make searches faster, it's four times!
However, with today's big disks, and on an NTFS volume using file compression, the file size should not be a problem.
Here's my test: a file (in SQLite) with about 5.6 million words - maybe there are duplicates, because I did a very quick import.
Searching is very quick, even using the "slow" LIKE.
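To give an idea, the searches I tried were along these lines (a sketch, using the same table and column names as the wordlist example below):
-- exact match
SELECT word FROM wordlist WHERE word = 'abacus';
-- the "slow" LIKE, still quick on this file
SELECT word FROM wordlist WHERE word LIKE 'abac%';
-- optional index: speeds up exact-match lookups but roughly doubles the file size again
CREATE INDEX idx_word ON wordlist (word);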

13
The trouble with using a relational database in this case is the key is the only data.  You're not saving anything or creating any efficiency.  If I have 3 paragraphs of text with key "ProfIrwinCoreySpeach" then I can use the key to get the data.  With the dictionary, there is no other data.  There's nothing to optimize.
What do you mean?
According to your example, you have this huge wordlist:
ab
aba
abaca
abacas
abaci
aback
abacus
abacuses
...
etc. etc.

that can be inserted into a database by creating a table with only one field, something like this:

TABLE wordlist (
word VARCHAR
)

That's all.
Or maybe I haven't understood your problem well  :(

You could split the files according to first character as you say.  But the only good way would be to build as you go along.  Having 33 GB of spaghetti data to unravel at the start makes it unwieldy.
Of course, once you have the file well organized, the next time you add a new keyword you must insert it in the correct position. Or you can have some background process that sorts the file for binary search. Or you can create an index file (to make searches fast) over the unordered data.

14
Well, there is no problem having a DB with the keyword as the index key; the same approach is the basis of a search engine.
So I think you can find some DB software (I'd suggest something SQLite-based, since it's a very fast and small DB, like SQLite Database Browser) and start inserting the keywords.
And if you create the keyword field as unique, you cannot insert duplicates (see the little sketch below).
Of course, if you create an index on the keyword to have more speed in the search, that index will take up additional disk space.
I think this is the most efficient system.
Otherwise, you can use your system, maybe splitting the words into several files named after the starting letter of the keyword (A.txt for all the words starting with A, B.txt for words starting with B, etc.).
And if you think it could be useful, I already have a little tool to compare and merge two text files without duplicates. Of course, I haven't tried it on very BIG text files.
However, I strongly suggest the first solution, which is more practical, I think.
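As a little sketch of what I mean (the table and word are only an example; note that the UNIQUE constraint itself already creates the index used for the fast lookups):
-- keyword as a unique field: duplicates cannot be inserted
CREATE TABLE wordlist (
word VARCHAR UNIQUE
);
-- silently skips words that are already there
INSERT OR IGNORE INTO wordlist (word) VALUES ('abacus');
-- fast exact-match search through the unique index
SELECT word FROM wordlist WHERE word = 'abacus';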
