
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - jity2

Dear All,

Here are my test results so far, comparing Copernic Desktop Search vs. X1 Search vs. dtSearch vs. Archivarius 3000 (limited to 10,000 files due to the trial limit).
I tested them on only one of my archive folders (year 2008), plus one folder containing some emails (.eml) and one folder containing some big PDF files and one .epub file.
Note: IMHO the file counts should be taken with a grain of salt: depending on each program's default configuration, some extensions that I don't care much about could have been included or excluded (I care about htm, html, doc, pdf, xls ... see images).

See attached images. ;)

Conclusion so far for me: I think I am going to buy dtSearch once I have tested it with my full archives.
For now, dtSearch is faster at building the index, and the index itself is smaller.
And one thing that I like is that it displays extracts of each result with the keyword highlighted (see the option "First hit in context"), although not for all file extensions, and not for all PDF or HTML files (I don't know why). ;)

Archivarius 3000 test: I also like the fact that extracts around the keyword are displayed, but it displays results only as plain text! ;(

ps: It is not in the images, but I also tested them on the same folders containing only zip files. dtSearch was again the winner for indexing speed (twice as fast as with the unzipped version!). Alas, I have too many zip files inside zip files: it finds the keywords fine, but when I want to open the file I am left with the zip file opened, not the related file inside it. Anyway, I'll keep my unzipped folders. ;)

Hope this helps ;)


I have tried X1 with the big PDF file in your link: https://www.donation....msg220457#msg220457
Indeed, to my surprise, the PDF is not indexed in full by X1! I have tested another big PDF file of mine and got the same result! ;(
I don't know why, but X1 stops indexing PDF files after some number of characters. Maybe 1 million, or something else, like Google Drive? I don't know!

Also I noticed that X1 did not index some of my yearly subfolders (see above).

I am going to test dtSearch! ;)

See ya ;)

General Software Discussion / Re: Desktop search; NTFS file numbers
« on: January 11, 2015, 08:15 AM »
Hi Peter,
Thank you for your comments and stackoverflow link. ;)

It's evident 2 million (!) files cannot reach any "NTFS limit"
I agree. I was obviously not clear enough!
In fact, here is what I did to reach some kind of NTFS limit on one hard drive when I unzipped a monthly archive a few months ago (the solution was to remove the unzipped version of some previous months and move them to another HDD):
First, I have 2 versions of my data:
- one is zipped (no problem here; the total size is more than 2 TB).
- one is the same data, but unzipped.
The unzipped version has (from memory!) more than 10 million files in total, but I chose to index the content of only some of the files (about 2 million). For instance, I index pdf, doc, many small eml files, etc., but I do NOT index the filenames or content of images (.jpg, .gif), .js files, etc.
Over the last 15 years I have saved a lot of HTML files (https://www.donation...opic=23782.msg215928).

Thanks again for your link and especially this comment which I think is relevant to me ! ;)

Re: X1 and Outlook.
Sorry, I can't help, as I don't use it (I use Gmail).

See ya ;)

Dear all,
Thanks for all your help. ;) I have updated my previous message : https://www.donation....msg373068#msg373068
See ya ;)

My pleasure. ;)


To be fair, I can add that Backblaze, a few months after my problems, increased the size of their (among other technical things).
So, more than 1.5 years later, I am still a customer, probably because, broadly, the reasons given here https://www.donation....msg283957#msg283957 are still valid.
I have now about 2 TB of data saved in Backblaze (about 700,000 files, and 2 GB for
But since then, besides keeping local backups in several places, I have added copies of my data to other cloud providers: Google Drive (1 TB limit - I hope they increase it soon! ;) ) and (*) (it saves my Gmail emails and Google Drive daily, with unlimited storage, on Amazon's cloud; I currently have more than 2 TB saved there).
By the way, I also use Syncdocs for uploading data to Google Drive (only uploading), as it is far more reliable than the Google Drive uploader!

So I no longer use a RAID mirror on my PC, as I have several physical copies plus several online copies of my data.
I hope this helps ;)
See ya

(*) (Note: you & I each get a $5 discount if you follow this link: ). They save your Gmail inbox, drafts, trash, etc. + Google Drive once a day (if you have a lot of data) or twice a day (but not Google Plus photos, scripts, or forms, because those can't be exported from Google with the Google API), so you can restore (or export to your computer) your old emails and Google Drive files easily. Once saved, they can't delete anything, and they keep all data incrementally. You will probably have to wait a few days for them to do a first full backup of your data.

Update, one day later: I have updated several parts of this post. It may be better to read it again in full! ;)
Dear all,

1) For keyword search in files content - Locally on my PC
I was using Copernic Desktop Search (CDS) for years (since V2, I think). A month ago I was still using their V3.7, released a long time ago.
I bought their V4 more than one year ago, but even recently it was too buggy for me. Note: when V3 was released years ago, I remember I had to wait one year before it was no longer buggy! Well, I don't think I can say they care about their customers (note: it costs a one-time fee of about $50, with a 50% coupon when a new version is released)!

Now I have been testing X1 Search V8 for the last 10 days (I remember having tested X1 unsuccessfully back when it was with Yahoo; apparently they rewrote the full code of their program around 2010, which I had missed!). So far I am quite happy overall, especially since it displays xls, pdf and html files faster in the preview pane.
The only problem I have is that it doesn't display accents correctly in the preview pane for HTML files (see

With CDS V3.6, the size of the index was 85 GB, with about 2,000,000 files indexed (note: on one HDD I even hit the NTFS limit: too many files to handle! edit: see https://www.donation....msg373167#msg373167). It took about 15 days to complete, running 24/7.
I still haven't finished indexing all my content (mainly xls, doc, ppt, mdb, csv, zip, pdf, eml, msg files) with X1. So far, the indexing speed and the size of the index seem to be about the same, but I'll try to update that info here. ;) edit1: Apparently some locally saved emails (.eml) were not indexed. I need to find out why! edit2: I have tried several things, but it won't index some eml files! I also checked and discovered that some subfolders were not indexed, even though X1 says the index is fully up to date. I tried re-indexing 3 times, and the last time it found, on its own, some yearly folders that it had missed the first time! I still have some yearly subfolders that are not indexed!

Update: done. In my case, with the same computer and the same data (about 2,000,000 files) to be indexed, X1 did it in 10 days (5 days less than CDS V3.6), and the size of the index is 52 GB (33 GB less than CDS)! See the edits above! Note: I had some strange problems yesterday. X1 was stuck, and then suddenly it started to use twice the index's disk size! Grrr. I moved the index from the SSD to another HDD, let it finish the indexing, and moved it back to the SSD, where the index size is again normal (52 GB)! Now I need to test extensively to see if everything is OK with X1! I'll try to post updates about it here!

I haven't tested dtSearch yet (so I don't know if it is better/faster; the dtSearch index size seems to be about 15% of the size of the indexed content, so my guess is that the index will be bigger than with CDS or X1). The price seems high at $200, whereas X1 is $50 + $25 each year. So maybe I'll give it a try. ;) In fact, after watching some videos about it, I will try it, even though I don't use regex for searching keywords, and even though the interface seems not quite user-friendly enough (I don't want to click many times just to do a keyword search!).

1 bis) For keyword search in files content - online in the cloud

I have also uploaded my data to Google Drive (see my experiments here https://www.donation....msg373077#msg373077 ). There, it also indexes the content of the files, with some limits. For instance, it:
- indexes only the first 100 pages of PDF files - but if you open a 1000-page PDF file and do a keyword search in it, it will find the keyword!
- indexes only the first million characters of any file (
- may not be able to open very large xls files (note: I have created Google Drive Sheets close to the 2,000,000-cell limit, but they consumed close to the 2 GB RAM limit of 32-bit Firefox!). In practice, I stay below about 20 MB for xls files.
- doesn't display small extracts of the files (like we see when we do a keyword search in So you have to open each file to see if it is the document you were looking for!
- makes you wait a few seconds for the UI to preview or open a file.
- doesn't display HTML files correctly: in fact, it displays only the HTML code!! This is strange, as Gmail can render it properly (without images) when you send an HTML file attached to an email!
If you have millions of small files (html plus their related gif etc. files), it may be very difficult (it easily creates orphaned files without telling you about it... https://productforum...ns/drive/qM_Wdt6ElRQ)! My next goal is to convert my HTML files to txt so I can search them inside Google Drive. But I hit another problem: Google Drive folders are not folders like on your PC, but labels (see https://productforum...ns/drive/qM_Wdt6ElRQ). It can take ages to upload many thousands of files to Google Drive: it loads all the Google Drive file entries somewhere in a server's memory before adding new ones. Plus, you can't yet search easily within a folder.
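The HTML-to-txt conversion step itself is easy to automate. As a minimal illustrative sketch (not the exact tool I used - the folder layout and file handling are up to you), Python's standard-library HTML parser can strip the tags and scripts and keep only the visible text:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html):
    """Return the visible text of an HTML document as one line."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

Run over each saved .htm file and written out as a .txt file, this would give Google Drive something it can index and preview as plain text.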

2) For keyword search in filenames only
I also use Everything for file searching. I like its folder-filter shortcuts (search only in some pre-defined folders).
I also use Listary for file searching when I have a folder with many files open. It can search inside it very fast (I also use it to very quickly select folders that I have previously bookmarked).

Well, as someone wrote hereafter, I think I am close to the limits for indexing all my data! But day after day, it gets better! ;)
See ya ;)
My computer: Win 8.1 64-bit French + 4 cores + HDD + SSD + 16 GB RAM.

Hi nkormanik,

I still use Unziplify on Windows 8.1 64-bit, and it still works fine for me. Apparently you can now find it here:

See ya ;)


In case this helps: I have found a way to do that with the freeware dnGrep.

I select the folder where the htm files are located. Then I click on the right icon and enter "*.htm".

Here are the regex steps:

1) cut everything after keyword2:
with regex + multiline + "dot as newline" checked
+ hit Search, then hit Replace

2) cut everything before keyword1:
with regex + multiline + "dot as newline" checked
+ hit Search, then hit Replace
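The two dnGrep steps above can be sketched in any regex engine. A minimal Python illustration (keyword1/keyword2 are placeholders; `re.DOTALL` plays the role of dnGrep's "dot as newline" option, and `re.escape` lets the keywords contain quotes and punctuation):

```python
import re

def cut_outside(text, keyword1, keyword2):
    """Keep only the span from keyword1 through keyword2."""
    k1, k2 = re.escape(keyword1), re.escape(keyword2)
    # 1) cut everything after keyword2 (DOTALL: '.' also matches newlines)
    text = re.sub(k2 + r".*", keyword2, text, count=1, flags=re.DOTALL)
    # 2) cut everything before keyword1
    text = re.sub(r".*?" + k1, keyword1, text, count=1, flags=re.DOTALL)
    return text
```

Applied file by file, this is the same "search then replace" pair as in dnGrep.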

voila ! ;)
see ya


I have many htm files in a folder.
I would like an AutoHotkey script that opens one file, deletes anything before keyword A and after keyword B, then saves and closes the htm file.
And repeats the same process for all the other htm files in the folder.

note: keywords A & B can be a long phrase like "var this_is<_a long: "phrase" "

Thanks in advance ;)

Dear all,

In past years I have already made some similar requests, which are (still) working with WinRAR:

A) zip all main sub-folders at once with WinRAR
see: https://www.donation...57.msg33048#msg33048

B) For unzipping I use Unziplify (it checks several times whether a new zip is inside the just-unzipped folders).

Now I would like to do the same with 7-Zip, as it creates very small 7z files in my case. Example: 5.7 GB of unzipped files becomes a 1 GB rar file (note: I can't do it as a .zip file, as the 2 GB limit is reached), or a much smaller 50 MB 7z file!! The time spent zipping is about the same, approx. 10 min.
1) zip all main sub-folders at once with 7zip
I guess I have to change this row :
Code: Autohotkey [Select]
RunWait,c:\Program Files\WinRar\WinRar.exe a -r -ep1 -m3 "%TargetPath%\" "%A_LoopFileFullPath%\*.*"
But I am lost ! ;(
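I can't offer the AutoHotkey line, but for comparison here is a small Python sketch of what the equivalent 7-Zip call might look like. The `7z.exe` path and the `a -t7z -mx=9` switches come from 7-Zip's standard command line; the folder names are made up, and the function only builds the command lists rather than running them:

```python
import os

# Hypothetical install path - adjust to your machine.
SEVENZIP = r"C:\Program Files\7-Zip\7z.exe"

def build_7z_commands(parent_dir, target_dir, subfolders):
    """For each sub-folder, build a 7z command that packs it into
    target_dir\<name>.7z (analogous to the WinRAR 'a -r -ep1' call)."""
    commands = []
    for name in subfolders:
        src = os.path.join(parent_dir, name)
        dst = os.path.join(target_dir, name + ".7z")
        # 'a' = add to archive, -t7z = 7z format, -mx=9 = max compression
        commands.append([SEVENZIP, "a", "-t7z", "-mx=9", dst, src])
    return commands

cmds = build_7z_commands(r"F:\archives", r"F:\zipped", ["2008", "2009"])
```

In AutoHotkey, the same idea would presumably mean swapping the WinRAR executable and switches in the RunWait line for the 7z ones.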

2) unzipping with 7zip

A problem for me is that it does not delete .7z files once they have been unzipped

Thanks in advance ;)

Win 8.1 64bits


@IainB: Thanks for your suggestions. ;) I have already tried Update Scanner in the past. It is not useful enough for me.

My request is old, but it is still needed! ;)
For the moment I am still using Yahoo Pipes (I often use it to keep only feed titles, or to get the full content of a few URLs) and Website Watcher in order to receive emails (about 2000 bookmarks monitored - I can also get, for instance, only one big email every 12 hours for RSS-feed bookmarks that are checked every hour - the current Website Watcher maximum limit being 99 items per bookmark). (*)
I also use Yahoo Pipes with (in order to receive emails when a new item is available in the RSS feed created with Yahoo Pipes).

Furthermore, Yahoo Pipes has some usage limits:

200 runs (of a given Pipe) in 10 minutes

200 runs (of any Pipe) from an IP in 10 minutes

If you exceed the 200 runs in a 10 minute block, your Pipe will be 999'ed for an hour.

We currently are not raising rate limits. This may change in the future.

So sometimes ifttt has errors: in the activity log I often see "General Trigger Error" or "Trigger Initialization Timeout". Note: they check RSS feeds every 15 minutes, and it is free. ;)

note: I also use the free, ad-supported plan for other RSS feeds (outside Yahoo Pipes, because it can only check them once a day https://blogtrottr.c...elp/#limited_domains).

The main idea is still to create, for Windows, something similar to Yahoo Pipes RSS feeds, or to the more visual and easier (but often offline!) ! ;)

If a software developer comes by here : be welcomed ! ;)

Thanks in advance ;)

(*) In order to do that, the basic trick in Website Watcher is to: double-left-click on an RSS bookmark checked every hour / choose the "new version" / copy the URL starting with C:\ / create a new bookmark, paste the URL, and check it every 12 hours / then remove the "send an email" option from the one-hour-check bookmark + keep the most recent 99 articles in both bookmarks. ;)

Dear all,
I have updated the Firegestures script today as it didn't work with Firefox 25 !

I had the error :
"ReferenceError: getShortcutOrURI is not defined"
Thanks to https://bugzilla.moz...ow_bug.cgi?id=938401 for the help. ;)

I hope this helps  ;)

See ya ;)

General Software Discussion / Re: Cut and paste list of files ?
« on: July 12, 2013, 03:10 AM »
Wow! Many thanks "4wd". ;)
I'll do more testing but this is working great so far. I even don't have to rename all my files. ;)
Thank you again. ;)
See ya

General Software Discussion / Re: Cut and paste list of files ?
« on: July 11, 2013, 02:53 AM »
Many thanks "4wd" ;)

I have removed "@echo off".
I have tried several things, and it seems to work for simple folders and filenames. But strangely, it does not work for all of them.

C:\prog\CopyFileList.cmd G:\transfer\ C:\1\!Not_Searchable.txt

I am adding an example that is not working for me (only one file is copied; if I have more similar paths, it copies only the first file).

I am on Win7 64-bit. My language is French. I have removed accents from the filenames.

Thanks in advance ;)

edit: file uploaded as a zip file this time

General Software Discussion / Re: Cut and paste list of files ?
« on: July 07, 2013, 08:17 AM »
Many thanks "4wd". ;)
I have tried your script.
I added "CopyFileList.cmd" to the root of C:\prog\ (I also tried C:\)

and executed (ran): C:\prog\CopyFileList.cmd G:\transfer C:\prog\PDFTextChecker\1.txt

It copied only the first listed PDF file (with its full directory structure) and no other PDF files.

Thanks in advance ;)

General Software Discussion / Re: Cut and paste list of files ?
« on: July 07, 2013, 03:41 AM »

Many thanks "4wd" ! ;))

This is very good ! ;))

I have another request, if it is possible:
Sometimes I have one file in a folder that has the same name (but not the same content) as another one in a subfolder. Then the command line asks me to choose: would I like to overwrite that file or not?
The problem is that it can erase a good PDF file.
It would be great if it could keep the same folder and subfolder structure as the source. That way, I would be sure that there won't be any overwriting. ;)

Or, if that is not possible, always rename the new file automatically, for instance by appending some numbers at the end.
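For what it's worth, the structure-preserving idea can be sketched outside of batch. Here is a minimal Python illustration (Python copes with the accented filenames that trip up cmd; the list-file format is the one shown in my first post, one absolute path per line). Every file is recreated under the destination at its path relative to a source root, so two same-named files can never collide:

```python
import os
import shutil

def copy_list_preserving_structure(list_file, src_root, dst_root):
    """Copy every file listed (one absolute path per line) into dst_root,
    recreating its path relative to src_root so that same-named files
    from different folders never overwrite each other."""
    with open(list_file, encoding="utf-8") as fh:
        for line in fh:
            src = line.strip()
            if not src:
                continue  # skip blank lines
            rel = os.path.relpath(src, src_root)
            dst = os.path.join(dst_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)  # copy2 keeps timestamps
```

(To "cut" rather than copy, shutil.move would replace shutil.copy2.)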

[The following is just for fun, as I think I have a solution:
- I did a test with one accent inserted in the filename of a valid pdf file:

Code: Text [Select]
chcp 65001
for /f "tokens=* usebackq" %I in (`type "C:\prog\PDFTextChecker\1.txt"`) do copy "%~I" G:\transfer
(By the way I have used this tip for copying the above code on the command line : http://www.techspot....te-cmd-using-ctrl-v/ )

note: if I remove the accent "é", the file can be copied. ;)

I have tested with:
- UTF-16 => error when I type chcp 1200 ("invalid code page")
- UTF-8
- chcp 1147 (France)
- chcp 20297 (France)

Alas, each time the file was not copied. ;(

My solution is to use a renamer: the great freeware "Bulk Rename Utility", removing "accents" and "symbols". ;)

Thanks in advance ;)
See ya
ps: Thanks for the idea, "MilesAhead"; I'll stick with "4wd"'s way for now. ;)

General Software Discussion / Re: Cut and paste list of files ?
« on: July 06, 2013, 04:58 AM »
Dear all,

Thanks for your answers. ;) After several tries, I am still blocked! ;(

I can add my list of files, but I don't know which command line to use.

It is working, but not for all files: non-ASCII characters.
"0" files are copied by my command line. See image:

Thanks in advance ;)

General Software Discussion / Cut and paste list of files ?
« on: July 04, 2013, 01:26 PM »

I have a list of files ("!Not_Searchable.txt") created using the great freeware PDFTextChecker (https://www.donation...ex.php?topic=27311.0).

Here is an example of what is inside the text file (it may contain accents or non-ASCII characters):
F:\1_B\2003\01 2003\important\publication 01 2003\Béec2002.pdf
F:\1_B\2003\01 2003\important\publication 01 2003\Béec2001.pdf
F:\1_B\2003\01 2003\important\publication 01 2003\Béec2000.pdf
(note: there are thousands of rows)

I would like to cut and paste those files into another folder.
I have tried the freeware PureSync ( but it is too buggy, alas (not transferring every file...). ;(

Any other idea ?
Thanks in advance ;)

Ko means "kilooctet", the French term for kilobyte! http://en.wikipedia....ctet_%28computing%29

See ya

I have solved this with the freeware "Everything" (search in the big folder with the keyword ".pdf" and sort by size, then delete all the zero-KB files at once). ;)
See you


In a big folder and its subfolders I have, among other files, thousands of small PDF files that in fact have a size of 0 KB.
Do you know a way to delete those? (bonus: and delete the folder if it becomes empty after that?)
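(In case a script route is acceptable: a minimal Python sketch of exactly this - delete every zero-byte .pdf under a root folder, then prune directories left empty. The root path is whatever big folder you point it at; try it on a copy first.)

```python
import os

def delete_empty_pdfs(root):
    """Delete all 0-byte .pdf files under root, then remove any
    directories that end up empty. Returns (files_removed, dirs_removed)."""
    removed_files = removed_dirs = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.lower().endswith(".pdf") and os.path.getsize(path) == 0:
                os.remove(path)
                removed_files += 1
    # bottom-up pass so child folders are pruned before their parents
    for dirpath, _dirnames, _filenames in os.walk(root, topdown=False):
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)
            removed_dirs += 1
    return removed_files, removed_dirs
```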

Thanks in advance ;)


Hi !

About a month ago, one of their support members answered me twice. He labeled my ticket as "high priority" (see ). Then I took a lot of time to write some detailed answers. After 11 days without an answer, he deleted my ticket.
I thought I was alone, but I have found that I am not the only one; see:
@Mozy to @Backblaze and now @Crashplan - a deteriorating state of online backup

I wrote a new ticket explaining that I had received no answer. Somebody else was assigned to it, but I received no answer!

See ya

Dear all,

An online backup company is supposed to back up my data. I have been a customer since April 2011. I have recommended their service on this website (see:
https://www.donation....msg283957#msg283957 ).

Here is my (not so good) experience:

1) March 2012: Too bad! I chose the wrong option and I just lost my uploaded data (500 GB).
I did not make this error the first time I used the service.


Here is why I think the Backblaze guidelines misled me: for computer‑hp[3], I was in a hurry and read fast.
See the following part at the bottom of this link:
"COMMON QUESTIONS: • My computer died and I have a new one, what should I do?"

I thought: has my computer failed? Yes. So I followed that text: this was my error (I didn't use "Transfer Backup State")!

IMHO you should make your explanations clearer, i.e. like this:

If your computer has crashed, you have 2 options:
#1: You don't want to keep your previously uploaded data and will install the Backblaze software on another (new) computer: follow the guidelines under "My computer died and I have a new one, what should I do?"


#2: You want to keep your previously uploaded data because one of your hard drives has failed, or you are adding a new hard drive to your computer: follow the guidelines under "How do I transfer a license to a different computer?"


I asked them to correct this, and to add a simple delay in case a customer makes an error. But they didn't change their process or their guidelines! (Even though they now add the current date to the name of the computer when you add a new one - that was another of my suggestions...) I even wrote to their CEO and received... no answer!

2) April 2013: "Sorry, there is a bug in our software: you are forced to delete your uploaded data!" What? I have just lost 1,100 GB! Too bad for me: just start over from zero!
I spent more than one year uploading my data (DSL line)! ;(

bzfileids.dat too large?:


Thanks for contacting us. I think I know what is going on, but you'll have to help fill in some details. Something has gone slightly haywire on your system that we are seeing in very few customers backups related to the bzfileids.dat file.

There is a very specific file on your disk that is part of Backblaze which has bloated up to be too large. It is called "bzfileids.dat" and is found here:

Mac: /Library/Backblaze/bzdata/bzbackup/bzfileids.dat
Windows Vista/7: C:\ProgramData\Backblaze\bzdata\bzbackup\bzfileids.dat
Windows XP: C:\Documents and Settings\All\Application data\backblazebzdata\bzbackup\bzfileids.dat

This is a very simple file, it is a mapping from your filenames to a totally unique integer ID that is anonymous that we use to identify your files in the Backblaze datacenter. This means we never know any of your file names, or file contents.

For some reason, your computer wants to backup a few million files and your bzfileids has grown very large (yours is over 1 GB). When bztransmit (the process that runs once per hour) starts up, it reads this bzfileids.dat file into RAM. On a normal machine, this is about 20 MB, but on your machine something has gone haywire and bzfileids has grown far too large.

Now, there are several things that contribute to this file being large, so you can think about how this happened and let us know. We're trying to understand this situation better:

A) If you have ever renamed a Time Machine folder at the top of your hard drive, Backblaze will bloat up trying to back it up. It is absolutely not supported to "back up a back up" and Backblaze can only function properly backing up the originals.

B) Lots of files. If you knew of a folder with hundreds of thousands of small files that didn't change much you could back them up differently or exclude them from Backblaze backups.

C) Renaming top level folders with a lot of files. For example, if your top level folder name is "/my_music" and it contains 100,000 file names in it, then when you rename it "/my_great_music" Backblaze needs to add all of those filenames to that bzfileids.dat file which bloats it up. So the best thing you can do is keep your enormous folders the same over a long period.

D) Shorter Path Names. It would be best if your hundreds of thousands of files are on a disk called "d" instead of "disk_that_contains_files" and the top level folder is called "f" instead of "folder_for_lots_of_files". Etc. The shorter the paths, the smaller the bzfileids.dat file is.

It's possible to shrink the bzfileids.dat in case they've been temporarily bloated by one of the above situations, however it requires reuploading all data to the Backblaze servers. You can follow these steps to do that:
1. Visit and sign in to your Backblaze account with your email address and password.
2. Click on the "Account" link in the upper left hand corner
3. Select your "old" computer from the list of computers.
4. Click the "Delete Computer" link next to it. This will delete the backed up data, the bloated bzfileids.dat and free up the paid license.
5. Click on "Overview"
6. Click the download link for your operating system in the bottom right corner.
7. Install Backblaze.

Here's the problem. If you cannot reduce the number of files or path names significantly, then you absolutely are going to encounter this issue again. We are working on a fix for it, but it is proving to be very difficult and currently our engineering team has no ETA on a fix. If you uninstall and reinstall without changing anything, then your backup might start working again, but you will reencounter this problem a little ways down the road.
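(An aside from me, not from their support: the bzfileids.dat format isn't public, but points B-D above boil down to "the total bytes of all your path names drive the file's size". Under that assumption, a rough Python sketch can at least show how many files you have and how many path-name bytes they add up to:)

```python
import os

def path_name_bytes(root):
    """Rough, illustrative estimate only: count the files under root and
    the total UTF-8 bytes of their full path names - the quantity that
    points B-D suggest bloats bzfileids.dat."""
    files = total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            files += 1
            total += len(os.path.join(dirpath, name).encode("utf-8"))
    return files, total
```

Running it per top-level folder would show which ones contribute most, i.e. which folders to exclude or shorten first.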

Here is why I think I had this problem: yesterday I added a new drive, F:\, to be backed up.
Alas, this removed all the folders I had previously added to the exclusion tab (I was aware of this bad behaviour of the Backblaze client, but I did not remember it!).
Furthermore, I think I clicked OK too fast and forgot to exclude some folders on the new drive that contain millions of files.

Just after that, I tried to exclude them, as I could hear my drive (E:\ and F:\ are on the same physical drive) running like crazy for many minutes. It was too late to stop it!
Then I rebooted my computer and got Backblaze's error message.
Then I rebooted my computer and got the backblaze's error message.

Usually I mainly upload zip files of my 10+ years of archives (big zipped files containing many htm files and their related gif etc. files) and keep an unzipped folder on my PC (on F:\) just to run keyword searches on it.
I keep only the current month unzipped in Backblaze, plus a few big video files. That way I have about 500,000 files to upload (about 1 TB). Backblaze can manage this fine. ;)

But now I am blocked: I can't change the number of files the Backblaze client has counted!
Even though I am sure (with the same folders excluded) that I have 'only' 500,000 files to upload in total (1.1 TB), it keeps saying:
Selected: 3.2 million files (2 TB)
Remaining: 2.7 million files (0.9 TB)
Please see the screenshots.

- If I remove the F:\ drive, it changes nothing. (*)

- If I add some E:\ folders to the exclusion tab, it changes nothing. (I can't remove C:\!) (*)

- And it is the same if I remove the E:\ drive entirely! (*)

(*) Even if I hit ALT+Restore [the strange thing here is that I do not hear my E:\ drive running (I am sure of this, as my C:\ drive is an SSD)], or if I choose "only when I click backup" and hit Backup, or if I leave it running continuously!

I tried downloading a new Backblaze client and installing it. Alas, I have the same problem!

Here is what I finally did [EDIT: in the end this did not help me, but I had no other option!]:
I made a copy of bzfileids.dat.
I deleted the original one.
Then I hit ALT+Restore. This seemed to do something, but it didn't work.
Then I installed the Backblaze client again, and it created a new bzfileids.dat!
Now I just have to wait x days and all my data will be online again (as if I had a new hard drive).
In 1 day it has found 400 GB out of 1100 GB, and 582,000 files out of 588,000. ;) (see screenshots).
And the new bzfileids.dat is only 60 MB (not over 1 GB anymore).


Unfortunately, manually editing the bzfileids.dat file is not a viable option to resolve the bzfileids issue. Manually editing this file will likely cause issues with your backup, possibly resulting in lost or corrupted backup data. While it will appear to work, all the pointers for the existing files in your backup to the corresponding file on your system have been severed, as well as any versioning history. The only current solution we have to the issue is the upload of a fresh backup. We are working on a software update to resolve this specific issue, but until that update is complete, this is the only means of ensuring a reliable, secure backup.

Because the bzfileids file has already been manually edited and attempted to synchronize with the Backblaze backup, I would be extremely suspect of the viability of the existing backup.

To uninstall and reinstall the Backblaze application and start a fresh backup, please follow these steps:

1. Reboot your computer to make sure all Backblaze files are unlocked.
2. Uninstall Backblaze.
• Mac -- Hold down option and click on the Backblaze menu bar icon and choose Uninstall.
• Windows -- use the Add/Remove Programs utility to remove Backblaze.
The uninstaller will warn you that your backed up data will be deleted, but you can disregard this warning. The deletion does not occur immediately, so as long as you reinstall Backblaze within a week, you can continue with this process.

3. Visit and sign in to your Backblaze account with your email address and password.
4. Click the "Home" link at the top left hand corner of the browser.
5. Click the orange "Download Again" link center screen
6. Run the installer you downloaded.

The new installation will register as a 15-day trial under your new account. You can let the trial run its course to allow the client to upload as much data as possible before removing the old backup and transferring the paid license. Once you are ready to remove the old backup and transfer the license to the new backup, you can do so by following these steps:
1. Visit and sign in to your Backblaze account with your email address and password.
2. Click on the "Account" link in the upper left hand corner
3. Select your "old" computer from the list of computers.
4. Click the "Delete Computer" link next to it. This will delete the backed up data, and free up the paid license.
5. On the Overview page click the "Use License" button next to your new computer.

Please let me know if you have any other questions.


So I had no option other than to start over from zero! >:(
Any way to speed up my new backup? No! :(
Even though it is Backblaze's fault, they should provide a solution, for instance: I send them an encrypted hard drive (they give me special software for that), they add it to their datacenter, and then they send me back my hard drive.

I must add that their support answered me within one day most of the time (* see next message).

Would I recommend Backblaze? Hard question! I am still a customer (only one computer now; I had two before), as I have paid one year in advance. I don't know what I will do next.

I hope this helps (Backblaze included)! ;)

N.A.N.Y. 2013 / Re: NANY 2013 Release Find Long Names
« on: January 13, 2013, 09:05 AM »
I tried out Path Scanner, but it merely crashed.
It works for me (version with Win7 64-bit and WinXP).
I checked, and it counts the total path (spaces included).
Example: this is counted as 189 characters (which matches what I get with Tools/Statistics in Word):
C:\Users\Ex2\Documonts\NEWS125\5 uo\06\tableau-tests-tcs-de-sic3a8ges-ppqurs-2004-2012.pdf (application_pdf Object)_201301e6_202509_files\tableau-tests-tcs-de-sic3a8ges-ppqurs-2004-2012.pdf
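(Counting path length the same way - every character, spaces included - is easy to reproduce. As an illustration, the sketch below also flags paths near Windows' classic 260-character MAX_PATH limit; the threshold is my own choice, not something Path Scanner imposes:)

```python
import os

MAX_PATH = 260  # classic Windows limit; threshold is my assumption

def long_paths(root, limit=MAX_PATH):
    """Yield (length, path) for every file whose full path length,
    spaces included, reaches the limit."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if len(path) >= limit:
                yield len(path), path

# The example path quoted above is indeed 189 characters long:
example = r"C:\Users\Ex2\Documonts\NEWS125\5 uo\06\tableau-tests-tcs-de-sic3a8ges-ppqurs-2004-2012.pdf (application_pdf Object)_201301e6_202509_files\tableau-tests-tcs-de-sic3a8ges-ppqurs-2004-2012.pdf"
```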

Maybe, once it is installed, try changing the property settings of C:\Program Files (x86)\Path Scanner\pathscan.exe to "Run as Administrator"?
