Latest posts of: f0dder
Pages: Prev 1 ... 10 11 12 13 14 [15] 16 17 18 19 20 ... 352 Next
351  Main Area and Open Discussion / Living Room / Re: silly humor - post 'em here! [warning some NSFW and adult content] on: January 25, 2013, 11:44:55 AM
I kinda like offensive t-shirts, but I'm adult enough to (mostly) know when to not wear them.
352  Main Area and Open Discussion / General Software Discussion / Re: The Best Of: text editors on: January 25, 2013, 01:56:51 AM
OO!!  One of the very few editors with ftp access built in.  Very tempting just for that.
I see that as a kind of anti-feature, aimed at disorganized PHP developers smiley
353  Main Area and Open Discussion / General Software Discussion / Re: Are you going to wait for Windows 9? on: January 24, 2013, 02:45:25 PM
Hm, it makes sense to offer the new Win8-style "advanced boot" selections in addition to the hotkey spamming. On my system, Windows boots so fast that I'd have trouble hitting F8 at the right time (not to mention that I have to wait the few hundred msec where the BIOS/UEFI listens for the boot-device selection key, but need to hit F8 before it's "too late" in the bootloader tongue).

That's one really bad decision, MS.

Anyway, can malware completely block the "restart with advanced boot" thingy? Iirc one of the ways you enable it is shutdown from Ctrl+Alt+Delete - CAD is supposed to be pretty hard to trap.
354  Main Area and Open Discussion / General Software Discussion / Re: Are you going to wait for Windows 9? on: January 24, 2013, 01:24:38 PM
PS: Anyone got any idea how to get into advanced startup mode on a laptop with an old fashioned BIOS?
F2 or Delete
I was about to post that as well - but then I realized Carol probably means the Windows boot settings, not the BIOS?
355  Main Area and Open Discussion / Living Room / Re: Would a 41 megapixel camera get you to buy a Windows 8 phone? on: January 24, 2013, 01:49:24 AM
I don't know why people have gotten this unhealthy idea that more MP means better images. It seems unlikely they can make a lens for a phone that would give any kind of usefulness to 41 MP. Personally, I would much rather have a phone with a 5 MP camera and a stellar lens and better flash.
This! Thmbsup

Also, while it's nice having a decent camera in your phone to take a quick snap of whatever, with current lens technology, there's just no way to fit anything "awesome" into a phone that's small and comfy. So I'd much rather have just a decent phone camera, and an IXUS or similar compact for taking better pictures.

(If I had any photography skills, I'd of course go for a real camera, but since I don't, an IXUS fits my needs perfectly smiley).
356  Main Area and Open Discussion / Living Room / Re: Computer science student expelled for testing university software security on: January 22, 2013, 09:16:36 AM
The job offers are starting up now.
He may have fast-tracked his career!

Report says even Skytech is offering.
Hm, I think there will be more info sometime tomorrow.

Hrm, did he actually do anything interesting, or did he just run some already-existing scriptkiddie tools?

If the latter, something smells fishy wrt. job offers...
357  Main Area and Open Discussion / General Software Discussion / Re: Tips for Windows 8 (got any?) on: January 21, 2013, 04:03:19 PM
That will make a complete shutdown though, not a "Windows 8 Shutdown".
Doesn't that require adding "/hybrid"? (I'm on my win7 workstation right now, so can't check shutdown.exe's arguments smiley).
358  Main Area and Open Discussion / Living Room / Re: Computer science student expelled for testing university software security on: January 21, 2013, 11:22:43 AM
From my sysadmin perspective all I can say is: A predictable and avoidable outcome.  I'm hardly surprised at the response.  Nor should he be.

If you don't have a (written) agreement with your target, you're not pentesting - you're hacking.

Is it piss-poor behavior from the uni? Yes. But if you're not going to play by the rules (which might very well be necessary sometimes, whistleblowing incompetent lying bastards comes to mind), you'll have to expect unfavorable outcomes.

Which is why you run such scans from a VM on a laptop with a faked MAC address, through TOR on a public WiFi.
359  Main Area and Open Discussion / General Software Discussion / Re: It's about ... an interesting Win8 view (video) on: January 21, 2013, 07:52:04 AM
That's been posted before in one of those win 8 threads here, but I aint going looking for it ;-)
Here you go - also, could we keep the discussion of that piece of manure in one thread? :-)
360  Main Area and Open Discussion / Living Room / Re: Facebook Turns to Spam on: January 20, 2013, 02:04:19 PM
If you're not paying for it, you are the product.

I hope that what facebook is doing (ad-spam, data-mining you all the way up your hiney, ...) doesn't come as a surprise to anybody? Anyway, AdBlockPlus + Ghostery does a nice job of making facebook not too awful, at least for the time being smiley
361  Main Area and Open Discussion / Living Room / Re: MEGA Almost Online - Misses Deadline on: January 20, 2013, 07:58:23 AM
just out of curiosity: was your browser up to date?
It shows even for the latest firefox - doesn't pop up right away, you have to upload some files for it to show.
362  Other Software / Announce Your Software/Service/Product / Re: Bvckup 2 on: January 19, 2013, 02:49:26 PM

I actually got around to running some benchmarks last weekend, but got sidetracked and forgot to post anything smiley. So far I've only run warm-cache tests - for cold-cache, I really really really want to be able to automate the process. I want to collect a lot of data sets, but I'm way too lazy to manually do all the reboots necessary :-)

First, specs:
   Corsair XMS2 2GB DDR2 800MHz (2x1GB)
   Intel Core2 E6550 @ 2.33GHz
   Western Digital 3.5" 74GB Raptor

   Corsair 16GB DDR3 1600MHz (4x8GB)
   Intel Core i7 3770 Ivy Bridge
   INTEL SSD 520 Series 120GB
   Western Digital 2.5" 300GB VelociRaptor

For the workstation, I ran the test on the VelociRaptor which is a big dump of all sorts of crap smiley. The testbox was freshly installed with Win7-x64 enterprise, LibreOffice 3.6.4, PiriForm Defraggler (didn't defrag it, though), Chrome, and all Windows Updates as of, well, last weekend. I furthermore copied some ~33gig of FLAC music from my server to get some meat on the filesystem - there's ~2.3gig free. The Windows partition is only ~52gig, as I didn't want to nuke the Linux test install I had on the disk - so the Windows partition starts ~18gig into the disk. Furthermore, I've disabled the following services: Defrag, Superfetch, Windows Search (hopefully turns off indexing?). Other than that, it's a pretty vanilla install, I even left the 2gig pagefile in place.

Anyway, I started by running a warmup, then I generated output files by running the following quick hackjob batch file - it does 16 identical passes of 1 to 16 threads, both depth- and breadth-first, so 512 total runs. Oh, and it also starts each pass with a single verbose run:

It would seem that the difference between depth- and breadth-first is pretty small for the warm-cache tests, and that there's not much to be gained from using more threads than CPU cores (makes sense for the warm-cache scenario). There doesn't seem to be much of a penalty to using more threads than cores, though - but it obviously uses slightly more system resources.

I'm attaching a zip file with the raw output from the hackjob batch file, and pondering a decent way to visualize it. I guess the 16 consecutive runs should be processed into {min,max,avg,median} values - should be easy enough to do the processing, but how to handle the rendering? Some LibreOffice spreadsheet, some HTML + JavaScript charting? Got any good ideas? smiley
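Assuming the 16 consecutive timings per configuration have already been parsed into a vector of seconds, the summary-statistics processing is only a few lines of C++ (RunStats and summarize are made-up names; since avg and mean coincide, a median is computed as the fourth value):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Hypothetical summary of the 16 repeated timing runs per configuration.
struct RunStats { double min, max, avg, median; };

RunStats summarize(std::vector<double> secs) {
    std::sort(secs.begin(), secs.end());
    RunStats s{};
    s.min = secs.front();
    s.max = secs.back();
    s.avg = std::accumulate(secs.begin(), secs.end(), 0.0) / secs.size();
    const std::size_t n = secs.size();
    s.median = (n % 2) ? secs[n / 2]
                       : (secs[n / 2 - 1] + secs[n / 2]) / 2.0;
    return s;
}
```

The rendering question stands either way; this only gets the numbers into a shape any charting tool can consume.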

Also, if I find a way to automate the cold-cache testing (suggestions would be very welcome!), I'll throw in stats from my old dualcore-with-SSD laptop.
363  Main Area and Open Discussion / Living Room / Re: MEGA Almost Online - Misses Deadline on: January 19, 2013, 01:56:17 PM
Not sure I like that domain name. --> Mega Conz --> Mega Cons?
Priceless cheesy
364  Main Area and Open Discussion / Living Room / Re: Java Update on Tuesday on: January 18, 2013, 10:28:56 AM
WHahahaha! Wink Very subtle. Almost CRied laughing! cheesy
Who is #3?
At the moment (well, for a pretty long time), Microsoft. The list is based on a mix of evilness, douchebaggery, (wrong) public opinion, and market influence.

The exploits in question only affect JDK 7, not JDK 6, which is much more secure, to say nothing of more stable.
Ah yes, there were never any exploits for Java 6?

If you have the Java browser plugin, no matter which version, you shouldn't feel safe. End of story.

Also, these exploits only affect in-browser users, so there is no reason to dump any software that is written in Java and runs on your local system, rather than in a browser.
True - no reason to dump Eclipse or Minecraft, you just need to get rid of the browser plugin smiley. Sure, there's very likely other security holes in the JRE, but if an attacker has reached the level where he's going to compromise non-browser JRE, you've got more serious security issues.
365  Main Area and Open Discussion / Living Room / Re: Doom 3 Source Code - The neatest code I've ever seen on: January 18, 2013, 10:24:23 AM
Renegade: "verbSomething" isn't necessarily always the best, though, and especially not in the case of getters...

if( getOptionEnabled() ) versus if( isOptionEnabled() ) versus if( optionIsEnabled() ) smiley

IMHO option #3 quite clearly reads best, but #2 is probably the pragmatic solution wrt. IntelliSense support.
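For concreteness, the three styles side by side on a hypothetical Options class:

```cpp
// Hypothetical settings object illustrating the three getter-naming styles.
class Options {
    bool enabled_ = true;
public:
    bool getOptionEnabled() const { return enabled_; }  // #1: verb prefix, groups with other get* members
    bool isOptionEnabled()  const { return enabled_; }  // #2: "is" prefix, still discoverable via IntelliSense
    bool optionIsEnabled()  const { return enabled_; }  // #3: reads most like English at the call site
};
```

All three are equivalent in behavior; the trade-off is purely between call-site readability and auto-completion grouping.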
366  Main Area and Open Discussion / General Software Discussion / Re: Disable Win+V in Windows 8 on: January 18, 2013, 10:18:15 AM
Sounds like Win+... shortcuts can be "overwritten" by an application then. Maybe an API that has changed with Windows 8.
I don't think there are any changes - it's just that Win8 added more Win+X shortcut keys.

AutoHotKey (and probably AutoIt?) is able to override the shortcuts that Windows (explorer.exe, I assume?) sets up. Dunno the technique behind it - perhaps a global keyboard hook? I seem to recall that a hotkey override wasn't effective when focus was on a program launched with administrative privileges, which would at least support the keyboard-hook theory.
367  Main Area and Open Discussion / Living Room / Re: Doom 3 Source Code - The neatest code I've ever seen on: January 17, 2013, 02:56:47 PM
He has some decent points, but on other points I'd say "he'll get wiser" smiley - Carmack himself has also replied, stating that "In some ways, I still think the Quake 3 code is cleaner, as a final evolution of my C style, rather than the first iteration of my C++ style" and also "In retrospect, I very much wish I had read Effective C++ and some other material." - which to me translates as "this is not how I'd do it today" and definitely not being idiomatic C++ all the way through.

A few comments...

Unified Parsing and Lexical Analysis - i.e., using (the same) text format for all resources. Shawn praises that, but here's what Carmack has to say about it:
Fabien Sanglard - So far only .map files were text-based but with idTech4 everything is text-based: Binary seems to have been abandoned. It slows down loading significantly since you have to idLexer everything....and in return I am not sure what you got. Was it to make it easier to the mod community ?

John Carmack - In hindsight, this was a mistake. There are benefits during development for text based formats, but it isn't worth the load time costs. It might have been justified for the animation system, which went through a significant development process during D3 and had to interact with an exporter from Maya, but it certainly wasn't for general static models.
...might have been a decent compromise keeping source material in text format, but creating a binary representation as well - not necessarily fully specialized formats for each resource type, but a model like XAML/BAML might have been a natural fit?

Const and Rigid Parameters - pretty much spot on. C++-style const specifiers are something I miss in other languages. It's also nice to see that Carmack uses const-ref for input and pointers for output; it's IMHO good practice. It does mean you need null-checking, but IMHO it's an OK compromise (the stuff I do with output parameters tends to be hard to end up with a nullptr for).
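A minimal sketch of the convention (the function and names are illustrative, not from the Doom 3 source): input arrives as a const reference, output leaves through a pointer, and the null check the convention requires is explicit:

```cpp
#include <string>

// Const-ref for input, raw pointer for output: the pointer makes the
// out-parameter visible at the call site, at the cost of a null check.
bool parseVersion(const std::string& text, int* major) {
    if (!major) return false;       // the null check the convention requires
    if (text.empty() || text[0] < '0' || text[0] > '9') return false;
    *major = text[0] - '0';
    return true;
}
```

At a call site, `parseVersion(str, &ver)` immediately signals that `ver` may be written to, which `parseVersion(str, ver)` with a non-const reference would not.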

Minimal Comments - pretty spot on, IMHO.

Spacing - disagree. The additional code-on-screen I'd get from putting braces on the same line doesn't matter too much... the readability drop from cramped code and not being able to line up braces visually weighs a lot more. And I like blank lines between logical chunks of code. Dunno if any studies have been done on this or if it's just down to personal preference, but my approach works a lot better for me :-). Oh, and I fully agree with always using braces, even for single-line statements.

Minimal Templates - I'm a bit mixed with regards to this. Parts of the STL are somewhat sucky (remove+erase is a good example), and before C++11's lambdas, using the <algorithm> functions was often extremely clunky and ugly. OTOH, for the most part the STL datatypes are easy to use and you get decent enough performance out of the box. Now, if you have code that's extremely sensitive to locality of reference or benefits massively from pooled allocation (either for speed or for avoiding heap fragmentation), it might make more sense to roll your own rather than mucking around with allocators and whatnot. But I'd definitely default to the STL for 'anything normal'. And auto is a really great new feature; it doesn't make code hard to read (quite the opposite!) unless abused.
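The remove+erase wart mentioned above, for anyone who hasn't hit it: std::remove_if only shuffles the kept elements to the front, so a separate erase call has to trim the tail. C++11 lambdas at least make the predicate readable:

```cpp
#include <algorithm>
#include <vector>

// The two-step erase-remove idiom: remove_if partitions the survivors to
// the front and returns the new logical end; erase then shrinks the vector.
void dropOdd(std::vector<int>& v) {
    v.erase(std::remove_if(v.begin(), v.end(),
                           [](int x) { return x % 2 != 0; }),
            v.end());
}
```

Forgetting the erase half (a classic mistake) leaves the vector at its old size with stale elements at the back.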

Anyway, Carmack being sceptical of the STL probably made a lot of sense back when they started the Doom 3 codebase (game released in 2004, so development started several years before that - that would probably mean VC++ from VS.NET 2002 at the start, perhaps VS.NET 2003 for release?) - there have been several bugs and performance problems in STL implementations over the years... but it's 2012 now.

Remnants of C - getters/setters are often overkill, but I'm not fond of Shawn's examples. For immutable objects, having public fields can be OK (though one might argue that for the sake of binary compatibility across future upgrades, it might be better to use an accessor function anyway). But direct access to mutable fields? Ugh. I guess it's mostly a code smell to me, since I tend to believe mutable objects imply "complex stuff", where you'd want some logic attached to the act of mutating.

StringStreams are ugly, but printf is unsafe - the solution? Use some safe formatting code. It's been a while since I took a look, but there are several libraries to choose from depending on your speed/flexibility needs.
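As a sketch of the "safe formatting code" idea (not any particular library - the names here are made up): each `%` placeholder is filled via operator<<, so the compiler always knows the argument types and printf-style type mismatches can't happen:

```cpp
#include <sstream>
#include <string>

// Base case: no arguments left, return the remaining pattern verbatim.
inline std::string format(const std::string& pat) { return pat; }

// Recursive case: substitute the next argument for the next '%'.
template <typename T, typename... Rest>
std::string format(const std::string& pat, const T& first, Rest... rest) {
    const auto pos = pat.find('%');
    if (pos == std::string::npos) return pat;   // more args than slots: ignore extras
    std::ostringstream out;
    out << pat.substr(0, pos) << first;         // type-safe via operator<<
    return out.str() + format(pat.substr(pos + 1), rest...);
}
```

Production libraries add width/precision specifiers, escaping of literal `%`, and far better performance; this only shows why the approach is inherently safer than varargs.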

Horizontal Spacing - pretty much agree.

Method Names - somewhat agree. I do prefer function names that read like English, but for simple & common & well-defined methods like size() and length(), I prefer not having the get prefix. In general, I'm not fond of getters/setters, I find that they read less naturally - still not sure what the most elegant solution is. I've toyed around with the idea of simply naming the accessor methods from the field name, which does read nicely... but is somewhat non-standard. Oh, and it feels wrong that the 'setter' functions are hard to discern from other functions, and you lose the value of having getXxx and setXxx methods grouped in auto-completion (which is nice for discoverability in a big codebase). ObjectPascal and C# properties are nice.

And finally,
Yes, it's Beautiful - the codebase might very well be, but I don't find any of Shawn's examples beautiful in themselves, more along the lines of "this looks like decently engineered code" smiley

368  Main Area and Open Discussion / General Software Discussion / Re: Disable Win+V in Windows 8 on: January 17, 2013, 12:28:46 PM
Use AutoHotkey or AutoIt.
That has worked for me in the past to get control of Win+whatever shortcuts.

While I understand that Win+SingleLetter are reserved for Microsoft, it would still be nice of them if they had a place where you could enable/disable those built-in hotkeys at will.
369  Main Area and Open Discussion / Living Room / Re: Java Update on Tuesday on: January 16, 2013, 02:09:54 AM
They've been bundling the Ask toolbar for a while, btw, it's not introduced with the security fix.

But yeah, it's whOracle - #2 on my list of really evil software companies, where crApple still reigns supreme.

370  Main Area and Open Discussion / General Software Discussion / Re: MagicRAR Drive Press - worth anything? on: January 15, 2013, 01:32:50 PM
Because of that I request that both related threads be locked.
Dunno if they need to be locked - they're pretty dead now from my viewpoint.

One last thing coming up in a few, though, since I promised it: working on a small test to see what happens wrt. very small files (MFT-resident) when you apply compression.

Here are the results from testing some very small files on an NTFS volume with 1k clusters. The files were highly compressible (filled with A's). The lines with "x is UNcompressed" (etc.) are from a small tool I whipped up; the middle part is the output from Microsoft's COMPACT.EXE.

small100.txt is UNcompressed, 100/100, (MFT resident), 1 fragments
small500.txt is UNcompressed, 500/500, (MFT resident), 1 fragments
small1000.txt is UNcompressed, 1000/1000,  1 fragments
small5000.txt is UNcompressed, 5000/5000,  1 fragments
 Compressing files in R:\temp\z\

small100.txt              100 :       100 = 1,0 to 1 [OK]
small1000.txt            1000 :      1000 = 1,0 to 1 [OK]
small500.txt              500 :       500 = 1,0 to 1 [OK]
small5000.txt            5000 :      1024 = 4,9 to 1 [OK]

4 files within 1 directories were compressed.
6.600 total bytes of data are stored in 2.624 bytes.
The compression ratio is 2,5 to 1.
small100.txt is compressed, 100/100, (MFT resident), 1 fragments
small500.txt is compressed, 500/500, (MFT resident), 1 fragments
small1000.txt is compressed, 1000/1000,  2 fragments
small5000.txt is compressed, 5000/1024,  2 fragments

1) MFT-resident data stays resident - good!
2) The really small files aren't actually compressed (GetCompressedFileSize == GetFileSizeEx, see MSDN) - they are flagged compressed, though, so will be compressed once they grow.
3) For compressed files, we get "size on disk" (taking clusters into account), not "actual numCompressedBytes" - which makes sense.
4) When compressing non-resident files, we get one excess fragment.
371  Main Area and Open Discussion / General Software Discussion / Re: MagicRAR Drive Press - worth anything? on: January 15, 2013, 11:06:03 AM
^ If the developer doesn't care to respond to the last points made; ignores responses made by the investigator; misrepresents said investigator's research; and makes baseless, spurious claims at this stage of the discussion - I would say the case is closed.
Indeed - I don't have more to add to this thread, the facts are on the table.

I predict the other thread is just about fizzled out as well.
372  Other Software / Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge on: January 15, 2013, 11:02:57 AM
If the progress bars reached completion only a few minutes off, I am glad to hear that - it is very difficult to get them working properly, and a few minutes on hour/day long tasks is a very reasonable rounding error that I'm happy to live with.
How so? You're running an "analyze" pass over the entire drive, so you're able to get both a count of files as well as their total size in bytes - it's true that things can happen on the filesystem while you're compressing, but the VM was fairly idle... showing progress at 100% for 6 minutes before the operation is really done seems like an interesting bug.

I realize you personally may not test this, but if you actually test (...) on a production system (by running it after letting Windows do the initial work), you will still see two to three times the space savings compared to Windows itself. This is because Windows misses a majority of the files that are completely safe to compress (and were included in previous Windows versions). Yes, those files that Windows fails to compress do make that big of an impact.
On a "production system", those protected Windows files would be a much smaller percentage of the total amount of files. Your claim of "two to three times" is a dubious marketing strategy - since you use built-in NTFS compression (and thus offer no algorithmic improvements), it would be more honest to represent the absolute amount of gigabytes saved for typical systems. There's enough gains that this honest representation is still a fine number.

And actually, (...) somewhat under-reports the space it has freed by about 30% - this is because after a compression call to Windows has been made and it returns success, the compression (and space savings) still happen in the background for a few more minutes. It was not possible to definitively determine when Windows would be ultimately finished with compressing a file, so the under-reporting bug was left in-place. Better to under-promise and over-deliver, rather than the opposite. You may always compare the drive charts before and after a compression for the best results, as we have done on our home page.
Interesting. I haven't checked when the DeviceIoControl() call returns, but Windows' built-in COMPACT.EXE utility doesn't return until the file is compressed (so it's definitely possible to do without too much work) - and IMHO, watching the thread status in your product while compressing, it looked like your threads didn't progress to the next workitem before it was fully done with the current. Perhaps you have a bug in your code - like, not handling hardlinked files properly?

I don't see a point in debating whether this Windows bug is really a bug or not. To me, it was clearly a bug because it was preventing me from compressing all of my drive, which was possible in previous Windows versions, and still remains possible.
Got a URL to your Microsoft Connect bug report? :-)
373  Other Software / Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge on: January 15, 2013, 07:44:00 AM
Could also be C++ builder.
Ah yes, that can use the VCL (and other Delphi components) as well - didn't look closely, just saw some .pas references.

And yes, there are limits for the reason you stated.  It's an int (16 or 32-bit depending on the version of comctrl32.dll [ref]).
That reference mentions a 64k limit - I wonder if comctrl uses signed or unsigned integers? It's been ages, but I seem to recall doing 32k clamping?
374  Other Software / Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge on: January 15, 2013, 07:27:12 AM
1) If you really believe it's a bug that the Shell doesn't temporarily remove protection of critical system files, you should file a bug report on MS Connect, instead of making spurious claims in your marketing material - I'm pretty sure this is a by-design decision from Microsoft. I do agree it's probably harmless to compress those files, but calling a security feature a bug is misleading marketing, IMHO. And you deliberately keep your wording vague enough (combined with your "three times smaller", which is obviously only valid if there's not much else on the disk than Windows) to give the impression that the "bug" would be somewhere else (like the core NTFS compression routines). THIS is why I'm pursuing this aggressively - you're using snake-oil salesman tactics. Which is a shame, since you obviously do get a better compression rate (and you really ought to warn users that you're achieving it by messing with critical OS files).

2 & 3) There's nothing wrong with what I've stated here. I do acknowledge an SSD speedup in #4, but for obvious reasons there's no way in hell I'll be NTFS-compressing any of my SSDs. The HDD backing my VM disk image is a 10k rpm VelociRaptor. I plan on running a single-threaded DrivePress later today to compare with the 2-thread version.

4) First, again, my problem with compression on an SSD isn't the speed hit caused by fragmentation (it's far smaller than the speed hit on a HDD, but it's still real) - it's (to some degree) the reduced speed and hindered wear-leveling on (at least, but probably not limited to) drives with SandForce controllers, and (to a fairly large degree) the heavily increased amount of block erases caused by how NTFS compression is implemented. Having NTFS compression on often-modified files approaches suicidal tendencies for an SSD.

Because an SSD can read from/write to all parts of the drive at the same time (think of a hard disk platter rotating at infinite speed), that is why fragmentation is of absolutely no consequence for SSDs - be it NTFS compression induced, or the "normal" fragmentation that happens on NTFS inevitably. There is no delay, because all areas of disk are equally accessible at all times.
This is patently wrong - take a look at some benchmarks. For instance, the 120GB Intel 510 drive does ~50MB/s for 4k random reads, whereas it does ~380MB/s for 128kb sequential reads (4k sequential would be slower, but should still be quite a lot faster than the random reads). You'll notice that it does 4k random *writes* faster, which is obviously because the drive has internal cache and can do (sequence-optimized) writes at leisure - and some of the other drives handle this even better.

a. Do NOT attempt to manually acquire file permissions just to be able to compress them. Doing this will create a huge security hole on your system (one that MagicRAR Drive Press does not create, because it restores all permissions as has been confirmed in this third party report).
This is very good indeed - the way you handle the permissions is something to give credit for. I didn't look too closely at the code, but it seemed like you even throw in exception handling to do the permission restore? You do leave files potentially vulnerable during the compression process... not much of a real-world problem, but it could be reason enough for Microsoft consciously choosing not to do it. I still feel it's wrong to classify the behaviour as a bug.

b. There will always be some files/folders that would be locked by the system/applications, and as such incompressible. If there is demand for it, we could also automate the conversion of those parts by building a boot time version of MagicRAR Drive Press - however, in my research, the additional space savings would be negligible.

So while the MagicRAR Drive Press Challenge technically remains unmet
Ho, humm, you still chose not to properly address any of the points from the original thread, most of which I showed to be clearly true. As I see it, only the points regarding interactions with SSD speed/lifetime can be debated... and on those points, I do indeed believe I'm correct; what can be debated is the degree to which lifetime and performance will be affected. For the current crop of SSDs, I definitely wouldn't do gung-ho NTFS compression, and I would recommend people against it.

Selective compression of static files would be fine, though. I wonder if it would make sense to apply compression on the files on another (and preferably HDD) partition, then move the files to the SSD target? I haven't tested, but it might result in less fragmentation of the target files.

Oh, and one last thing: your progress bars are severely bugged - they reached 100% several minutes before the actual operation was done (bugged in both the analyze and compress phases). It looks like you use Delphi, and I haven't touched that since Delphi 2, so dunno if there are limits on its current/max values... but iirc the win32 controls are/were clamped to pretty low values, meaning you definitely shouldn't be using currentBytes/maxBytes - or even currentNumFiles/maxNumFiles for modern filesystems.
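The usual workaround can be sketched in a couple of lines (this is an illustration, not DrivePress's actual code): scale the 64-bit byte count down into a small fixed range before feeding the progress control, so a possible 16-bit/32k clamp in older common controls can never bite:

```cpp
#include <cstdint>

// Map a 64-bit done/total pair onto 0..32767 (32767 = the signed 16-bit
// maximum that old common-controls progress bars clamped to).
int progressPos(std::uint64_t done, std::uint64_t total) {
    if (total == 0) return 32767;   // nothing to do: show complete
    return static_cast<int>(32767.0 * static_cast<double>(done)
                                    / static_cast<double>(total));
}
```

The control's range is then set once to 0..32767, and position updates are always in bounds regardless of how many terabytes or millions of files the operation covers.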
375  Main Area and Open Discussion / General Software Discussion / Re: MagicRAR Drive Press - worth anything? on: January 15, 2013, 06:39:49 AM
As you have now seen for yourself, none of our claims are false and the product works exactly as it is being marketed. While I would welcome an apology from you, I happily accept all your time spent researching, as well as your accurate report of your findings, in its stead. Thank you for being open minded!
1) you claim Windows is buggy - this is false.
2) All my statements before testing were correct, so there's nothing to apologize for - and considering your own tone, I will not even apologize for the language I've used.

Please note that fragmentation is not an issue for SSDs due to zero impact on random access times throughout the disk.
SSDs have vastly better random I/O characteristics than HDDs, but you still incur overhead from fragmentation - claiming anything else is bullshit, and it's easily verifiable by checking benchmarks of sequential vs. random I/O. I'm not worried about performance, though, but about the other problems compression & fragmentation pose for SSDs.

First, you should read up on write vs. erase block sizes, wear-leveling algorithms, and how various SSD controllers optimize - the TL;DR version is 1) you do want to minimize small & scattered writes (hint: write-block sizes are larger than on HDDs, and there are the even-larger erase-block sizes to consider as well), and 2) for several SSD controllers, compressed data means both lower speed and worse wear-leveling.

Second, you should read up on how NTFS compression is implemented (TL;DR: for 4k cluster size, compression is done in 64kb chunks). It means a lot more fragmentation - by design. Now, imagine what happens if you change data in the middle of a compressed chunk: just how the split is done depends on the compressibility of the data you're writing, but there's a decent chance you end up needing to allocate two chunks - which will cause extra fragmentation, and will turn a possibly perfectly-eraseblock-aligned write into several eraseblock updates. Oops, you've just reduced the lifetime of the SSD a bit more than necessary.
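The cluster math behind this can be sketched in a few lines (assuming the common 4k cluster size, where the NTFS compression unit is 16 clusters):

```cpp
#include <cstdint>

// Back-of-envelope NTFS compression-unit math: with 4 KB clusters the
// compression unit is 16 clusters (64 KB). A chunk that compresses to
// N bytes occupies ceil(N / 4096) clusters; the rest of the unit is left
// sparse - which is exactly where the extra fragments come from.
constexpr std::uint32_t kCluster = 4096;
constexpr std::uint32_t kUnit = 16 * kCluster;   // 64 KB compression unit

std::uint32_t clustersUsed(std::uint32_t compressedBytes) {
    return (compressedBytes + kCluster - 1) / kCluster;
}

std::uint32_t clustersSaved(std::uint32_t compressedBytes) {
    return kUnit / kCluster - clustersUsed(compressedBytes);
}
```

Every compressed 64 KB unit that doesn't fill all 16 clusters ends with a hole, so a file's data runs are broken at (at most) 64 KB intervals - fragmentation by design, before any rewrites even happen.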

And while the filesystem fragmentation is already bad after the initial compression (531k excess fragments on a tiny ~134k-file filesystem, ouch!), it's only going to get worse over time.

I've yet to check what happens when you enable compression for tiny files (MFT-resident data) - I'll take a look at that when I get home from work. But if turning on compression for tiny files means they're moved non-resident, you're adding a lot of additional waste. From the list of uncompressed files after running DrivePress, it seems like you go gung-ho and indiscriminately compress everything on the filesystem (apart from those few folders that you can't access, and the very few boot-related files in your protection list).