Recent Posts

Pages: prev 1 ... 23 24 25 26 27 [28] 29 30 31 32 33 ... 364 next
676
The job offers are starting up now.
He may have fast-tracked his career!

Report says even Skytech is offering.
Hm, I think there will be more info sometime tomorrow.

http://news.national...es-to-reinstate-him/
Hrm, did he actually do anything interesting, or did he just run some already-existing script-kiddie tools?

If the latter, something smells fishy wrt. job offers...
677
General Software Discussion / Re: Tips for Windows 8 (got any?)
« Last post by f0dder on January 21, 2013, 04:03 PM »
That will make a complete shutdown though, not a "Windows 8 Shutdown".
Doesn't that require adding "/hybrid"? (On my win7 workstation right now, so can't check shutdown.exe arguments :)).
678
From my sysadmin perspective all I can say is: A predictable and avoidable outcome.  I'm hardly surprised at the response.  Nor should he be.
Agreed.

If you don't have a (written) agreement with your target, you're not pentesting - you're hacking.

Is it piss-poor behavior from the uni? Yes. But if you're not going to play by the rules (which might very well be necessary sometimes, whistleblowing incompetent lying bastards comes to mind), you'll have to expect unfavorable outcomes.

Which is why you run such scans from a VM on a laptop with a faked MAC address, through TOR on a public WiFi.
679
General Software Discussion / Re: It's about ... an interesting Win8 view (video)
« Last post by f0dder on January 21, 2013, 07:52 AM »
That's been posted before in one of those win 8 threads here, but I ain't going looking for it ;-)
Here you go - also, could we keep the discussion of that piece of manure in one thread? :-)
680
Living Room / Re: Facebook Turns to Spam
« Last post by f0dder on January 20, 2013, 02:04 PM »
If you're not paying for it, you are the product.

I hope that what facebook is doing (ad-spam, data mining you all the way up your hiney, ...) doesn't come as a surprise to anybody? Anyway, AdBlockPlus + Ghostery does a nice job of making facebook not too awful, at least for the time being :)
681
Living Room / Re: MEGA Almost Online - Misses Deadline
« Last post by f0dder on January 20, 2013, 07:58 AM »
just out of curiosity: was your browser up to date?
It shows even for the latest firefox - doesn't pop up right away, you have to upload some files for it to show.
682
Announce Your Software/Service/Product / Re: Bvckup 2
« Last post by f0dder on January 19, 2013, 02:49 PM »
Right,

I actually got around to running some benchmarks last weekend, but got sidetracked and forgot to post anything :). So far I've only run warm-cache tests - for cold-cache, I really really really want to be able to automate the process. I want to collect a lot of data sets, but I'm way too lazy to manually do all the reboots necessary :-)

First, specs:
Testbox:
   ASUS P5K-VM
   Corsair XMS2 2GB DDR2 800MHz (2x1GB)
   Intel Core2 E6550 @ 2.33GHz
   Western Digital 3.5" 74GB Raptor

Workstation:
   ASUS P8Z77-V PRO
   Corsair 16GB DDR3 1600MHz (4x8GB)
   Intel Core i7 3770 Ivy Bridge
   INTEL SSD 520 Series 120GB
   Western Digital 2.5" 300GB VelociRaptor

For the workstation, I ran the test on the VelociRaptor which is a big dump of all sorts of crap :). The testbox was freshly installed with Win7-x64 enterprise, LibreOffice 3.6.4, PiriForm Defraggler (didn't defrag it, though), Chrome, and all Windows Updates as of, well, last weekend. I furthermore copied some ~33gig of FLAC music from my server to get some meat on the filesystem - there's ~2.3gig free. The Windows partition is only ~52gig, as I didn't want to nuke the Linux test install I had on the disk - so the Windows partition starts ~18gig into the disk. Furthermore, I've disabled the following services: Defrag, Superfetch, Windows Search (hopefully turns off indexing?). Other than that, it's a pretty vanilla install, I even left the 2gig pagefile in place.

Anyway, I started by running a warmup, then I generated output files by running the following quick hackjob batch file - it does 16 identical passes over thread counts 1 to 16, in both depth- and breadth-first mode - so 512 total runs. Oh, and it also starts each pass with a single verbose run:
Spoiler
@echo off
REM 16 identical rounds; each round benchmarks 1..16 threads in both
REM breadth-first and depth-first mode - 512 runs total.
FOR /L %%I IN (1,1,16) DO CALL :ONEROUND %%I
GOTO :EOF

:ONEROUND
REM %1 = round number. Each round writes to its own output file and
REM starts with a single verbose 4-thread run.
SET OUTFILE=results-run-%1.txt
ECHO ********** ROUND %1, Verbose Stats for 4 threads
ECHO ********** ROUND %1, Verbose Stats for 4 threads > %OUTFILE%
bvckup2-demo2-x64.exe -t 4 -v e:\ >> %OUTFILE%

FOR /L %%I IN (1,1,16) DO CALL :ONEBENCH %1 %%I
GOTO :EOF

:ONEBENCH
REM %1 = round number, %2 = thread count.
ECHO ========== ROUND %1, Breadth, %2 Threads
ECHO ========== ROUND %1, Breadth, %2 Threads >> %OUTFILE%
bvckup2-demo2-x64.exe -q -t %2 --breadth-first e:\ >> %OUTFILE%

ECHO ========== ROUND %1, Depth, %2 Threads
ECHO ========== ROUND %1, Depth, %2 Threads >> %OUTFILE%
bvckup2-demo2-x64.exe -q -t %2 e:\ >> %OUTFILE%
GOTO :EOF


It would seem that the difference between depth- and breadth-first is pretty small for the warm-cache tests, and that there's not much to be gained from using more threads than CPU cores (which makes sense for the warm-cache scenario). There doesn't seem to be much penalty to using more threads than cores, though - but it obviously uses slightly more system resources.

I'm attaching a zip file with the raw output from the hackjob batch file, and pondering a decent way to visualize it. I guess the 16 consecutive runs should be processed into {min,max,avg,median} values - should be easy enough to do the processing, but how to handle the rendering? Some LibreOffice spreadsheet, some HTML + JavaScript charting? Got any good ideas? :)
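The min/max/avg-style processing mentioned above is trivial with Python's statistics module - a minimal sketch, assuming the timings have already been parsed out of the result files (the run data and the (mode, threads) keying here are hypothetical, just to show the shape):

```python
from statistics import mean, median

def summarize(timings):
    """Collapse repeated runs of one configuration into summary stats.
    `timings` is a list of per-run durations in seconds."""
    return {
        "min": min(timings),
        "max": max(timings),
        "avg": mean(timings),
        "median": median(timings),
    }

# Hypothetical data: 16 warm-cache runs of one (mode, threads) cell.
runs = {("breadth", 4): [1.92, 1.88, 1.90, 1.91] * 4}
stats = {cell: summarize(t) for cell, t in runs.items()}
print(stats[("breadth", 4)]["min"])  # 1.88
```

The resulting per-cell dicts dump straight into a CSV for whatever charting frontend ends up being used.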

Also, if I find a way to automate the cold-cache testing (suggestions would be very welcome!), I'll throw in stats from my old dualcore-with-SSD laptop.
683
Living Room / Re: MEGA Almost Online - Misses Deadline
« Last post by f0dder on January 19, 2013, 01:56 PM »
Not sure I like that domain name. mega.co.nz --> Mega Conz --> Mega Cons?
Priceless :D
684
Living Room / Re: Java Update on Tuesday
« Last post by f0dder on January 18, 2013, 10:28 AM »
WHahahaha! ;) Very subtle. Almost CRied laughing! :D
Who is #3?
At the moment (well, for a pretty long time), Microsoft. The list is based on a mix of evilness, douchebaggery, (wrong) public opinion, and market influence.

The exploits in question only affect JDK 7, not JDK 6, which is much more secure, to say nothing of more stable.
Ah yes, there were never any exploits for Java 6?

If you have the Java browser plugin, no matter which version, you shouldn't feel safe. End of story.

Also, these exploits only affect in-browser use, so there is no reason to dump any software that is written in Java and runs on your local system, rather than in a browser.
True - no reason to dump Eclipse or Minecraft, you just need to get rid of the browser plugin :). Sure, there's very likely other security holes in the JRE, but if an attacker has reached the level where he's going to compromise non-browser JRE, you've got more serious security issues.
685
Living Room / Re: Doom 3 Source Code - The neatest code I've ever seen
« Last post by f0dder on January 18, 2013, 10:24 AM »
Renegade: "verbSomething" isn't necessarily always the best, though, and especially not in the case of getters...

if( getOptionEnabled() ) versus if( isOptionEnabled() ) versus if( optionIsEnabled() ) :)

IMHO option #3 quite clearly reads best, but #2 is probably the pragmatic solution wrt. IntelliSense support.
686
General Software Discussion / Re: Disable Win+V in Windows 8
« Last post by f0dder on January 18, 2013, 10:18 AM »
Sounds like Win+... shortcuts can be "overwritten" by an application then. Maybe an API that has changed with Windows 8.
I don't think there's any changes - it's just that Win8 added more Win+X shortcut keys.

AutoHotKey (and probably AutoIt?) is able to override the shortcuts that Windows (explorer.exe, I assume?) sets up. Dunno the technique behind it - perhaps a global keyboard hook? I seem to recall that a hotkey override wasn't effective when focus was on a program launched with administrative privileges, which would at least support the keyboard hook theory.
687
Living Room / Re: Doom 3 Source Code - The neatest code I've ever seen
« Last post by f0dder on January 17, 2013, 02:56 PM »
He has some decent points, but on other points I'd say "he'll get wiser" :) - Carmack himself has also replied, stating that "In some ways, I still think the Quake 3 code is cleaner, as a final evolution of my C style, rather than the first iteration of my C++ style" and also "In retrospect, I very much wish I had read Effective C++ and some other material." - which to me translates as "this is not how I'd do it today" and definitely not being idiomatic C++ all the way through.

A few comments...

Unified Parsing and Lexical Analysis (i.e., using the same text format for all resources). Shawn praises that, but here's what Carmack has to say about it:
Fabien Sanglard - So far only .map files were text-based but with idTech4 everything is text-based: Binary seems to have been abandoned. It slows down loading significantly since you have to idLexer everything....and in return I am not sure what you got. Was it to make it easier to the mod community ?

John Carmack - In hindsight, this was a mistake. There are benefits during development for text based formats, but it isn't worth the load time costs. It might have been justified for the animation system, which went through a significant development process during D3 and had to interact with an exporter from Maya, but it certainly wasn't for general static models.
...might have been a decent compromise keeping source material in text format, but creating a binary representation as well - not necessarily fully specialized formats for each resource type, but a model like XAML/BAML might have been a natural fit?

Const and Rigid Parameters - pretty much spot on. C++ style const specifiers are something I miss in other languages. It's also nice to see that Carmack uses const-ref for input and pointers for output - IMHO good practice. It does mean you need null-checking, but IMHO it's an OK compromise (the stuff I do with output parameters tends to be hard to end up with a nullptr for).

Minimal Comments - pretty spot on, IMHO.

Spacing - disagree. The additional code-on-screen I'd get from putting braces on the same line doesn't matter too much... the readability drop from cramped code and not being able to line up braces visually weighs a lot more. And I like blank lines between logical chunks of code. Dunno if any studies have been done on this or if it's just down to personal preference, but my approach works a lot better for me :-). Oh, I fully agree with always using braces, even for single-line statements.

Minimal Templates - I'm a bit mixed with regards to this. Parts of the STL are somewhat sucky (remove+erase is a good example), and before C++11's lambdas, using std::algorithm was often extremely clunky and ugly. OTOH, for the most part the STL datatypes are easy to use and you get decent enough performance out of the box. Now, if you have code that's extremely sensitive to locality of reference or benefits massively from pooled allocation (either for speed or for avoiding heap fragmentation), it might make more sense to roll your own rather than mucking around with allocators and whatnot. But I'd definitely default to STL for 'anything normal'. And auto is a really great new feature - it doesn't make code hard to read (quite the opposite!) unless abused.

Anyway, Carmack being sceptical of STL probably made a lot of sense back when they started the doom3 codebase (game released in 2004, so several years before that - that would probably mean VC++ from VS.NET2002 (at least during start of development, perhaps VS.NET2003 for release?)), there's been several bugs and performance problems in STL implementations over the years... but it's 2012 now.

Remnants of C - getters/setters are often overkill, but I'm not fond of Shawn's examples. For immutable objects, having fields public can be OK (though one might argue that for the sake of binary compatibility across future upgrades, it might be better to use an accessor function anyway). But direct access to mutable fields? Ugh. I guess it's mostly a code smell to me, since I tend to believe mutable objects imply "complex stuff", where you'd want some logic attached to the act of mutating.

StringStreams are ugly, but printf is unsafe - solution? use some safe formatting code. Been a while since I took a look, but there's several to choose from depending on your speed/flexibility needs.

Horizontal Spacing - pretty much agree.

Method Names - somewhat agree. I do prefer function names that read like English, but for simple & common & well-defined methods like size() and length(), I prefer not having the get prefix. In general, I'm not fond of getters/setters, I find that they read less naturally - still not sure what the most elegant solution is. I've toyed around with the idea of simply naming the accessor methods from the field name, which does read nicely... but is somewhat non-standard. Oh, and it feels wrong that the 'setter' functions are hard to discern from other functions, and you lose the value of having getXxx and setXxx methods grouped in auto-completion (which is nice for discoverability in a big codebase). ObjectPascal and C# properties are nice.
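For comparison, languages with properties let the accessor simply carry the field's name, so call sites read like plain field access - sketched here in Python (hypothetical `Widget` class, purely illustrative; the thread is about C++, where you'd need getter/setter functions instead):

```python
class Widget:
    """Hypothetical example: accessors named straight after the field."""
    def __init__(self, length):
        self._length = length

    @property
    def length(self):           # read:  w.length
        return self._length

    @length.setter
    def length(self, value):    # write: w.length = 5
        if value < 0:
            raise ValueError("length must be non-negative")
        self._length = value

w = Widget(3)
w.length = 5
print(w.length)  # 5
```

You keep validation logic on mutation without the visual noise of getLength()/setLength() pairs - which is roughly what the ObjectPascal and C# property mechanisms give you.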

And finally,
Yes, it's Beautiful - the codebase might very well be, but I don't find any of Shawn's examples beautiful in themselves, more along the lines of "this looks like decently engineered code" :)

688
General Software Discussion / Re: Disable Win+V in Windows 8
« Last post by f0dder on January 17, 2013, 12:28 PM »
Use AutoHotkey or AutoIt.
That has worked for me in the past to get control of Win+whatever shortcuts.

While I understand that Win+SingleLetter are reserved for Microsoft, it would still be nice of them if they had a place where you could enable/disable those built-in hotkeys at will.
689
Living Room / Re: Java Update on Tuesday
« Last post by f0dder on January 16, 2013, 02:09 AM »
They've been bundling the Ask toolbar for a while, btw, it's not introduced with the security fix.

But yeah, it's whOracle - #2 on my list of really evil software companies, where crApple still reigns supreme.

690
General Software Discussion / Re: MagicRAR Drive Press - worth anything?
« Last post by f0dder on January 15, 2013, 01:32 PM »
Because of that I request that both related threads be locked.
Dunno if they need to be locked - they're pretty dead now from my viewpoint.

One last thing coming up in a few, though, since I promised it: working on a small test to see what happens wrt. very small files (MFT-resident) when you apply compression.

Here, results from testing some very small files on an NTFS volume with 1k clusters. The files were highly compressible (filled with A's). The lines with "x is UNcompressed" (etc.) are from a small tool I whipped up; the middle part is the output from Microsoft's COMPACT.EXE.

small100.txt is UNcompressed, 100/100, (MFT resident), 1 fragments
small500.txt is UNcompressed, 500/500, (MFT resident), 1 fragments
small1000.txt is UNcompressed, 1000/1000,  1 fragments
small5000.txt is UNcompressed, 5000/5000,  1 fragments
====================================================
 Compressing files in R:\temp\z\

small100.txt              100 :       100 = 1,0 to 1 [OK]
small1000.txt            1000 :      1000 = 1,0 to 1 [OK]
small500.txt              500 :       500 = 1,0 to 1 [OK]
small5000.txt            5000 :      1024 = 4,9 to 1 [OK]

4 files within 1 directories were compressed.
6.600 total bytes of data are stored in 2.624 bytes.
The compression ratio is 2,5 to 1.
====================================================
small100.txt is compressed, 100/100, (MFT resident), 1 fragments
small500.txt is compressed, 500/500, (MFT resident), 1 fragments
small1000.txt is compressed, 1000/1000,  2 fragments
small5000.txt is compressed, 5000/1024,  2 fragments


Observations:
1) MFT-resident data stays resident - good!
2) The really small files aren't actually compressed (GetCompressedFileSize == GetFileSizeEx, see MSDN) - they are flagged compressed, though, so will be compressed once they grow.
3) For compressed files, we get "size on disk" (taking clusters into account), not "actual numCompressedBytes" - which makes sense.
4) When compressing non-resident files, we get one excess fragment.
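The cluster rounding behind observation 3 can be sketched in a few lines - illustrative only (assuming the 1k clusters used in this test; `size_on_disk` is a hypothetical helper, not the tool used above):

```python
import math

def size_on_disk(compressed_bytes, cluster_size=1024):
    """On-disk size of a non-resident compressed stream: the compressed
    byte count rounded up to whole clusters (hypothetical helper)."""
    return math.ceil(compressed_bytes / cluster_size) * cluster_size

# small5000.txt above: 5000 bytes compress down into a single 1k
# cluster, so 1024 is reported - the on-disk size, not the exact
# number of compressed bytes.
print(size_on_disk(1024))  # 1024
# A file that already fits within one cluster can't save a cluster by
# compressing - which is why small100/small500/small1000 stay at their
# original sizes despite being flagged compressed (observation 2).
print(size_on_disk(1000))  # 1024
```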
691
General Software Discussion / Re: MagicRAR Drive Press - worth anything?
« Last post by f0dder on January 15, 2013, 11:06 AM »
^ If the developer doesn't care to respond to the last points made; ignores responses made by the investigator; misrepresents said investigator's research; and makes baseless, spurious claims at this stage of discussions - I would say the case is closed.
Indeed - I don't have more to add to this thread, the facts are on the table.

I predict the other thread is just about fizzled out as well.
692
Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge
« Last post by f0dder on January 15, 2013, 11:02 AM »
If the progress bars reached completion only a few minutes off, I am glad to hear that - it is very difficult to get them working properly, and a few minutes on hour/day long tasks is a very reasonable rounding error that I'm happy to live with.
How so? You're running an "analyze" pass over the entire drive, so you're able to get both a count of files as well as size in bytes - it's true that things can happen on the filesystem while you're compressing, but the VM was fairly idle... showing progress at 100% for 6 minutes before really done seems like an interesting bug.

I realize you personally may not test this, but if you actually test (...) on a production system (by running it after letting Windows do the initial work), you will still see two to three times the space savings compared to Windows itself. This is because Windows misses a majority of the files that are completely safe to compress (and were included in previous Windows versions). Yes, those files that Windows fails to compress do make that big of an impact.
On a "production system", those protected Windows files would be a much smaller percentage of the total amount of files. Your claim of "two to three times" is a dubious marketing strategy - since you use built-in NTFS compression (and thus offer no algorithmic improvements), it would be more honest to represent the absolute amount of gigabytes saved for typical systems. There's enough gains that this honest representation is still a fine number.

And actually, (...) somewhat under-reports the space it has freed by about 30% - this is because after a compression call to Windows has been made and it returns success, the compression (and space savings) still happen in the background for a few more minutes. It was not possible to definitively determine when Windows would be ultimately finished with compressing a file, so the under-reporting bug was left in-place. Better to under-promise and over-deliver, rather than the opposite. You may always compare the drive charts before and after a compression for the best results, as we have done on our home page.
Interesting. I haven't checked when the DeviceIoControl() call returns, but Windows' built-in COMPACT.EXE utility doesn't return until the file is compressed (so it's definitely possible to do without too much work) - and IMHO, watching the thread status in your product while compressing, it looked like your threads didn't progress to the next workitem before it was fully done with the current. Perhaps you have a bug in your code - like, not handling hardlinked files properly?

I don't see a point in debating whether this Windows bug is really a bug or not. To me, it was clearly a bug because it was preventing me from compressing all of my drive, which was possible in previous Windows versions, and still remains possible.
Got a URL to your Microsoft Connect bug report? :-)
693
Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge
« Last post by f0dder on January 15, 2013, 07:44 AM »
Could also be C++ builder.
Ah yes, that can use the VCL (and other Delphi components) as well - didn't look closely, just saw some .pas references.

And yes, there are limits for the reason you stated.  It's an int (16 or 32-bit depending on the version of comctrl32.dll) [ref].
That reference mentions 64k limit - I wonder if comctrl uses signed or unsigned integers? It's been ages, but I seem to recall doing 32k clamping?
694
Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge
« Last post by f0dder on January 15, 2013, 07:27 AM »
1) If you really believe it's a bug that the Shell doesn't temporarily remove protection of critical system files, you should file a bug report on MS Connect, instead of making spurious claims in your marketing material - I'm pretty sure this is a by-design decision from Microsoft. I do agree it's probably harmless to compress those files, but calling a security feature a bug is misleading marketing, IMHO. And you deliberately keep your wording vague enough (combined with your "three times smaller", which is obviously only valid if there's not much else on the disk than Windows) to give the impression that the "bug" would be somewhere else (like, the core NTFS compression routines). THIS is why I'm pursuing this aggressively - you're using snake-oil salesman tactics. Which is a shame, since you obviously do get a better compression rate (and you really ought to warn users that you're doing it by messing with critical OS files).

2 & 3) There's nothing wrong with what I've stated here. I do acknowledge an SSD speedup in #4, but for obvious reasons there's no way in hell I'll be NTFS-compressing any of my SSDs. The HDD backing my VM disk image is a 10k rpm VelociRaptor. I plan on running a single-threaded DrivePress later today to compare with the 2-thread version.

4) First, again, my problem with compression on an SSD isn't the speed hit caused by fragmentation (it's a lot lower than the speed hit on a HDD, but it's still real) - it's (to some degree) the reduced speed and hindered wear-leveling on (at least, but probably not limited to) drives with SandForce controllers, and (to a fairly large degree) the heavily increased amount of block erases caused by how NTFS compression is implemented. Having NTFS compression on often-modified files approaches suicidal tendencies for an SSD.

Because an SSD can read from/write to all parts of the drive at the same time (think of a hard disk platter rotating at infinite speed), that is why fragmentation is of absolutely no consequence for SSDs - be it NTFS compression induced, or the "normal" fragmentation that happens on NTFS inevitably. There is no delay, because all areas of disk are equally accessible at all times.
This is patently wrong - take a look at some benchmarks. For instance, the 120GB Intel 510 drive does ~50MB/s for 4k random reads, whereas it does ~380MB/s for 128kb sequential reads (4k sequential would be slower, but should still be quite a lot faster than the random reads). You'll notice that it does 4k random *writes* faster, which is obviously because the drive has internal cache and can do (sequence-optimized) writes at leisure - and some of the other drives handle this even better.

a. Do NOT attempt to manually acquire file permissions just to be able to compress them. Doing this will create a huge security hole on your system (one that MagicRAR Drive Press does not create, because it restores all permissions as has been confirmed in this third party report).
This is very good indeed - the way you handle the permissions is something to give credit for. I didn't look too closely at the code, but it seemed like you even throw in exception handling to do the permission-restore? You do leave files potentially vulnerable during the compression process... not much of a real-world problem, but could be reason enough for Microsoft consciously choosing not to do it. I still feel it's wrong to classify the behaviour as a bug.

b. There will always be some files/folders that would be locked by the system/applications, and as such incompressible. If there is demand for it, we could also automate the conversion of those parts by building a boot time version of MagicRAR Drive Press - however, in my research, the additional space savings would be negligible.
Agreed.

So while the MagicRAR Drive Press Challenge technically remains unmet
Ho, humm, you still chose not to properly address any of the points of the original thread, most of which I showed to be clearly true. As I see it, only the points regarding interactions with SSD speed/lifetime can be debated... and for those points, I indeed do believe that I'm correct; what can be debated is to which degree lifetime and performance will be affected. For the current crop of SSDs, I definitely wouldn't do gung-ho NTFS compression, and I would recommend people against it.

Selective compression of static files would be fine, though. I wonder if it would make sense to apply compression on the files on another (and preferably HDD) partition, then move the files to the SSD target? I haven't tested, but it might result in less fragmentation of the target files.

Oh, and one last thing: your progress bars are severely bugged - they reached 100% several minutes before the actual operation was done (bugged in the analyze as well as the compress phase). Looks like you use Delphi, and I haven't touched that since Delphi 2, so dunno if there are limits on its current/max values... but IIRC the win32 controls are/were clamped to pretty low values, meaning you definitely shouldn't be using currentBytes/maxBytes - or even currentNumFiles/maxNumFiles for modern filesystems.
695
General Software Discussion / Re: MagicRAR Drive Press - worth anything?
« Last post by f0dder on January 15, 2013, 06:39 AM »
As you have now seen for yourself, none of our claims are false and the product works exactly as it is being marketed. While I would welcome an apology from you, I happily accept all your time spent researching, as well as your accurate report of your findings, in its stead. Thank you for being open minded!
1) you claim Windows is buggy - this is false.
2) All my statements before testing were correct, so there's nothing to apologize for - and considering your own tone, I will not even apologize for the language I've used.

Please note that fragmentation is not an issue for SSDs due to zero impact on random access times throughout the disk.
SSDs have vastly better random I/O characteristics than HDDs, but you still incur overhead from fragmentation - claiming anything else is bullshit, and easily verifiable by checking benchmarks of sequential vs. random I/O. I'm not worried about performance, though, but about the other problems compression & fragmentation pose for SSDs.

First, you should read up on write vs. erase block sizes, wear leveling algorithms, and how various SSD controllers optimize - the TL;DR version is 1) that you do want to minimize small & scattered writes (hint: write-block sizes are larger than on HDDs, and there's the even-larger erase-block sizes to consider as well), and 2) for several SSD controllers, compressed data means both lower speed and worse wear-leveling.

Second, you should read up on how NTFS compression is implemented. (TL;DR: for 4k cluster size, compression is done in 64kb chunks). It means a lot more fragmentation - by design. Now, imagine what happens if you change data in the middle of a compressed chunk? Just how the split is done depends on the compressibility of the data you're writing, but there's a decent chance you end up needing to allocate two chunks - which will cause extra fragmentation, and will turn a possibly perfect-eraseblock-aligned write into several eraseblock-updates. Oops, you've just reduced the lifetime of the SSD a bit more than necessary.
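To make the chunking concrete: NTFS compresses in fixed-size compression units of 16 clusters (64kb at 4k clusters), and a unit is only stored compressed if that saves at least one whole cluster. A rough sketch of that accounting (hypothetical helper, not a model of the real allocator):

```python
import math

def clusters_stored(compressed_unit_bytes, cluster=4096, unit_clusters=16):
    """Clusters NTFS ends up storing for one compression unit: the unit
    is kept compressed only if at least one cluster is saved."""
    needed = math.ceil(compressed_unit_bytes / cluster)
    return needed if needed < unit_clusters else unit_clusters

print(clusters_stored(20_000))  # compresses well: 5 clusters instead of 16
print(clusters_stored(62_000))  # barely shrinks: kept uncompressed, 16
```

A rewrite into the middle of a compressed unit re-runs this accounting, and if the new data compresses differently, the unit may need a different number of clusters than before - that reallocation is where the extra fragments and extra erase-block traffic come from.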

And while the filesystem fragmentation is already bad after the initial compression (531k excess fragments across a mere 134k files, ouch!), it's only going to get worse over time.

I've yet to check what happens when you enable compression for tiny files (MFT-resident data) - I'll take a look at that when I get home from work. But if turning on compression for tiny files means they're moved non-resident, you're adding a lot of additional waste. From the list of files left uncompressed after running DrivePress, it seems like you go gung-ho and indiscriminately compress everything on the filesystem (apart from those few folders that you can't access, and the very few boot-related files in your protection list).
696
General Software Discussion / Re: Video rant against Windows 8
« Last post by f0dder on January 14, 2013, 01:01 PM »
The video was annoying - rhetoric as well as voice. He also obviously tries to cash in on the ZeroPunctuation style (colors, graphics, hyperbole) - but while it works wonderfully for Yahtzee and is a fine way to review games, it makes me want to smack the Boyko dude in the face.

He claims he's spent time with a bunch of other operating systems, even Linux back in the days... so, like, he did no RTFM'ing or googling or anything for any of those? Then he sits down with Win8, does no RTFM'ing (a few minutes on Google would have solved any of these problems), and apparently doesn't even apply prior Windows knowledge (like Alt+F4)? Come on.

I'm not sure if it's plain stupidity or pageview-bleargh. I'd say a healthy dose of both.

Win8 still going strong on my work laptop, btw, and I'm hardly ever seeing Metro pop up.
697
Living Room / Re: SGS3 Advertising Fail
« Last post by f0dder on January 14, 2013, 12:29 PM »
Power glitch at the store? Lazy employees that pull plugs instead of doing a proper shutdown? :)
Um... shouldn't a tablet type device having a battery negate those options? *Shrug* I got no problem calling it a crash ... Shit happens, Ya know?
Sure, if it was a tablet - that thing looks more like a man-sized advertising flatscreen (driven by some commodity windows software) with a tablet/phone-like frame on it? :)
698
Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge
« Last post by f0dder on January 14, 2013, 09:45 AM »
Ah yes, almost forgot this:
6. "The software probably does nothing else than force the compression flag on all files, even on those, where it would make no sense"

False. If you indiscriminately compress all files on your computer, you will actually end up with an unbootable system. Drive Press is completely safe to use and will not compress files that will jeopardize your operating system's integrity or ability to boot.

After running DrivePress, FindCompressed "Found 1008 uncompressed files in 131695 items examined." - a bit of this seemed relatively random (Vmware Tools, some WinSxS folders), Windows Defender was also untouched (I wonder if MS applies special protection to those folders?), and obviously some "heavily locked in-use files" (like registry hives) were untouched.

Also, as I previously mentioned, there's indeed some files you shouldn't compress if you want a bootable system. Those are a very small subset of the files left uncompressed by DrivePress. Searching for "ntldr" in the executable gives the following list, which makes sense:
{ "NTDETECT.COM", "ntldr", "boot.ini", "bootmgr", "BOOTNXT", "\Boot\" }
699
General Software Discussion / Re: MagicRAR Drive Press - worth anything?
« Last post by f0dder on January 14, 2013, 09:35 AM »
I've posted some preliminary findings.

TL;DR: it doesn't do anything magical (it's DeviceIoControl(FSCTL_SET_COMPRESSION), just as I expected), and as I expected it does indeed go gung-ho on your partition - temporarily disabling NTFS security while doing so. It does end up with (quite a fair bit) more compressed files than you get by ticking Windows' "compress this drive", but it also leaves you with a heavily fragmented filesystem and really can't be recommended for SSDs.
700
Announce Your Software/Service/Product / Re: The MagicRAR Drive Press Challenge
« Last post by f0dder on January 14, 2013, 09:32 AM »
Right, so I played around with DrivePress.

I'll have to do some more testing before posting anything detailed, but my findings so far:

1) DrivePress achieves more compression than "Windows' built-in" compression, sure. But it's not at the level of NTFS compression itself (so nothing magical - DrivePress does indeed, as I proposed, simply use DeviceIoControl(FSCTL_SET_COMPRESSION)), and claiming that "Windows compression, is buggy and fails to compress a majority of your hard disk" is incorrect. More on this below.

2) Runtimes for Windows' default compression can't be compared directly to DrivePress, since it processes far fewer files - DrivePress would come out the loser anyway. Furthermore, on a standard HDD, increasing the processing threads from 2 (the default) to 4 (the cores I allotted my virtual machine) increased the runtime from ~31 to ~34 minutes, which is to be expected given that a standard Windows installation has a whole bunch of very small files, which results in lots of random I/O... HDDs hate that.

3) I've yet to test with a single thread - that might actually yield better performance than two threads, since there'll be even less random I/O, and since the CPU usage was generally in the single-digit ballpark, except when processing executables (hello there, Windows Defender).

4) SSDs can probably benefit from running multiple threads, since they're much better at random I/O than HDDs. But as I've already mentioned, NTFS compression on SSDs is A Very Bad Idea (except for applying it to specifically chosen mostly-static files). Filesystem stats after running DrivePress:
134067 files, 132637 compressed (98.9%)
Sizes - compressed: 12234652699, uncompressed: 18137110610 (67.5%)
Total fragments: 665541 (531474 excess frags)
Also, because of the way NTFS compression works, even if you do a defrag after compression (defrag and SSD? Ouch again!), you will generally end up suffering more fragmentation than when dealing with uncompressed files (ask yourself how writes into the middle of a compressed file have to be handled - or go read up on how NTFS compression is implemented).
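For what it's worth, the percentages in those stats are easy to double-check (a quick sanity-check script, nothing more):

```python
# Numbers exactly as posted in the stats above.
files, compressed_files = 134_067, 132_637
compressed_bytes, uncompressed_bytes = 12_234_652_699, 18_137_110_610
excess_fragments = 531_474

print(f"{compressed_files / files:.1%}")               # 98.9%
print(f"{compressed_bytes / uncompressed_bytes:.1%}")  # 67.5%
# On average ~4 excess fragments per compressed file:
print(round(excess_fragments / compressed_files, 1))   # 4.0
```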



So, "Windows compression, is buggy and fails to compress a majority of your hard disk"?
Not really. Since Vista, security on Windows has been ramped up quite a bit. Lots of stuff has happened, but relevant to this discussion is file permissions. If you go look at the NTFS permissions for core stuff in Program Files, or Windows\System32\DriverStore or Windows\WinSxS you'll see that they have quite restrictive permissions - heck, even the SYSTEM Windows account (the one that normally has the most privileges) only has read-only access to WinSxS, leaving write access to the TrustedInstaller account!

Now, if you're a member of the Administrators group and go through UAC (which DrivePress does), you can grant yourself access to those files - at least DrivePress doesn't make those permission changes permanent. So nothing magic here, no bugs, just temporary bypassing of Windows' security.

Is it a bad thing to do? Probably not. But it's hardly rocket science.