Living Room / Re: Interesting Discovery Involving Rented Servers
« Last post by f0dder on May 01, 2009, 10:23 AM »

Oh, I didn't mean just overwriting the MBR, I meant "place a disk-wiping tool in the MBR bootstrap code"
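
For reference, the classic MBR layout looks roughly like this - only a sketch, with field names of my own choosing, but it shows why that spot is attractive: the first 446 bytes are bootstrap code the BIOS executes blindly on every boot.

/* Rough sketch of the classic 512-byte MBR (sector 0 of the disk).
   A "wiper" planted in the bootstrap area runs before any OS gets a say. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  status;        /* 0x80 = active/bootable */
    uint8_t  chs_first[3];  /* CHS address of first sector */
    uint8_t  type;          /* partition type ID */
    uint8_t  chs_last[3];   /* CHS address of last sector */
    uint32_t lba_first;     /* LBA of first sector */
    uint32_t sector_count;  /* size in sectors */
} PartitionEntry;

typedef struct {
    uint8_t        bootstrap[446];  /* executable boot code - the interesting part */
    PartitionEntry partitions[4];   /* primary partition table */
    uint16_t       signature;       /* 0xAA55 */
} MasterBootRecord;
#pragma pack(pop)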






- so that takes care of DOTT, MI, ZakMcKracken, et cetera. I do agree it's something that's hard to agree on; it comes down to which games you spent your great hours with. And yup, tiny little subpages like that suck bigtime.
(probably mostly because it's extremely sick humor)

One category would be software that places files or notations below the Windows OS, we know there are categories of software that are defrag-sensitive, perhaps in the virtualization or sandbox world, perhaps with some programs that write directly to disk, perhaps .. conceivably .. with special markers like serial #'s placed hidden that were ultra-security sensitive (I am guessing a bit).

I don't think this is an issue today. The only time I've seen software that needed stuff to be on special locations on the disk has been with software protection, and I haven't seen that since the Win9x days... except for a very few protections that probably aren't used today, and those depended on writing to the "reserved first cylinder" of the drive, which isn't touched at all by defragging.
or looking for bad sectors (remember how the Windows defragger would often simply not function due to wanting the perfect chkdsk)

That was on Win9x and didn't have to do with bad sectors, but rather the filesystem metainfo. This was because Win9x didn't have a defragging API, and the defraggers had to access stuff directly (and thus re-read the FS metainfo if they sensed changes). Almost a bit amazing that there were so few disk writes going on that this worked at all.
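
For the curious, this is what the NT-family API route looks like - a rough sketch, not any particular defragger's code: the defragger asks the filesystem driver to relocate clusters via DeviceIoControl instead of rewriting FAT/NTFS structures itself.

/* Sketch: move one extent of a file to a new location on the volume.
   'volume' is a handle opened on e.g. \\.\C: with read/write access,
   'file' is a handle to the fragmented file. Finding a suitable free
   target_lcn (via FSCTL_GET_VOLUME_BITMAP) and error handling are
   left out for brevity. */
#include <windows.h>
#include <winioctl.h>

BOOL move_extent(HANDLE volume, HANDLE file,
                 LONGLONG source_vcn,   /* first cluster of the extent, file-relative */
                 LONGLONG target_lcn,   /* free cluster on the volume to move it to */
                 DWORD cluster_count)
{
    MOVE_FILE_DATA mfd;
    DWORD returned;

    mfd.FileHandle = file;
    mfd.StartingVcn.QuadPart = source_vcn;
    mfd.StartingLcn.QuadPart = target_lcn;
    mfd.ClusterCount = cluster_count;

    /* The filesystem driver performs the move safely, even on a live volume. */
    return DeviceIoControl(volume, FSCTL_MOVE_FILE, &mfd, sizeof(mfd),
                           NULL, 0, &returned, NULL);
}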

One way to look at it is that on our systems file I-O is so far below memory usage and CPU exhaustion and internet connections and other bottlenecks in causing any actual noticeable speed bumps .. that tweaking a bit faster file I-O, while nice, will make little practical difference.

Dunno if I agree with that - in most systems I'd say that disk is actually the bottleneck. And once things get fragmented enough (or you run multiple I/O threads), even a raptor disk that can do 90MB/s sustained drops to less than 1MB/s - of course that's on an über-pessimized system; you won't really see much advantage from defragging a 100-fragment 10-gigabyte file into one single fragment.

This scheme makes it so the OS doesn't need to worry about heads and platters, as we used to have to do with MFM and RLL drives.

IDE drives can still be addressed through Cylinder/Head/Sector notation (until you hit the max size limit and have to go with LBA), but even then the drive internally converts the CHS to an LBA, and then to its internal physical structure.
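
The translation itself is just arithmetic over the (largely fictional) geometry the drive reports - a quick sketch:

/* Standard CHS -> LBA translation. 'heads' and 'sectors_per_track'
   come from the reported drive geometry; sectors are 1-based in CHS. */
#include <stdint.h>

uint64_t chs_to_lba(uint32_t cylinder, uint32_t head, uint32_t sector,
                    uint32_t heads, uint32_t sectors_per_track)
{
    return ((uint64_t)cylinder * heads + head) * sectors_per_track + (sector - 1);
}

With the usual 255-head/63-sector translation, cylinder 0, head 0, sector 1 comes out as LBA 0, exactly as you'd expect.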

I don't think I like the idea of karma and voting or anything that creates 'elitism' or divides members in any way.
-Gothi[c] (April 30, 2009, 11:01 AM)

I think it works kinda OK on Stack Overflow, although it does mean there's a tendency for people to try and be first with answers, to get upvotes, and then gradually edit more details in later. On DC, the tendency is for people to write some decent posts from the start.
@f0dder: It restores the clipboard after the drop is complete.


This paragraph is very telling: As a final note regarding the Ask toolbar, feel free to install Comodo with all three checkboxes unselected and then download the Ask toolbar separately. When the download process is over, Comodo will detect the Ask toolbar as Unclassified Malware@8305287 and require confirmation for copying it to your download folder. Any other comments on this matter would be redundant.
-Deozaan (April 28, 2009, 11:56 PM)

That is... kinda cute

drives that are cooled excessively actually fail more often than those running a little hot

I've never seen an accounting app that needed much from the GPU, so I doubt that will be a concern.
-Stoic Joker (April 28, 2009, 10:33 PM)

Funny that something as simple (OS features used, not business logic) as an accounting application can be written so crappily that it doesn't work across the whole Win95 -> Win7 range - it's not like they're going to rewrite the kernel for dotNET anytime soon, and I bet the framework ultimately ends up calling win32 and not the NT native API.

Yeah, I was afraid of that. VirtualPC is really behind the competition regarding graphics acceleration support, and that's even with the leaders in that area offering lackluster performance compared with the real thing. It would be really nice to have it, though.
-Lashiec (April 27, 2009, 01:14 PM)

To be fair, it's a pretty darn complex thing to get right. Trying to emulate a GPU and getting acceptable speeds would likely be unfeasible. So instead you'd have to come up with some "passthrough" mechanism, possibly by intercepting DirectX/OpenGL calls and routing them outside the VM... this is OS-specific, hard to get right, and opens the possibility for breakout scenarios - which you don't want happening.

I wonder if that 100% compatibility figure also includes games...

It's done through Virtualization, so it won't be exactly the same as running on a real physical XP machine. The answer to your question will likely be the same as "does VirtualPC support DirectX hardware acceleration properly".

Explorer (or TC) do not list file permissions. (!), at least that I could find. What's wrong with listing say -rwx------ like in unix?

Explorer is geared towards normal users, who don't need to see this kind of stuff. And given how permissions work on NT, I wonder how you'd represent the permissions. Perhaps calculate the effective permissions for the current user?
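
Something along those lines is roughly what an rwx-style column would have to do per file and per account - a hedged sketch using GetNamedSecurityInfo and GetEffectiveRightsFromAcl, where the mapping down to three letters is my own simplification:

/* Sketch: print an rwx-style summary of one account's effective rights
   on a file. Link against advapi32; error handling kept minimal. */
#include <windows.h>
#include <aclapi.h>
#include <stdio.h>

void print_effective_rights(const char *path, const char *account)
{
    PACL dacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;
    TRUSTEE_A trustee;
    ACCESS_MASK rights = 0;

    if (GetNamedSecurityInfoA((LPSTR)path, SE_FILE_OBJECT,
                              DACL_SECURITY_INFORMATION,
                              NULL, NULL, &dacl, NULL, &sd) != ERROR_SUCCESS)
        return;

    BuildTrusteeWithNameA(&trustee, (LPSTR)account);
    if (GetEffectiveRightsFromAclA(dacl, &trustee, &rights) == ERROR_SUCCESS) {
        printf("%s on %s: %c%c%c\n", account, path,
               ((rights & FILE_GENERIC_READ)    == FILE_GENERIC_READ)    ? 'r' : '-',
               ((rights & FILE_GENERIC_WRITE)   == FILE_GENERIC_WRITE)   ? 'w' : '-',
               ((rights & FILE_GENERIC_EXECUTE) == FILE_GENERIC_EXECUTE) ? 'x' : '-');
    }
    LocalFree(sd);
}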
Changing permissions recursively sucks. You have to use cacls.exe, which is very limited.

Limited how? And ugly compared to chmod+chown how? Longer commandlines, sure, but beyond that?
Changing permissions is extremely slow. In unix, it rarely takes seconds, even for a huge tree. In windows, it's been minutes already for a not-so-big tree! Any reason for this madness?

Hm, using cacls is pretty fast for me - going through the GUI might be slower (like mass-deletes through explorer are slow because it wants to report progress etc), but I've never used the GUI for large trees so wouldn't know.

You are allowed to do crazy things like erradicate the administrators group. You read that right: you can make it so some user has full permissions on a file, but the admins don't. I have no idea how I managed to do this feat... and I fixed it now. But I'm really curious about what purpose this may fulfill

It's called flexibility. Traditional unix user/group permissions are extremely limited compared to NT-style ACLs. Granting users and denying administrators might not be a useful thing to do, but stuff like being able to grant multiple groups access to a set of files can be useful - with *u*x permissions, you'd have to create a separate group allowing access to those files, then add users to that group; messy.
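
As a rough illustration of that flexibility - a sketch only, with made-up group names, and for brevity it replaces the file's DACL instead of merging with the existing one:

/* Sketch: give two independent groups different access to the same file,
   something plain owner/group/other unix bits can't express without
   inventing a combined group. Link against advapi32. */
#include <windows.h>
#include <aclapi.h>

DWORD grant_two_groups(const char *path)
{
    EXPLICIT_ACCESSA ea[2] = {0};
    PACL dacl = NULL;
    DWORD err;

    /* Hypothetical "Developers" group: read + write */
    ea[0].grfAccessPermissions = FILE_GENERIC_READ | FILE_GENERIC_WRITE;
    ea[0].grfAccessMode = GRANT_ACCESS;
    ea[0].grfInheritance = NO_INHERITANCE;
    BuildTrusteeWithNameA(&ea[0].Trustee, (LPSTR)"Developers");

    /* Hypothetical "Auditors" group: read-only */
    ea[1].grfAccessPermissions = FILE_GENERIC_READ;
    ea[1].grfAccessMode = GRANT_ACCESS;
    ea[1].grfInheritance = NO_INHERITANCE;
    BuildTrusteeWithNameA(&ea[1].Trustee, (LPSTR)"Auditors");

    err = SetEntriesInAclA(2, ea, NULL, &dacl);
    if (err != ERROR_SUCCESS)
        return err;

    err = SetNamedSecurityInfoA((LPSTR)path, SE_FILE_OBJECT,
                                DACL_SECURITY_INFORMATION,
                                NULL, NULL, dacl, NULL);
    LocalFree(dacl);
    return err;
}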
Not to mention that every action that requires admin privs will prompt for a passwd. So, in a normal day, you can easily type the admin passwd about seven billion orders of magnitude more than on unix.

When running non-root linux, don't you need to sudo when doing administrative tasks? How is this different from Windows?