
Recent Posts

1326
Living Room / Re: Would you trust this ... ?
« Last post by f0dder on September 25, 2011, 01:52 PM »
An SSL connection is encrypted by default, so even though the password goes into the comms channel in plain text, because of the entire channel being encrypted, nobody but the receiving end can read it. It's just like with https vs http websites.
...unless there's a man-in-the-middle with a forged certificate... used to be "zomg tinfoilhat!" stuff, until the latest CA hacks (diginotar, anyone?). Also, HTTPS == HTTP-over-SSL.
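To make the "receiving end" part concrete, here's a minimal C# sketch (mine, not from the thread - hostname is a placeholder) of the client-side certificate check that the whole MITM question hinges on:

using System;
using System.Net.Security;
using System.Net.Sockets;

class SslCheckSketch
{
    static void Main()
    {
        using (var tcp = new TcpClient("example.com", 443))
        using (var ssl = new SslStream(tcp.GetStream(), false,
            (sender, cert, chain, errors) =>
            {
                // This callback IS the MITM protection: a forged cert signed
                // by a compromised-but-trusted CA (hello, DigiNotar) shows up
                // here with errors == None. Pinning the expected certificate
                // thumbprint would catch that case.
                return errors == SslPolicyErrors.None;
            }))
        {
            ssl.AuthenticateAsClient("example.com");
            // From here on, everything written to 'ssl' - password included -
            // goes over the wire encrypted.
            Console.WriteLine("Connected, cipher: " + ssl.CipherAlgorithm);
        }
    }
}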
1327
Developer's Corner / Re: An experiment about static and dynamic type systems
« Last post by f0dder on September 14, 2011, 02:35 PM »
C# allows dynamic typing, but I'll be damned if I'll use it if I can at all avoid it. (dynamic and var)
'var' is not dynamic typing, it's used for static type inference which is super useful for DRY reasons.

'dynamic' is something to be very careful about - it tends to ripple out if you start using it.
I stand corrected. :)

But close enough. When you look at it, var looks like a dynamic type. e.g. var acme = "Rocket Skates"; vs. var acyou = 500;
Sure, it does look like dynamic types at first, but the distinction is super-important. I personally loathe dynamic typing, but I'm a big fan of type inference, as it makes my life easier without giving up static typed goodness.
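A tiny illustrative sketch of that distinction (not from the thread):

using System;

class VarVsDynamic
{
    static void Main()
    {
        var acme = "Rocket Skates";  // statically typed: acme is a System.String
        //acme = 500;                // compile-time error - inference is not dynamic typing

        dynamic d = "Rocket Skates";
        d = 500;                     // fine: binding is deferred to runtime
        Console.WriteLine(d + 1);    // prints 501, resolved at runtime
        //d.Quack();                 // compiles happily, throws RuntimeBinderException at runtime
    }
}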

I'm not using it everywhere, though - I don't use 'var' when dealing with simple built-in types, and whether to use it when assigning a variable to a method return value isn't always clear-cut either. The goal to strive for is reducing unnecessary clutter, while not adding ambiguity; remove noise so you can focus on the important parts.

Your "strings and numbers" samples show that implicit type conversions and operator overloading aren't always a good idea - and, especially since C# has such nice string formatting, int-to-string conversion should IMHO have been explicit.

I've noticed that with dynamic -- it tends to be infectious. I've found it's useful for dealing with JSON and all that gooey webishness. :)
Can't get by just with anonymous types? :)
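(For the record, a sketch of what anonymous types buy you for ad-hoc data shaping - statically checked, unlike dynamic; illustrative only:)

using System;

class AnonSketch
{
    static void Main()
    {
        // Ad-hoc shape without declaring a class - and still statically typed.
        var item = new { Name = "Rocket Skates", Price = 500 };
        Console.WriteLine(item.Name + " costs " + item.Price);
        //Console.WriteLine(item.Weight);  // compile-time error, not a runtime surprise
    }
}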
1328
Developer's Corner / Re: An experiment about static and dynamic type systems
« Last post by f0dder on September 13, 2011, 01:54 PM »
C# allows dynamic typing, but I'll be damned if I'll use it if I can at all avoid it. (dynamic and var)
'var' is not dynamic typing, it's used for static type inference which is super useful for DRY reasons.

'dynamic' is something to be very careful about - it tends to ripple out if you start using it.
1329
General Software Discussion / Re: Windows 8 Fast boot time ? Check this out...
« Last post by f0dder on September 13, 2011, 01:44 PM »
7 seconds? Damn. I might consider rebooting more often then (or not). I only reboot every few (3 - 6) months for Win updates, depending on how critical the patch is. Hell, I've only shut down my main comp about 4 times since I built it.
Unfortunately, I "reboot" much more often than that thanks to wonderful video drivers. a.k.a. BSOD. Grrr...  :mad:
Still stuck on XP, you poor soul?
1330
Developer's Corner / Re: An experiment about static and dynamic type systems
« Last post by f0dder on September 12, 2011, 07:00 PM »
Haven't had the time to look at this in detail, but the following caught my attention:
One issue is that the experimenter, to reduce variables such as familiarity or different IDEs, developed his own language, Purity, in two variants.
Does that mean everybody was stuck with 'dumb' text editors? In that case, the study is pretty useless... one of the really big advantages statically typed languages offer over dynamic ones is all the assistance you get from your development tools, which is extremely hard to implement for dynamic languages.

If you add some decent type inference into the mix (C#, C++2011, Scala, ...) you gain several of the brevity benefits associated with dynamic languages, without the clusterfsck potential.
1331
General Software Discussion / Re: Windows 8 Fast boot time ? Check this out...
« Last post by f0dder on September 12, 2011, 06:50 PM »
I have never even tried to hibernate my PC - I always close/reboot - so this is fantastic to me.
It's great if you have 2gigs of RAM... it's OK if you've got 4... above that, hibernation starts taking a bit too much time for my liking.

But on a developer laptop running heavy stuff like Day/Adobe CQ5, the accompanying CRXDE "lite", a zillion Chrome tabs, a few Firefox and IE windows, a bunch of console windows, text editors, and... all that jazz... well, trust me, hibernation is a love affair. About a minute to boot into a fully-working desktop, or ~15min to start up all the crap from a cold boot? Gee, let me see... :P

The win8 "hibernate just the core" idea seems interesting, but dunno how much it's going to matter - the Windows boot sequence doesn't seem to take a lot of time compared to POST and post-boot-load-all-the-startup-crap on most systems I've dealt with.
1332
Living Room / Re: Six Levels of Apple Fandom
« Last post by f0dder on September 08, 2011, 05:13 PM »
(As I type all this out on my iPad.)
I've got one of those now as well.

It's too expensive, but the competing tablets are even worse (paying a bit extra for openness might be worth it though, hmm). That crApple products "just work" and are "supah intuitive" is bullsh1t, I've already run into several weird things... but it's a decent product overall. The worst parts are of course related to how closed-down they are... but I'll probably attempt setting up a PIRATED OS X in vmware with a PIRATED XCODE (yeah, fsck you, crApple!) and see if I can do a little development fun.
1333
Official Announcements / Re: 2011 Fundraiser Giveaway Extravaganza - WINNERS POSTED
« Last post by f0dder on September 08, 2011, 05:09 PM »
Oh, those are beautiful :)
1334
Living Room / Re: Six Levels of Apple Fandom
« Last post by f0dder on September 07, 2011, 12:31 PM »
Kill me now. I'm surprised Steve Jobs hasn't filed copyright suits against these folks.
I really like those haircuts.

Makes for nice, clear, "bullet goes here" markings.
1335
Living Room / Re: Migrating Win7 installation to SSD
« Last post by f0dder on September 07, 2011, 07:58 AM »
Sounds like it would work, but it's a bit much mucking around - and I'm not a fan of their "let's shuffle data around a lot to get the partition aligned".

Personally, I do my OS setups in vmware, then migrate to physical hardware with Paragon Virtualization Manager 2010 Personal - works like a charm. For migrating an existing install it would be even more mucking around than the lifehacker way, but it still offers the benefit of getting everything juuuuust right in a virtual environment before doing the switchover to physical.
1336
I do know the SATA solid-state drives fail much more than their mechanical counterparts, and they fail without warning. That seems odd to me for a flash failure, but it would make sense in a DRAM design if the battery backup suddenly failed.
It's definitely not the flash cells that are worn out, those spurious drive deaths happen much too fast for that. For the OCZ drives, it seems to be the firmware that goes into a panic-state after some power-cycle. The Intel bug seems to be along the same lines. But it's really discouraging that these things happen, as solid state drives were supposed to fail gracefully. Oh, btw, only the "enterprisey" SSDs have battery backup.

FWIW, I doubt they "fail much more than their mechanical counterparts" though, the shop I bought my Vertex2 from told me they had around 1% fewer RMAs on SSDs than mechanical disks.

Come to think about it, it makes more sense to use DRAM over flash in a SATA drive design just because flash write speeds are so slow and their write times can be somewhat non-deterministic because of the MM firmware execution involved.
If flash memory really was that slow, a ram buffer (which all the drives do have) would only allow for high burst speeds - but the decent drives allow for high sustained speeds. My X25-E does something like 150MB/s sustained.
1337
General Software Discussion / Re: New Folder 2 2.0
« Last post by f0dder on September 06, 2011, 02:52 PM »
the most handy solution is to create a new subfolder when you double click on the empty area of a folder view, imo
That means moving your hand from the keyboard to the mouse and then back to the keyboard - definitely not very handy.

The best is a decent filemanager, but MilesAhead's solution is decent for those who stick to explorer.
1338
General Software Discussion / Re: data binding
« Last post by f0dder on September 04, 2011, 08:25 AM »
I assume you don't have source code for the 'second software'?

If not, you'll have to simulate input for the secondary app; exactly how to do it depends on the application, sometimes you can use SetWindowText for some parts, other times you have to send character by character... and it's generally a relatively fragile exercise.
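As a starting point, a minimal C# P/Invoke sketch of the WM_SETTEXT route - the window title is a placeholder, and as said, whether this works depends entirely on the target app:

using System;
using System.Runtime.InteropServices;

class InputSimSketch
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr FindWindow(string className, string windowName);

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, string lParam);

    const uint WM_SETTEXT = 0x000C;

    static void Main()
    {
        // "Secondary App" is hypothetical - use Spy++ or similar to find the
        // real window (and child control) you need to target.
        IntPtr hwnd = FindWindow(null, "Secondary App");
        if (hwnd == IntPtr.Zero)
        {
            Console.WriteLine("Window not found.");
            return;
        }
        // Works for many edit controls; stubborn apps need WM_CHAR,
        // character by character - and that's where it gets fragile.
        SendMessage(hwnd, WM_SETTEXT, IntPtr.Zero, "simulated input");
    }
}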
1339
Living Room / Re: Anyone else using Ramdisk in Windows 7?
« Last post by f0dder on September 02, 2011, 09:36 AM »
Don't see the point of those fixed-size partitions these days, really - for the same reasons as my arguments against the fixed-size windows paging file. There were technical reasons for it back in the olden days, but Linux has supported file-based swap for a while now.
It supported file based swap when I was using it. It's just that partition based is more efficient. Read how it works with partition based swap before making assumptions.
Care to back that up with facts, for recent kernel versions? :). Same as with Windows: allocate an intelligently sized swap file, and it won't fragment. As for access, here's from LKML:
> 3. Does creating the swapfile on a journaled filesystem (e.g. ext3 or
> reiser) incur a significant performance hit?

None at all.  The kernel generates a map of swap offset -> disk blocks at
swapon time and from then on uses that map to perform swap I/O directly
against the underlying disk queue, bypassing all caching, metadata and
filesystem code.
(The question is a bit different, but the implications are the same).


I notice your post is filled with implications such as "blindly follow" etc.

So your remedy is to blindly follow you instead of my 16 years of experience using and watching my systems? Tsk tsk. Debate tactics rather than argument.
It's a piece of opinion - take it for what you like. IMHO it's got good arguments going for it, and it's worked fine on my laptop (which doesn't have endless amounts of memory) for years. The fixed-size argument is something I've seen regurgitated for years, and I don't agree with it - so obviously I'm going to object when I see it given as a suggestion to others.

"What's best" shouldn't even be asked until you ask "how to you use your system?"  Otherwise it's just tail chasing.
Indeed. And while YOU might not run out of memory, you can't really know about other people's usage patterns... and thus suggesting a maxsize to others isn't really a good idea.

PS: the one argument for swap partitions I can think of, is if you want to control the physical location on disk for access time reasons... but if you're about to do that, then you have a server with severe memory problems, and should be investing in more RAM, seriously. And just as a preemptive snarky comment safeguard: system pagefile != database scratch areas.
1340
General Software Discussion / Re: Linux kernel.org hacked
« Last post by f0dder on September 02, 2011, 09:25 AM »
Interesting and embarrassing, eh? I wouldn't worry:

How to inject a malicious commit to a Git repository (or not)
http://git-blame.blo...s-commit-to-git.html
Please re-read my post. Like, the first paragraph that mentions Git and tarballs.
1341
Living Room / Re: Building a home server. Please help, DC!
« Last post by f0dder on September 01, 2011, 05:28 PM »
Also, just because some people believe NAS is not acceptable for "enterprise use" doesn't mean A: everyone thinks that B: that "enterprise use" applies to you. Plenty of businesses (where do you draw the line between small business and "enterprise"?) use NAS products and there are business-oriented systems that have reasonable reliability, configurability, etc. The Synology units are among them.
...not to mention that some of the really heavy enterprise storage systems are actually NASes and not SANs :)
1342
Living Room / Re: GOD IS DEAD~! =P
« Last post by f0dder on September 01, 2011, 05:19 PM »
He sewed your eyes shut, because you were afraid to see. He sure did tell you what to put inside your PC. He had the answers to ease your curiosity; he dreamed a god up, and called it a-p-ple-TV! He flexed his muscles to keep his flock of sheep in line; he made an iOS that would kill off all the swine! His perfect kingdom of apps, lock-in and pain - demands devotion, atrocities done in his name.

...man, that song adapts way too easily - even if I perhaps stretched it a bit to make it rhyme :)
1343
Living Room / Re: Anyone else using Ramdisk in Windows 7?
« Last post by f0dder on September 01, 2011, 04:33 PM »
The one time I disabled the page file, I did have some problems with a couple of graphics programmes, that I presume were related to its absence, so I went back to it. In fairness that was with 2GB ram - now have 8 so could try it again, but it'll be down my list a bit...
I wouldn't be surprised if there's a few shoddy programs that depend on having a pagefile present, even if there's no real reason for it as long as you have enough RAM.
1344
Living Room / Re: Anyone else using Ramdisk in Windows 7?
« Last post by f0dder on September 01, 2011, 03:35 PM »
If the page file size never even approaches the minimum, what's the point of having a larger maximum? I can see it if people run memory hogs like giant spreadsheets. But for my usage there's no need for it.
If you never go over the minimum, no space wasted, no harm done. If you run into a once-in-a-blue-moon situation where you need the memory, you'll probably be happy your application (or production server? :)) doesn't crash.

It's my experience that people argue about swap more than they actually use it.
Yep, and you see a lot of old crap regurgitated over and over, with either the "ZOMG SET TO A MAX SIZE TO AVOID FRAGMENTATION!" or some weird magic formulas that probably made sense 15 years ago when they were first invented, but... yeah.

I only have 2 GB ram on this machine and ran for 4 years with no swap.
I did that back with 1 gig of RAM (which was a bit low when gaming), and kept at it after I upgraded to 2 gigs (and through all the upgrades after that) without a hitch. But running pagefile-less isn't something I'd advise for everybody.

So why should I subscribe to your formula?  On my side I have about 16 years of experience with my method.  On your side I have theory.
Because the theory makes sense? :) - or perhaps you can point out a flaw in the theory? There might be some scenario I haven't thought of... but at least there's none of those magic voodoo numbers.

A better solution all around is a swap partition a la Linux.
Don't see the point of those fixed-size partitions these days, really - for the same reasons as my arguments against the fixed-size windows paging file. There were technical reasons for it back in the olden days, but Linux has supported file-based swap for a while now.

The Windows options are almost laughable.  Everything is squeezed through the straw of a file system.  They could fix it but they don't care.
How is the file system a straw? As long as your paging file isn't fragmented, there's no I/O difference between file swap and partition swap... and I bet you'd be hard pressed to measure the computational overhead between handling writes to a file vs. to a partition even on many years old hardware.

Now, the paging options aren't as comprehensive on Windows, that's for sure. But that's how it always is with Windows: it caters to the majority :)

I really just got tired of trying to follow and formulate an opinion on the 0 PF safety debacle and just - said the hell with it - split the difference.
The most reasonable advice I've seen from a techie in the last few years was "don't blindly follow advice, measure your needs". I still don't see the obsession with fixed size, though :)
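In that spirit, a minimal C# sketch (mine, not from the thread) that reads current and peak pagefile usage via WMI - the peak over time is what should drive the minsize choice:

using System;
using System.Management;  // add a reference to System.Management.dll

class PageFileStats
{
    static void Main()
    {
        var searcher = new ManagementObjectSearcher(
            "SELECT Name, AllocatedBaseSize, CurrentUsage, PeakUsage FROM Win32_PageFileUsage");
        foreach (ManagementObject pf in searcher.Get())
        {
            // All sizes are reported in megabytes.
            Console.WriteLine("{0}: allocated {1} MB, in use {2} MB, peak {3} MB",
                pf["Name"], pf["AllocatedBaseSize"], pf["CurrentUsage"], pf["PeakUsage"]);
        }
    }
}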
1345
Living Room / Re: Anyone else using Ramdisk in Windows 7?
« Last post by f0dder on September 01, 2011, 02:43 PM »
Run your machine your way.  This has worked for me across many machines across many years. No maintenance no crash. The "so what" is having to reboot the machine and defrag the page file for no reason that I can think of. If you really think "so what" then just let Windows manipulate the page file in the first place. No stress. :)
1) I can't remember if Windows will only shrink the paging file on shutdown - you might be right on that point... but you'll be shutting down in due time anyway. Can't really see why you'd reboot just for the shrinking?
2) you don't need to defrag - when the pagefile is shrunk to minsize, the additional extent(s) are simply removed, and you're back to your 1-fragment file.

I really don't understand the reasoning behind setting a fixed size. Either you have a ludicrously large pagefile, or you acknowledge you can run OOM. IMHO it's pure logic to set a minsize to "somewhat more than you expect to see (and have measured) under normal use", and without maxsize (or a "sanity limited" maxsize if you must) - you get the best of both worlds (see the registry sketch below):
1) no fragmentation under normal working conditions
2) perhaps fragmentation, but temporarily so (without needing a defrag), instead of running OOM.
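For the curious, on disk this is just a multi-string registry value in the form "path minsize maxsize", sizes in megabytes. A sketch of the idea only - the particular numbers are made up, not a recommendation:

  HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    PagingFiles = "C:\pagefile.sys 4096 32768"

(The System Properties -> Advanced -> Performance dialog writes the same value, and is the saner way to set it.)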

So, is there a flaw in my reasoning? Or are you just sticking to "that's the way I've always done it, because I read on some tech site that it's the thing to do"?  :P
1346
General Software Discussion / Linux kernel.org hacked
« Last post by f0dder on September 01, 2011, 02:33 PM »
"Oops."

Kernel.org Server Rooted and 448 users' credentials compromised

Now, as mentioned in the article there's no reason to worry about the Git source repository, due to the nature of Git itself... but the kernel tarballs could be affected, and we won't know the details until after an audit is done. (Yes, there's signatures for those tarballs, but who checks the signatures? And is there any guarantee that the tarball signing key hasn't been compromised?).
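(For what it's worth, checking a tarball signature is a one-liner - the version number here is just an example:

  gpg --verify linux-3.0.4.tar.sign linux-3.0.4.tar

...but that assumes you've fetched, and trust, the signing key - which is exactly what's in doubt after a compromise.)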

What does this mean? If you've downloaded tarballs from kernel.org in the previous month or so, be sure to audit your systems and follow the news very carefully. Hopefully all sane distributions get their kernel sources from Git and not kernel tarballs, so people upgrading kernels from their distro vendor should be safe - but stay tuned.

Interesting news, anyway. Seems to be a combination of trojanizing an Intel kernel committer (social engineering or haxxor of his system?), and then a bit of local->root privilege escalation.
1347
Living Room / Re: Anyone else using Ramdisk in Windows 7?
« Last post by f0dder on September 01, 2011, 12:36 PM »
Fixed size pagefile is a bit silly, by the way. Set it to a large-ish minimum size to avoid fragmentation, but why set a fixed upper size? (OK, the one reason I can think of is shutting down a runaway leaking 64bit process before it fills your drive... but that's about it).
Setting the maximum guarantees it won't be fragmented. I've removed page file, defragmented the disk.  Then enabled paging with min=max.  Months afterward checking the page file for fragmentation shows it never needs to be defragmented.  Zero maintenance.
Setting the pagefile to a reasonable minimum size means you'll never get fragmentation under normal working conditions, but if you should need the extra swap... however unlikely... it'll be available rather than your application running OOM. If you've got little enough memory (or extreme enough applications) that you need swap, that seems to me by far the superior solution.

And even if you do get into the extreme situation and it causes fragmentation... so what? Once the system is back to normal memory usage, the file will be shrunk and you're back to your minimal-size file in one fragment.
1348
No wonder that random-writes are slow,...
Writes are slow in flash memory for important reasons. For today's flash, you can only write in the same physical memory location 15,000 times before that location will fail (because of a silicon metal state change). (That number was 10,000 times about 6 years ago.)
I wonder if anybody has reliable numbers for the erase cycles - I've seen a lot of different figures mentioned. And with the move to smaller production scales, those numbers tend to go down and not up!

So to prevent writing in the same location all the time, there's memory management (MM) firmware to map the writes evenly across the entire physical address space. So you need to factor the execution time of that MM firmware into the write speed.
I do wonder if the usb pendrive style flash devices do any of this remapping? At least in my mind, there's a big difference between those and solid-state drives with a SATA interface... even if they both use MLC modules :)

This is why solid state drives are purchased primarily for read applications (e.g. a web server) and not write applications (e.g. a database server). The solid state drive will really speed up any read-application heavy work.
I can assure you that today's solid-state drives are purchased for write-intensive applications as well, and they really do shine there compared to magnetic storage drives too :) - the management firmware doesn't "just" do remapping to reduce wear & tear, it also stripes the data across flash channels to achieve higher speed.

But there's that detail with pendrives vs. sata devices again.
1349
General Software Discussion / Re: Macrium Reflect Question In Re: To FAT32 drive
« Last post by f0dder on September 01, 2011, 11:57 AM »
I've successfully reformatted the 500Gb drive to NTFS, and am creating another image with Macrium Reflect Free.  I anticipate it will be a single file, and should work fine.
Next time, perhaps check out convert.exe :)
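(For reference, it's a one-liner along the lines of: convert D: /FS:NTFS - it converts the volume in place, data intact.)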

convert.exe is great if you really need to save the data that's on the drive. But it's not going to guarantee an ideal cluster size unless the drive was prepped properly with the oformat utility (also in the support tools folder).
True, true - a format can be a better choice... moving data back to a freshly formatted drive also serves as a defrag :)

IIRC, it's not exactly fast/faster either (been a while (decade...) since I've had to use it).
It's been very fast all the times I've used it, basically only having to convert FS metadata. Sure, that can take a bit for huge filesystems, but definitely a lot less than moving data off and back. (OK, so you should always do a backup when messing with filesystems, so you only cut off 'moving back', but... 8))
1350
Living Room / Re: Anyone else using Ramdisk in Windows 7?
« Last post by f0dder on September 01, 2011, 11:54 AM »
What's your opinion on this:
*snip* DisablePagingExecutive *snip*
Not sure if the memory manager honors the setting these days, or how big an effect it has - but it was a huge advantage for me back in the Win2k days. Without the setting, after quitting a game I'd have a lot of page-in activity before my system was usable again. With the setting, much less.
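(For reference: that's the DisablePagingExecutive DWORD under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management - set to 1, it tells the memory manager to keep kernel-mode drivers and system code in physical memory instead of paging them out.)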

After I got enough memory (the WinXP days and ever since), I've turned off pagefile.sys entirely, so the setting would be superfluous. And I don't agree that better performance from turning off the paging file is a myth, even with plenty free memory Windows tends to trim process working sets a bit aggressively - at least it definitely did so in the XP era, might be better at 'wasting' memory with Vista and Win7.

I prefer chucking enough memory into my system and not worrying about the pagefile - but it's only an option if you can always have enough physical memory available.

Fixed size pagefile is a bit silly, by the way. Set it to a large-ish minimum size to avoid fragmentation, but why set a fixed upper size? (OK, the one reason I can think of is shutting down a runaway leaking 64bit process before it fills your drive... but that's about it).

I don't use office tools but I've heard rumors some won't run without a page file.
Not true for office2000, 2003 or 2007.

As for RAM drives... as mentioned previously, putting your pagefile on a ramdrive is utterly moronic - don't do it. They can be nice for other purposes, though. I keep my %TEMP% there, and apart from a few badly designed installers and the stupid way Flash caches videos, it works very decently. Putting my firefox profile there also made the fox a lot less sluggish (but of course you have to have a backup scheme if you don't want to lose stuff on a power outage).
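(Redirecting %TEMP% is just a matter of pointing the TEMP and TMP environment variables at the ramdrive - e.g. something like R:\Temp, with the drive letter being whatever your ramdisk software assigns.)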

Oh, and then there's specialized use cases like compiling Boost or grepping through huge codebases :)