
Recent Posts

Pages: prev 1 ... 26 27 28 29 30 [31] 32 33 34 35 36 ... 364 next
751
General Software Discussion / Re: Are you going to wait for Windows 9?
« Last post by f0dder on January 03, 2013, 01:02 AM »
Took quite some time to find another pop-out menu along the right side - wiggling the mouse all up and down that side would occasionally cause it to appear. I finally clicked on an image of what looked like battery level bars - of course they were actually signal strength bars - and that allowed me to select a network, etc.
It's looked like that since Win7 - and is a pretty standard icon on phones as well :)

Anyway... one tap of the 'Windows' key, type "net", click settings, see "Network and Sharing Center" - that should take you to something familiar. Figuring that out took me only a few minutes on a fresh Win8 install, without RTFM'ing. And after a minute googling "Windows 8 hotkeys" or "Windows 8 shortcuts", you'll see that Win+Q takes you directly to "search apps" and Win+W directly to "search settings" (you can obviously only do the google search if you have another device available, or after setting up networking, though).

It really isn't all that bad.
752
Living Room / Re: Ubuntu Linux smartphone coming this year?
« Last post by f0dder on January 02, 2013, 05:55 PM »
I really don't see the need, business case, user interest (apart from a few hardcore geeks), or anything else, really.

Android is already running a linux kernel, and android apps can include native code. I don't really see why you'd want something that's probably closer to a normal distro on phones (perhaps on tablets, but even there I'm not convinced). If it runs traditional linux applications with no sandboxing, it's going to be a security nightmare, and if it adds sandboxing it'll be duplicating android functionality.

So, what's the deal? Sell it to me.
753
General Software Discussion / Re: UEFI and Linux in 2013 - the list so far
« Last post by f0dder on January 02, 2013, 04:02 PM »
It's not MS-exclusive, Zaine. And as long as you're buying x86 and not ARM, it's a MS requirement that your UEFI either has key management facilities, or at least allows disabling secure boot, in order to get the MS logo thingy.

Let's stop the FUD and stick to facts - but still keep the slippery slope in mind.
754
Living Room / Re: Parallella, the $99 supercomputer
« Last post by f0dder on January 02, 2013, 03:58 PM »
From the specs and interviews with the designer, it's a 64-core Epiphany co-processor sitting next to a dual-core ARM CPU, so I wonder how much is managed by the system, and how much is bare metal. It's got expected virtual speeds up to 50GHz.  I know, in modern computing terms, Gigahertz is a trivial benchmark, but for a chip that consumes less than 2 watts, it's impressive.
I'm not saying it's not a nice chip, and it does sound like it packs quite a punch - and I love that they're saying they want to be open about it all. But you're not going to reach that top speed unless you have something that's massively parallel - lots of things are hard to split across threads (and the single-core performance of Parallella is low compared to x86). Other things are hard to do without synchronization, which, apart from being hard to program correctly, can mean massive performance drops (I hope the shared memory / inter-core communication is very efficient!). Then there's also the thing about GHz by itself being mostly meaningless: you also need to know how many cycles the various instructions take, and a number of other factors :)
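
(To illustrate the synchronization-cost point with a self-contained Java sketch - nothing Parallella-specific, and the method names are mine: both methods compute the same sum, but the first funnels every update through one shared atomic counter while the second combines per-thread partial results.)

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.LongStream;

public class ContentionDemo {
    // Every addAndGet contends for the same cache line across all cores -
    // correct, but the inter-core traffic dominates the actual arithmetic.
    static long sharedCounterSum(long n) {
        AtomicLong total = new AtomicLong();
        LongStream.rangeClosed(1, n).parallel().forEach(total::addAndGet);
        return total.get();
    }

    // Reduction: each thread sums its own slice, partial results are
    // combined once at the end - no shared mutable state on the hot path.
    static long partialSums(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }
}
```

On a many-core box the partial-sums version wins by a wide margin, and the principle is the same whether the cores communicate through a coherent cache or through a mesh like Epiphany's.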

Apparently though, the object isn't exactly speed, but functionality; an inexpensive platform for learning how to program for parallel computing, almost like a hardware emulator of more serious iron, to make it easier for students to get into parallel and multi-threading concepts now, just when it's starting to grow.
And that's what I wish people would focus on, instead of the silly "supercomputer" claims :) - your quote from Supercomputing for the masses is spot on. Parallel computing is important, and reducing what used to take a cluster down to a single chip is awesome! Heck, even if the chip didn't deliver more performance than an octa-core x86, it would still be more usable for teaching scale-out parallelism.
755
General Software Discussion / Re: Good coding conventions - Discussion
« Last post by f0dder on January 02, 2013, 03:44 PM »
Fair amount of sensible things have already been said - I agree with a lot of it.

I prefer well-named and small (private) functions/methods over comments. Comments are fine for documenting your public APIs, as well as some implementation details, quirks and hacky workarounds - but other than that, I prefer to split into small and well-defined steps, and let the code speak for itself. If you end up with method names that are long or weird, that's often a good indication of a code smell.

However, what one person might find "clever" is what another finds "DRY". I don't advocate using esoteric language features just for the heck of it, but I'm also not a fan of avoiding anything-but-the-basics because you need to cater to throwaway unskilled outsourced programmers. I routinely use the ternary operator, as I find it can make code easier to read (yes, sometimes even with nested ternaries!). It isn't suitable in every situation, and bad use can make code unreadable. I often use it when declaring final variables conditionally, when the logic isn't bad enough to warrant factoring out to its own method.

Use of the ternary operator doesn't mean I'm in the "as few lines of code as possible" camp, though. I often break condition of if-statements into (properly named) booleans or method calls, which obviously results in more code lines... but also in better readability.
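
(A small, made-up Java example of the "conditional final" pattern - the class and numbers are hypothetical:)

```java
public class ShippingExample {
    // Conditionally-initialized final variables via the ternary operator;
    // the alternative is a mutable local assigned in an if/else.
    static double shippingCost(double orderTotal, boolean express) {
        final double baseRate = express ? 12.50 : 4.95;
        final double discount = orderTotal > 100.0 ? baseRate : 0.0; // free shipping over 100
        return baseRate - discount;
    }
}
```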

As an example of what some people might find "clever", here's a small snippet of code for dealing with JCR nodes. It's a somewhat low abstraction to be operating at, but sometimes it's the right tool for the job - but there's so much red tape and overhead to deal with, when all you really want is "possibly the value at some subnode's property, or a default value if the subnode or property doesn't exist or something goes wrong". Java doesn't have lambdas, but it does have the (much clunkier, syntax-wise) anonymous inner classes.

Without further ado:
Code: Java

import java.util.Calendar;

import javax.jcr.Node;
import javax.jcr.Property;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class NodeHelper {
    public static final Logger log = LoggerFactory.getLogger(NodeHelper.class);

    // ...other overloads omitted

    /**
     * Returns a property (or, on error, a default value) from a subnode of a given node.
     * @param node root node
     * @param path "path/to/subnode@property"
     * @param defValue default value
     * @return given property of subnode, or default value
     */
    public static Calendar safeGetProperty(Node node, String path, Calendar defValue) {
        return safeGetPropertyInternal(node, path, defValue, new Transformer<Calendar>() {
            public Calendar transform(Property prop) throws Exception {
                return prop.getDate();
            }
        });
    }

    private interface Transformer<T> { T transform(Property prop) throws Exception; }

    private static <T> T safeGetPropertyInternal(Node node, String path, T defValue, Transformer<T> transformer) {
        try {
            final String[] pathAndProperty = path.split("@");
            final String subPath = pathAndProperty[0];
            final String property = pathAndProperty[1];

            if (node.hasNode(subPath)) {
                final Node subNode = node.getNode(subPath);
                if (subNode.hasProperty(property)) {
                    return transformer.transform(subNode.getProperty(property));
                }
            }
        } catch (Exception ex) {
            log.error("safeGetPropertyInternal: exception, returning default value", ex);
        }
        return defValue;
    }
}

It's pretty quick-and-dirty code, but it works well enough for production (*knock on wood*). There's still a lot of Java red tape; Scala or C# (or even C++11) lambdas would have made the various safeGetProperty() overloads into one-liners. Still, no superfluous exception handling all over the place, a fair amount of code lines saved (but most importantly, the core logic is centralized - one place to bugfix). And the "cleverness" is hidden as an implementation detail behind a simple-to-use public API.
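
(For what it's worth, Java 8's lambdas later made this style much lighter. Here's a self-contained sketch of the same centralize-the-red-tape idea - the JCR types are swapped for a plain Map so it runs anywhere, and the names are mine:)

```java
import java.util.Map;
import java.util.function.Function;

public class LambdaDemo {
    // Stand-in for the JCR subnode/property lookup: a map of string values.
    // All the "missing key / parse failure -> default value" red tape lives
    // in one place, just like safeGetPropertyInternal above.
    static <T> T safeGet(Map<String, String> props, String key, T defValue,
                         Function<String, T> transformer) {
        try {
            String raw = props.get(key);
            return raw == null ? defValue : transformer.apply(raw);
        } catch (Exception ex) {
            return defValue; // e.g. NumberFormatException -> default
        }
    }

    // With lambdas/method references, each typed overload is a one-liner:
    static int safeGetInt(Map<String, String> props, String key, int defValue) {
        return safeGet(props, key, defValue, Integer::parseInt);
    }
}
```

Each additional type (dates, longs, ...) becomes another one-liner, which is exactly the Scala/C# point above.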
756
Living Room / Re: Parallella, the $99 supercomputer
« Last post by f0dder on January 02, 2013, 02:11 PM »
Claiming that $99 gives you a "supercomputer" is IMHO a bit of a marketing stretch, but the project - and the architecture - is pretty interesting. Will be interesting to see what real-world performance is like (including perf/$ and perf/Watt), not least compared to GPUs and Intel's Xeon Phi.

Alluring that you can program it in "standard C++", but you obviously still have to be able to parallelize your code (each core is relatively slow), and I wonder to what degree you have to be locality-of-reference aware (and to what degree you have to be Parallella-architecture-aware, possibly reducing portability), given the small amount of per-node memory, the shared memory and the mesh structure.
757
Living Room / Re: MS Blocks Ability in Windows 8
« Last post by f0dder on January 02, 2013, 01:15 PM »
I don't get the big fuss about booting initially to the Metro tiles thingy. It's one additional keypress to get past it. On a laptop you're going to be using Sleep, Hibernation or Hybrid Shutdown - which means you'll see the tiles screen just how often? (sure, after Windows Update, and sometimes during the initial install-apps-and-reboot frenzy).

"Boot straight to desktop" - does that mean skipping the user login, and account password? I hope not O_o

"Start screen" is 100x better than the cruddy old (pre-Vista) start menu - you can actually find things. And having it full screen gives you so much more Information At Your Fingertips ( :P ) than a limited Vista/Win7 menu. I'm very sensitive to animation nonsense and usually turn it all off, but the start-screen transition is smooth & fast, even on limited Intel graphics. Forcing it on people is a bit meh, but without it there might never have been any progress.

Metro<>Desktop switching indeed feels schizophrenic, but I just don't see it in my everyday work (apart from the startscreen). The only metro elements I see are when I change wireless networks, and I find the networkbar a lot nicer than the cruddy old little dialog window in past Windows versions.

I'm probably gonna turn off the charms thingy though, as I don't ever use it. It doesn't pop up by accident often enough that it's a nuisance, though, so I haven't bothered to google how to get rid of it.

As I see it, Win8 is "mostly for the better" (even if I'd like a clean separation of Metro and Desktop - while it works wonderfully on tablets and phones, it's too alien on desktops and laptops). The only things I'm worried about are the slippery-slope "political" things that are happening, crApple copycat style.
758
Have you ever run into any problems with installers or windows update like the ones mentioned here and here?
Nope - installers that fail when %TEMP% is on a different partition than the install folder would be majorly b0rked. Ones that rely on %TEMP% existing across a reboot are also lame, IMHO, but they do exist - doesn't bite me since I use a persistent ramdisk, though.

Only issue I've run into has been lame installers that insist on extracting their (huge) payload to %TEMP% before moving/copying to target folder, instead of extracting directly - the nvidia drivers (damn huge mess, GPU drivers these days are a mini-OS of their own O_o) fall into this category. But that's (at least in case of nvidia setup) fixable by launching the installer with TEMP/TMP pointing to a location with more free space.
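
(If you'd rather script that workaround than fiddle with a command prompt each time, overriding the child process environment is all it takes. A rough Java sketch - the installer and paths are hypothetical:)

```java
public class TempRedirect {
    // Build a launcher for an installer with TEMP/TMP pointed at a roomier
    // location, so its payload extraction doesn't fill up the ramdisk.
    static ProcessBuilder withTemp(String installer, String roomyTempDir) {
        ProcessBuilder pb = new ProcessBuilder(installer);
        pb.environment().put("TEMP", roomyTempDir); // Windows checks TEMP...
        pb.environment().put("TMP", roomyTempDir);  // ...and TMP
        pb.inheritIO(); // show the installer's console output, if any
        return pb;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical paths - substitute your own.
        withTemp("D:\\downloads\\nvidia-setup.exe", "D:\\bigtemp").start().waitFor();
    }
}
```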

Also: the thread that the specific guru3d post you link to comes from is about storing the pagefile on a ramdisk. This is utterly moronic, unless you're on a 32bit Windows and have memory Windows refuses to use (because of the 4GB physical memory limit). Being on 32bit Windows with >4GB ram is also pretty moronic, at least if you have enough memory above 4GB to be interested in putting the pagefile there... with <=4GB you typically wouldn't have enough "shadow" memory for pagefile usage to be safe.
759
General Software Discussion / Re: Ram Disks with Dynamic Memory Management?
« Last post by f0dder on December 31, 2012, 08:09 AM »
Didn't know there was actually any product that did this - the only one I've seen in the same ballpark was vRamDir back in the Win9x days, but that wasn't a fixed-size ramdisk... you pointed it at a folder and then it was basically "try to keep as much of this as possible in RAM".

I'm not sure it's a good idea for a ramdisk anyway? To be able to de/allocate memory like that, it needs more complex code than "just" a ramdisk... filesystem filter driver? And when you have a ramdisk, you generally expect access to it to be instantaneous - if you use a product that does dynamic de/allocation, you risk having to page out other memory in order to satisfy an allocation request. Depending on use case it might be smarter to simply *create* a ramdisk when you need it, and dismount it afterwards?

That's just my :two: - interesting to hear about Primo, though :)
760
Living Room / Re: Happy New Year~!
« Last post by f0dder on December 31, 2012, 07:56 AM »
Cheers to all y'all - be careful with fireworks, and don't try to do the flaming Sambucca trick.
761
General Software Discussion / Re: UEFI and Linux in 2013 - the list so far
« Last post by f0dder on December 30, 2012, 08:27 AM »
IMHO it doesn't seem so difficult to get Secure Boot support - you just use Matthew Garrett's shim?

If you feel you need to be able to recompile the shim, you spend $100 on a VeriSign SSL/CodeSigning certificate, and use that to sign up for a (free) Microsoft SysDev account, which will let you sign stuff.

And while I haven't seen any "ready for Win8" laptops, so I cannot comment on the key management features of their BIOS/UEFI, my Secure Boot capable ASUS P8Z77-V PRO motherboard has full key management capabilities.

I still do believe that Secure Boot is fundamentally a good thing, technically... but I don't like the thought of the slippery slope.
762
But perhaps you can do some really black magic with bcdedit?
Wouldn't make any difference if you did fiddle around with BCD, it only contains the info concerning reading the hiberfil.sys file, (and apparently editing it won't make it work neither will creating links - it's been tried).
You're probably right - at best, you might have been able to move it to the root of another partition (and/or perhaps change its name) - but the minimal NTFS driver is probably limited to the boot partition as well. IIRC it's also limited to the root directory, and doesn't support anything but filesystem basics (no junctions and symlinks; at least it used to not support compression in XP... hardlinks might be supported since they're a very fundamental feature).

Writing of the hiberfil.sys file is performed by some other function, (ie. hard-coded), so it will always go to the same place.
I wonder what level of NTFS support is available to that code? Perhaps the part can either be hacked to write to some other location, or supports symlinks or junctions? :)

Changing the boot drive makes no difference - I currently have a multiboot setup (using BCDedit) with an SSD copy of Windows 7 and another on a hard disk. The hard disk is the boot device in the bios but Windows still insists on having a hiberfil.sys for each windows setup. Frustrating.
Is the BIOS boot drive the same as the Windows boot drive, though? I would expect the Windows boot drive to be the one that you're booting Windows from :)

You'd think it wouldn't be hard to have a tiny hiberfil.sys on the C: drive for each windows that points to a file on a user defined partition if required.
Except that the hibernation file needs to be loaded pretty early in the boot process, and while a few extra features could probably be squeezed in, you have to strike a balance somewhere - the full NTFS.SYS is ~1.6meg on my system.

I guess I should find the time to read up on the NT boot process, I just realized that I've forgotten exactly what happens at which time :)
763
Tuck, I think I just chose "move folder" for My Documents, can't really remember. For the most part, I leave AppData and the likes in place, but for a few programs that for some unfathomable reason store way too much data there, I've done some NTFS Junctions - either to my SSD data partition, to my Velociraptor (for large stuff), or to my ramdisk (firefox profile, Website Watcher).

Anyone any idea how to move the hibernation file off the C: drive?
By choosing another partition as the boot partition :) - I'm not sure there's any other way. Haven't done intensive research, but look here - basically, Raymond Chen says it can't be done, and...
Again, it's another chicken-and-egg problem: to load the hibernation file, you need the file system driver, but the file system driver is in the hibernation file. If you keep the hibernation file in the root directory of the boot drive, the miniature file system driver can be used instead.

But perhaps you can do some really black magic with bcdedit?
764
To be honest, that sounds like a bit of a silly comment. I can see how that would set someone off. What APIs an application accesses isn't really relevant - only that it DOES access the API.
Yes, that comment is somewhat silly and unprofessional (even if I do think it sounds strange that PulseAudio should be calling V4L APIs...). But at any rate, it's not something that deserves the reply it got. If you look through the full thread, Mauro seems pretty reasonable and level-headed.

Being straightforward and avoiding sugarcoating is fine, but IMHO you can do so in a respectful manner.
Sarcasm, f0dder; just sarcasm!
Yeah, I got that :) - that line wasn't directed at you, but at the slashdot comments... those guys seem to equate rudeness with directness, and applaud Linus' way of behaving O_o
765
Kind of bizarre how the f-bomb is front page news in real life, but you hear it 20x in any given TV show/movie and don't think twice about it.
I don't think there'd have been a lot of fuss if it was just that word - it's the general level of rudeness (perhaps even hostility?) being shown by Linus that does it.
766
Yet another example of Linus acting like a total bastard.

Yes, breaking userspace is bad. And "it looks tha pulseaudio/tumbleweed has some serious bugs and/or regressions." wasn't a very well-thought-out answer. And the -ENOENT return value is sorta bizarro.

But none of that warrants going thermonuclear on the guy, unless (and probably not even then) he has a track record of introducing bugs and blaming it on other people (I haven't researched that - but given how Linus tends to behave, well...)

Being straightforward and avoiding sugarcoating is fine, but IMHO you can do so in a respectful manner.
767
I wouldn't worry about partitioning the SSD though. SSDs don't suffer from fragmentation like HDDs, so you have no worries there
Fragmentation does reduce performance on SSDs - not nearly as much as on a HDD, but we're still talking significant performance drops. Like, a 240gig Intel-520 that drops from ~512MB/s sequential read to ~25MB/s 4k-random. Such massive drops would obviously require heavily fragmented files, though :)

I wouldn't defragment an SSD, though. Perhaps defragmenting single heavily-fragmented files (SysInternals' contig.exe is nice). If the entire filesystem ended up heavily fragmented (and performance measuring indicated it needed defragmenting), I would make an image, defrag the image, and transfer it back (caveat: this might mess up the wear-leveling stuff unless you happen to find an imaging program that can do something intelligent with the SSD TRIM command. I haven't had the need for filesystem defrag, so I haven't done the research.)

4wd: thanks for the link, lots of data there :). It's scary how the drives tend to just... die. I wonder if there's a lot of stress on the electronics during a power cycle, and whether that's a more likely cause of death than wearing out the NAND cells, at least for non-heavy use?
768
I'd say disable the pagefile (or relocate it to your HDD; it shouldn't be hit that much with 12 gigs of ram, so you shouldn't see a speed hit). A RAM disk for %TMP% and %TEMP% (both the system and user environment variables) is nice: not only does it reduce wear&tear on the SSD, it can also be a nice speed increase for some things. I find that a 1gig ramdisk works pretty well for my TEMP and FireFox profile (backed up, of course). I use SoftPerfect now, persistent disk, and it works pretty well.

As for other stuff, do what suits you best :) - I've got my home workstation SSD split in a ~64gig partition for Windows + most applications, and a ~47gig for my documents, sourcecode, et cetera. Makes OS reinstall a bit easier, but with a small SSD micro-managing free space can be annoying. Games and "bulk" data goes on a 300gig (well, 279gig in non-SI units :)).

For my work laptop, I have one single partition on my SSD for most stuff, but an I/O busy (and huge!) content repository on the old HDD; the software might benefit somewhat from the (much) faster SSD I/O, but I'm afraid it's so write-busy that the SSD would be worn down too fast.

I've read that newer SSD drives are very durable, so is r/w operations and drive longevity still a valid concern?
I do wonder - I haven't heard of any (normal) cases where erase cycles have been used up (which ought to happen gracefully, still allowing you to read the cells); all the deaths I've seen (and the two I've experienced myself) have been random out-of-the-blue deaths with no warning (HDDs usually start sounding weird, or drop from UDMA to PIO speeds, which you will notice).

So when you move to SSD, backups will be even more important than with HDDs. Be sure to use something that runs continuously.
769
Announce Your Software/Service/Product / Re: Bvckup 2
« Last post by f0dder on December 27, 2012, 10:04 AM »
Cool :)

Too lazy to do proper testing with cold-cache right now (haven't got my testbox hooked up at the moment, and don't feel like rebooting my workstation a zillion times - sucks that windows doesn't have a way to "discard read cache", alloc-boatloads-of-memory isn't reliable enough).

190k files, 20k folders. Relatively flat hierarchy (haven't measured nesting level, but average is probably 4).

For warm-cache, there's negligible difference between breadth- and depth-first, and the same goes for x86 vs x64. That was kinda expected, though :). The speed difference between 1 and 8 threads is a factor of 3; after 4 threads there's no quantifiable speed increase (quadcore i7 with HyperThreading - wonder where the bottleneck is, HT itself or some OS locks?). ~1200 vs 400 milliseconds, though, so not something that matters a lot - at least for relatively modest filesystems and higher-end systems :)

Cold-cache tests are what interest me most, anyway, since those tend to be slow. I'll see if I can find some time & energy to hook up my testbox and run some tests on it - the testbox also has the benefit of being a modest dual-core with a slow disk, compared to my workstation which is a quadcore i7 with an SSD and a VelociRaptor :-)
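
(For the curious: the kind of multi-threaded scanner being benchmarked can be sketched in a few lines of Java. This is my own toy version, not Bvckup's implementation - every discovered directory becomes a task on a fixed-size pool, and a pending-task counter signals completion.)

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelScan {
    // Count files under root using N worker threads.
    static int countFiles(Path root, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger files = new AtomicInteger();
        AtomicInteger pending = new AtomicInteger(1); // root task is pending
        CountDownLatch done = new CountDownLatch(1);
        scan(root, pool, files, pending, done);
        done.await();
        pool.shutdown();
        return files.get();
    }

    private static void scan(Path dir, ExecutorService pool, AtomicInteger files,
                             AtomicInteger pending, CountDownLatch done) {
        pool.execute(() -> {
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
                for (Path p : ds) {
                    if (Files.isDirectory(p, LinkOption.NOFOLLOW_LINKS)) {
                        pending.incrementAndGet(); // before scheduling, to avoid a race
                        scan(p, pool, files, pending, done);
                    } else {
                        files.incrementAndGet();
                    }
                }
            } catch (IOException ignored) {
                // unreadable directory - skip it, like a backup scanner would
            } finally {
                if (pending.decrementAndGet() == 0) done.countDown();
            }
        });
    }
}
```

Tasks never block waiting for subtasks (they just schedule them and finish), so a small fixed pool can't deadlock on deep trees.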
770
Developer's Corner / Re: Some Programming Levity
« Last post by f0dder on December 27, 2012, 09:01 AM »
*giggle*

- in C/C++, you could do...

double d = *((double*) "awesome");

Well, HELLO THERE, 9.53555e-307 :)
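
(The Java flavour of the same gag needs ByteBuffer, since there's no pointer casting - note the C string's NUL terminator supplies the 8th byte:)

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Awesome {
    public static void main(String[] args) {
        // "awesome" + the C string's NUL terminator = exactly 8 bytes
        byte[] bytes = {'a', 'w', 'e', 's', 'o', 'm', 'e', 0};
        double d = ByteBuffer.wrap(bytes)
                .order(ByteOrder.LITTLE_ENDIAN) // match x86 byte order
                .getDouble();
        System.out.println(d); // ~9.53555e-307, same value as the C version on x86
    }
}
```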
771
General Software Discussion / Re: Acronis Backup
« Last post by f0dder on December 26, 2012, 04:01 PM »
For sync I still prefer rsync (and products built on top of it such as DeltaCopy) over all others. The ability to sync to a remote location (including via SSH) makes rsync worth its weight in gold to me.
The DeltaCopy UI screenshots make my eyes bleed :P - their Syncrify program sounds potentially useful though.
772
Living Room / Re: silly humor - post 'em here! [warning some NSFW and adult content]
« Last post by f0dder on December 26, 2012, 03:48 PM »
Wonderful purr-agram, Renny! :)
773
Microsoft have improved the task manager a lot in Win8, though - on that OS, I wouldn't recommend that normal people install Process Explorer. Not that I'd recommend against it - it's just that the new task manager does so much that ProcExp isn't really necessary. Once in a while, the behemoth does something right :P
774
General Software Discussion / Re: Password Managers
« Last post by f0dder on December 22, 2012, 07:45 PM »
Thanks to everyone for the suggestions! I will check some out............appreciate your help!
And Merry Christmas to ALL!!
  Ain't we just a friendly bunch?   ;)
NO!

We're a secretive bunch!

... :P
775
Living Room / Re: Broadband Caps
« Last post by f0dder on December 22, 2012, 07:44 PM »
Wyden's got it right - the spice must flow.

The greedy bastards should rethink their strategies. What netflix is doing is interesting - offering colo boxes at ISPs... you install a few racks at your site, and end up having to do less random (paid) peering with other ISP companies - this gets you a cache of the most wanted content in your own network, which is effectively free to send to your customers, whether they have 128kbps ISDN (yeah right), 20mbit ADSL2+, 100mbit VDSL or 1gbit fibre.

We need net neutrality and we need unmetered access in order to keep the internet alive, as well as to foster innovation.