
Recent Posts

Pages: prev1 ... 78 79 80 81 82 [83] 84 85 86 87 88 ... 364next
2051
General Software Discussion / Re: Why the aversion to .NET Frameworks?
« Last post by f0dder on November 18, 2010, 04:29 PM »
The hostility towards VB is... probably because the entry barrier is so low. Too many people who shouldn't be programming (at least not before bumping up their skill level) are able to churn out programs. The visual designer introduced back in prehistoric days definitely deserves a lot of blame there (and hey, it rocked!) - especially because it's so damn easy to double-click a button in the visual editor and write business logic in the OnClick handler.

VB.net today might not be as horrible as VB6, but... well, I definitely don't like the syntax, and code written without "option strict" and "option explicit" makes me shudder. With those on, however, VB isn't all that bad - although I'll never come to terms with the syntax.

The stuff Eóin mentions above sounds like it's pretty useful for COM interop... which is especially important for MS Office stuff written in VB, which there's probably still a pretty big market for :)
2052
Like, I don't get what's so great about using Ctrl-<letter> instead of the arrow keys to navigate through a document
The idea is that you don't have to move a hand away from the main keyboard to navigate (similar to the argument vi users make).
And on some laptop keyboards, this becomes extremely important - perhaps not so much for arrow keys, but often pgup/pgdown and home/end are located at fscked up places, or require pressing a function-modifier key. I can definitely see why WordStar keybindings have appeal - and they're logically placed.
2053
General Software Discussion / Linux webserver du jour?
« Last post by f0dder on November 18, 2010, 07:04 AM »
I'm currently working on a project that among other things includes a part written in Ruby, running on a Linux box. Up to now we've been running it under WEBrick on the test server, but I'd like to move to a real web daemon - and now I'm wondering what my choices are.

Apache is a no-go. It's not entirely for rational reasons, but I feel it's too big and clunky and dusty.

For my own little webserver I've been using lighttpd, which has served me pretty well, and is probably what I'll end up using unless there are better suggestions. This thread is mainly to see if there's something even better, since I haven't shopped around for httpds in several years :)

Also, is there any particular stuff I should know about running Ruby under a httpd? I managed to get it running on my own server, but have no idea whether it's running interpreted or in a JIT'ing VM - afaik the default for Ruby is interpreted, but there are several VMs available?
2054
General Software Discussion / Re: Why the aversion to .NET Frameworks?
« Last post by f0dder on November 18, 2010, 06:41 AM »
The original versions of C# and .NET were lackluster - that's a pretty awfully bad reason for dismissing them today, though.

With C# 2.0 it became usable - generics and iterators.
With C# 3.0 it became interesting, and worthwhile to use - lambdas, extension methods, LINQ, anonymous types, and implicitly typed variables.

Yep, C# has borrowed a lot from Java, just like Java borrowed a lot from languages that came before it. It's my impression that C# (the language) has surpassed Java (the language) by now, and I definitely find it more comfortable to develop in C# than Java (which is partly because of the language, and partly the tools and framework).

The .NET framework definitely isn't perfect - there are bugs here and there, and it can be frustrating to figure out how all the parts play together... some of the standard interfaces you need to implement are pretty horrible (INotifyPropertyChanged is HORRIBLE, but at least you can abstract the horrors away somewhat). But there's also a lot of good stuff in there!

Even with some of the quirks, it's definitely a LOT more pleasant working with WPF than other GUI toolkits I've played with... With databinding leading to the MVVM design pattern and a convention-based MVVM framework, it becomes possible and almost pleasant to decouple GUI and model code, and get clean and testable applications.

Using .net properly is harder than using C++.
The one thing from C++ I miss in C# is deterministic object destruction - RAII is great. Unfortunately, and for good technical reasons, that's just not going to happen. So we're stuck with IDisposable and using - but it's not really that bad once you wrap your head around it, and consider alternative ways to solve problems. Example: instead of returning an IDisposable object from a method, perhaps you should consider taking in a "do-work" delegate as input to the method instead...
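The "do-work delegate" alternative mentioned above can be sketched outside C# too. Here's a minimal Python illustration (all names hypothetical, not from any real API): the method owns the resource's lifetime and the caller only supplies the work to perform, so the caller can never forget to dispose:

```python
class FakeResource:
    """Stand-in for something disposable, e.g. a file or DB handle."""
    def __init__(self):
        self.open = True

def with_resource(do_work):
    """Keep the resource lifetime inside the method: open it, run the
    caller-supplied callable, and always clean up afterwards - the
    caller never holds the raw resource, so it can't leak it."""
    res = FakeResource()
    try:
        return do_work(res)
    finally:
        res.open = False  # deterministic cleanup, like Dispose()/using

# the caller passes work in, instead of getting an IDisposable back
value = with_resource(lambda res: "resource is open: %s" % res.open)
```

The design trade-off is the same as in C#: you lose the freedom to hold the resource across arbitrary code, but cleanup is guaranteed at a single, obvious point.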

* It misuses the C name. How does Ms sell the new VB? It renames it.
Oh, that's flamebait if I ever saw it. Using C in the name is, imho, entirely appropriate - it's a C-based language after all, calling it a VB is really lame.

The problem is of course the collision of managed and unmanaged code and you still cannot avoid it, because Windows is unmanaged. I could tell you stories about memory and resource leaks caused by totally harmless and unexpected things.
Some examples would be interesting, otherwise it's just FUD to me. And hey, p/invoke is a lot more pleasant than JNI.

2055
General Software Discussion / Re: Why the aversion to .NET Frameworks?
« Last post by f0dder on November 18, 2010, 06:09 AM »
... Especially people clinging on to C (and using the insecure libc functions) really shouldn't be doing any sort of networked code, please. ...

You mean people writing such non-networked code as; Linux, Apache and PHP? (all of which are still "clinging" to straight C).
Yep.

Thanks to clinging on to C (C++ isn't nearly as bad, but isn't necessarily the best choice either), we have wonderful buffer overflows, memory leaks, double-free exploits and what have you. Even when those problems are fixed, for long-running daemons you get memory fragmentation issues unless the developers have been very careful and use pooled allocators - or rely on the cruddy old one-process-per-connection model, or the less-cruddy-but-still-meh "use one process for N client connections then reap because we know we leak memory like a sieve" model.

Truth is, C (and asm) simply can't be beat for systems level programming where size and speed (i.e. performance) really matter.
This really depends on what you're doing. For a lot of code, especially dealing with networked stuff, having an efficient I/O model + efficient threading + a proper string class that stores length and doesn't have multithreaded COW ref-counting bottlenecks + a fast non-fragmenting memory allocator (usually means garbage collecting) is going to matter a lot more than a slightly more aggressive native code optimizer.

The only place assembly belongs these days is extremely low-level code that needs to take advantage of CPU features. The OS low-level "ZeroPage" function, optimization for sound and video codecs, etc... but in regular usermode applications? Just about never. I wouldn't mind seeing C being dropped entirely in favor of C++ - yes, even for kernel development, although you'd probably use a subset of C++ features there (using it as a "Super C"). For library development & other-language interop, use C++ internally and a C interface to the rest of the world. But please, stop writing raw C code - the only legitimate reason would be developing for platforms where no C++ compiler is available, like some of the really limited microcontrollers.
2056
Living Room / Re: VectorMagic: Convert Bitmaps into Vector Art (Free)
« Last post by f0dder on November 18, 2010, 05:57 AM »
wtf? InkScape does conversions of BMP...HOW did I not know this?!?!  >:( Ive spent countless hours drawing over bmp files on InkScape...this could have been saved...NOT happy  :down:
You didn't know about the vector tracing feature? :P

Yeah, it's a major timesaver - it's not perfect, though, so there's usually still plenty of hand-cleanup necessary.
2057
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 17, 2010, 01:51 AM »
So my terminology is wrong - so sue me :P
Even HDD manufacturers use the term (erroneously) in their HDD tools, so it's easy to see why people get confused :)
2058
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 17, 2010, 01:43 AM »
This is somewhat of a shock to me - first, because this means a format doesn't wipe your data. Second, because new bad sectors might not be discovered merely from a read, and sector reallocation only kicks in on a write. Eek.

Thanks for making me look into this, Joker - I'll be sure to do an explicit single-pass wipe instead of format from now on.
You could always try a Low Level Format, it worked on my 1TB Samsung - took a very long time, probably the same amount of time as doing a single-pass wipe.
There's no such thing as end-user "low level format" and there hasn't been for years and years - all this does is to fill the drive with zero sectors:
While it is impossible to perform an LLF on most modern hard drives (since the mid-1990s) outside the factory, the term "low-level format" is still being used (erroneously) for what should be called the reinitialization of a hard drive to its factory configuration (and even these terms may be misunderstood). Reinitialization should include identifying (and sparing out if possible) any sectors which cannot be written to and read back from the drive, correctly. The term has, however, been used by some to refer to only a portion of that process, in which every sector of the drive is written to; usually by writing a zero byte to every addressable location on the disk, sometimes called zero-filling.
2059
General Software Discussion / Re: Why the aversion to .NET Frameworks?
« Last post by f0dder on November 16, 2010, 05:09 PM »
Can't really speak about the SxS implementation, as I haven't dived into it - but having the ability to have several versions of a component installed is A Good Thing(TM). Theoretically, components should stay backward binary compatible except when major-versions are released - but that's not always done. And installing an older minor-version on top of a newer minor-version-but-same-major-version can cause trouble... but so can installing a newer component, if an application depended (willingly or not) on a bug in an older version!

Can't recall when I've been bitten by "DLL hell" except for a really old game that needed a really old msvcrt version, but I know there's a lot of crap software around. Never experienced SxS crapping out... but I can imagine it happens when people go messing with "cleaner" style applications or muck around in %SystemDir% without knowing what they're doing.

While I don't particularly enjoy MSI, you can't just compare it to linux package managers and say that the linux way is better - MSI offers a lot more flexibility. I haven't seen a linux package manager that lets you configure subcomponents of a package... if you're offered any of that flexibility, either it's ports/portage-style compile-from-source, or you have to choose between several differently named packages - one package per featureset, ugh. Too bad MSI is so slow.

You mention Paint.NET.  Photoshop starts faster than Paint.NET.
-oldfogie
Perhaps if what you're comparing to is Photoshop 2, but definitely not the newer versions :)

.NET is interpreted bytecode - but without the decade or so of refinement that Java has.
-oldfogie
It's JIT'ed (and interpreted for the codepaths that don't make sense to JIT). .NET hasn't been around as long as Java, sure, but it's not like .NET hasn't benefited from research done before it was invented. As for speed wrt. Java, you can be sure that supporters of either language will be able to come up with synthetic benchmarks beating the opponent - but in the real world, the Java apps I've used feel slow and clunky compared to the .NET apps I've used. The reason is most probably the GUI toolkits used rather than the core VM speed, though.

I can't name a single .NET application that is recognized by major publications that ordinary users might use beyond Paint.NET.  That's because they don't exist - even Stack Overflow can't seem to come up with a list beyond Paint.NET and Visual Studio.
-oldfogie
Probably because a lot of the end-user desktop apps in use are old codebases, and a lot of new development is done for the web platform. Rest assured that a lot of business development is being done in .NET :)

I didn't see ANY benefits to learning a whole new language to discover new ways to crash software. We simply don't need a new language.
-oldfogie
While it might not be your impression, C# and the .NET platform definitely makes it faster to develop than C/C++, and definitely easier to avoid a lot of common crash opportunities. Especially people clinging on to C (and using the insecure libc functions) really shouldn't be doing any sort of networked code, please.

Oh, you're a C++ developer who wants to use the Ribbon interface?  Yeah, you get to wait a few months to do that while .NET developers get extra loving from us.
-oldfogie
Heh. The Ribbon was introduced to MFC first, and not added to .NET until 4.0... so come again :). Also, saying that C# isn't a "REAL" language is bullshit - it's a pretty damn fine language, even if the .NET framework isn't perfect.

If Microsoft had their way, they would only offer VC++ internally.
-oldfogie
And that's why they've spent an enormous amount of time doing a lot of VC++ specific features in VS2010? Riiiiight.

A literal two byte modification is the difference that keeps the VC++ runtimes from operating under Windows 95 and later.
-oldfogie
It takes more than that - there's imports for "recent OS" APIs, there's the PE header image/subsystem version fields, there's the loader config data section which Win9x barfs on, and iirc a couple of things more. But hey, you get the CRT sourcecode with Visual Studio... my biggest beef is that the recent linkers won't let you set an image version of 4, everything else is fixable.

Also, Microsoft deploys .NET through Windows Update but refuses to deploy the VC++ runtimes the same way so VC++ developers have to bundle the runtimes with their application but .NET developers do not have to bundle .NET.  Another slap in the face.
-oldfogie
I thought VC++ runtime updates were distributed through wupd as long as you use the recommended (MSI merge module) way of installing it? There's no proper way it can be done if you simply bundle the .DLLs in your own installer - sucks, but that's the way it is.

Microsoft refuses to provide a C/C++ route on Windows Mobile 7.
-oldfogie
Pretty much every phone company has One True Way to program their devices, and it's usually with a managed language. I think Symbian is one of the few exceptions?

Here's the ironic kicker:  The important parts of .NET, Java, et al are written in...C/C++ and assembler.  In other words, even the authors of those languages admit that C/C++ is better.
-oldfogie
Pretty minor parts, really - especially the assembly (*y, not *er). And yes, that is partly because there are performance-critical code pieces where it does make a difference - and partly because of the chicken-and-egg problem of doing a new language.

As much as I like C++, it's a super quirky language with a boatload of problems (language as well as standard libraries). C++0x came too late, and doesn't address all of the problems. There's definitely a need for a richer language than C++, and a more comprehensive standard library - C# and .NET class libraries are pretty decent (even if the .NET class libraries definitely aren't perfect). And it's a helluva lot easier using p/invoke than JNI when you need to dip into native code :)
2060
Post New Requests Here / Re: Idea: Productivity Suite and Monitor.
« Last post by f0dder on November 16, 2010, 03:08 PM »
Either go for a fixed fee (this makes most sense), or look at vcs check-in activity as suggested by JavaJones - trying to do any kind of "activity monitoring" is intrusive and won't work anyway.
2061
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 16, 2010, 12:29 PM »
Hm, could be you're right - if "unformat" was able to restore file/folder names without mangled first-char, then probably a copy of the FAT was saved, or just the file names. I wonder if this was only for floppies, or disk partitions as well?
2062
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 16, 2010, 12:12 PM »
40hz: I definitely remember using /u back in the old DOS and w9x days - and iirc it was super slow. So I think (:)) it's more a case of /u actually zero-filling the partition rather than "not saving recovery information". Iirc the regular format didn't even completely nuke the FAT filesystem structures, but rather set every file and folder as deleted (done on FAT by setting the first char of its name to a special character).
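For the curious: that "special character" on FAT is the byte 0xE5 written over the first character of the 8.3 name in the directory entry. A small Python sketch of the idea (the 32-byte entry here is fabricated for illustration, not parsed from a real volume):

```python
DELETED_MARKER = 0xE5  # FAT flags a deleted directory entry with this byte

def mark_entry_deleted(dir_entry):
    """Flag a 32-byte FAT directory entry as deleted by overwriting the
    first character of its 8.3 name. The file's data clusters and the
    rest of the entry are left untouched - which is why undelete tools
    can often recover everything except that first name character."""
    entry = bytearray(dir_entry)
    entry[0] = DELETED_MARKER
    return entry

# fake 32-byte entry: 11-byte 8.3 name ("README  TXT") + zeroed fields
entry = bytearray(b"README  TXT" + b"\x00" * 21)
deleted = mark_entry_deleted(entry)
```

This is exactly why old "unformat"/undelete tools showed recovered files with a mangled first character - that one byte is the only part of the name that's actually destroyed.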
2063
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 16, 2010, 11:33 AM »
Would have been really cool if Microsoft included a zero-fill switch ( /z ) for their format command.
They used to have the ( /U ) unconditional format option in the 9x days. But in retrospect I'm not entirely sure what it did (data destruction wise).
/u is accepted by WinXP format, but doesn't seem to do anything - definitely no zero-filling :)
2064
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 16, 2010, 06:59 AM »
Hm, I stand corrected - just did a test in vmware. For whatever reason it seems that doing a (non-quick) format actually doesn't zero-fill the sectors - apparently it's equivalent to a quick-format followed by a bad-sector scan.

This is somewhat of a shock to me - first, because this means a format doesn't wipe your data. Second, because new bad sectors might not be discovered merely from a read, and sector reallocation only kicks in on a write. Eek.

Thanks for making me look into this, Joker - I'll be sure to do an explicit single-pass wipe instead of format from now on.
2065
Living Room / Re: Two broadband connections at the same time?
« Last post by f0dder on November 16, 2010, 01:56 AM »
JavaJones is mostly right :) - you can grab a single file using both connections as long as the place you're grabbing it from supports multiple connections and you download with something that supports multiple connections for a single file; most HTTP servers support this, and bittorrent definitely does.

But you can't bundle two physical lines into one TCP connection.

Btw, the wireless connection for gaming? Is that stable and low-latency enough?
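The "multiple connections for a single file" trick works because HTTP supports byte-range requests - each connection asks the server for its own slice via a `Range: bytes=start-end` header, and the client stitches the pieces together. A minimal sketch of the range math (a hypothetical helper, not taken from any particular download manager):

```python
def split_ranges(total_size, connections):
    """Split a file of total_size bytes into (start, end) byte ranges,
    one per connection, suitable for HTTP 'Range: bytes=start-end'
    headers (end is inclusive, as in the HTTP spec)."""
    chunk = total_size // connections
    ranges = []
    for i in range(connections):
        start = i * chunk
        # last range absorbs any remainder from integer division
        end = total_size - 1 if i == connections - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

# e.g. a 100-byte file fetched over two links/connections:
parts = split_ranges(100, 2)
```

With two physical lines and per-connection routing, each range request can go out over a different line - which is why this works for downloads even though a single TCP connection can't be bonded.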
2066
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 16, 2010, 01:50 AM »
As for data security, a normal non-quick format (which just means zero-filling all sectors - there's really no such thing as a "low-level format" anymore, after a drive leaves the factory) is just fine.
You sure about that one? For the average end user, sure, okay - but a business? I've recovered data from a (fully) formatted drive, and even got a good bit (20%) of info off a drive that had been (factory restore) reimaged. Gutmann did mention (recommend, actually) DBAN in his epilogue.
I don't see how you could restore data from a fully formatted drive - but reimaged is easy, since that only overwrites the partition with what's in the image file, and leaves the rest of the partition untouched.
2067
Living Room / Re: Is "Quick Format" safe?
« Last post by f0dder on November 15, 2010, 02:42 PM »
I normally do a full format when I purchase a new drive, exactly to check for any bad sectors - besides, the last 4 drives I've purchased have been for external storage, and processed with TrueCrypt... for security reasons, TC volumes shouldn't be quick-formatted unless you're going to fill them to the brim right away.

As for data security, a normal non-quick format (which just means zero-filling all sectors - there's really no such thing as a "low-level format" anymore, after a drive leaves the factory) is just fine. People that still cling on to "military grade security multi-level formatting as prescribed by Gutmann" should check out what Gutmann himself writes - things have changed since the old MFM drives the article was originally written for :)

Bottom line: a single zero-fill pass (or random-data if you insist) is good enough. If you suspect the NSA is after you, it's probably still good enough, but you might want to incinerate the drive just in case.

EDIT 2010-11-16: whoop, apparently a non-quick format doesn't zero-fill sectors, so you do need a disk wiper - I still stand by a single-pass wipe being perfectly good enough, though.

EDIT 2012-11-19: whoop, you live, you learn. Quoting from Change in the behavior of the format command in Windows Vista:
The format command behavior has changed in Windows Vista. By default in Windows Vista, the format command writes zeros to the whole disk when a full format is performed. In Windows XP and in earlier versions of the Windows operating system, the format command does not write zeros to the whole disk when a full format is performed.
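The single-pass wipe recommended above is conceptually trivial: overwrite every sector with zeros, once. A minimal Python sketch of the idea, operating on an ordinary file for safety (a real wipe tool would open the raw device, e.g. `\\.\PhysicalDrive0` on Windows or `/dev/sdX` on Linux, and would need appropriate privileges):

```python
import os, tempfile

def zero_fill(path, block_size=1 << 20):
    """Single-pass wipe: overwrite an existing file in place with zero
    bytes, block by block, and return the number of bytes written."""
    zeros = b"\x00" * block_size
    written = 0
    with open(path, "r+b") as f:
        f.seek(0, 2)          # seek to end to find current size
        size = f.tell()
        f.seek(0)
        while written < size:
            n = min(block_size, size - written)
            f.write(zeros[:n])
            written += n
    return written

# demo on a throwaway temp file
fd, demo = tempfile.mkstemp()
os.write(fd, b"secret data")
os.close(fd)
wiped = zero_fill(demo)
contents = open(demo, "rb").read()
os.remove(demo)
```

Writing in large blocks rather than byte-by-byte is what keeps a wipe like this close to the drive's sequential write speed.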
2068
Living Room / Re: 101 Great Computer Programming Quotes
« Last post by f0dder on November 15, 2010, 01:32 PM »
And that's where we get the Queen song, "I want to ride my bicycle, I want to ride my bike." :D
Haha, epic :D
2069
Living Room / Re: 101 Great Computer Programming Quotes
« Last post by f0dder on November 15, 2010, 11:40 AM »
#15 “There are two major products that come out of Berkeley: LSD and UNIX.  We don’t believe this to be a coincidence.”
(Jeremy S. Anderson)
Funny, but didn't LSD originate in Basel, Switzerland? - was probably consumed a lot more in Berkeley, though :) (I remember reading a short article about a microprocessor engineer who confessed he had used the substance, in small quantities, while working on CPUs...)
2070
General Software Discussion / Re: Slash your windows boot time
« Last post by f0dder on November 15, 2010, 11:23 AM »
Not only on the boot time but the cpu usage when your computer's running.
Shouldn't make that much difference on CPU usage, as most services are going to be idle most of the time - but it can definitely make a difference wrt. memory consumption.

Services especially, i had a svchost that was running about 80% cpu between boot and actual computer use.
Indexing service? :)
2071
Living Room / Re: How to understand all the Intel chip types?
« Last post by f0dder on November 15, 2010, 05:06 AM »
Don't you just miss the Pentium structure?  P4 2.8 is faster than a 2.6 etc..   :D
...but was a P4 2.8 GHz faster than an Athlon64 2.4 GHz? Also, keep in mind that not all chips branded "Pentium 4" are the original P4 core type, afaik.
2072
Scrapbook also saves the plain files, but the pages can be a bit tricky to find since the folder names the scraps are stored in are timestamps. It also stores a "scrapbook.rdf" and "cache.rdf" - I dunno if ScrapBook can rebuild them if they go missing.

I hardly ever bookmark anything these days; sites I visit frequently have shortcuts on mouser's LaunchBarCommander, other sites are saved in browser history (and thus available fast by searching in the firefox address bar), sites I need to check out later are saved in browser session (I'm SO looking forward to FF4/Panorama so I can group stuff and reduce visual clutter!), and stuff I might check back on a lot later is saved to scrapbook; it really sucks when you need a piece of information and the hosting site is gone, or the URL scheme changed.
2073
I used it to download all the online HTML help files (on-line manual) of a program I use so I could have a copy to refer to when I am off-line. It was much easier than page by page saves.  Had all 102 files in about 60 seconds. :)
You might want to check out ScrapBook - while it doesn't handle "grab all links on a page", it's very useful for quickly (and fully) saving a single page for offline viewing. It's a tool I'm pretty happy to have in my repertoire :)
2074
FireFox + DownThemAll - as unobtrusive as it gets. Right-clicking a file gives you normal "save as" and a "save link with DownThemAll", single-left-clicking pops up the standard FireFox "save file..." dialog, with added "DownThemAll" and "DTA OneClick" options.

It isn't 100% perfect as it's browser based, so if you have a really fast connection the download process can stall FireFox a bit... but it works very well for my needs, resume as well as multiple-connection speedup. And it integrates pretty well with sessions-checking crap.
2075
Living Room / Re: Email Security
« Last post by f0dder on November 11, 2010, 01:13 PM »
OK back to topic: If an email service does not provide TLS I do not use it.
Sounds a bit pointless, since transport between SMTP servers isn't TLS'ed.

(But OK, if you're on an unprotected wifi, at least other people in the coffee shop can't snoop on the mails you're reading).