I can vouch for that. You wouldn't believe the number of solicitations I receive for my little site each week, and I haven't updated it in months. Yet people send me free software all the time. Most of it is highly marketed junk
that forces you to cut through mountains of marketing copy wrapped in a good UI, but you end up asking, "Where's the substance, the edge; i.e., what makes this slick program different from the freeware one?" What's odd is that they almost never send a registered version, ha!
One of the attractions of DonationCoder.com is its openness to the best software, no matter its licensing. My gripe with TopTenReviews is that by not reviewing freeware/OS/Donation apps, it only gives readers a partial picture. Imagine if in his Image Shootout Review
Nudone had stopped after reviewing ACDSee, ThumbsPlus, and PicaJet FX? Or if I had stopped after reviewing WinRAR, WinZip, and PowerArchiver in the Best Archive Tool review?
I welcome more review sites, and Veign makes an excellent point about synopsizing content. If you want the "quick and happy cheerleader" version of software reviews, visit The Great Software List
, where everything is reduced to one paragraph and a screenshot. However, when I want more depth, with user input, response, and feedback, I come here to DonationCoder.com. One reason I think DonationCoder.com has a great advantage with reviews is that many of you guys build
software and approach it from several different angles, not just the "user" perspective I come from every time.
JavaJones, as for benchmarking, remember that it, too, can be subjective to an extent. Slow-opening programs are like slow-opening websites — they kill you, and that information does have a direct impact on users. But when I was doing my research for the Best Archive Tool review, I found tons of benchmarking sites that measured compression ratios and compared their algorithms, and in the end, I just wanted to see what each would do with a big freakin' fat file on my own (fast) computer. Much benchmarking is done at the university level for papers and studies, it seems.
When you look at TTR's best feature, its comparative tables, you still want to know how they decided that one program deserved a 3.5 rating and another a 4.0. At least, I want to know why: was it slower, did a feature not work, was it harder to access, etc.?