Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - f0dder [ switch to compact view ]

Pages: [1] 2 3 4 5 next
1
fSekrit / Open-sourcing fSekrit
« on: February 08, 2016, 05:13 PM »
So, this has taken far longer than I wanted it to, but the time has finally come: fSekrit is going open source. I don't personally feel comfortable using closed-source security products, so I'd better put my money where my mouth is.

TL;DR: w00p w00p.

Why has it taken so long - after all, I mentioned open-sourcing it as early as 2008, and probably earlier (this was the lazy first result from a quick search)? Well, as mentioned in that post, embarrassment at showing your source to the world was one factor. Then there was time and motivation: fSekrit 1.40 does most of what I need, and after getting a full-time development job, doing fundamentally boring development (cleanup, documentation, ...) in my spare time didn't seem like a lot of fun.

There were also a number of decisions to be made - for various reasons, I didn't feel like dumping the entire Subversion repository (some of the code was embarrassing, but there were also issues like hardcoded paths and passphrases used during early development, a non-standard repository layout, and stuff I've forgotten by now). It quickly became clear that I wanted to move to Git, and that I wanted a cut-off point for what I shared with the rest of the world - and I bumped my head on grafting. Furthermore, I wasn't sure which license to release the code under.

So, I've finally made some decisions, in order to be able to move forward:

  • I've chosen 1.40 as the public cutoff point.
  • I won't muck around with grafting; I'll suffer Subversion if I need the old history.
  • License will be //TODO// - I'm leaning towards something permissive, though.
  • The code will be released under my real-name GitHub account, but otherwise the 'f' in fSekrit stays.
  • The work-in-progress 2.0 code will be pushed later; it's currently in too messy a state.

I won't make any guarantees about further progress, but at least this is a step forward. There's some boring grunt work that has to be done before development can properly be resumed.

  • The current 2.0 branch basically has to be salvaged; I tried to do too many things at once, and keeping Win9x compatibility while adding proper Unicode support resulted in kludgy code.
  • Win9x support will be dropped. If there are still people using Win9x, bug fixes might be backported to 1.x.
  • Less focus on super-small executables, for instance I'll (at least initially) be using STL containers.
  • Builds will be done with a C++11 (or newer) compiler, support for VC2003 toolkit will be dropped - it hasn't been available for download for ages, anyway.
  • I need to add unit tests. Any suggestions for a framework? Integration with Visual Studio is a plus, but the core must be cross-platform. Google test? Or Catch?
  • I need to do some work on the build system. Is SCons still viable? Or should I just go Gradle?

I don't have SCons installed at the moment, but the current code can be directly checked out of Git, imported into Visual Studio 2013 (with conversion, the solution is VS2008) and built.

2
Developer's Corner / Git and PGP commit/tag signing
« on: February 03, 2016, 01:22 AM »
Hey everybody, do any of you guys have any experience with PGP-signing in Git?

There are good reasons to sign your code, especially if you're planning to share it with the world, and it's simple enough to set up - there are a zillion blog posts regurgitating the bare basics. I could of course just generate a 4096-bit RSA key and be done with it, but I guess I'm looking for more of a do's-and-don'ts or personal-experience kind of thing, especially related to key management.

Since it's what people seem to do, I'm planning on using GNU Privacy Guard.

So, should I have one keypair for "everything" (signing in Git as well as email, if needed, and other encryption purposes), or is it better to have separate keypairs? Or signing keypair as a subkey? Any thoughts on keypair properties (e.g., RSA for the master, DSA signing-only key, expiration dates of master and subkeys, ...)? Anything else (GPG is a clusterfuck UX-wise, and has a lot of knobs you can play with)?

I'm pretty sure master + subkey is the way to go, and setting up is described decently enough, I guess - even if the dance seems elaborate.

As for the signing process itself, for the project at hand, I'll probably go with only signing tags - I'll be the only one committing to the repository (merging pull requests, should any ever appear), and I prefer signing to be a conscious, reviewed activity.
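Wiring a key into git for that workflow is just a couple of config knobs - a minimal sketch (the key id below is made up; you'd use the real fingerprint from `gpg --list-secret-keys --keyid-format long`):

```python
import subprocess
import tempfile

# Hypothetical key id, purely for illustration.
KEYID = "0xDEADBEEFDEADBEEF"

repo = tempfile.mkdtemp()

def git(*args):
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True,
                          check=True).stdout.strip()

git("init", "-q")
# Tell git which key signs tags...
git("config", "user.signingkey", KEYID)
# ...but leave automatic signing off, so signing stays a conscious,
# reviewed act: a release is tagged explicitly with `git tag -s v1.0 -m "..."`.
git("config", "tag.gpgSign", "false")

signing_key = git("config", "--get", "user.signingkey")
auto_sign = git("config", "--get", "tag.gpgSign")
```

With that in place, `git tag -s` signs with the configured key, while plain `git tag` stays unsigned.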

3
General Software Discussion / Smallish RAMDisk benchmark
« on: October 22, 2012, 03:36 PM »
After Curt linked to a RAMdisk benchmark, I decided to do my own testing, just for good measure, for completeness' sake, and because I'm curious :-)

Note that the benchmarks were run on my workstation, with the load of normal apps I usually have open (Firefox, Thunderbird, Pidgin, Skype, and a zillion others) - while they were all pretty much idle, this obviously makes the benchmark slightly less "pure" than a "real" benchmark, but IMHO the numbers shouldn't be measurably skewed. Also, I have Intel's SpeedStep power management enabled, and didn't bother to "pre-burn" to ensure the CPU was running at max frequency; I'd wager this shouldn't affect the benchmarks much either, since they're long-running, but it's worth keeping in mind.

Benchmark software & configuration:
CrystalDiskMark 3.0.1 x64
5 passes, 2000MB test file
Test data: random (default)
...I'm not a super big fan of CDM, since it's weird and uses the silly SI units for MB - but it's easy to use, and it's what Raymond's benchmark uses.

ATTO disk benchmark: 2.47 bench32
Direct I/O, Overlapped mode
transfer size: 0.5 to 8192kb, tested with queue depth of 2 and 8
total length: 2GB

OS: Win7 x64 SP1, Build 7601
CPU: Intel Core i7 3770 (Ivy Bridge)
RAM: 4x4GB Corsair DDR3-1600MHz
Motherboard: ASUS P8Z77-V PRO

General RAMdisk configuration:
4gig, formatted as NTFS with 4kb clusters

Note that I did not test the speed winner of Raymond's benchmark, Bond Disc, since it simply seems too weird - and it has a max size of 640MB, which makes it a no-go anyway. I tested:
  • Superspeed - because it's a big professional commercial product, and I had access to some older version of it
  • ImDisk - because it's more or less the "reference opensource ramdisk"
  • SoftPerfect - because it's a commercial product but free for non-commercial use

It might also have been worth looking at CPU usage while doing the benchmarks - I kept half an unscientific eye on Process Monitor, and it seems like all three more or less maxed out a single core while benchmarking, but nothing more accurate than that :)
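For reference, the "sequential read" number these tools report boils down to timing big block reads of a scratch file - a minimal sketch of the idea, not what CrystalDiskMark or ATTO literally do (the 8 MB file keeps the sketch fast; a real benchmark uses a file far larger than any cache):

```python
import os
import tempfile
import time

# Create a scratch file of random data, like CDM's default test data.
size_mb = 8
path = os.path.join(tempfile.mkdtemp(), "bench.bin")
with open(path, "wb") as f:
    f.write(os.urandom(size_mb * 1024 * 1024))

# One sequential pass: read in 1 MB blocks until EOF and time it.
start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(1024 * 1024):
        pass
elapsed = time.perf_counter() - start
throughput_mb_s = size_mb / elapsed  # mostly page cache at this size
```

The real tools add multiple passes, averaging, random-offset variants and queue depths on top of this core loop.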

Without further ado, results for each product - the textual results are from CrystalDiskMark:

SuperSpeed RamDiskPlus 10.0 x64
  Test : 2000 MB [Z: 1.2% (48.3/4094.7 MB)] (x5)
  Date : 2012/10/22 21:12:55
           Sequential Read :  6251.861 MB/s
          Sequential Write :  8910.925 MB/s
         Random Read 512KB :  6268.756 MB/s
        Random Write 512KB :  8409.925 MB/s
    Random Read 4KB (QD=1) :  1171.595 MB/s [286034.0 IOPS]
   Random Write 4KB (QD=1) :   882.728 MB/s [215509.7 IOPS]
   Random Read 4KB (QD=32) :  1152.307 MB/s [281324.9 IOPS]
  Random Write 4KB (QD=32) :   754.141 MB/s [184116.5 IOPS]
[screenshots: superspeed-crystalmark.png, superspeed-atto-qd2.png, superspeed-atto-qd8.png]



SoftPerfect RAMDisk 3.3.2 (2012-Oct-11) x64 - note that the changelog for v3.3.1 (Oct 06) says "Major optimisation with performance gains 20% to 900% in various tests."

  Test : 2000 MB [Z: 1.2% (48.3/4096.0 MB)] (x5)
  Date : 2012/10/22 21:33:50
           Sequential Read :  8575.204 MB/s
          Sequential Write :  9629.429 MB/s
         Random Read 512KB :  7506.314 MB/s
        Random Write 512KB :  7784.935 MB/s
    Random Read 4KB (QD=1) :  1538.529 MB/s [375617.4 IOPS]
   Random Write 4KB (QD=1) :  1067.687 MB/s [260665.8 IOPS]
   Random Read 4KB (QD=32) :  1490.878 MB/s [363983.8 IOPS]
  Random Write 4KB (QD=32) :   901.656 MB/s [220131.0 IOPS]
[screenshots: softperfect-crystalmark.png, softperfect-atto-qd2.png, softperfect-atto-qd8.png]



ImDisk 1.5.7 (2012-Jul-30)
  Test : 2000 MB [Z: 1.2% (48.3/4096.0 MB)] (x5)
  Date : 2012/10/22 21:56:39
           Sequential Read :  5955.938 MB/s
          Sequential Write :  8793.090 MB/s
         Random Read 512KB :  5747.944 MB/s
        Random Write 512KB :  8380.221 MB/s
    Random Read 4KB (QD=1) :   670.431 MB/s [163679.4 IOPS]
   Random Write 4KB (QD=1) :   563.793 MB/s [137644.7 IOPS]
   Random Read 4KB (QD=32) :  1519.625 MB/s [371002.2 IOPS]
  Random Write 4KB (QD=32) :  1135.113 MB/s [277127.1 IOPS]
[screenshots: imdisk-crystalmark.png, imdisk-atto-qd2.png, imdisk-atto-qd8.png]

I think my recommendation henceforth is going to be SoftPerfect. It's fast, it's free and it's got an uncluttered interface (ImDisk is somewhat raw and messy), and it can do differential image saves instead of dumping the entire memory contents (saves quite some time if saving a large ramdisk). Also worth noting is that adding a new drive is instantaneous in ImDisk and SoftPerfect, whereas it takes quite a while (up to a minute or so) in RamDiskPlus.

EDIT 2012-11-07: added links for softperfect and imdisk.

4
Living Room / Home server upgrade meanderings
« on: September 28, 2012, 12:34 PM »
So, my current server is getting a bit long in the tooth - it has served me since December 2007, with a few harddrive replacements in the time between.

The current specs:
Intel(R) Celeron(R) CPU 420  @ 1.60GHz (singlecore, runs merrily with a big Scythe with the fan disabled - nice noise-wise).
ASUS P5B-MX, 2x1 gigabyte of whatever ram.
No-name, inefficient 350W PSU
120mm casefan
1xWD3200BUDT 2.5" 320GB WD AV, system + miscdata disk
2xWD6401AALS - WD Caviar Black 640gig, raid-mirror important stuff.

Everything is AES-256 encrypted, which is the major slowness factor, but it's also not powerful enough for the minecraft sprees I do with my friends every now and then (rendering the out-of-game worldmaps is WAY slow). The AES is heavy enough that I'm pretty far from maxing out disk speed.

The server runs Debian, and copying is done by a Win7 box pulling across the gigabit LAN, served by Samba (3.5.6). I haven't done much smb or proc/net tweaking.

Some power usage statistics:
~7.3W shut down (shut down, not standby - most older systems are like this)
~68W idle
~82W copying, ~33MB/s, ~65% CPU (50 kcryptd, 15 smbd)

Pretty interesting that it claims only ~65% CPU usage, btw, since it's clearly the CPU that's maxed out - doing anything on the box is s-l-o-w while copying.

I transplanted the disks to my testbox, an Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33GHz - different motherboard obviously, with a CPU fan and no case fan, but the same PSU. The stats there:
4.3W shutdown
~65-67w idle
~82W copying, ~45MB/s, ~50% CPU (kcryptd, nothing else above 0.x% :-))

That's almost fast enough not to buy a new server but...
1) I'd still like to be able to saturate my disks (and this box is clearly CPU limited as well, kernel crypto-loop doesn't multithread, at least not for one device).
2) I do need a testbox every now and then, and the current server is a bit too slow for some of the things I do... plus, I'd like to donate it to my brother, instead of the insanely slow P4-celeron I've been postponing fixing up for him for a couple of years ;-)
3) I'm certain I can get even lower power consumption.
4) I like fiddling with hardware :$

So, I've been pondering a bit as to what I need to get my grubby little hands on. Considering that my current slightly beasty desktop (i7-3770, 16gigs of ram, and a GTX460 graphics card) runs at... what is it, ~65-70W idle... I expect I can go somewhat lower for a server build.

But which CPU? I kinda want an i5, since those have the AES-NI instruction set, and then I'm guaranteed AES won't be a bottleneck. I guess just about any i3 will be able to saturate disk without AES-NI, but probably at higher power consumption.

And I have no clue what i3 vs. i5 is like with regards to power consumption - the Watt amount listed on Intel's site is TDP, which I understand to be more related to max heat than directly to power consumption... and at any rate, the current CPUs are damn efficient at power reduction when idle (which the box will be *most* of the time). Anybody got some realistic estimates what power consumption is with Ivy Bridge line of i3 and i5, idle as well as load?

Are there large differences in power consumption on various motherboards? Any particular boards that are good? (I don't need a crapload of features - decent gigabit NIC that works with linux, at least four SATA ports. 6-8 would be nice, but not a *requirement*, and while I don't need 6gbps sata it's probably best to go for that, if the new server is going to last 5+ years).

And what about heat? It's pretty nice that the old celeron can handle passive cooling, even under load - the server is in my living room, and my apartment is pretty small, so... noise is an issue.

I've slightly considered getting a Xeon, but have no idea whatsoever wrt. their power consumption - and it does seem a bit expensive to get a xeon + server motherboard, with the main reasoning being ECC support for the RAM. I'll probably be going for 2x4GB ram - a bit overkill, but then at least my demands for the next 5+ years should be met.

Oh, and I do want on-board graphics. Anything goes (80x50 textmode ;P), as long as it doesn't suck too much power. I'm obviously thinking on-cpu intel HD graphics.

So, that was the CPU muscle + power consumption bit. Next up: case and harddrive stuff. I do need a new case, since the current minitower is a bit too cramped - and it's too flimsy to properly absorb harddrive vibration.

Not sure what to go for; I don't need a super big tower, but I want something heavy&solid to dampen drives, and enough room that working with the box isn't cramped. I've also been considering some kind of hot-swap bay, but have no idea what brands to look for. I'd rather have something without bays where I can just pull out a drive, like this? - but I want the thing to be solid... and not add too much noise. Oh, and not fsck up things totally heat-wise. Also, what's Linux SATA hotswap support like these days? Like, doable on a standard motherboard without fancy controllers?

And I guess most decent cases come without PSUs - also a bit unsure what to look for, there. I want something power efficient and silent, and preferably with modular cabling (but not a deal-breaker if it doesn't have it). I've got a Corsair tx550m in my workstation, which is pretty nice - but 550W is overkill even for that machine. I wonder if it makes sense going for something with a lower Watt rating, since the server box is going to be *way* below that? There don't seem to be a lot of modular PSUs available below that power level, though, and especially not here in .dk. Also, a definite plus for the tx550m is that it provides very stable voltage levels.

I think that's it for now - dunno if I forgot something :)

5
General Software Discussion / iPad2: alternative to Stanza?
« on: October 17, 2011, 02:12 PM »
Okay, so I'm pretty pissed off right now.

I upgraded my iPad to iOS5 a couple of days ago, and boy was that a craptastic experience - dismal download speeds... yeah sure, lots of people hitting Apple's servers at the time, but they're trying to position themselves as a cloud provider and can't handle it? Not to mention that the crapTunes downloader doesn't know how to resume downloads... like, wut, it's 2011?

Anyway, I digress - yesterday I realized that Stanza no longer works, throwing Sig6 and Sig11 errors. What, an OS update on easy-to-support fixed-capability hardware that breaks backwards compatibility of a regular usermode application? "Apple - suddenly everything sucks!". Then I realize that Stanza probably won't be updated, since Amazon bought Lexcycle probably pretty much just to kill off Stanza. Ugh.

And of course there's no way to downgrade to the previous iOS version - I thought the backup I made before installing iOS5 would allow that, but I guess that's a silly expectation when you're dealing with crApple. Not even if you find a 4.whatever firmware image from their own servers - you need a jailbroken device for that. Like, wtf?

This leaves me with, what? iBooks is a bit of a joke. Too much time has been spent on making it "cute", like the bookcase UI metaphor and the super-animated page transitions. But it's slow, and requires the use of crapTunes to transfer ebooks over from my collection. Ugh.

Stanza had a bunch of features going for it, where the important ones I'm looking for in a new reader are:
  • Calibre integration, so I can simply grab the ebooks from a Calibre instance running on my workstation.
  • Fast no-nonsense page flipping.
  • Compact list of available eBooks (cover art and title, perhaps possibility to sort by author or title).

I came across Ouiivo eReader, but it's unstable, can only show the first page of search results from a Calibre server, et cetera.

6
General Software Discussion / Linux kernel.org hacked
« on: September 01, 2011, 02:33 PM »
"Oops."

[screenshot: shot-2011-09-01@21.28.34.png]
Kernel.org Server Rooted and 448 users credentials compromised

Now, as mentioned in the article there's no reason to worry about the Git source repository, due to the nature of Git itself... but the kernel tarballs could be affected, and we won't know the details until after an audit is done. (Yes, there are signatures for those tarballs, but who checks the signatures? And is there any guarantee that the tarball signing key hasn't been compromised?)

What does this mean? If you've downloaded tarballs from kernel.org the previous month or so, be sure to audit your systems and follow the news very carefully. Hopefully all sane distributions get their kernel sources from Git and not kernel tarballs, so people upgrading kernels from their distro vendor should be safe - but stay tuned.

Interesting news, anyway. Seems to be a combination of trojanizing an Intel kernel committer (social engineering or haxxor of his system?), and then a bit of local->root privilege escalation.

7
Zowie, what a thread title!

So, I've set up a private minecraft server on my linux box, serving a handful of friends. Currently, it's running under my 'f0dder' user account... it works, but probably isn't the most secure and smart way to do things.

I've finally found a decent mapper that doesn't have craploads of dependencies... it's still somewhat slow, probably because of the ancient singlecore CPU in the server, but it works. And I'd like to automate the map generation to run overnight. Cronjob, eh? I could just do that from root's crontab, but that also feels wrong.

So, what's a decent setup for all this? I assume the first thing would be setting up a dedicated "minecraft" user that doesn't allow remote logins, and is as restricted as possible. Next up, what to do about file permissions? The generated map data should be available through http, and the http daemon uses the www-data user+group for that...

NTFS ACL permissions are so simple and flexible to work with, but this is linux :)
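The usual unix answer to "minecraft writes, www-data reads" is a group-shared directory with the setgid bit, so files created inside inherit the directory's group. A minimal sketch of the mode bits (the actual chown to the www-data group needs root, so this only demonstrates the permissions side):

```python
import os
import stat
import tempfile

# Map output directory: owner (the minecraft user) gets full access,
# the group (www-data, in the real setup) gets read+traverse, others
# get nothing. The setgid bit (the leading 2) makes new files keep
# the directory's group instead of the creator's primary group.
mapdir = os.path.join(tempfile.mkdtemp(), "maps")
os.makedirs(mapdir)
os.chmod(mapdir, 0o2750)  # rwxr-s---

mode = stat.S_IMODE(os.stat(mapdir).st_mode)
```

The cronjob would then run as the dedicated minecraft user and write maps into this directory, and the http daemon can serve them via its group membership.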

8
fSekrit / 2011 status report
« on: March 04, 2011, 01:44 PM »
OK, so I haven't done a helluva lot of work on fSekrit since the progress and thoughts threads were started. No lame excuses, just a lot of Real LifeTM :)

I've decided on Git for version control, as I've verified it can do the history split/merge I want for the "historic & private" version vs. the "new & public" one. Not entirely sure how to wrangle the grafting yet, but I know it can be done, and that's the important thing. I'm not entirely sure when to move things over to Git - or, rather, where to start the public history, as I've already moved the private repository to Git. I could do the remaining cleanup so there are no swear words or other embarrassing stuff in the codebase and go public there, or I could do the minimum amount of work needed for a working & tested build (which is some effort) before the code base is released. Not entirely sure yet.

So, what's the status right now?

I've pretty much settled on the internal data representation I want for the next version of fSekrit, which will allow for things like multiple tabs in one document and future option expandability without requiring file-format changes. I've updated reader code for v1 and v2 of the fSekrit file format to read into this new internal representation, but I still need to settle on a serialized format of the internal v3 representation, and write load/save code for this. Not terribly complicated, but fairly boring - C# is so much easier than C++, just add [DataContract] and [DataMember] attributes and you're pretty much done :P. I'm considering whether I should just use Google's protobuf, but on the other hand one of the main selling points of fSekrit is compact size.
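To illustrate the kind of representation that allows multiple tabs and future options without file-format changes - this is a hypothetical sketch, not the actual fSekrit v3 format - a tagged, length-prefixed record container lets a reader skip unknown record types by their length:

```python
import io
import struct

def write_records(records):
    # Each record: 2-byte tag, 4-byte payload length, then the payload.
    buf = io.BytesIO()
    for tag, payload in records:
        buf.write(struct.pack("<HI", tag, len(payload)))
        buf.write(payload)
    return buf.getvalue()

def read_records(data):
    records = []
    view = io.BytesIO(data)
    while True:
        header = view.read(6)
        if len(header) < 6:
            break
        tag, length = struct.unpack("<HI", header)
        # An old reader could skip tags it doesn't know via `length`.
        records.append((tag, view.read(length)))
    return records

# Tag values are made up: say 1 = tab text, 2 = some future option.
tabs = [(1, b"first tab"), (1, b"second tab"), (2, b"future option blob")]
roundtrip = read_records(write_records(tabs))
```

Compared to pulling in something like protobuf, a hand-rolled scheme along these lines costs very little executable size, which matters for a program whose selling point is compactness.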

And then there's the other stuff from the Progress thread that hasn't really been started yet. What I'm currently considering is to finalize "sekritCore", which means verifying that I've made correct flexibility decisions with regards to the v3 document format, finalizing v3 load/save code, and possibly getting some unit testing in place for these core features (as far as I can tell, Google's testing framework is the best bet for C++ code). Once that's done, open-source the project, and start picking away at the ToDo list, one feature at a time.

Any comments, or have people stopped using fSekrit for lack of updates? ;)

9
So, what took them so long? ;)

Quote
although open source has demonstrated its worth, particularly on servers, the cost of adapting and extending it, for example in writing printer and scanner drivers, and of training, have proved greater than anticipated. The extent to which the potential savings trumpeted in 2007 have proved realisable has, according to the government, been limited – though it declines to give any actual figures. Users have, it claims, also complained of missing functionality, a lack of usability and poor interoperability.

Source: H open.

10
General Software Discussion / Alternative .chm readers?
« on: February 13, 2011, 06:08 AM »
Microsoft's own .chm reader is fast, but it has a number of shortcomings:
  • It doesn't support UNC paths.
  • It has trouble with certain things in filenames (# symbol, names ending in "col", ...)
  • It doesn't have "proper" font scaling (smooth ctrl+mousewheel as we're used to from browsers).
  • It doesn't support multiple tabs (important for me when reading tech ebooks and wanting to check out a referenced section without leaving the current section).
  • It lacks bookmarks (another ideal function would be remembering where you left off, when you close an ebook).
  • It doesn't have a "paged" mode - most of the time, the "one continuous page" style is what I want, but sometimes it would be nice to have a paged mode.

So, are there any decent alternatives?

11
Potentially bad news ahead:
Quote
Allegations regarding OpenBSD IPSEC
Theo de Raadt <deraadt <at> cvs.openbsd.org>
2010-12-14 22:24:39 GMT

I have received a mail regarding the early development of the OpenBSD
IPSEC stack.  It is alleged that some ex-developers (and the company
they worked for) accepted US government money to put backdoors into
our network stack, in particular the IPSEC stack.  Around 2000-2001.

Since we had the first IPSEC stack available for free, large parts of
the code are now found in many other projects/products.  Over 10
years, the IPSEC code has gone through many changes and fixes, so it
is unclear what the true impact of these allegations are.
via OSnews.

12
General Software Discussion / Linux webserver du jour?
« on: November 18, 2010, 07:04 AM »
I'm currently working on a project that among other things includes a part written in Ruby, running on a linux box. Up to now we've been running it under WEBrick on the test server, but I'd like to move to a real web daemon - and now I'm wondering what my choices are.

Apache is a no-go. It's not entirely for rational reasons, but I feel it's too big and clunky and dusty.

For my own little webserver I've been using lighttpd which has served me pretty well, and is probably what I'll end up using unless there's better suggestions. This thread is mainly to see if there's something even better, since I haven't shopped around for httpds for several years :)

Also, is there any particular stuff I should know about running Ruby under an httpd? I managed to get it running on my own server, but have no idea whether it's running interpreted or in a JIT'ing VM - AFAIK the default for Ruby is interpreted, but there are several VMs available?

13
Developer's Corner / Git: converting svn repo & stuff
« on: July 25, 2010, 05:56 PM »
OK, so I've pretty much decided to move from Subversion to Git for my source control needs. Obviously I'd like my old version history to be ported over from the svn repos, and I've pretty much got that nailed (although there are some quirks to work out, because some of the first stuff I put under version control didn't follow a standard layout).

The tricky part has to do with fSekrit. Eventually, I want to open-source it, but I don't want the full version history to be available to everybody (for instance there's some ugly code in the early versions, swearing in comments, hardcoded passphrases for testing purposes, ...). So, the question goes: is it possible to do a Git setup with two repositories - one with full version history, the other with the "public" version - and the ability to push commits to both repositories? Or will I have to accept the "historic" repository as a read-only archive, only pushing changes to the "live" publicly available one?
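The dual-remote half of this is straightforward: one working repository pushing the same branch to both a private and a public remote. Splitting off the pre-cutoff history is the hard part (grafts/shallow clones), which this sketch deliberately skips; the repo paths and commit are of course just placeholders:

```python
import os
import subprocess
import tempfile

def git(cwd, *args):
    return subprocess.run(["git", "-C", cwd, *args],
                          capture_output=True, text=True,
                          check=True).stdout.strip()

base = tempfile.mkdtemp()
work = os.path.join(base, "work")            # working copy
private = os.path.join(base, "private.git")  # full-history archive
public = os.path.join(base, "public.git")    # what the world sees

for bare in (private, public):
    subprocess.run(["git", "init", "-q", "--bare", bare], check=True)
subprocess.run(["git", "init", "-q", work], check=True)
git(work, "config", "user.email", "f0dder@example.invalid")  # placeholder identity
git(work, "config", "user.name", "f0dder")

git(work, "remote", "add", "private", private)
git(work, "remote", "add", "public", public)

with open(os.path.join(work, "readme.txt"), "w") as f:
    f.write("post-cutoff code\n")
git(work, "add", "readme.txt")
git(work, "commit", "-q", "-m", "first public-era commit")

# One commit, two destinations: push the current branch to both remotes.
branch = git(work, "rev-parse", "--abbrev-ref", "HEAD")
git(work, "push", "-q", "private", branch)
git(work, "push", "-q", "public", branch)

public_log = git(public, "log", "--oneline", branch)
```

So pushing everywhere is easy; the real decision is where the public repository's history starts, since everything pushed to it becomes visible.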

14
Living Room / Google does no evil; kills reMail
« on: February 19, 2010, 12:08 PM »
Yup, sensationalist headline.

From slashdot:
Quote
Hugh Pickens writes "PC World reports that Google has acquired a popular iPhone application called reMail that provides 'lightning fast' full-text search of your Gmail and IMAP e-mail accounts. The app downloads copies of all your e-mail which can then be searched with various Boolean options. reMail has only been in the application store for about six months — with a free version limited to one Gmail account and a premium version which can connect to multiple accounts. 'Google and reMail have decided to discontinue reMail's iPhone application, and we have removed it from the App Store,' writes company founder Gabor Cselle, who will be returning to Google as a Product Manager on the Gmail team.

While I do believe it's a bit too early to jump to conclusions, this certainly smells fishy.

15
fSekrit / Development: progress and thoughts
« on: January 31, 2010, 05:48 PM »
I figured it's about time I write down some thoughts on the future of fSekrit in one (hopefully coherent) thread, rather than having bits and pieces spread across various other threads. So, without further ado, here goes a braindump :)

Current state of fSekrit
The program is relatively close to being feature-complete, at least in the context of the features I originally envisioned. A few of the unimplemented features require a fair amount of code; however, feature count isn't everything.

Not all code is as clean as I would ideally want, there's a fair amount of commenting and documenting to be done, and a bunch of refactoring as well. Work has been started on this.

There's currently no test suite, which is... pretty bad. There's been a few bugs that a test suite would/should have caught. Never really found any C++ unit testing framework I liked, but I recently bumped into gtest which actually looks pretty decent. Feedback?

Overall, I'd say that the project is in relatively good shape.

fSekrit in the future
Keywords:
  • Modularizing - progressing nicely, "sekritCore" close to done.
  • Documentation - update & cleanup existing. (Internals, not readme.txt)
  • Unit testing - not started.
  • Key derivation - implement PBKDF2 instead of sha256(passphrase).
  • Tabbed interface - multiple "document streams" in one container. Work has been started.
  • Mass upgrader - automate upgrading of editor part of documents.
  • Open-source - unleash the source code unto the world.

The current goals are towards cleaning up the source code, before new functionality is added. This means modularizing, documentation, unit testing. Work is progressing nicely (load-code has been refactored & works, save is yet to come), but there's still a fair amount of work to be done. Executable size has bloated a bit, but once unit tests are in place and refactoring is done, some code will be specialized instead of using standard C++ containers, which should bring code size down to the size of 1.40 - perhaps even a bit smaller.

Once cleanup is done, I'll have to decide on whether I want to open-source the project first, or if some of the missing features should be implemented. I'm leaning towards open-sourcing first, perhaps implementing PBKDF2 first. Feedback?
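For reference, the difference between hashing the passphrase once and deriving the key with PBKDF2 can be sketched like this (the iteration count and salt are illustrative values, not fSekrit's actual parameters):

```python
import hashlib

passphrase = b"correct horse battery staple"
salt = b"per-document-random-salt"  # would be random per document, stored with it

# Current scheme, sketched: a single sha256 is cheap, so an attacker
# can test huge numbers of candidate passphrases per second.
old_key = hashlib.sha256(passphrase).digest()

# PBKDF2: the same primitive iterated many times, plus a salt, which
# makes each guess proportionally more expensive and kills rainbow
# tables. 100_000 iterations is just an example figure.
new_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=32)
```

Either way the result is a 32-byte key suitable for AES-256; PBKDF2 simply makes brute-forcing the passphrase much slower.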

Opensourcing fSekrit
I've been wanting to do this for a while, it's something that has been planned pretty much from the beginning. I didn't want to release the code before it is "decent enough", though - I'll have to admit that some revisions haven't exactly been top-grade code :)

There's various decisions to be made wrt. opening the source. One of them is license - it's definitely not going to be the horribly yucky GPL. Basically I don't want anybody making money off my work, I want attribution if my code is re-used, and I'd prefer to stay in charge (though this last requirement needn't be enforced in the license). Feedback?

There's also the issue of hosting. Forum and binary downloads probably still fit just fine on donationcoder.com and dcmembers.com, but I'm not sure what to do with the source code. I'm considering SourceForge or GoogleCode, dunno if there's other/better choices. Feedback?

At least initially, I'm going to keep the subversion repository on my own private server, and let people contribute patches if they want. Source code prior to the open-sourced version won't be publicly available. Eventually, it'd be nice to have updates to my own repository mirrored to a public repository; this really screams "move to a DVCS". Feedback?

I might want some bug tracking / feature request system as well... that would probably come with the source hosting. I've used Redmine a bit, and that's the one I've liked best - Trac is apparently nice, but looks a bit unpolished.

16
fSekrit / fSekrit has a new website!
« on: December 13, 2009, 07:07 PM »
I've finally gotten around to setting up a "proper" website for fSekrit; mouser gave me a dcmembers.com account quite a while ago, and it didn't have anything but a half-arsed page for my Notepad++ plugins... until now. Behold f0dder.dcmembers.com, the new and much improved1 site for fSekrit and the Notepad++ plugins :)

When Gothi[c] gets around to fixing it, the current page should redirect smoothly to the new site... until then, this post will hopefully help search engines pick it up. Redirects are up and running, thanks plenty to Gothi[c] :)

In order to not be an entirely self-promoting post, I'm going to ask you guys for feedback. Anything goes - design, content, grammar, splellelling, you name it! Current list of known this-could-be-done-betters:
  • Main page is rather boring :)
  • Space is needed between icons and text in sidebar.
  • Latest forum posts in fSekrit page not implemented yet.
  • Use the donationcoder favicon for the DoCo sidebar link.


#1: thanks to scancode for pointing me to Free CSS Templates, otherwise the new site would still look like crap :P

17
fSekrit / LATEST VERSION: fSekrit 1.40 shrinkwrapped!
« on: December 03, 2009, 03:30 PM »
There, fSekrit 1.40 has been released!

Since the stuff mentioned in the beta period should have been mostly addressed, I've decided that 1.40 is ready as an early 2009 Christmas present. So, what do we have here?


Version 1.40 - December 3, 2009 - 90kb/45.5kb


  • fixed: file->export appends ".txt" instead of ".exe" if no extension given.
  • fixed: long-standing bug where failing to save changes when closing fSekrit with a modified document would cause fSekrit to exit, rather than notifying of the error and letting the user attempt to save again.
  • fixed:  saves are *finally* done properly, by saving to a temporary file and replacing the current file only when all the file writing business is done.
  • added: font selection dialog - no longer do you need to muck around with the registry to set another default font. The font is still not stored in your document, though, and is a single global per-user registry setting.
  • added: "portable" mode, which (for now) means it will not use %TEMP% to store its temporary editor executable, but will instead store it in the same folder as the opened document. The registry is still used for font selection, though! To enable this feature, create a file called "fSekrit.portable" in the same folder as the document you want to run in portable mode.
  • added: URLs are now recognized and turned into hyperlinks.
  • fixed: Read-only notes should be a lot more sane - changed from confusing "make read-only" that half-worked to "Save As Read-only" that works :)
  • fixed: Win9x and NT4 support has been broken since version 1.35. Release builds are now done with an older compiler toolchain, and 9x/NT4 support is back :)


Enjoy! :)

18
fSekrit / Beta: fSekrit 1.40 needs some abuse!
« on: October 07, 2009, 02:41 PM »
After almost two years of inactivity, and being embarrassed by reproducible data loss, I finally kicked my own butt around a bit. This beta needs some heavy testing, since I reworked the file-save logic. It should be a lot more robust now, but because of the changes it definitely needs some beating around.

1.40 BETA5:
  • fixed: Read-only notes should be a lot more sane - changed to "Save As Read-only"
  • fixed: NT4 and Win9x support should be back again.

1.40 BETA4:
  • fixed: font selection dialog initialized to show current font selection.
  • added: recognition and hyperlinking of URLs.

1.40 BETA3:
  • fixed: font selection dialog should work on all Windows versions now (*crosses fingers*)
  • added: "portable mode" - create a file called "fSekrit.portable" in the same folder as your fSekrit document, and %TEMP% won't be used for the temporary-editor-executable.

1.40 BETA2:
  • added: font selection dialog - no longer do you need to muck around with the registry to set another default font. The font is still not stored in your document, though, and is a single global per-user registry setting.

1.40 BETA1:
  • fixed: long-standing bug where failing to save changes when closing fSekrit with a modified document would cause fSekrit to exit, rather than notifying of the error and letting the user attempt to save again.
  • fixed:  saves are *finally* done properly, by saving to a temporary file and replacing the current file only when all the file writing business is done.

19
Living Room / Boys On Wheels [NSFW, NPC]
« on: October 05, 2009, 03:24 PM »
Ah, this is fantastic - haven't had such good laughs in ages. A lot of people will probably find this distasteful and disrespectful... but I'm a big fan of anti-political correctness, and I think it's pretty cool what these guys are doing.

bow.jpg

girls_on_wheels.jpg


...and there's more :)

20
Living Room / I've said it before - they're out to get you!
« on: September 23, 2009, 11:08 AM »
Ninja cat comes closer while not moving!

ninjacat.jpg

I think that "while not moving" should be "while you're not watching", but hey - it doesn't ruin my point... cats are ice-cold psychopathic murderers, plotting to kill us all!

21
General Software Discussion / Dealing with HUGE text files?
« on: August 27, 2009, 01:07 AM »
I've seen a few inquiries here and there from people who need to deal with huge text files - we're talking the multi-gigabyte range here. Usually it's for log files, and people really just need viewing (and grepping) rather than editing, but edit facilities might be necessary every now and then.

Do any of you guys know a text viewer (or, better, editor) that handles such files without hiccups? We're obviously talking something that doesn't use 32-bit variables for file size or line count, and doesn't try to load the entire file at once... I'm pretty sure I bumped into such an editor years ago, but haven't had the need for huge files since.

Freeware would of course be preferred, but any suggestions are welcome. Some people seem to think that you'd need a 64-bit editor to handle such big files, but imho it's perfectly possible to handle with a (smartly programmed) 32-bit editor... if you need to handle really big files, it's folly to try full-memory loading anyway :)
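For the curious, here's a minimal sketch (in Python, purely to illustrate the idea - the function names are my own invention, not from any particular viewer) of how a smart viewer can get random access into a multi-gigabyte file with a 32-bit-friendly amount of memory: one streaming pass to record line-start byte offsets, then seek-on-demand. Memory use scales with line count, not file size:

```python
# Sketch: random access into a huge text file without loading it whole.
import array

def build_line_index(path, chunk_size=1 << 20):
    """One streaming pass: record the byte offset where each line starts."""
    offsets = array.array('q', [0])  # 64-bit offsets; line 0 starts at byte 0
    pos = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            start = 0
            while True:
                nl = chunk.find(b'\n', start)
                if nl == -1:
                    break
                offsets.append(pos + nl + 1)  # next line starts after the \n
                start = nl + 1
            pos += len(chunk)
    return offsets

def read_line(path, offsets, lineno):
    """Fetch a single line by seeking straight to its recorded offset."""
    with open(path, 'rb') as f:
        f.seek(offsets[lineno])
        return f.readline().rstrip(b'\r\n')
```

A real viewer would build the index lazily and probably memory-map the file in windows, but the principle is the same: never slurp the whole thing.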

22
Living Room / 8-bit trip with animated lego bricks
« on: August 24, 2009, 11:25 AM »
Kudos goes to p3lb0x for linking this on facebook. What can I say? It rocks - remember to click HD.

shot-2009-08-24@18.24.28.png

23
General Software Discussion / Win7, disk imaging, vmware
« on: May 12, 2009, 04:47 PM »
Here goes...

I'm (hopefully!) getting hold of a nice SSD tomorrow, which means reinstalling Windows (the main reason for getting an SSD is fast application load times et cetera). While I don't usually like running bleeding-edge OSes, Win7-RC seems pretty nice & stable to me, so I'll give it a go. It's basically either that or Vista, and Win7 does have some neat improvements as well as being ~4GB fresh after install (my vLited Vista on the laptop is ~6GB).

Win7 part
First, does Win7 need its BCD in that little 100MB-or-whatever hidden partition it creates by default, or is it possible to have the BCD on the system partition? If it can go on the system partition, is there any advantage whatsoever to keeping the two partitions separate?

Disk imaging part
Next, I'd like a recommendation for disk imaging software. I used to use Ghost and then TrueImage, but haven't been into system recovery by disk image for some years now (I've used DriveImage XML for the purpose of picking out individual files that I might have forgotten to back up before a format, but that's it). So, what's some good (and preferably free) disk imaging software today that...

1) can handle the partition bootsector and perhaps the MBR (i.e., restore an image to a blank harddisk and then boot from it)
2) can be booted from a DVD or (preferably) USB keyfob
3) has Windows software to extract individual files from its image (not a requirement, but would be nice).

Acronis TrueImage probably fits the bill, but I don't have a license for it anymore, and frankly all the different versions listed on their site leave me damn confused.

VMWare -> hardware part
And, finally - what about creating an image from a VMware virtual machine and transferring it to real hardware? I believe this is possible, but does it require any special trickery, and would there be any downsides to doing this? I have a nicely working vLited Win7-RC setup disc, but it's impossible to disable the pagefile and hibernation on that disc, so that has to be done post-install... and frankly, it's nicer doing setup & testing in a VM before going to real hardware.

24
Living Room / BIOS Level malware attack
« on: March 23, 2009, 04:05 PM »
Uh... oh...

Via slashdot:
shot-2009-03-23@22.02.38.png

I guess the attack would have to be BIOS-specific (for finding a spot to put the malware) and slightly chipset-specific (for flashing the code to the BIOS flashrom), but it's nasty nevertheless... combine this with an SMM exploit and a hypervisor, and you're unremovable (except of course on motherboards where the flashrom chip can be removed from the motherboard - most seem to be soldered directly on, though).

Undetectable is still hard, even with a hypervisor, and I doubt it can be fully done. But you can go very stealthy.

25
I used to think this setting was a decent performance thing, and used to enable it. My impression, based on various faulty wannabe-tech-savvy blogs, was that "Enable write caching" was only the usual filesystem caching, whereas "Enable advanced performance" meant the actual disk write cache (i.e., the on-disk memory buffer, usually 8, 16 or 32 megabytes).

badmojo.png

It's been a while since I learned the true meaning and turned off the setting in shock, and then I didn't think much more about it. But after yesterday's slashdot article Apps That Rely On Ext3's Commit Interval May Lose Data In Ext4 (and especially a lot of the moronic comments, sigh) I remembered this setting again, and thought it would be worth warning about it here at DC.

Basically, what it does is make the Windows API function FlushFileBuffers() do nothing. This API is meant to flush the OS's write cache for a file to disk, and is the only way to guarantee that your data reaches disk in a modern OS. Enabling the setting probably won't positively affect a lot of software (since not a lot of software actually calls FlushFileBuffers()), but for things like databases this is crucial to ensure data consistency across crashes.
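To make the guarantee concrete, here's a tiny Python sketch of the usual write-then-flush-then-fsync dance (durable_write is a name I made up for illustration). Python's os.fsync() on Windows ends up calling the CRT's _commit(), which in turn calls FlushFileBuffers() - exactly the call this setting turns into a no-op:

```python
import os

def durable_write(path, data):
    """Write data so it should survive a crash right after this returns."""
    with open(path, 'wb') as f:
        f.write(data)          # data sits in the process's userspace buffer
        f.flush()              # ...now it's in the OS write cache
        os.fsync(f.fileno())   # ...ask the OS to push it all the way to disk
    # With "advanced performance" enabled, that last step silently does
    # nothing, so the data can still be lost on a power failure.
```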

Read the technet link for a more in-depth description, and go turn that checkmark off if you have it enabled :)

Pages: [1] 2 3 4 5next