
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - slowmaker

Pages: [1] 2 next
Living Room / 1956 Autohotkey ancestor
« on: November 06, 2011, 02:36 AM »
Currently browsing the 1950s Popular Science back issues, I ran across this:

You'll have to scroll down a little to see it, it's the bottom right corner of page 278, the page the link above *should* take you to.

Just thought it was interesting....

On a side note, my addiction to reading PopSci and PopMech back issues over the last couple of years has led to a "projects I wanna do" clipping/notes set that is essentially impossible to ever, ever Ever EVER complete (barring science-fiction-grade life extension discoveries).

Which is a good thing.

DC Member Programs and Projects / sstoggle2
« on: February 06, 2011, 01:47 AM »
A long, long time ago (2002 feels like a long time ago, anyway), I wrote a little tray app called Screen Saver Toggle. It worked well enough, but time revealed some issues which, with the passage of a great deal more time, I got around to addressing. And here 'tis.

It does what you would expect from the name, blocks/unblocks the screensaver; it was originally unique in that it was the only utility like this (that I knew of) that did the job simply and quickly, one click and you were done. Nowadays there are several similar apps that are just as quick and easy. Nevertheless, I'm still fond of the ol' app, so I updated it.

It has retained the 'simple and quick' characteristics, and done them one better: mouse-o-phobic users can call it from scripts now to toggle it or force the screensaver, or whatever, which means it can be tied to a hotkey, or made part of a command file sequence of events.

I very often print out ebooks so I can read them on paper. I have them available on the computer, but I READ them in print. It's much easier.

I'm a wee bit late to this thread, but I'm glad to see someone besides me does that.

Coincidentally, I calculated laser printer ink and acid-free paper costs just the other day to show my wife that cost per page made it actually a cost effective way to get a hard copy book in many cases, especially if the digital copy is in a form that allows you to reflow the text (taking advantage of the larger 8.5 x 11 pages vs 'normal' book page size).
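
The arithmetic behind that comparison is simple enough to sketch (the toner yield, ream price, and words-per-page figures below are invented for illustration, not the numbers from my actual calculation):

```python
def cost_per_page(cartridge_price, pages_per_cartridge, price_per_ream):
    """Rough per-page cost of laser printing: toner share plus paper share."""
    toner = cartridge_price / pages_per_cartridge
    paper = price_per_ream / 500  # a ream is 500 sheets
    return toner + paper

def pages_needed(word_count, words_per_page):
    """Pages for a reflowed ebook; 8.5 x 11 holds more words than a paperback page."""
    return -(-word_count // words_per_page)  # ceiling division

# Invented figures: a $60 cartridge rated for 2,000 pages, a $6 ream,
# and a 90,000-word book reflowed at 600 words per page.
per_page = cost_per_page(60.0, 2000, 6.0)
total_cost = per_page * pages_needed(90_000, 600)
```

The reflow is where the savings come from: the more words you fit on each 8.5 x 11 sheet, the fewer toner-and-paper pages you pay for.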

Of course, you need to be sure of copyright issues, but when it's allowed, it's pretty handy.

Thanks for all the advice, guys. I didn't see wraith and mouser's posts before I talked to the guy, but it worked out anyway.  

I made it clear exactly what I was willing to do (i.e. absolutely nothing beyond the simple set-up originally described), and gave the guy the number of a local full-time web guy when he started talking about "getting lots of hits."

SEO wasn't in the original description, so it gave me the excuse to pass it on this way (didn't want the job in the first place, was obligated to meet with the guy for family-politics reasons). Er, you may wonder why I didn't just ask the full time guy for a price in the first place; fact is, it just felt kind of rude. You know, "Excuse me, sir, what price should I charge to undercut you in obtaining some local business?" Wouldn't matter how I phrased it, the guy would know the intent. Or at least that's what I always imagine in situations like that.

Thanks again; the info in this thread now gives me a great starting point to work from if I should get cornered again!

An acquaintance of an acquaintance of a friend is asking if I will set up their web site for them, and I realized I have no idea what to charge for an old fashioned setup-the-domain-and-stick-a-few-static-pages-on-there-for-them site. If I did web dev in general, I guess I would be willing to just charge some reasonable fraction of the more involved work rates, but the fact is I don't normally do web dev (for money) at all, so I don't really know where to begin guessing.

There will be no ongoing maintenance, no shopping cart (or any other) scripting to deal with. Design effort on my part will be minimal, if any (will probably consist mostly of convincing them not to use some eye-watering background image they think is beautiful). I'm estimating that the total outlay of time on my part won't be over two or three hours, if that long.

I have no particular desire to give them a great deal - it's not a buddy-buddy thing - but I don't want to inadvertently rip them off, either.

Any suggestions for a fair price ballpark?

My apologies for the slow reply; I've been caught up in not-so-fun stuff for a while, but I'm beginning to regain some spare time now.

And, yes, I've thought about the environment variables angle; I've  considered adding that to Backend eventually, but I'm not sure yet whether it is actually necessary.

For instance, that capability is technically available in the sense that the command  line passed to the system via Backend can call batch files (which is how I typically use it), and of course those batch files can use environment variables as much as they like.

On the other hand, I've got to dive back into the source for Backend anyway to fix a glitch I noticed in how it handles undefined extensions, so I may play with adding the environment vars at that time.

So, the summary answer is that you can sort-of do it already, and I may add a more direct handling of that sort of thing in future.

I was lamenting on the irc channel and on a recent post that with video games becoming so cinematic these days, some of us would much more enjoy just watching someone play a video game like a movie, rather than actually playing it ourselves; especially with so little free time.

There is a pretty avid community of "Let's Play"-ers, who focus not so much on pure walkthrough as on general commentary, sort of like hanging out while an acquaintance plays and talks; sometimes they'll play a game they think is stupid just so everyone can have some MST3K-style fun. YouTube has loads of 'em.

Sample link: Katrinonus, a lot of "Let's Play" stuff mixed in with voice acting stuff.

Living Room / an abbreviation system pitfall
« on: March 04, 2010, 02:55 PM »
(this is a slightly modified version of an article I put on my web site today; I thought some of the DC folks might find it interesting)
Some Things To Consider When Looking For An Abbreviation/Shorthand System

I've recently had reason to look into shorthand and abbreviation systems purporting to be suitable for American English, and decided to present here an aspect of those systems that I found interesting.

Shorthand systems can be a bit too enthusiastic in their claims of how much effort you will save. From a Rap Lin Rie (Dutton Speedwords-based) site:

"Learn these 10 shorthand words and increase your longhand writing speed by 25%.
the = l; of = d; and = &; to = a; in = i; a = u; that = k; is = e; was = y; he = s"

Another Rap Lin Rie page by the same person adds the following two words (still claiming 25% speed gain for the (now) 12-word list):
for = f, it = t.
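
For the curious, here is a quick sketch of applying those single-letter substitutions mechanically (my own illustration in Python, not an official Rap Lin Rie tool):

```python
import re

# The 12 single-letter codes quoted above.
SPEEDWORDS = {"the": "l", "of": "d", "and": "&", "to": "a", "in": "i",
              "a": "u", "that": "k", "is": "e", "was": "y", "he": "s",
              "for": "f", "it": "t"}

def abbreviate(text):
    """Replace whole words (case-insensitively) with their single-letter codes."""
    return re.sub(r"[A-Za-z]+",
                  lambda m: SPEEDWORDS.get(m.group(0).lower(), m.group(0)),
                  text)

def percent_saved(text):
    """Character savings from the substitution, as a percentage."""
    return 100.0 * (len(text) - len(abbreviate(text))) / len(text)
```

On a sentence like "the cat was in the hat" this yields "l cat y i l hat"; the savings on any real text depend entirely on how often those twelve words actually occur, which is exactly the question examined below.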

So, we have the following list of words, reduced to single-letter codes, that are claimed to increase writing speed by 25%:

the, of, and, to, in, a, that, is, was, he, for, it
This claim is not true, as will be shown later in this piece.

I should note that I am not anti-Dutton Speedwords. On the contrary, my wife and I are currently discussing the possibility of adopting Dutton Speedwords as a family 'private language', both for the fun of it and also for the brevity of speech it genuinely does appear to allow (but mostly for the Fun bit :)). Rap Lin Rie just serves as a good representative of the type of claims such systems tend to make.

The generally available word frequency lists can be misleading also, in the sense that would-be shorthand/abbreviation system writers can interpret these lists incorrectly in a couple of different areas. I should stress here that the various collections I am about to mention do not make any claims as to suitability for abbreviation systems. It is others, e.g. amateurs such as myself, who grab the top off these frequency lists and run with them too hastily.

According to a file hosted by the Shimizu English School, using the British National Corpus, these top frequency words account for 25% of written and spoken English:

to (infinitive-mkr)
to (prep)

These may well account for 25% of the word count, but a (typable) shorthand system should be concerned with the letters typed percentage, which can tell a different story.

Note that the various slants on the British National Corpus (spoken only, written only, sorted different ways) are available here. However, none of these lists are quite the thing an abbreviation-oriented shorthand system should be directly based on. You have to hack them up a bit, as we will see shortly.

From the Brown corpus (1960's written American English):

the      6.8872%
of       3.5839%
and      2.8401%
to       2.5744%
a        2.2996%
in       2.1010%
that     1.0428%
is       0.9943%
was      0.9661%
He       0.9392%
TOTAL   24.2286%

Again, there is the word count problem, but the Brown corpus actually has two more strikes against it:

1) It is a bit old.
2) It is based on written material, no spoken word stats at all.

Paul Noll, in his 'Clear English' pages, states that he/they created their own, more modern American English corpus by "taking forty newspapers and magazines and simply sorting the words and then counting the frequency of the words."

The time frame is somewhere between 1989 and 2005, since he mentions that "...the names Paul and George are on the list. This is no doubt due to Pope Paul and President George Bush at the time the list was compiled." You could presumably pin it down further if you wanted to contact the Nolls and ask which President Bush they were referring to, but that time frame is small enough for me (the language hasn't changed that much in 16 years).
Any road, their 'top ten' list follows:

the, of, and, a, to, in, is, you, that, it
This list has no stats and makes no claims other than the frequency order. It is at least modern and American English, but it still has the problem of being written, not spoken word oriented.

Now, let's take these lists and look at them from what I believe to be a more practical viewpoint. I am operating on the assumption that an abbreviation system should be oriented toward note-taking and audio transcription. There are other uses, certainly, but they often have the luxury of waiting to choose alternatives from popup lists (medical notes transcription aids), and that is not something that makes sense for someone trying to keep a smooth, fast flow of audio transcription going.

I believe this is a fair assumption also because many home-grown shorthand systems promote themselves as being great for exactly the situations mentioned above, especially note-taking (in lectures, for instance).

So, spoken word frequency makes sense for modern usage. That part is easy enough to see, I think.

Actual saved-letter-count, however, is not addressed in any of the shorthand/abbreviation systems I have seen. What I mean is that many systems seem to act as if adopting a shortcut for a particular word somehow eliminates all effort/time involved in writing that word. When you substitute 't' for 'the', for instance, you do not magically save three letters for every instance of 'the'. You save two letters for every instance of 'the'. That is obvious enough (once it's pointed out), but I believe it to be one source of misconceptions regarding the true savings provided by a given abbreviation system. The less obvious bit is that less-frequent words may actually realize greater letter-count savings, so 'the' may not really be the number-one choice for greatest potential effort savings (and in fact, it is not, at least in spoken American English).

The other source of misconceptions is less important; it's just the tendency to react (emotionally?) to large words out of proportion to their actual frequency, even taking letter-count-savings into account. By this I mean that someone might put a lot of effort into creating abbreviations for a set of long words they hate typing, even though the stats show that those words simply don't show up often enough to be worth it. However, I will admit that the emotional factor does matter on an individual basis; if it feels like you've saved a lot of effort by creating a shortcut for 'somereallylongwordthatdoesnotactuallyshowupalot', then it may be worthwhile just for the psychosomatic benefit (or for the confidence in spelling). That said, that sort of thing should not be allowed to shape the core of the system; it should just be something individuals tack on for themselves.

So, let's look at the previously mentioned lists from the standpoint of American English, spoken word only, with letters-saved-counts based on single letter abbreviations.

My source for this is the American National Corpus frequency data, Release 2, which contained 3,862,172 words (more than 14,276,929 characters). An Excel spreadsheet is attached to this article; the spreadsheet shows the sorting and calculation formulas I applied to the original ANC data.

Percent Savings means 'how much less typing/writing will I have to do if I substitute a single letter for this' in the sense of how many fewer characters, not how many fewer words (your fingers/wrists don't care about words, they care about keypresses/pen strokes).
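
As a sketch of that calculation (the corpus numbers here are toy values, not the ANC data, though they are chosen to land on the same 1.71% the tables report for 'the'):

```python
def percent_savings(word, occurrences, total_chars):
    """Characters no longer typed when `word` is replaced by a single letter,
    expressed as a percentage of all characters in the corpus."""
    letters_saved_per_use = len(word) - 1   # 'the' -> 't' saves 2 letters, not 3
    return 100.0 * occurrences * letters_saved_per_use / total_chars

# Toy corpus: 1,000,000 characters in which 'the' occurs 8,550 times.
savings_the = percent_savings("the", 8_550, 1_000_000)
# A one-letter word has nothing to save, no matter how frequent it is.
savings_a = percent_savings("a", 50_000, 1_000_000)
```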

Dutton   Percent Savings
======   ===============
the           1.71
of            0.46
and           1.78
to            0.59
in            0.34
a             0.00
that          2.17
is            0.24
was           0.45
he            0.11
for           0.33
it            0.69

8.87% is certainly a respectable figure, well worth memorizing a simple system of 11 single-letter codes (I'm ignoring the 'a' for obvious reasons). However, it means that someone who genuinely expects to get 25% speed increase is bound to be greatly disappointed; the real speed increase is likely to be consonant with the savings in effort, i.e. about a third of the claimed speed increase.

(ANC Spoken, before I re-sorted by letter savings)
ANC     Percent Savings
===     ===============
i            0.00
and          1.78
the          1.71
you          1.47
it           0.69
to           0.59
a            0.00
uh           0.49
of           0.46
yeah         1.07

8.26% is not as much as one might expect from a system that takes into account even some of the grunts (for verbatim transcriptionists), is it? Yet, that is what I would get if I naively just swiped the top ten.

BNC     Percent Savings
===     ===============
the          1.71
be           0.12
of           0.46
and          1.78
a            0.00
in           0.34
to           0.59
have         0.75
it           0.69

6.44%; if you based your single-letter abbreviation system on these words, you would get some disappointing results in terms of effort saved.

Brown    Percent Savings
=====    ===============
the           1.71
of            0.46
and           1.78
to            0.59
a             0.00
in            0.34
that          2.17
is            0.24
was           0.45
He            0.11

7.85% is a far cry from the 24.23% an initial (knee-jerk) viewing of their stats would suggest, isn't it?

Noll    Percent Savings
====    ===============
the          1.71
of           0.46
and          1.78
a            0.00
to           0.59
in           0.34
is           0.24
you          1.47
that         2.17
it           0.69

9.45% is the highest yet, and a little surprising to me; the implication appears to be that standard American newspaper vocabulary matches spoken word frequency better than any of the other lists.

Again, I know these lists (except Dutton) were not calculated for 'effort saving', but that's my point; a naive usage of the frequency tables available to us can create unrealistic expectations.

Now, let's look at what happens if you sort by the actual typed-letter savings:

ANC (spoken)  Percent Savings
============  ===============
that               2.17
and                1.78
the                1.71
you                1.47
know               1.10
yeah               1.07
they               1.01
have               0.75
it                 0.69
there              0.66

12.42%, a definite winner for American English transcription purposes. Presumably, the same sort of somewhat counter-intuitive results would be obtained for U.K. English by re-sorting the British National Corpus lists the same way.
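
The re-sort itself is trivial once each word is weighted by letters saved rather than by raw count; a sketch with invented counts (not the real ANC figures):

```python
def top_by_letter_savings(freq, n=10):
    """Rank words by total characters saved under a one-letter substitution:
    count * (len(word) - 1), rather than by raw count alone."""
    return sorted(freq, key=lambda w: freq[w] * (len(w) - 1), reverse=True)[:n]

# Invented counts: 'i' and 'a' lead on raw frequency but save nothing,
# while longer words like 'that' jump to the top once length is weighed in.
freq = {"i": 500, "a": 450, "the": 400, "that": 300, "know": 150, "it": 200}
ranking = top_by_letter_savings(freq, 4)
```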

However, what if you aren't doing actual transcription, you're just taking notes in lectures, and therefore you don't have to take down every 'you know', 'yeah', 'and', 'to', etc.? Well, then, your list (indeed, all of the above lists) will look different, and you will get to include the 'magical' saving of a full three letters for 'the', two letters for 'to', and so forth. However, the typed-letter savings sorting strategy still applies, it's just that you have to move further down the sorted list to pick out the words you will bother to abbreviate instead of skipping entirely.

A full 24-letter code table using the typed-letter-savings-count approach would offer you a 20% savings in actual effort, not just superficial word count, and that is definitely nothing to sneeze at. So I am currently working on exactly that; a simple 24-letter code, no funky rules, just a straight substitution code that will theoretically save 20% of writing/typing effort.

I'm leaning toward doing both a transcription and a note-taking version. It seems to me that the number of common words that get dropped entirely in note-taking would necessitate a drastically different abbreviation set.

Notes on the ANC data in the spreadsheet:

1) I aggregated some of the data; the original lists words separately depending on the instances of a given part of speech usage, which is irrelevant to my purposes. However, I did not aggregate all duplicates of all words, so be careful if you try to use automated tools on this data set; it's not consistent in that respect.

2) Some of the entries are obviously not things you want for a single letter code system, endings of contractions and so forth, but they remain because their *letter count* still matters for the calculation of overall typing effort.
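
For anyone who wants to redo the aggregation in note 1 themselves, it amounts to summing counts across the part-of-speech rows (a sketch; the real ANC file's column layout may differ):

```python
from collections import Counter

def aggregate(rows):
    """Collapse (word, part_of_speech, count) rows into per-word totals."""
    totals = Counter()
    for word, _pos, count in rows:
        totals[word.lower()] += count
    return dict(totals)

# 'to' appears once as infinitive marker and once as preposition in the raw data.
rows = [("to", "infinitive-mkr", 120), ("to", "prep", 80), ("the", "det", 300)]
totals = aggregate(rows)
```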

Attached is the ANC spoken word data in a spreadsheet (compressed to 1M with 7zip, expands to ~13M). The spreadsheet is large enough to crash my copy of Open Office Calc repeatedly (about every fifth operation), so I had to use Excel. Sometimes, Evil sucks less...

Living Room / Re: What books are you reading?
« on: March 02, 2010, 08:35 PM »
it takes away the veal that may be covering what is already in front of you.

I'm pretty sure you meant 'veil'; the other way would be mighty gross, I suspect. :D

On the other hand, walking through curtains of veal might be interesting if you're feeling peckish...

Certainly seems like the logical way to do it, but it's much easier said than done.

I have tried writing a list of what needs to be done, that was an epic fail, then I tried doing 1 chapter at once, kept getting distracted, and I have tried countless other things.

Two suggestions, both of which may fall under the heading of the countless other things you already tried, but here goes:

1) random dictionary attack: open the dictionary to a random point with your eyes closed, stab your finger down somewhere, then open your eyes and do your best to make that word apply somehow to the current page of your novel, no matter how absurd the connection may seem.

2) what would.. attack: ask yourself, as if you were talking to another person, what a non-blocked writer would be doing at that point in the story. If the answer is "I dunno" even after a few minutes, ask yourself what someone who did know would be doing. If the answer is "this is stupid", ask yourself what someone who didn't think it was stupid would be doing, etc.

Both a little off the wall, but both have worked for me in other problem areas, so you might want to try.

Living Room / Re: looking for a title (short story)
« on: February 21, 2010, 10:13 AM »
Man, you guys don't dilly-dally around, do you? Thank you, thank you thank you!  :Thmbsup:

Living Room / looking for a title (short story)
« on: February 21, 2010, 03:04 AM »
Housetier had such good luck with the request for help remembering a movie title, I thought I'd try with a short story I read years ago and forgot the title and author of. I actually found the author and the story mentioned years later in an article on writing, and this was after I had been trying like hell to remember the story name, and then I went and lost track of the freaking writing article as well. :wallbash:

The story, science fiction, was written as the diary of a super-genius young girl as she goes through some time period after some sort of clean-bomb attack (clean as in killed everyone who wasn't in a bomb shelter but did not leave radiation or physical damage to the world above, so maybe bioweapon?). She had a pet bird (a mynah bird, I think) and is skilled in karate. She learned Pitman shorthand in the space of about three days in order to conserve paper in her diary.

She learns/deduces that her father was part of a team developing/observing/experimenting with a crop of super-intelligent kids popping up around the country/world. She also finds out that she is supposedly super-smart even compared to the other super-smart kids (I remember a line something like: 'Great. Even among other geniuses, can't be normal'). She was something of an anomaly in that she was raised normally for a number of years because her father didn't realize his own daughter was one of the new crop at first, then she was subjected to what we would now call subliminal encouragement (advanced books that just happen to be lying around shortly after an overheard conversation about the same subject, stuff like that, I think; that part is especially fuzzy).

The narrator (the girl) was a really likable, funny kid, or at least I thought so at the time.
Let's see, what else; she could see in the infra-red range (looked at a wall with electrical problems once and commented 'looks mighty hot' but didn't know others couldn't see heat that way). One of the 'sneaky education' vectors in her life was an elderly Asian man who moved into town. He was the source of the karate training. He actually was one of the scientists like her father, working on the super-kid situation. I think he helped her get books on the sly that her father forbade (father was actually in on it, it was one of those how-do-you-lead-a-pig things).

She recalls 'exercising wings' with her mynah bird while still young enough not to realize they were different species? Father walked in while they were balancing on the rail of her crib?

I think she thinks she has located another potential super-kid survivor location in the end, but it is left up in the air as to whether she is right.

That's it; no more fragments of that story are coming up in my brain.

Any ideas?

Living Room / Re: What books are you reading?
« on: February 21, 2010, 02:21 AM »
Currently reading the advance uncorrected proofs of 'Are the Rich Necessary?', by Hunter Lewis (found in a thrift store some time ago, so they are no longer 'advance' at all).
It's okay; I have not read in the economics field before, so I can't compare it to other works, but it has convinced me that some things I previously considered self-evident might not be true at all. It has also convinced me that economics is a seriously vague and muddled subject.

I had begun sporadically working through Ronald Mak's first go at 'Writing Compilers and Interpreters' (quite old), then got sidetracked by the discovery of this site and beginning to work through the Basic section of the DC Programming School. I'll probably get back to that soon.

The last remaining book in my current to-read stack at home is 'Godel, Escher, Bach: An Eternal Golden Braid'. I started it once before, because it just felt like one of those books I should be ashamed of not having read if I want to keep my nerd cred, but put it back down (don't remember why). Hopefully I'll get to it soon as well...

One of the old collections of Van Vogt (SF writer from way back, once famous for writing 'Slan'). I'm away from home right now, so I can't look in my read-and-going-to-pass-it-on box to check the title, but I think it was either 'Destination: Universe!' or 'The Book of Van Vogt'.

On the somewhat newer (by my standards) front, I thoroughly enjoyed 'The Cabinet of Curiosities' by Douglas Preston and Lincoln Child. The Aloysius X. L. Pendergast character in particular is the kind I enjoy, i.e. hyper-intelligent and eccentric.

On a side note, I have to admit to being impressed by some of the titles mentioned in this thread; Darwin and Mouser in particular appear to be reading at a level that makes my selections look like 'Thinner thighs in Thirty Days'.  :Thmbsup:

Living Room / Re: People are really (really, really) stupid
« on: February 21, 2010, 01:30 AM »
You know, I've said it before and I will say it again. We need to remove all safety warnings from every product and this stupidity problem will solve itself. People will LEARN to survive and do what needs to be done. Darwinism FTW!
The gene pool could sure use some chlorine :)

Unfortunately, evolutionary forces can work the other way as well; look up the classic SF author C.M. Kornbluth and his Marching Morons short story sometime. ;D

(I think there were two: The Marching Morons and The Little Black Bag)

I just finished a little helper app called Backend, which allows me to try different text editors while keeping my beloved auto-handling (i.e. runs the appropriate compiler or interpreter depending on the file extension of the currently edited doc). Some editors have this, some don't, some have it but make it really, really painful to use; Backend pretty much fixes that.

If you are a fickle text editor user like I (periodically) am, you might save yourself a bit of aggravation with this.

I use PSPad, quirky though it is, because I have yet to find anything short of an IDE with as much all around power that I can still understand and use relatively quickly; the vim and emacs editors seem impressively powerful but too alien to just hop right into in the middle of a project.

I've only just begun investigating Notepad++, but it seems--at first glance--to be a bit more limited than PSPad in some respects, while remaining just as quirky (but aren't all freeware text editors that way?). For instance, and maybe f0dder can clear this one up for me, I haven't yet seen a way for the same hotkey to trigger different actions depending on the current file extension (i.e. F5 to run different compiler commands for .c files versus .bas). Macros also don't seem to be directly editable (I did just discover I could search for them in Notepad++'s xml files -- with PSPad's search in file feature :) -- and edit the macro manually there). Having said all that, I need to stress that I don't know nearly enough about Notepad++ to do a full apples-to-apples comparison.

Has anyone given NoteTab Light a thorough try-out? I haven't used it much, but it claims to have RegEx support, including RegEx for 'find in files' type searching. It has (or had, last time I tinkered with it) an extensive internal macro language and a lot of text processing stuff. I use it for a more powerful Notepad replacement because it loads fast enough to prevent impatience (whereas PSPad, Notepad++, etc load *just* slow enough to annoy me for small, quickie edits).

Can 'favorite' apply even if I don't still program in the language? If so, Euphoria would definitely be my answer. I don't actually code in it anymore, but it was my go-to language for essentially everything for many years, and still would be if it weren't for a handful of things that really have little to do with the core language.

Clean syntax, interpreter ease of development, translate to C compilation for faster run, custom type-checking routines, error messages that actually make sense, that lovely sequence data type, mmmmm :-*. Euphoria was one of the few languages I felt I could actually 'think' in, it fit me so well.

If the 'favorite' label isn't allowed for a language I don't use anymore, then I guess PureBasic would be my current fav. My feelings about it are kind of like mouser's for C++, though; I like it a lot, but it's still just the thing that meets my goals better than anything else at the moment. I'm still on the lookout for something that will really knock my socks off the way Euphoria did years ago, but no luck so far... :(

DC Member Programs and Projects / Re: unlockgo
« on: January 19, 2010, 06:07 PM »
Very nice.  What language did you write that in?

PureBasic. It's quirky, but pretty good. My current preferred coding language.

I won't go on and on here, but if you want to hear me gush a bit about the language, I did a (very biased) write-up on it on my site.

For those who are interested, I went ahead with this, over here.

The source is included in the archive.

DC Member Programs and Projects / unlockgo
« on: January 16, 2010, 03:45 PM »
A little utility to run custom commands at every Windows session unlock.

Idea came from the question by hhbuesch over in this thread.

It seemed like a neat idea and I wanted to see if I could do it; it turned out I could, so here it is.

hhbuesch, this gave me an idea I started coding, not script specific, just:

run_on_unlock.exe <whatever command line you want here>

and then whenever the user comes back in from a session lock, the system will run <whatever command line you want here> verbatim, as if you'd typed it in a console session.

I don't want to step on the toes of whatever project prompted your question. Do you mind if I post a program like that elsewhere on the board?

General Software Discussion / Re: Keyword Generator/Manager
« on: January 11, 2010, 01:44 AM »
Maybe concordance programs?

Do you mean session unlocks, as in fast user switching (winkey+L) out, then logging back in?

If so, your script's caller needs to request notifications of session state changes and process them.

The only way I know to do that is win api coding:

    1) call WTSRegisterSessionNotification()
    2) catch #WM_WTSSESSION_CHANGE message and check for #WTS_SESSION_UNLOCK flag
                    * if the #WTS_SESSION_UNLOCK flag is detected, the script should be called
    3) call WTSUnRegisterSessionNotification() before it exits

I can tell you a pro for #1 and #2, with a slight modification; I moved essentially my whole pc 'life' to my flash drive a while ago and I've been extremely pleased with it. The only things on the main hdd are the things I absolutely can't find any other way to install.

The slight modification is the fact that I cheated: I created a logical drive on my hdd that I keep synced with my flash drive, so I get the snappier response of my hdd when working at home (and I don't wear out the poor flash drive so quickly; I normally really hammer those things into the ground). When I work on stuff elsewhere (often) I just make sure the flash drive is synced and I'm good to go.

#3 sounds great, but I can't speak from experience there. Do those sdhc cards have any access speed issues?

So the applications *stay* maximized and at the position you left them at? They don't follow the scrolling around, or get reverted to non-maximized-but-same-size-as-when-maximized?

Err, maybe? I'm not sure I understand your phrasing, but I'll try to answer:
apps *stay* maximized in the sense that they retain their maximized *size*

they stay at the position you left them at in the same sense as other windows stay at the position you left them at

I think the above may have answered the second sentence of your post as well, or maybe not; as I said, I'm not confident I understood the question.

All I can tell you is that the times I've tried it, *all* windows - maximized or not - slid on/off the screen as you would intuitively expect them to in response to the appropriate mouse movements.

The only things that did not go off/on screen were the taskbar and desktop icons, and presumably any windows you add to the exclusion list.

That's about it for my knowledge of this program. It has a downloadable demo that might answer your questions more clearly.
