
Main Area and Open Discussion > General Software Discussion

10/GUI


JennyB:
Keyboards are actually pretty good at making multiway selections. They are superior to touchscreens because the usual choices can be made by muscle memory alone, without having to look.
-JennyB (October 15, 2009, 01:43 PM)
--- End quote ---

Excellent point. I tend to like hotkeys myself.

The only problem comes when you start supporting multiple applications with widely varying sets of controls and features. You'll see this mostly in music, media, and graphics applications. -40hz (October 15, 2009, 03:24 PM)
--- End quote ---

Here's an interesting video of what was possible in that direction nearly 15 years ago, and a sensible use of multi-touch support. In either case, the main point is to move away from the mouse having to select both the action and its object.
 
Eventually you run out of logical key combinations for all the tasks you want to have a key for. Once that happens, you're forced to use arbitrary and non-intuitive ones. A good example is V for PASTE or W for CLOSE in most apps.

--- End quote ---

Yes, that's one of the reasons I gave up using Dvorak!  >:(

That's why I'm proposing that keycodes not be hardwired to particular keys or gestures. 

The idea comes from ColorForth, which uses only 27 keys of a normal keyboard: have a continually updated display of key assignments (not on the keycaps - you should not be looking there anyway) that is generated from lists of currently available commands. The user only sees the command label, not the keycode that it returns. Think of it like a link in the Help index.

Initially, with a new program, no keys are assigned - or perhaps the input device auto-assigns those with familiar labels. Commands are selected by picking with a mouse, or perhaps in FARR style.  When you learn a command and know you are going to use it often, drag a copy off the list and place it on a spot associated with a particular key.
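A minimal Python sketch of that scheme, just to make the idea concrete (the class name, commands, and key choices here are all invented for illustration):

```python
# Keys carry no fixed meaning; a live on-screen legend maps each key to
# whatever command label the user has assigned to it.

class SoftKeyboard:
    def __init__(self, available_commands):
        # Commands the current program exposes, shown as a pickable list.
        self.available = dict(available_commands)   # label -> action
        self.assignments = {}                       # physical key -> label

    def assign(self, key, label):
        """Drag a command off the list and drop it on a particular key."""
        if label not in self.available:
            raise KeyError(f"unknown command: {label}")
        self.assignments[key] = label

    def press(self, key):
        """Run whatever command is currently bound to this key."""
        label = self.assignments.get(key)
        if label is None:
            return None          # unassigned keys do nothing
        return self.available[label]()

    def display(self):
        """The continually updated on-screen legend (not the keycaps)."""
        return dict(sorted(self.assignments.items()))

kb = SoftKeyboard({"Paste": lambda: "pasted", "Close": lambda: "closed"})
kb.assign("f", "Paste")          # the user's choice, not a hardwired keycode
print(kb.press("f"))             # -> pasted
print(kb.display())              # -> {'f': 'Paste'}
```

Note that the user never sees or cares which keycode "f" generates; only the label "Paste" is visible in the legend.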

40hz:
the main point is to move away from the mouse having to select both the action and its object.
-JennyB (October 17, 2009, 04:54 PM)
--- End quote ---

Wow...

For years there's been something about mouse usage that really bothered me, but I could never quite put my finger on what it was. You just did. Thank you!  :up:


That's why I'm proposing that keycodes not be hardwired to particular keys or gestures.  

The idea comes from ColorForth, which uses only 27 keys of a normal keyboard: have a continually updated display of key assignments (not on the keycaps - you should not be looking there anyway) that is generated from lists of currently available commands. The user only sees the command label, not the keycode that it returns. Think of it like a link in the Help index.

Initially, with a new program, no keys are assigned - or perhaps the input device auto-assigns those with familiar labels. Commands are selected by picking with a mouse, or perhaps in FARR style.  When you learn a command and know you are going to use it often, drag a copy off the list and place it on a spot associated with a particular key.

-JennyB (October 17, 2009, 04:54 PM)
--- End quote ---

That sounds something like IBM's old 'dynamic key system' back in the days of minicomputers. There was a set of "command tabs" which appeared across the bottom of the screen that corresponded visually to a double row of function keys at the top of the keyboard.

The screen labels (and related keys) would reassign themselves depending not only on what application was running but, more importantly, on what you were doing in the app itself. For example, entering an edit mode would display a group of edit functions. Switching over to a data entry mode would reassign the keys to other functions. Certain keys had standardized assignments, however. If I recall correctly, on the bottom row the first key on the left was always HELP, the second key was NEXT, and the third was PREVIOUS. Programmers weren't required to adhere to the 'standard key' mapping conventions, but everybody did, so it was never an issue.

IBM's idea was to have the minimum number of keys active at any time in order to avoid operator confusion and minimize opportunities for keystroke errors.
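Something like this, sketched in Python (the modes and mode-specific labels below are invented; only the three standardized keys come from the description above):

```python
# IBM-style soft keys: labels reassign per mode, but a few positions
# are standardized and always present.

STANDARD = {"F1": "HELP", "F2": "NEXT", "F3": "PREVIOUS"}

MODE_KEYS = {
    "edit":  {"F4": "CUT", "F5": "COPY", "F6": "INSERT LINE"},
    "entry": {"F4": "SAVE RECORD", "F5": "CLEAR FIELD"},
}

def key_legend(mode):
    """Command tabs shown along the bottom of the screen for this mode."""
    legend = dict(STANDARD)              # standardized keys always present
    legend.update(MODE_KEYS.get(mode, {}))
    return legend

print(key_legend("edit")["F5"])    # -> COPY
print(key_legend("entry")["F5"])   # -> CLEAR FIELD
print(key_legend("entry")["F1"])   # -> HELP (standardized in every mode)
```

Only the keys in the current legend are active, which is exactly the "minimum number of keys at any time" idea.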

Also interesting was how this method was incorporated into their security model. Any function or selection the user wasn't authorized to make simply didn't appear in the available keys. So if you were running an accounting app, you only saw what you needed to do your job. Things you weren't authorized to do simply didn't exist when you logged in.

They used the term "obscured functions" to describe this feature. IMHO it was a far better method than 'greying out' unauthorized selections - or even worse, allowing you to do anything, but screaming at you when you try to select something you're not supposed to, like most systems do today.
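A toy Python sketch of that filtering idea (the roles and command names are made up):

```python
# "Obscured functions": commands a user isn't authorized to run never
# appear at all, instead of being greyed out or rejected after the fact.

ALL_COMMANDS = {
    "POST INVOICE": {"clerk", "manager"},
    "APPROVE PAYMENT": {"manager"},
    "CLOSE PERIOD": {"manager"},
}

def visible_commands(role):
    """Only what this user may do exists in their key list at all."""
    return sorted(cmd for cmd, roles in ALL_COMMANDS.items() if role in roles)

print(visible_commands("clerk"))    # -> ['POST INVOICE']
print(visible_commands("manager"))  # -> ['APPROVE PAYMENT', 'CLOSE PERIOD', 'POST INVOICE']
```

From the clerk's point of view, APPROVE PAYMENT simply doesn't exist; there is nothing to grey out and nothing to scream about.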

Was IBM's dynamic model somewhat similar to what you had in mind?

JennyB:

Was IBM's dynamic model somewhat similar to what you had in mind?

-40hz (October 18, 2009, 08:05 AM)
--- End quote ---

From all I know of it (which is just what you have written above) it is very similar, because my idea seems to imply an "every app is a server app" model. It goes like this:

The current list of available actions can be likened to a directory listing. It can be changed by program action, by navigating up or down the hierarchy, or by selecting a different object. A separate zoom control selects actions suitable for the appropriate level of nested objects, and changes the selection cursor to suit. So, at the top level you would find actions for logging on and off, creating new tasks, and choosing which screen to focus. Below that, actions relating to individual windows (move, resize, etc.), and below that, actions peculiar to the particular program.

On Object Selection (MouseDown)
Controller sends the mouse position. Server returns the object path (the list of all nested objects at that point).

If the actions currently displayed are still in the object path, they remain unchanged and the nested object at that level is selected. So if, for example, you have been resizing windows, a new window can be moved by selecting any point on it. No trying to click on lines, corners and fiddly little handles. Otherwise, show the actions suitable for the new object at the current level.

Optionally, change the action selection.

On MouseUp, the controller sends the current object path and the token for the current action.
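The exchange above might be sketched roughly like this in Python (every name here is invented, and the object path is hard-coded purely for illustration):

```python
# Controller/server sketch: the server resolves a point to the path of
# nested objects under it; the controller's zoom level picks which depth
# to act on, and the action set only changes when the object kind changes.

ACTIONS = {                      # actions suitable for each kind of object
    "window": ["move", "resize", "close"],
    "paragraph": ["edit", "delete"],
}

def object_path_at(point):
    """Server side: all nested objects under this point, outermost first.
    Hard-coded here; a real server would hit-test its object tree."""
    return ["desktop", "window:editor", "paragraph:3"]

def kind(obj):
    return obj.split(":")[0]

class Controller:
    def __init__(self, level=1):
        self.level = level       # zoom control: which nesting depth to act on
        self.current_kind = None
        self.selected = None

    def mouse_down(self, point):
        path = object_path_at(point)
        obj = path[min(self.level, len(path) - 1)]
        # If the displayed actions still fit the object at this level,
        # keep them and just select it; otherwise swap in the new set.
        if self.current_kind != kind(obj):
            self.current_kind = kind(obj)
        self.selected = obj
        return ACTIONS[self.current_kind]

    def mouse_up(self, action):
        """Send the current object path entry plus the chosen action token."""
        return (self.selected, action)

c = Controller(level=1)          # window level: any point on a window selects it
print(c.mouse_down((10, 20)))    # -> ['move', 'resize', 'close']
print(c.mouse_up("move"))        # -> ('window:editor', 'move')
```

Because the window-level action set matches any point inside the window, there is no clicking on lines, corners, or fiddly little handles, as described above.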

The main difference is that I am thinking of the controller as a personal device, perhaps a netbook or a mobile phone, so standard controls are not the issue - no more QWERTY!




40hz:

Was IBM's dynamic model somewhat similar to what you had in mind?

-40hz (October 18, 2009, 08:05 AM)
--- End quote ---

From all I know of it (which is just what you have written above) it is very similar, because my idea seems to imply an "every app is a server app" model.

-JennyB (October 19, 2009, 01:12 PM)
--- End quote ---

"Every app is a server app..."

Interesting idea. (And well beyond what IBM was thinking! :up:)

Are you envisioning something where the desktop/control interface acts as the client in a client-server environment? (That would lend itself nicely to a fully virtualized environment, where the OS performs a purely supervisory role and each application is launched as a separate virtual machine.)

Paul Keith:
I don't really know what a server app specifically is, but I don't get why every app should be a server app, or how it would improve the system. (Aside from creating a uniform set of similar applications, I guess...)

Anyone have the layman translation?

Seems like it's about rebuilding an OS, whereas up to now I was assuming JennyB's concept was just a macro monitor on steroids.
