Author Topic: 10/GUI  (Read 14649 times)
Paul Keith
« Reply #25 on: October 14, 2009, 04:51:07 AM »

Quote
The idea of having to wave my hands around in the air to control anything in a precision way seems both inaccurate and tiring, and fingerprints are a real concern if you're actually touching the display surface (obviously not if you're talking about the edges, where you'd use the power button or carry it - I'm not objecting because I'm a clean freak).

Actually, you're not waving your hands in the air to control anything.

That would be more befitting of a Wii-cam and it would be a bad idea for the sensor to be so sensitive when it is used for such delicate tasks.

Instead it would just be a low-sensitivity sensor that works on the control principle of the 10/GUI, except your hands are not flat on a pad.

It's far from perfect, which is why I don't like the idea either. But to repeat the point of my earlier post: if you're going to do finger gestures, it's just much more practical not to alter any major component, especially if you're going to increase the size of the keyboard.

The vertical edge of a monitor is great for this because all you need to do is mimic the gesture of pointing a finger and manipulate things depending on how many fingers the sensor detects.

Even if we're going by buttons, there is absolutely no way you would touch the screen edge, no matter how large your hand is, unless the buttons were poorly placed. Just try it now and see how much space there is for your hand on the back of the monitor where the buttons could be placed (with a slight alignment towards the edge so that you can spot the color flashes if there are any).

Quote
There is no precedent for such a UI being used for any precision purpose or as the general interface for a normal computer system, whereas the device I suggest is merely a potentially novel combination of existing and proven technologies.

Umm... a Wii-cam like CamSpace requires far more precision than the monitor sensors I'm talking about: http://www.youtube.com/watch?v=v0srY37kkMw

There are even concept motion sensors now that can mimic your hand movements literally - without an object and at the distance of watching a large television set. (Edit: Here it is. Project Natal: http://www.youtube.com/watch?v=g_txF7iETX0 - also I stand corrected about the lack of an object but still full figure detection at that range is already possible.)

I'm not going to attack your device suggestion, beyond mentioning that the keyword there is "novel" combination, since I want to emphasize that I'm not attacking your idea (much) but merely pointing out the absurdity of some of your claims against mine.

For example, I would understand if you said that the design was just bad for some legitimate tech-design reason, but no precedent for such a UI?

A wireless mouse alone can show you how feasible motion detection is today, and did I mention webcams?
« Last Edit: October 14, 2009, 05:11:24 AM by Paul Keith »

<reserve space for the day DC can auto-generate your signature from your personal PopUp Wisdom quotes>
JennyB
« Reply #26 on: October 14, 2009, 10:03:13 AM »


Another thing that I would miss would be the ability to have, say, two windows, one on one side of the screen and one on the other - but neatly filling the screen - a la GridMove. This hasn't really been considered, I think.


The basic idea could be extended so that dropping one window onto another would stack them vertically. Each window in the horizontal strip could then be a vertical strip of windows, which are manipulated in the same manner.
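That "horizontal strip of vertical strips" layout can be sketched as a tiny data model. This is a hypothetical illustration only (the class and method names are mine, not from 10/GUI): dropping one window onto another moves it out of its own column and stacks it under the target.

```python
# Hypothetical model of the "strip of strips" layout: the screen is a
# horizontal strip of columns, and each column is a vertical strip of windows.
class Window:
    def __init__(self, title):
        self.title = title

class Screen:
    def __init__(self):
        self.columns = []  # horizontal strip; each entry is a vertical strip

    def add_window(self, window):
        # A new window starts as its own single-window column.
        self.columns.append([window])

    def drop_onto(self, dragged, target):
        # Dropping one window onto another stacks them vertically:
        # the dragged window leaves its column and joins the target's.
        for col in self.columns:
            if dragged in col:
                col.remove(dragged)
        self.columns = [col for col in self.columns if col]  # drop empty columns
        for col in self.columns:
            if target in col:
                col.append(dragged)
                return

screen = Screen()
a, b, c = Window("editor"), Window("browser"), Window("terminal")
for w in (a, b, c):
    screen.add_window(w)
screen.drop_onto(c, a)  # stack the terminal under the editor
print([[w.title for w in col] for col in screen.columns])
```

Since each column is itself a list, the same drag-to-stack manipulation applies at both levels, which is the extension being suggested.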

If you don't see how it can fail -
you haven't understood it properly.
Perry Mowbray
« Reply #27 on: October 14, 2009, 10:10:53 AM »


Another thing that I would miss would be the ability to have, say, two windows, one on one side of the screen and one on the other - but neatly filling the screen - a la GridMove. This hasn't really been considered, I think.
The basic idea could be extended so that dropping one window onto another would stack them vertically. Each window in the horizontal strip could then be a vertical strip of windows, which are manipulated in the same manner.

Yes: I had the same thought... though personally, I'd probably go for something more along the lines of Google Earth or SketchUp for moving window views... just 2D with all my fingers seems a little limited.

JavaJones
« Reply #28 on: October 14, 2009, 11:58:39 AM »

Paul, I'm not entirely sure how to respond. Your concept of touching, or nearly touching the monitor for regular interaction just seems impractical to me. Why? Think about ergonomics: http://ergonomics.about.c...e/ss/computer_setup_2.htm Note that it says to place your monitor *at least* 20 inches from yourself. Now my arm is 25-26" to the tips of my fingers. My hand by itself, from wrist to the tip of my long middle finger is about 8 inches. So that means I'd be holding my arm out fully extended trying to manipulate things *all day long*. That's going to quickly get tiring, and develop into some kind of RSI quite soon, I'm sure.

Maybe your monitor is closer than mine (and closer than recommended), but if you're conforming to ergonomic guidelines then I don't see how your idea is functional. And certainly no one would want to build a fundamental interaction device for a computer that inherently defies guidelines for ergonomic computer setups.

But then I also have the feeling I'm not quite getting what your concept is...

As for precision, the Wii controller works as well as it does because it has sensors built-in. If you're talking about sensors on the side of your monitor that would work just like the pad in the original demo video, then yes obviously there could be that level of precision, but I thought you were talking about not actually having to touch it, something based on visual or proximity detection rather than touch. If that *is* the case then again accuracy is going to be an issue. If you're talking about physical touch, then the ergonomic issues above are clearly a significant challenge to the concept.

This may surprise you but the "Natal" video doesn't actually show very precise interaction. Try using that, or even the Wii interface, to precisely select a single word from a paragraph of text. That's the kind of UI interaction I deal with constantly on a daily basis, and anything that is going to replace my PC UI device has to be at least as good as the basic mouse in that regard.

- Oshyan

The New Adventures of Oshyan Greene - A life in pictures...
Paul Keith
« Reply #29 on: October 15, 2009, 03:04:31 AM »

Paul, I'm not entirely sure how to respond. Your concept of touching, or nearly touching the monitor for regular interaction just seems impractical to me. Why? Think about ergonomics: http://ergonomics.about.c...e/ss/computer_setup_2.htm Note that it says to place your monitor *at least* 20 inches from yourself. Now my arm is 25-26" to the tips of my fingers. My hand by itself, from wrist to the tip of my long middle finger is about 8 inches. So that means I'd be holding my arm out fully extended trying to manipulate things *all day long*. That's going to quickly get tiring, and develop into some kind of RSI quite soon, I'm sure.

JavaJones, look at your previous criticisms. You never brought this up; instead you used such words as "arm waving", "too far away 90% of the time", and fingerprints on the edge of the screen.

It's disingenuous of you to switch arguments constantly without acknowledging first how silly and mistaken your original arguments were.

Most importantly, impractical is a far cry from silly.

It's even disingenuous to say it would be quickly tiring, when part of the reason people get Carpal Tunnel Syndrome is that keyboard and mouse positions are not adapted to prevent fatigue.

Ergonomics equals comfort and has very little to do with fatigue.

In fact, if you were actually thinking of that screenshot, you would realize how un-ergonomic touchpads can be.

Can you imagine how frustrating it would be to accidentally swipe a pad because of the constraints of space, if you had a pad jutting out of that keyboard, or a pad where, instead of the shape of a mouse alleviating discomfort from your wrist, you are forced to lay your palm flat on a surface, even if we're talking uneven surfaces?

You also forgot the ergonomic factor: if you don't want to use monitor sensors, you just cover them. Don't want to use a touchpad? You need to pull it out and replace it with a mouse.

No offense intended, but it seems to me that it sounds impractical to you because you are trying to pigeonhole the sensors into something they're not. The 10/GUI is not replacing keyboards here. It is replacing the mouse.

Your argument would be the equivalent of saying it is going to be quickly tiring to rest/constantly move your hands on the mouse because you can't rest both of your hands on the keyboard.


Quote
Maybe your monitor is closer than mine (and closer than recommended), but if you're conforming to ergonomic guidelines then I don't see how your idea is functional. And certainly no one would want to build a fundamental interaction device for a computer that inherently defies guidelines for ergonomic computer setups.

Any touchpad, no matter how superior, inherently defies guidelines for ergonomic computer setups.

That's why mostly artists have adapted to tablet PCs while many others have not. Without reprioritizing their computing goals, the ergonomics of the tablet PC aren't there.

This holds the same for laptop touchpads, and I've seen many people plug in a mouse because the touchpad is so un-ergonomic.

In fact, for the tasks of the 10/GUI, it is less un-ergonomic to play thumb chopsticks flat on a surface than it is to temporarily point your finger at the edge of the monitor, just as it is no more un-ergonomic to push your monitor's power button than to whine that every monitor doesn't come with a remote because it is "too far" according to "guidelines for ergonomic computer setups".

Quote
This may surprise you but the "Natal" video doesn't actually show very precise interaction. Try using that, or even the Wii interface, to precisely select a single word from a paragraph of text. That's the kind of UI interaction I deal with constantly on a daily basis, and anything that is going to replace my PC UI device has to be at least as good as the basic mouse in that regard.

This may also surprise you, but the range of Natal is far greater than that of fingers close to your monitor, and we're talking about concept ideas here.

It is again a disingenuous red herring on your part to go from silly... impractical... no precedent for a UI... and now this!

You're a guy suggesting a touch surface to replace a mouse and you're using an example of precisely selecting a single word from a paragraph?!

Again, finger gestures are supposed to replace a "mouse" not a "keyboard".
Stoic Joker
« Reply #30 on: October 15, 2009, 06:05:11 AM »

Just wanted to interject that I find it interesting that nobody has brought up Microsoft's Surface Computing Interface (which does replace both mouse & keyboard) in this discussion.
Paul Keith
« Reply #31 on: October 15, 2009, 07:06:06 AM »

@Stoic Joker,

It's probably because it's even further out there than Courier and Natal.

JavaJones
« Reply #32 on: October 15, 2009, 12:41:33 PM »

Hmm. I can see this is going nowhere. We'll see which of our ideas ever sees support, much less realization, but I stand by my previous arguments.

In a funny coincidence I went to Best Buy a few days ago to buy a digital camera for a friend (10% off coupon made it worthwhile over online vendors who were out of stock anyway). While I was there I saw a fairly expensive "edge touch" picture frame where you used sensors on the edge of the screen to control the settings and UI. It was... awful.

- Oshyan
JennyB
« Reply #33 on: October 15, 2009, 01:43:26 PM »

Actually, I was more thinking that the command surface would also dynamically change based on the app. When you're running something like Photoshop, the main part of the panel would display Photoshop controls. When you switched apps, it would switch to a 'control panel' for that app.

Standard items like file open/close/save/print/next/previous/etc. could be assigned permanent locations (ex: an icon bank across the top of the command area) among all apps for consistency.

Or a zoom-in/out gesture  - screen-app-window-object?

Quote
In many respects (and much as it pains me to say it *choke*) Apple's iPhone incorporates a lot of this already. My GF just upgraded her AT&T cellular plan and got a 3G as part of the deal. Despite my general dislike of Apple for their proprietary closed platform and elitist mindset, even I have to grudgingly admit that the interface design is, for the most part, quite impressive.

But with the way most apps work these days, right now I think the alphanumeric keyboard might actually be in danger of being on the lagging edge of where interfaces are heading.

Keyboards are actually pretty good at making multiway selections. They are superior to touchscreens because the usual choices can be made by muscle memory alone, without having to look. Whatever variety you use (standard keyboard, chord keyboard, or marking menu as with KeystrokeCE) the limit (without shifting) seems to be a 32-way choice, which is also a good limit for the number of options to display at a time.

I'm not sure if this can be done in Windows, or any other OS, but it potentially splits the program from the interface entirely.  All the former has to do is provide lists of functions it makes available, which the interface device displays and selects as it sees fit; and send a list identifier at the appropriate point to tell it which list to switch to.
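The split being proposed can be sketched in a few lines. This is a hypothetical toy (all names are mine): the program registers named lists of the functions it makes available, and the input device only ever sees command labels and the identifier of the list it should currently display.

```python
# Hypothetical sketch of the program/interface split: the program publishes
# named lists of available functions; the input device renders whichever
# list is current and reports selections back, never knowing the internals.
class Program:
    def __init__(self):
        self.command_lists = {}  # list identifier -> {label: callable}

    def register_list(self, list_id, commands):
        self.command_lists[list_id] = commands

class InputDevice:
    def __init__(self, program):
        self.program = program
        self.active = None

    def switch_to(self, list_id):
        # The program sends a list identifier; the device swaps its display.
        self.active = list_id

    def labels(self):
        # The device decides how to display/assign these; it sees only labels.
        return sorted(self.program.command_lists[self.active])

    def select(self, label):
        # Invoking a label runs whatever function the program attached to it.
        return self.program.command_lists[self.active][label]()

prog = Program()
prog.register_list("edit", {"Cut": lambda: "cut!", "Paste": lambda: "paste!"})
prog.register_list("file", {"Open": lambda: "open!", "Save": lambda: "save!"})

dev = InputDevice(prog)
dev.switch_to("edit")
print(dev.labels())         # ['Cut', 'Paste']
print(dev.select("Paste"))  # paste!
```

The point of the design is that the device is free to present the labels however it likes (keys, gestures, an on-screen list) without the program knowing or caring.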
40hz
« Reply #34 on: October 15, 2009, 03:24:45 PM »

Keyboards are actually pretty good at making multiway selections. They are superior to touchscreens because the usual choices can be made by muscle memory alone, without having to look.

Excellent point. I tend to like hotkeys myself.

The only problem comes when you start supporting multiple applications with widely varying sets of controls and features. You'll see this mostly in music, media, and graphics applications. Eventually you run out of logical key combinations for all the tasks you want to have a key for. Once that happens, you're forced to use arbitrary and non-intuitive ones. A good example is V for PASTE or W for CLOSE in most apps.

If you rely on muscle memory, and know that something like "alt-S" = "SORT," then what happens when another app decides it should be used for SAMPLE or SCALE or SKEW? I'm thinking along the lines of what happens when the average American gets into a car designed for the British road system. It's only mirrored so it's not totally unusable. But it's still a jolt to deal with.

The other thing is that there's a lot of research showing that most people prefer to use spatial models and functional "chunking" (ex: "Sugar is on the top left shelf next to the honey.") to remember things rather than code tags (ex: "Sugar is a sweetener. All sweeteners are coded as S-group items and are shelved alphabetically, within their group, over in section 17").

So I still think some sort of graphic control surface will ultimately win out over key combinations for the general population. That's why GUI OS interfaces became so popular. People hated command lines and hot keys, especially for applications they used every day. A lot of early word processors lost out to MS Word because they wouldn't provide their users with an alternative to key commands.

I'm not sure if this can be done in Windows, or any other OS, but it potentially splits the program from the interface entirely.  All the former has to do is provide lists of functions it makes available, which the interface device displays and selects as it sees fit; and send a list identifier at the appropriate point to tell it which list to switch to.

It's very doable. But you'll probably never see something like that incorporated into the OS itself.  Especially when you consider the amount of joint company cooperation that would be required to make it work. (There's a risk of violating antitrust laws for starters!)


Don't you see? It's turtles all the way down!
Paul Keith
« Reply #35 on: October 15, 2009, 05:45:24 PM »

Hmm. I can see this is going nowhere. We'll see which of our ideas ever sees support, much less realization, but I stand by my previous arguments.

In a funny coincidence I went to Best Buy a few days ago to buy a digital camera for a friend (10% off coupon made it worthwhile over online vendors who were out of stock anyway). While I was there I saw a fairly expensive "edge touch" picture frame where you used sensors on the edge of the screen to control the settings and UI. It was... awful.

- Oshyan

Very well. If you're too prideful to admit that you're jumping through hoops with your reasoning, that you didn't really stand by your arguments, and that you kept changing them...

Btw. I'd just like to point out that I am not for monitor sensors.

If you had only looked past your blind bias, you would have seen that I preferred a much more developed KeyStrokeCE for the desktop and only mentioned monitor sensors to keep on topic with the 10Gui concept.



JavaJones
« Reply #36 on: October 15, 2009, 06:01:21 PM »

Hmm, yes I suppose from your perspective you are faultless in your reasoning and I am foolish. Nevermind that if monitor sensors weren't a focus of your intention, one wonders why you spent so much time and energy arguing in their favor.

Looking back through what you have said, you've described the implementation in multiple ways, and I could as easily perceive that as "changing your argument", just as you have (incorrectly) with mine. All of the major points I made remain valid. The fingerprint issue is something you latched onto because it was easy to argue against as it didn't apply to your concept, but it was hardly a pillar in my argument.

I will say that I found your initial explanation of the idea confusing and that naturally my response was based on that early understanding. Whether the fault was mine or yours, one of misperception, or poor explanation, is irrelevant. The focus of my criticisms of the approach evolved as my understanding of your concept evolved, but it was not evasive and, as I said, all the essential elements of my criticisms remain valid.

I note with interest that no one else has jumped into this particular part of the discussion, whether in favor of you or me. Most likely nobody cares about it. Frankly I'm feeling the same way now. The reason I continue to respond is that I find your communications rather rude and patronizing, and I'm not so happy to let that stand. But I suppose the only way to end it is to let it die. So this will be my last response. The world can judge me, and the validity of my points in this thread, as they may.

- Oshyan
Paul Keith
« Reply #37 on: October 16, 2009, 05:51:05 AM »

Hmm, yes I suppose from your perspective you are faultless in your reasoning and I am foolish. Nevermind that if monitor sensors weren't a focus of your intention, one wonders why you spent so much time and energy arguing in their favor.

No, you have it backwards. You threw out the word "silly" first. You were the one who kept changing the argument without even admitting that you were mistaken in the previous one. You are the one accusing me of using your own flawed arguments as some kind of pillar instead of admitting how absurd they were.

Quote
Looking back through what you have said, you've described the implementation in multiple ways, and I could as easily perceive that as "changing your argument", just as you have (incorrectly) with mine. All of the major points I made remain valid. The fingerprint issue is something you latched onto because it was easy to argue against as it didn't apply to your concept, but it was hardly a pillar in my argument.

Lol, I never latched on to anything. I'm the one who had to constantly switch arguments while tolerating your manner of thinking and arrogance.

Quote
I will say that I found your initial explanation of the idea confusing and that naturally my response was based on that early understanding. Whether the fault was mine or yours, one of misperception, or poor explanation, is irrelevant. The focus of my criticisms of the approach evolved as my understanding of your concept evolved, but it was not evasive and, as I said, all the essential elements of my criticisms remain valid.

To use your own words: "Hmm, yes I suppose from your perspective you are faultless in your reasoning and I am foolish"

Quote
I note with interest that no one else has jumped into this particular part of the discussion, whether in favor of you or me. Most likely nobody cares about it. Frankly I'm feeling the same way now. The reason I continue to respond is that I find your communications rather rude and patronizing, and I'm not so happy to let that stand. But I suppose the only way to end it is to let it die. So this will be my last response. The world can judge me, and the validity of my points in this thread, as they may.

- Oshyan

Riiight.

To use your own words: "Hmm, yes I suppose from your perspective you are faultless in your reasoning and I am silly"

JavaJones, the reason "I" responded to you in the first place was that you threw out the first insult and were rude and patronizing in your subsequent replies, and you continue to be right now in this post.

I suggest that instead of pretending your points are the ones in need of judgement, you drop the arrogance and look at your personality first.

« Last Edit: October 16, 2009, 05:55:13 AM by Paul Keith »
housetier
« Reply #38 on: October 16, 2009, 06:45:52 AM »

I want to hear arguments about 10/GUI or alternatives, but not who "started it first".
Paul Keith
« Reply #39 on: October 16, 2009, 08:26:18 AM »

@housetier:

It's not an argument of who started it first. That's pretty much clear with this post:

Well, I think Mouser has already expressed some of my concerns. I like the idea of a better multi-touch interface close to hand, rather than the silly idea of trying to actually use your monitor which is A: too far away 90% of the time and B: you don't want fingerprints all over. However I think all the potential of the touch interface is wasted in this concept because it spends too much time and UI commands trying to improve on existing window management solutions when I honestly don't find the existing solutions to be that big a problem. I routinely have 10 or more apps/windows open, and many of these have tabs of their own inside (pspad text editor with tabs, Firefox, Chrome, IE all with tabs, etc.). So for me this concept video is trying to solve a problem I don't have with an intriguing interaction device that is ultimately wasted due to the misdirected UI changes.

- Oshyan

It's more of a rude comeback/reminder to JavaJones' own rude comments as to why we're arguing about monitor sensors.

In that sense, it does fit the "arguments about alternatives" bit. We're just being rude to each other because we're coming off rude to each other.
housetier
« Reply #40 on: October 16, 2009, 08:45:04 AM »

Since I don't own one of these new tiny mobile PCs yet, I was wondering if 10/GUI was inspired by them. Well maybe I am getting ahead of myself here, as I doubt there is a device that can handle 10 fingers accurately.

Nevertheless, I find 10/GUI interesting not because of the 10 fingers but because of their window management ideas. I have yet to find a desktop metaphor that suits the way my brain works. I have come close with the Awesome window manager, which I can control via keyboard. But a more fluid layout like the one shown in the demo video seems interesting. I wonder if there is a window manager that does that.

Back to mobile PCs: how do they manage several windows on a small screen?
Paul Keith
« Reply #41 on: October 16, 2009, 08:59:49 AM »

Back to mobile PCs: how do they manage several windows on a small screen?

Well, I don't own an iPhone or a new smartphone, but maybe by "new" you also meant to include the older PocketPCs/Palms (based on your comment about how they manage several windows on a small screen).

In that case, they manage multiple windows through a drop down arrow.

It still depends on the software you have but many of them are designed to work via a drop down arrow or some form of alt tabbing.

In a way, it's no different from cellphones with graphical icons. You press a button, get a menu, and click on an icon that opens a menu in full screen (except you can choose to have those applications remain open).
Tuxman
« Reply #42 on: October 16, 2009, 07:14:31 PM »

I think 40hz's old Star Trek picture does point the way to the future -- customized input pads tailored to the application you are working on.
Yep:
http://www.artlebedev.com...verything/optimus-tactus/

I bet when Cheetahs race and one of them cheats, the other one goes "Man, you're such a Cheetah!" and they laugh & eat a zebra or whatever.
- @VeryGrumpyCat
40hz
« Reply #43 on: October 16, 2009, 07:21:31 PM »

I think 40hz's old Star Trek picture does point the way to the future -- customized input pads tailored to the application you are working on.
Yep:
http://www.artlebedev.com...verything/optimus-tactus/

That's exactly it! Schwing!!!

(Thx for the link T-Man!)

Tuxman
« Reply #44 on: October 16, 2009, 07:23:53 PM »

No problem. Recently stumbled upon it.
(Who's going to buy me one? ...)
JennyB
« Reply #45 on: October 17, 2009, 04:54:52 PM »

Keyboards are actually pretty good at making multiway selections. They are superior to touchscreens because the usual choices can be made by muscle memory alone, without having to look.

Excellent point. I tend to like hotkeys myself.

The only problem comes when you start supporting multiple applications with widely varying sets of controls and features. You'll see this mostly in music, media, and graphics applications.

Here's an interesting video of what was possible in that direction nearly 15 years ago, and a sensible use of multi-touch support. In either case, the main point is to move away from the mouse having to select both the action and its object.
 
Quote
Eventually you run out of logical key combinations for all the tasks you want to have a key for. Once that happens, you're forced to use arbitrary and non-intuitive ones. A good example is V for PASTE or W for CLOSE in most apps.

Yes, that's one of the reasons I gave up using Dvorak!

That's why I'm proposing that keycodes not be hardwired to particular keys or gestures. 

The idea comes from ColorForth, which uses only 27 keys of a normal keyboard: have a continually updated display of key assignments (not on the keycaps - you should not be looking there anyway) that is generated from lists of currently available commands. The user only sees the command label, not the keycode that it returns. Think of it like a link in the Help index.

Initially, with a new program, no keys are assigned - or perhaps the input device auto-assigns those with familiar labels. Commands are selected by picking with a mouse, or perhaps in FARR style.  When you learn a command and know you are going to use it often, drag a copy off the list and place it on a spot associated with a particular key.
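To make the idea concrete, here's a minimal Python sketch (all names here are hypothetical, invented for illustration - this isn't actual ColorForth code): keys carry no fixed meaning, the on-screen legend is generated from the currently available command labels, and a key only does something once the user has dragged a label onto it.

```python
# A sketch of non-hardwired keycodes: commands are labels supplied by
# the program, and key bindings exist only where the user created them.

class DynamicKeymap:
    def __init__(self, available_commands):
        # available_commands: label -> callable, from the running program
        self.commands = dict(available_commands)
        self.bindings = {}  # key -> command label

    def bind(self, key, label):
        """'Drag a copy off the list' and place it on a particular key."""
        if label not in self.commands:
            raise KeyError(f"no such command: {label}")
        self.bindings[key] = label

    def press(self, key):
        """Run whatever command is currently assigned to this key."""
        label = self.bindings.get(key)
        if label is None:
            return None  # unassigned keys do nothing
        return self.commands[label]()

    def display(self):
        """The continually updated on-screen key legend (labels only)."""
        return dict(self.bindings)

# Usage: the user sees command labels, never keycodes.
km = DynamicKeymap({"Paste": lambda: "pasted", "Close": lambda: "closed"})
km.bind("f", "Paste")  # the user's own choice, not a hardwired Ctrl+V
```

The point of the sketch is that nothing above mentions scan codes or modifier combinations: the binding table is the only place a physical key appears, and it starts empty.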
Logged

If you don't see how it can fail -
you haven't understood it properly.
40hz
Supporting Member
**
Posts: 10,670



« Reply #46 on: October 18, 2009, 08:05:16 AM »

Quote
the main point is to move away from the mouse having to select both the action and its object.

Wow...

For years there's been something about mouse usage that really bothered me, but I could never quite put my finger on what it was. You just did. Thank you!  thumbs up


Quote
That's why I'm proposing that keycodes not be hardwired to particular keys or gestures.

The idea comes from ColorForth, which uses only 27 keys of a normal keyboard: have a continually updated display of key assignments (not on the keycaps - you should not be looking there anyway) that is generated from lists of currently available commands. The user only sees the command label, not the keycode that it returns. Think of it like a link in the Help index.

Initially, with a new program, no keys are assigned - or perhaps the input device auto-assigns those with familiar labels. Commands are selected by picking with a mouse, or perhaps in FARR style. When you learn a command and know you are going to use it often, drag a copy off the list and place it on a spot associated with a particular key.


That sounds something like IBM's old 'dynamic key system' back in the days of minicomputers. There was a set of "command tabs" which appeared across the bottom of the screen that corresponded visually to a double row of function keys at the top of the keyboard.

The screen labels (and related keys) would reassign themselves depending not only on what application was running but, more importantly, on what you were doing in the app itself. For example, entering an edit mode would display a group of edit functions. Switching over to a data entry mode would reassign the keys to other functions. Certain keys had standardized assignments, however. If I recall correctly, on the bottom row, the first key on the left was always HELP, the second key was NEXT, and the third was PREVIOUS. Programmers weren't required to adhere to the 'standard key' mapping conventions, but everybody did, so it was never an issue.

IBM's idea was to have the minimum number of keys active at any time in order to avoid operator confusion and minimize opportunities for keystroke errors.

Also interesting was how this method was incorporated into their security model. Any function or selection the user wasn't authorized to make simply didn't appear in the available keys. So if you were running an accounting app, you only saw what you needed to do your job. Things you weren't authorized to do simply didn't exist when you logged in.

They used the term "obscured functions" to describe this feature. IMHO it was a far better method than 'greying out' unauthorized selections - or even worse, allowing you to do anything, but screaming at you when you try to select something you're not supposed to, like most systems do today.
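A rough sketch of how that "obscured functions" filtering might look - the mode names, key labels, and permission sets below are all invented for illustration, not taken from any actual IBM system. The function-key row is rebuilt per mode, and anything the user isn't authorized for is simply omitted rather than greyed out:

```python
# "Obscured functions": the visible key row is the intersection of
# what the current mode offers and what this user is allowed to do.

MODE_KEYS = {
    "edit":       ["Help", "Next", "Previous", "Cut", "Paste", "PostLedger"],
    "data_entry": ["Help", "Next", "Previous", "NewRecord", "PostLedger"],
}

def visible_keys(mode, user_permissions):
    """Return the key labels this user sees in this mode; order is kept
    so standardized keys (Help, Next, Previous) stay in fixed positions."""
    return [cmd for cmd in MODE_KEYS[mode] if cmd in user_permissions]

# A clerk who can edit records but not post to the ledger:
clerk = {"Help", "Next", "Previous", "Cut", "Paste", "NewRecord"}
print(visible_keys("edit", clerk))  # PostLedger simply never appears
```

From the clerk's point of view there is no greyed-out "PostLedger" key to wonder about - the function doesn't exist in their session at all.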

Was IBM's dynamic model somewhat similar to what you had in mind?
« Last Edit: October 18, 2009, 08:07:59 AM by 40hz » Logged

Don't you see? It's turtles all the way down!
JennyB
Supporting Member
**
Posts: 209


Test all things - hold fast to what is good

« Reply #47 on: October 19, 2009, 01:12:28 PM »


Quote
Was IBM's dynamic model somewhat similar to what you had in mind?


From all I know of it (which is just what you have written above) it is very similar, because my idea seems to imply an "every app is a server app" model. It goes like this:

The current list of available actions can be likened to a directory listing. It can be changed by program action, by navigating up or down the hierarchy, or by selecting a different object. A separate zoom control selects actions suitable for the appropriate level of nested objects, and changes the selection cursor to suit. So, at the top level you would find actions for logging on and off, creating new tasks, and choosing which screen to focus. Below that, actions relating to individual windows (move, resize, etc.), and below that, actions peculiar to the particular program.

On Object Selection (MouseDown)
Controller sends the mouse position. Server returns the object path (The list of all nested objects at that point).

If the actions currently displayed are still in the object path, they remain unchanged and the nested object at that level is selected. So if, for example, you have been resizing windows, a new window can be moved by selecting any point on it. No trying to click on lines, corners and fiddly little handles. Otherwise, show the actions suitable for the new object at the current level.

Optionally, change the action selection.

On MouseUp, the controller sends the current object path and the token for the current action.

The main difference is that I am thinking of the controller as a personal device, perhaps a netbook or a mobile phone, so standard controls are not the issue - no more QWERTY!
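The MouseDown/MouseUp exchange above could be sketched roughly like this - a toy model with hypothetical names, not a real protocol implementation. The server side resolves a pointer position to the full nested object path, and MouseUp pairs that path with the token for the current action:

```python
# Toy model of the exchange: controller sends a position, the "server
# app" answers with the list of all nested objects at that point
# (outermost first), and MouseUp sends path + action token together.

def object_path_at(screen, x, y):
    """Server side: walk the nesting, collecting every object under (x, y)."""
    path, nodes = [], screen
    while nodes:
        hit = next((n for n in nodes if n["contains"](x, y)), None)
        if hit is None:
            break
        path.append(hit["name"])
        nodes = hit.get("children", [])
    return path

def on_mouse_up(path, action_token):
    """Controller side: pair the selected object path with the action."""
    return {"object": path, "action": action_token}

# A toy screen: one window containing one button.
screen = [{
    "name": "window1",
    "contains": lambda x, y: 0 <= x < 100 and 0 <= y < 100,
    "children": [{
        "name": "ok_button",
        "contains": lambda x, y: 10 <= x < 30 and 10 <= y < 20,
    }],
}]

path = object_path_at(screen, 15, 12)  # MouseDown at (15, 12)
msg = on_mouse_up(path, "move")        # MouseUp with the "move" action
```

Because the server returns the whole path rather than a single hit, the controller's current zoom level can pick the window or the button from the same click - which is what lets you move a window by grabbing any point on it instead of hunting for fiddly handles.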




Logged

If you don't see how it can fail -
you haven't understood it properly.
40hz
Supporting Member
**
Posts: 10,670



« Reply #48 on: October 19, 2009, 02:04:13 PM »


Quote
Was IBM's dynamic model somewhat similar to what you had in mind?

From all I know of it (which is just what you have written above) it is very similar, because my idea seems to imply an "every app is a server app" model.


"Every app is a server app..."

Interesting idea. (And well beyond what IBM was thinking! thumbs up)

Are you envisioning something where the desktop/control interface acts as the client in a client-server environment? (That would lend itself nicely to a fully virtualized environment, where the OS performs a purely supervisory role and each application is launched as a separate virtual machine.)

« Last Edit: October 19, 2009, 02:13:54 PM by 40hz » Logged

Don't you see? It's turtles all the way down!
Paul Keith
Member
**
Posts: 1,982


« Reply #49 on: October 19, 2009, 07:24:52 PM »

I don't really know what a server app specifically is, but I don't get why every app should be a server app or how it would improve the system (aside from creating a uniform set of similar applications, I guess...).

Anyone have the layman translation?

Seems like it's about rebuilding an OS, whereas up to now I was assuming JennyB's concept was just a macro monitor on steroids.
Logged

<reserve space for the day DC can auto-generate your signature from your personal PopUp Wisdom quotes>
DonationCoder.com | About Us
DonationCoder.com Forum | Powered by SMF