Mobiles are not different from desktops because they are small but because they are connected and personal. Good products don't just meet a niche, but leverage the native intent of the interface. Lately, we have heard some gnashing of teeth as developers try to figure out how to make things that are useful for the Apple Watch. Meanwhile, users of Pebble wonder what the fuss is about as much has already been figured out regarding wearables.
Likewise, we have to design properly for large interactive displays. We cannot just make really big interfaces, but have to recognize they are public and collaborative.
Control Methods
Large touch displays add a layer of complexity not found in other products: they are routinely multi-modal, and, unusually, several modes are in use simultaneously. A single user may be at arm's reach interacting with the display while others stand well back observing. There are several ways to control large displays:
- Remote control
- Control at a distance
- Very coarse control
- Fine control up close
Let's take each one in turn:
Remote Control
Desktop: sitting...
multiple workspaces... can replicate on remote, or have separate control... upsides and downsides (reflect mirroring computer experiences like when you do not want the users to see everything, even as simple as notes for a presentation... )
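The mirror-or-separate tradeoff above can be sketched as one shared model with two renderings, so the remote shows private presenter notes while the big display mirrors only the public slide. This is a hypothetical illustration, not any real product's API; the slide data and function names are invented.

```python
# Hypothetical sketch: one shared state, two views. The large display
# renders only public content; the remote adds the controller's notes.

slides = [
    {"title": "Welcome", "notes": "Greet the group; wait for stragglers."},
    {"title": "Q3 Results", "notes": "Skip details if short on time."},
]

def display_view(state):
    """What everyone sees on the large display."""
    return slides[state["index"]]["title"]

def remote_view(state):
    """What only the controlling user sees on their handset."""
    slide = slides[state["index"]]
    return f"{slide['title']} | notes: {slide['notes']}"

state = {"index": 0}
print(display_view(state))  # public: title only
print(remote_view(state))   # private: title plus speaker notes
```

The same split covers the "do not want the users to see everything" case: anything private simply never enters the display view.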
Tablet/Phone: standing...
While some interactions can occur with less-capable remotes, such as handsets or scroll-and-select remotes, the richest remote control comes from these more capable desktop and tablet devices.
Control at a Distance
Here I mean direct control at a distance. Instead of using a remote intermediary, you gesture at the display unit (or an attached sensor) itself.
This can encompass "wands," or remotes like that used on the Wii, which assist with gestural sensing. They are no longer very prominent, however: with the advent of the Kinect there is no real need to hack those when you can hack the direct gesture systems. The principles are similar either way.
The control systems work in two basic ways:
Gesture
recognizes and does stuff... language of control
Direct Manipulation
The other method is to use gestures to directly control elements on the screen, or to indicate them with a mouse pointer. This is the classic sci-fi movie version of VR, and of futuristic control systems of all sorts. The user grabs shapes, or moves the focus over a series of objects and then indicates a selection. Your first experience with many of these systems is exactly like this: a virtual mouse is provided, and you select options, type Wi-Fi passwords, and so on.
From this you may have noticed the key problem: moving like this is tiring, and it is more tiring the more precision the system demands in order to perform properly.
lessons... bullet list of things to do and not do:
- Use briefly, or in brief bouts. Allow the user to make a selection, or a gesture, then display information.
- Aside from entertainment, best for assistive systems where the user's primary task is on another device, or is in a real environment.
- Huge benefits in environments where the user cannot or should not touch the display. Some interesting research is occurring in hospital operating rooms, but the next step will be mechanics, outdoors, and public access devices.
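The virtual-mouse style of direct manipulation described above can be sketched in a few lines: map a tracked hand position to a screen cursor, with exponential smoothing to damp sensor jitter. This is an illustrative sketch only; it assumes a sensor that reports normalized (0.0 to 1.0) hand coordinates, roughly as skeletal trackers like the Kinect do, and all names here are invented.

```python
# Hypothetical sketch: normalized hand position -> smoothed screen cursor.
# Smoothing trades precision effort against lag, which matters given how
# tiring precise gestural pointing is.

SCREEN_W, SCREEN_H = 1920, 1080
ALPHA = 0.3  # smoothing factor: lower = steadier cursor, but more lag

class HandCursor:
    def __init__(self):
        self.x = self.y = None

    def update(self, hand_x, hand_y):
        """Take a normalized hand position, return pixel coordinates."""
        px = hand_x * SCREEN_W
        py = hand_y * SCREEN_H
        if self.x is None:             # first sample: no history yet
            self.x, self.y = px, py
        else:                          # exponential moving average
            self.x += ALPHA * (px - self.x)
            self.y += ALPHA * (py - self.y)
        return round(self.x), round(self.y)

cursor = HandCursor()
print(cursor.update(0.5, 0.5))   # centered hand -> (960, 540)
print(cursor.update(0.52, 0.5))  # small move is damped toward the target
```

Raising ALPHA makes the cursor more responsive but also more jittery, which is exactly the precision-versus-fatigue tradeoff the text describes.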
Very coarse control
gesture at a distance, both secondhand (ambient, environmental) and mostly, deliberate...
ADD: Gesture Control At a Distance? Some precision use of this, Kinect, some Smart TVs I think, and the OR research projects...
Fine control up close
stuff... all the notes about edges, and blocking and all that from below...
Public & Collaborative
In addition, recall that these interfaces are used in public, and collaboratively. Public doesn't have to mean literal free access, just "not in private." Large interactive displays are unlike desktop computers, and especially unlike mobiles, due simply to their size. More often than not, the size is a direct offshoot of the need to be in public. At the lowest end of public is the TV in your home: with Smart TVs, one person controls a device that the whole family is observing.
Other cases, such as museums and business contexts, have the same basic issues as family Smart TV control. A single individual is in physical possession of the control unit (or has the focus of the gestural control mechanism) but may not be fully in control. He has to take input from the entire group, ensure the group makes decisions, and make it clear what input he has requested of the device.
When designing for any of the control cases, two audience classes can be considered.
- Group
- Individual
And, unusually, we do not just have to design for both audiences, but for both experiencing some output at the same time. Let's look at some problems with this.
Feedback
Traditional input methods consider the controlling user: when a selection is made, feedback must be quick enough that it is clear the intent was registered. The action doesn't have to take effect immediately, but something must happen, such as the button indicating a click, a delay indicator, or even just vibration and sound.
However, those not making the control inputs are outside this cognitive cycle. They may have requested an input, but are aware there may be competing requests, that the controlling individual may have misheard or ignored them, and that time delays are much higher. The need for feedback is still there, so that all users can orient themselves to the system properly.
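One common way to meet the quick-feedback requirement above is to decouple the immediate acknowledgment from the eventual result, and add a delay indicator only when the action runs long. The sketch below is an assumption-laden illustration, not any toolkit's real API; the function and cue names are invented.

```python
# Hypothetical sketch: acknowledge input instantly, show a "working"
# cue only if the real action takes longer than a short budget.

import threading

def handle_selection(action, feedback, delay_cue_s=0.2):
    """Run `action`, emitting feedback cues around it.

    feedback: callable taking a cue name, e.g. updating the screen,
    clicking, or vibrating a remote.
    """
    feedback("pressed")                 # instant acknowledgment
    done = threading.Event()
    # Fire a delay indicator only if the action is still running.
    timer = threading.Timer(
        delay_cue_s, lambda: done.is_set() or feedback("working"))
    timer.start()
    result = action()                   # the real work, possibly slow
    done.set()
    timer.cancel()
    feedback("done")                    # final confirmation
    return result

events = []
handle_selection(lambda: "channel 7", events.append)
print(events)   # fast action: acknowledgment and confirmation, no spinner
```

For the observers the text describes, the same cues would also be rendered on the shared display, not just on the controller's device.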
Wayfinding, Orientation and Notification
- It is easier to get lost, especially as the display does not hold attention.
- Even the controlling individual can be distracted by the others in the room, or by other tasks, and miss cues or need to reorient.
- Use mobile methods to remind and label at each point, with as few transient indicators as possible.
Focus
"No, up one." We've all done that in a meeting... but only because we can see what is going on... what about systems without focus, like gesture? ???
More issues??? BRAINSTORM. GOOGLE SMART TV REVIEWS... ALL SHOULD BE TAKEOFFS ON BASIC SYSTEMS
Conclusion
Summarize findings in bullet list...
future directions of research... call to action for folks to volunteer their projects???
Proposed design guidelines (must be validated, of course):
- Near (close range) interaction is only along the sides.
- Dupe on both sides, or move between them? Need to provide for left- and right-hand use.
- The near user, touching the screen, should be out of the way of far users. Hence edge placement again.
- Near user cannot see the far use. Dupe the screen for them?
- When visual sensors are available, make near use only pop out when users are nearby.
- Not distracting to others???
- Can use bezel anchoring of the hand (hold/touch the bezel, as happens with tablets).
- Feedback on obscuring. Sensors, when available, should indicate this.
- Repeat all existing TV guidelines for distant use, but validate those; can do much with math, as far as angular resolution and contrast ratios, without research.
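The "pop out when users are nearby" guideline above can be sketched as distance-gated visibility with hysteresis, so the near-use controls don't flicker when someone hovers around the threshold. The thresholds and names here are invented for illustration; real values would come from the validation research the guidelines call for.

```python
# Hypothetical sketch: show close-range controls only when a sensed
# user is near, with separate show/hide thresholds (hysteresis).

SHOW_AT_M = 1.0   # show near-use controls inside this distance
HIDE_AT_M = 1.5   # hide only once the user backs out past this

class NearUI:
    def __init__(self):
        self.visible = False

    def update(self, nearest_user_m):
        """nearest_user_m: distance to the closest sensed user, in meters."""
        if not self.visible and nearest_user_m <= SHOW_AT_M:
            self.visible = True
        elif self.visible and nearest_user_m >= HIDE_AT_M:
            self.visible = False
        return self.visible

ui = NearUI()
print(ui.update(3.0))  # False: nobody close
print(ui.update(0.8))  # True: user approached
print(ui.update(1.2))  # True: still shown, inside the hysteresis band
print(ui.update(2.0))  # False: user walked away
```

The gap between the two thresholds is what prevents the distraction-to-others problem flagged above: the UI changes state only on a deliberate approach or departure.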
List of research topics:
- Accuracy of touch. Bet it changes due to device position: level, above head, etc.
- Touch enhanced by anchoring to the bezel?
- MEASURE gorilla arm, don't just assume. Make people use it for a while and observe; measure changes over time.
- How to account for near/far sizing? Does the small near use annoy and distract?
- Pixel density. Some near guidelines may have to go back to old ones, like no italics, due to low density.
- Perception of no-bezel use. With wall-to-wall or multi-screen interfaces, can the user work with close-range control, can they find it, and will they stabilize on the screen or not?