Mobiles are different from desktops not because they are small, but because they are connected and personal. Good products don't just fill a niche; they leverage the native intent of the interface. Lately, we have heard some gnashing of teeth as developers try to figure out how to make things that are useful for the Apple Watch. Meanwhile, users of the Pebble wonder what the fuss is about, as much has already been figured out regarding wearables.
Likewise, we have to design properly for large interactive displays. We cannot just take the same stuff we make on desktop, or mobile, blow it up really big, and call it suitable for a large display. We have to understand the ways these displays are interacted with, and recognize that they are public and collaborative.
Control Methods
With the apparent ubiquity of the touchscreen smartphone, there's an assumption that all interactivity is touch. But really, there are several ways to control large displays:
- Remote control
- Control at a distance
- Very coarse control
- Fine control up close
Each of them has pros and cons, not just in price or suitability for a given installation, but in the way it encourages group behaviors and the detail of control input it allows.
Let's review each one in turn:
Remote Control
We now remotely control much of our lives, but the first remote controls most people used were for controlling large displays. The basic pushbutton TV remote still has much to teach us. Good remotes are:
- Individual
- Only one person uses the remote at a time. Poor remote control systems allow multiple simultaneous input methods, and do not clarify when they are in conflict.
- Responsive
- When an input is performed, the remote indicates the control was sent, and the display device responds in a reasonable time. The response doesn't have to be completion of the request, just an indication that the request has started, and is being processed. Bad remotes have long delays.
- Abstracted
The input method is indirect, so it has to be abstracted at some level. There is no direct control; a language of control is imposed. Think of how channel changing on the classic old TV remote is up and down. Trackpads that try to allow direct mouse pointer manipulation work poorly because the pointing surface is moving, or sits at an arbitrary orientation to the display.
Of course, even TV remotes are becoming more complex, but the principle has been extended to many types of control, from industrial automation to collaborative wall displays in public spaces. We can break remote control down into two basic categories:
- Fixed
Wall-mounted units, kiosks, desktop computers, and laptops (since they are not really usable unless set on a surface). WHAT IT MEANS...
- Portable
- Simple pushbutton remotes, complex remotes with things like keyboards, smartphones and tablets with remote control apps on them. Gesture-enabled wands are discussed under Control at a Distance. MEANING
Remotes with their own screens raise the question of multiple workspaces: the remote can replicate the large display, or offer a separate control surface. Each has upsides and downsides. Mirroring reflects familiar computer experiences, but sometimes you do not want the users to see everything, even something as simple as notes for a presentation.
Designing for Remote Control
- Remote controls are unusual in that their needs are split, appealing to both very simple and very complex entry methods.
- f
- Don't make up your own control language. Use standard mappings when such abstractions already exist. Don't change channels by clicking left and right. Spinning controls increment clockwise and decrement counter-clockwise.
- React instantly. It is okay if there are technical constraints on completing the request, but indicate the request has been received. Reflect inputs to the screen with indicators, or actually begin the action in a visible way immediately (see the sketch after this list).
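As a minimal sketch of the "react instantly" guideline, here is how a display-side handler might acknowledge a remote command on screen before the (possibly slow) action completes. The command names and helper functions are assumptions for illustration, not any particular platform's API.

```typescript
// Hypothetical sketch: acknowledge a remote-control command on screen
// immediately, even when the action itself takes time to complete.

type RemoteCommand = "channel-up" | "channel-down" | "ok" | "back";

// Assumed display-side helpers; a real system will have its own equivalents.
declare function showIndicator(label: string): void;            // e.g. flash an icon overlay
declare function hideIndicator(): void;
declare function performAction(cmd: RemoteCommand): Promise<void>; // may be slow (tuner, network)

async function onRemoteCommand(cmd: RemoteCommand): Promise<void> {
  // 1. Reflect the input right away so the user knows it was received.
  showIndicator(cmd);

  // 2. Then start the real work; completion can lag without feeling broken.
  try {
    await performAction(cmd);
  } finally {
    hideIndicator();
  }
}
```

The point of the sketch is simply that acknowledgment is decoupled from completion: the indicator appears on receipt, and the real work may lag behind it.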
Control at a Distance
Here I mean direct control at a distance. Instead of using a remote intermediary, you gesture at the display unit (or an attached sensor) itself.
This can encompass "wands," or remotes like the one used with the Wii, which assist with gestural sensing. They are not very prominent anymore; with the advent of the Kinect there is no real need to hack those when you can hack direct gesture systems. The principles are similar, however.
The control systems work in two basic ways:
Gesture Language
Actions can be performed with simple gestures, or with strings of them to form more complex commands or series of commands. The gestures are generally not going to be very natural, so they must be memorized. Typical users will not be able to memorize or apply more than a few gestures, so the design of the information has to be directed and simple.
Gestures are typically things like next, stop, back, details, options, and so forth. This can be combined with other methods, such as voice control, to provide for input like typing, which would be very difficult to perform otherwise. A small sketch of such a gesture vocabulary follows.
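This is a minimal sketch of what a small gesture language might look like in code, assuming a recognizer that reports named gestures; the gesture and command names are made up for illustration.

```typescript
// Hypothetical sketch of a small gesture language: a handful of memorizable
// gestures mapped to simple commands, with everything else ignored.

type Gesture = "swipe-left" | "swipe-right" | "palm-out" | "point-hold";
type Command = "next" | "back" | "stop" | "details";

// Keep the vocabulary small; typical users will not retain more than a few gestures.
const gestureLanguage: Record<Gesture, Command> = {
  "swipe-left": "next",
  "swipe-right": "back",
  "palm-out": "stop",
  "point-hold": "details",
};

function onGestureRecognized(g: Gesture): Command | undefined {
  // Unrecognized or low-confidence gestures simply do nothing,
  // so natural movement does not trigger accidental commands.
  return gestureLanguage[g];
}
```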
Direct Manipulation
The other method is to use gestures to directly control elements on the screen, or to indicate them with a mouse pointer. This is the classic sci-fi movie version of VR, and of futuristic control systems of all sorts. The user grabs shapes, or moves the focus over a series of objects and then indicates a selection. Your first experience on many of these systems is exactly like this: a virtual mouse is provided, and you select options, type Wi-Fi passwords, and so on.
From this you may have noticed the key problem: moving like this is tiring, and it gets more tiring the more precision you have to use to make the system perform properly. One way to reduce the precision burden, dwell-to-select, is sketched below.
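This is a rough sketch of dwell-to-select, assuming a tracker that reports a normalized hand position and an already-computed hit target. The thresholds, types, and helper names are illustrative assumptions, not taken from any specific SDK.

```typescript
// Hypothetical sketch of direct manipulation with dwell-to-select:
// the tracked hand drives a virtual pointer, and holding it over a
// target for a short time counts as a "click", so the user does not
// need a precise confirmation pose.

interface HandSample { x: number; y: number; timestampMs: number; } // normalized 0..1

const DWELL_MS = 800;        // how long the pointer must rest on a target
const MOVE_TOLERANCE = 0.02; // how much drift still counts as "resting"

let dwellStart: HandSample | null = null;

function onHandSample(sample: HandSample, hitTarget: string | null): string | null {
  if (hitTarget === null) {
    dwellStart = null;        // pointer left all targets; reset the timer
    return null;
  }
  if (
    dwellStart &&
    Math.abs(sample.x - dwellStart.x) < MOVE_TOLERANCE &&
    Math.abs(sample.y - dwellStart.y) < MOVE_TOLERANCE
  ) {
    if (sample.timestampMs - dwellStart.timestampMs >= DWELL_MS) {
      dwellStart = null;
      return hitTarget;       // treat the dwell as a selection of this target
    }
  } else {
    dwellStart = sample;      // started hovering (or drifted too far); restart timer
  }
  return null;
}
```

A real system would also track which target is under the pointer between samples; this simplified version only watches for drift in the hand position.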
Designing for Control at a Distance
- Aside from entertainment, this is best for assistive systems where the user's primary task is on another device, or is in a real environment.
- Use these input methods briefly, or as short strings periodically. Allow the user to make a selection, or a gesture, then display information.
- Huge benefits in environments where the user cannot or should not touch the display. Some interesting research is occurring in hospital operating rooms, but the next step will be mechanics, outdoors, and public access devices.
- Be careful selecting gestures: ensure they are unambiguous, are not performed naturally (so they will only be used deliberately), and do not interfere with the user's primary job. In tests in the hospital setting, some gestures caused the surgeon to accidentally make contact with his upper body, violating sterile procedures.
- Don't expect to be able to train part-time or brief-use audiences. Museum visitors cannot learn a gesture language quickly enough for it to be useful to them.
- You will have to reflect the input to the screen, which may distract other users of the system.
Very coarse control
gesture at a distance, both secondhand (ambient, environmental) and mostly, deliberate...
ADD: Gesture Control At a Distance? Some precision use of this, Kinect, some Smart TVs I think, and the OR research projects...
Fine control up close
stuff... all the notes about edges, and blocking and all that from below...
Public & Collaborative
In addition, you may recall that I said these interfaces are used in public, and collaboratively. Public doesn't have to mean literal free access, just "not in private." Large interactive displays are unlike desktop computers, and especially mobiles, due simply to their size. More often than not, the size is a direct offshoot of the need to be in public. At the lowest end of public we mean the TV in your home. Smart TVs mean someone controls a device which the whole family is observing.
Other cases, such as museums, business contexts, and so forth, have the same basic issues as family Smart TV control. A single individual is in physical possession of the control unit (or has the focus of the gestural control mechanism) but may not be fully in control. He has to take input from the entire group, assure the group makes decisions, and make it clear what input he has requested of the device.
When designing for any of the control cases, two audience classes can be considered.
- Group
- Individual
And unusually, we do not just have to design for both audiences, but for both experiencing the output at the same time. Let's look at some problems with this.
Feedback
Traditional input methods consider the controlling user, so when a selection is made, feedback must be quick enough that it is clear the intent was registered. The action doesn't have to take effect, but something must happen, such as the button indicating a click, a delay indicator, or even just vibration and noise.
However, those not making the control inputs are out of the cognitive cycle. They may have requested an input, but they are aware there may be competing requests, the controlling individual may have misheard or ignored it, and time delays are much higher. The need for feedback is still there, so that all users can orient themselves to the system properly.
Some systems, such as gesture-based sensing, require feedback of the control input on the screen itself. This is different from indicating focus or wayfinding, and the feedback may interfere with other users' ability to consume information. This may make some input methods less suitable for shared environments.
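As a sketch of group-visible acknowledgment (not any particular product's behavior), a shared display might briefly name each accepted command so that everyone, not just the person holding the control, can see what the system is doing. The helper function and types here are assumptions for illustration.

```typescript
// Hypothetical sketch: make acknowledgment visible to the whole group,
// not just the controlling user, by briefly naming each accepted command
// on the shared display.

declare function showGroupBanner(text: string, durationMs: number): void; // assumed UI helper

interface ReceivedCommand { source: string; command: string; } // e.g. { source: "remote", command: "next slide" }

function acknowledgeForGroup(cmd: ReceivedCommand): void {
  // Everyone watching can see which request the system accepted,
  // even if their own (competing) request was not the one taken.
  showGroupBanner(`${cmd.source}: ${cmd.command}`, 1500);
}
```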
Wayfinding, Orientation and Notification
It is easier to get lost on a large shared display, especially as it does not hold each viewer's full attention. Even the controlling individual can be distracted by the others in the room, or by other tasks, and miss cues or need to reorient. Use mobile methods to remind and label at each point, and rely on as few transient indicators as possible.
Focus
"no, up one" we've all done that in a meeting... but only because we can see what is going on... systems without focus like gesture? ???
More issues??? BRAINSTORM. GOOGLE SMART TV REVIEWS... ALL SHOULD BE TAKEOFFS ON BASIC SYSTEMS
Conclusion
Summarize findings in bullet list...
future directions of research... call to action for folks to volunteer their projects???
Proposed design guidelines (must be validated, of course):
- Near (close-range) interaction is only along the sides.
- Dupe on both sides, or move between them? Need to provide for left- and right-hand use.
- The near user, touching the screen, should be out of the way of far users. Hence edge placement again.
- Near user cannot see the far use. Dupe the screen for them?
- When visual sensors are available, make near use only pop out when users are nearby.
- Not distracting to others???
- Can use bezel anchoring of the hand (hold/touch the bezel, as happens with tablets).
- Feedback on obscuring. Sensors, when available, should indicate this.
- Repeat all existing TV guidelines for distant use, but validate those; can do much with math as far as angular resolution and contrast ratios, without research.
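As an illustrative sketch of the "edge controls pop out only when someone is nearby" guideline, assuming a hypothetical proximity sensor and UI helpers (none of this is a real API):

```typescript
// Hypothetical sketch: show close-range touch controls along an edge
// only when a person is within reach, and only on the side where
// they are standing, keeping them out of the way of far viewers.

interface Presence { side: "left" | "right"; distanceM: number; }

declare function showEdgePanel(side: "left" | "right"): void; // assumed UI helpers
declare function hideEdgePanels(): void;

const NEAR_THRESHOLD_M = 1.0; // roughly within arm's reach of the screen

function onPresenceUpdate(people: Presence[]): void {
  const near = people.find(p => p.distanceM <= NEAR_THRESHOLD_M);
  if (near) {
    showEdgePanel(near.side);
  } else {
    hideEdgePanels(); // nothing pops out when no one is close enough to touch
  }
}
```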
List of research topics:
- Accuracy of touch. Bet it changes due to device position: level, above head, etc.
- Touch enhanced by anchoring to the bezel?
- MEASURE gorilla arm, don't just assume. Make people use it for a while and observe; measure changes over time.
- How to account for near/far sizing? Does the small near use annoy and distract?
- Pixel density. Some near guidelines may have to go back to old ones, like no italics, due to low density.
- Perception of no-bezel use. Wall-to-wall or multi-screen interfaces: can the user work with close-range control, can they find it, will they stabilize on the screen or not?