The following guidelines are designed to provide you with practical ways to implement the user interaction design principles for Microsoft Surface experiences.
Each set of guidelines comprises three categories: Must, Should, and Could.

You can create a complete, believable universe by making objects on the screen behave just as objects in the real world behave. You can subtly incorporate principles of physics in how objects move, their inertia, and what occurs when multiple objects collide to help enhance the fidelity of the virtual environment.
You can affix a tag to a physical object to elicit a specific response. A tag is a special pattern of dots: a geometric arrangement of areas that reflect and absorb infrared (IR) light. There are two types of tags: byte tags, which encode an 8-bit number, and identity tags, which encode a 64-bit number. For more information about tags, see the Microsoft Surface SDK Help documentation.
When a user places a tagged object on a Microsoft Surface screen, the vision system reads the tag and determines its value, location, and orientation. The vision system also interprets any other portion of the object that reflects IR light as a contact and sends information to the application. You must then create a visual response appropriate to that object.
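Conceptually, the data that the vision system reports for a tagged object and the application-side response might look like the following sketch. The class and handler names here are hypothetical illustrations, not Surface SDK types:

```python
from dataclasses import dataclass

@dataclass
class TagContact:
    # Hypothetical stand-in for the data the vision system reports.
    value: int          # decoded tag value (8-bit for byte tags, 64-bit for identity tags)
    x: float            # location on the screen
    y: float
    orientation: float  # rotation of the tag, in degrees

# Map tag values to application-defined visual responses.
RESPONSES = {
    1: "show wine details card",
    2: "show loyalty-card menu",
}

def on_tag_down(contact: TagContact) -> str:
    """Pick a visual response and align it with the physical object."""
    response = RESPONSES.get(contact.value, "show generic glow")
    return (f"{response} at ({contact.x:.0f}, {contact.y:.0f}), "
            f"rotated {contact.orientation:.0f} deg")
```

The key point is that the response uses all three pieces of information: the value selects the visual, while the location and orientation let the visual track the physical object.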
An untagged object is referred to as a blob. Microsoft Surface can detect IR-reflective objects that users place on the screen. In many cases, the Surface unit identifies such an object as multiple contacts (blobs or fingers), one for each contiguous IR-reflective portion of the object. The vision system cannot determine whether adjacent blobs are part of the same physical object and cannot identify untagged physical objects. An application treats contacts from untagged objects the same as contacts from other objects that register as blobs, such as an entire hand that someone places on the Surface screen.
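To illustrate why one physical object can register as several contacts, here is a toy connected-component pass over a binary IR image. This is purely conceptual; the real vision system is internal to the Surface unit:

```python
def find_contacts(ir_image):
    """Group contiguous reflective pixels (1s) into separate contacts.

    Each 4-connected region becomes one contact, which is why a single
    untagged object with several reflective patches can produce several
    blob contacts.
    """
    rows, cols = len(ir_image), len(ir_image[0])
    seen = set()
    contacts = []
    for r in range(rows):
        for c in range(cols):
            if ir_image[r][c] and (r, c) not in seen:
                # Flood-fill one contiguous reflective region.
                stack, region = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and ir_image[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                contacts.append(region)
    return contacts
```

An object with two reflective feet, for example, yields two disjoint regions and therefore two blob contacts, with nothing tying them to the same physical object.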
There are no certification requirements that apply specifically to the reaction to untagged objects. The same requirements that apply to finger and blob contacts in general also apply to contacts from untagged objects.
Microsoft Surface is particularly well-geared to multiuser interaction, but you should also consider how a single user might use your application, and how you can encourage that user to invite other users to the experience.
At any given time, multiple users who are working around a Microsoft Surface unit might be engaged in multiple levels of task coupling. Consider how to best support different levels of coupling in tasks, and how to support varying levels of coupling in the same application. There are three levels of task coupling:
Any application that you design for simultaneous use by multiple people should enable users to easily see, reach, and use whatever they need for their respective activities. This guideline applies to, but is not limited to, applications that use a 360-degree user interface. An application might not have a 360-degree user interface, yet various controls, screen elements, and content still “belong” to a specific user who might be on any side of the unit.
Users approach a Microsoft Surface unit from all directions. To support multiple users, avoid having content that orients toward one edge of the display. No one likes to read text upside down, and this non-ideal experience creates the impression of a "preferred" side of the Microsoft Surface unit. The Microsoft Surface SDK includes a ScatterView control that enables you to orient content toward any edge of the screen. This control is the easiest way to achieve a 360-degree user interface.
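If you build orientation behavior yourself rather than using ScatterView, the basic idea behind a 360-degree interface can be sketched as rotating each item so that it reads upright for a user standing at the nearest edge of the display. This is an illustrative computation under an assumed coordinate convention, not Surface SDK code:

```python
def orientation_toward_nearest_edge(x, y, width, height):
    """Return a rotation (degrees) so content faces the nearest edge.

    Assumed convention: 0 = upright for a user at the bottom edge,
    180 = upright for a user at the top edge, 90/270 for left/right.
    """
    # Distance from the item's position to each edge of the display.
    distances = {
        0: height - y,   # bottom edge
        180: y,          # top edge
        90: x,           # left edge
        270: width - x,  # right edge
    }
    return min(distances, key=distances.get)
```

Rotating each item toward its nearest edge avoids any "preferred" side: content near the top reads correctly for a user standing there, and content near the bottom for a user on the opposite side.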
Depending on your application’s scenarios and context, the viewable space might be constrained. In some cases, the canvas is fixed and shows only a limited amount of content. In other cases, the canvas is flexible, which enables users to zoom in and out.
Use spatial memory if the canvas is larger than what appears on the screen. In all applications, backgrounds, objects, and controls must account for the z-axis in their behaviors and movements in response to users’ input. For example, the Concierge application provides an infinite canvas when users navigate the map screen, yet category cards and controls stay within the boundaries of the card screen.
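The split between a pannable canvas and screen-anchored controls can be pictured as two coordinate spaces. This is a minimal sketch of the pattern; the Concierge internals are not public:

```python
def map_point_on_screen(map_point, pan):
    """Points on the infinite map canvas shift as the user pans."""
    return (map_point[0] - pan[0], map_point[1] - pan[1])

def card_position(screen_anchor, pan):
    """Category cards and controls are anchored to the screen, so
    panning the canvas does not move them; they stay within the
    screen boundaries regardless of how far the user pans."""
    return screen_anchor
```

Keeping controls in screen space means users never lose them off the edge of an effectively infinite canvas, while spatial memory still applies to the canvas content itself.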
Users can clearly see and recognize objects, content, and other elements from a distance. When users move closer, they see more detail, such as additional information, subtle textures, or hints of reflected light. When users interact with interface elements, those elements reveal a finer level of detail through sounds, visual feedback, and movement. For example, icons in Launcher transform into application previews when a user touches them, and then they change into the live applications when a user touches them again. These actions provide progressively more detail with deeper interactions. As users zoom in closer to objects, the objects should reveal unexpected visual or audible details.
Build a super-realistic application by creating rich, detailed representations of familiar real-world behaviors that are augmented with delightful capabilities. These two goals are deeply interconnected because "magical" capabilities are more believable if they emerge from a real environment. For example, the Microsoft Surface Photos application enables users to easily enlarge images that cannot be physically stretched in the real world.
The application’s interface should always show movement and rarely be completely still. Moving and changing elements should be pleasantly surprising and unobtrusive. Find the Microsoft Surface equivalent of a person breathing or blinking, or clouds gracefully passing in a summer sky: constantly present, but never distracting. The application should also react instantly to any touch and should always provide something else to touch (but nothing that breaks the flow of attention).
Users enjoy "playing" with multitouch input and seeing responses for that input. This includes responses that are purely enjoyable and not necessarily functional, and responses that are integral to the action or purpose of the application. Microsoft Surface experiences should focus on highly crafted representations of content, motion, and sound to help create fun, pleasurable experiences of the highest possible quality. The interaction should focus on the quality of the journey, not just completing tasks. For example, the transition from a stack view of photos into a grid view supports the task of browsing more content, but the transition occurs seamlessly as the container’s shape changes and resizes and the content gently moves to indicate scrolling.
Design Surface experiences with a framework that starts with familiar metaphors and objects and then exposes additional possibilities as the interaction unfolds. Over time, this framework enables users to progressively enjoy more capabilities and draws them deeper into the experience.
Most Microsoft Surface experiences are highly exploratory and contextual to the environment. Applications should invite users to interact in a natural manner instead of telling them what to do. Design subtle affordances that invite users to discover through exploration, and create constraints that prevent users from doing anything "wrong."
Microsoft Surface experiences should trim features and focus on a few rich options. Narrow the number of options to only the most important choices, and make those choices drive a rich and rewarding experience.
Microsoft Surface applications should anticipate user responses to ensure a smooth and efficient experience but should keep users in control. For example, if a user interacts with an application about a recent trip to Europe, the application might automatically display trip photos. However, the user might perceive the automatic action as presumptuous, disruptive, or annoying if it is wrong. Instead, the application should enable the user to choose to display vacation photos, which makes the anticipated action easy and quick to complete and keeps the user in full control of the experience.
A Microsoft Surface application should contain levels of depth that move users smoothly from their first touch to full engagement with the application. If instructions are necessary, integrate them with the natural flow of use and do not steer attention away from content. If you are trying to reveal the existence of functions, make those functions apparent at the right moment (when they have an effect). If the application teaches techniques, such as gestures, demonstrate those techniques through affordances and constraints that guide the gesture.
Because of the spatial capabilities of Microsoft Surface applications, you can enable users to navigate through an environment and provide on-demand, deeper views of an object (also known as progressive disclosure). For example, users can zoom into a photo to reveal deeper content and functionality.
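Progressive disclosure can be driven directly by the zoom factor. The following sketch shows the idea of revealing deeper content and functionality at closer zoom levels; the thresholds and detail names are arbitrary examples, not values from any Surface application:

```python
def details_for_zoom(zoom: float) -> list:
    """Reveal deeper content and functionality as the user zooms in."""
    details = ["thumbnail"]                       # always visible
    if zoom >= 2.0:
        details.append("caption and date")        # closer: more information
    if zoom >= 4.0:
        details.append("tags and edit controls")  # closest: functionality
    return details
```

Because the extra detail appears only on demand, the default view stays uncluttered while still rewarding deeper exploration.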
The following guidelines describe how to use progressive disclosure to help users learn as they use Surface applications.
Users might try an action that does not work, but the resulting feedback should help them learn, resolve problems, or encourage them in a correct direction.
Microsoft Surface applications should appeal to more than one sense at a time. Visuals, sound, motion, and physical interactions help communicate, create moods, convey personality, direct attention, and enchant.
Touch, gesture, and direct manipulation in Microsoft Surface experiences move away from discrete actions toward continuous action. In GUI applications, discrete actions are mostly brief, single-click actions that users perform in a sequence to complete a task. For example, to move an object from one location to another, a user would select the object, select the appropriate command, and then move the object.
In contrast, direct manipulation favors continuous actions. To move an object from one location to another, the user can just grab it and move it to its new location, as shown in the following illustration.
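The same continuous move can be sketched in code: instead of a select-command-move sequence, the object follows the contact frame by frame. The event names here are hypothetical illustrations, not Surface SDK APIs:

```python
class DraggableObject:
    """Follows a contact continuously rather than through discrete steps."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self._last = None

    def contact_down(self, cx, cy):
        # The gesture starts where the user touches the object.
        self._last = (cx, cy)

    def contact_moved(self, cx, cy):
        # Apply the contact's delta every frame, so the object stays
        # under the user's finger for the whole gesture.
        lx, ly = self._last
        self.x += cx - lx
        self.y += cy - ly
        self._last = (cx, cy)

    def contact_up(self):
        self._last = None
```

There is no intermediate "selected" state and no separate move command; the grab, the move, and the release are one continuous action.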
The user interface should have only a limited number of items that are not content. Users should directly move objects on the screen. Controls should reveal themselves from content and should be lightweight and relevant to the content. A Microsoft Surface application should always preserve the illusion that users can interact directly with the content itself. For example, the touch areas of the stack control in the Photos application are at the top of the z-order when the stack is at rest, but the touch areas are partially obscured when users touch photos in the stack.
A gesture refers to a physical action. Many systems use gestures the way they use shortcut keys: the gesture triggers a command regardless of the physical location where the user interacts with content. In contrast, Microsoft Surface uses manipulation gestures. The distinction between system gestures and manipulation gestures is important.
Both types of gestures can use identical physical actions, but on-screen graphics guide and provide affordances for manipulation gestures. The use of manipulations relates to the principle of scaffolding.
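For example, a two-finger pinch manipulation derives scale and rotation directly from the contacts' positions, so the graphics track the fingers exactly. This is the standard computation, sketched here for illustration; it is not Surface SDK code:

```python
import math

def pinch_transform(p1_start, p2_start, p1_now, p2_now):
    """Return (scale, rotation_degrees) implied by two moving contacts.

    Scale is the ratio of the distances between the two contacts;
    rotation is the change in the angle of the line joining them.
    Assumes the two starting contacts are not at the same point.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

    scale = dist(p1_now, p2_now) / dist(p1_start, p2_start)
    rotation = angle(p1_now, p2_now) - angle(p1_start, p2_start)
    return scale, rotation
```

Because the transform is computed from the contact geometry itself, the content appears pinned to the fingers throughout the manipulation, which is what makes the gesture feel direct rather than command-like.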
Do not replace manipulation gestures with controls that enable users to zoom in or out by pushing a button.
If your application integrates properly into Surface Shell, you can help preserve the overall context of the Microsoft Surface experience and ensure a baseline behavior across all applications.
The physical environment around a Microsoft Surface unit (such as the location, audience, lighting, and objects) influences the Surface experience. For example, cultural differences in audience significantly affect how closely you should arrange clustered seating, and the seating density also influences how people use the Surface unit. Internal décor can also affect how effectively a Surface unit might attract people in a given venue.
If a Surface application requires users to place certain objects on the Surface screen, the location of those objects on the screen and the location where users store them when they are not in use can affect how quickly users discover certain functionalities and how they perceive the overall goals of the experience.