Language & Luminosity
Today, there are some interesting examples on the Web of contextual localisation: site language determined by device location; blog theme set to light or dark depending on device time. While potentially helpful, these decisions to dictate or change aspects of the experience are, unfortunately, based on assumptions. Just because I am in Japan does not mean that I want the Japanese version of a site. Just because it’s night out does not mean that I am reading in the dark. Furthermore, not all participants want the same experience every time they interact, given how much can change from one visit to the next.
If such adjustments do make sense and will genuinely improve the experience by presenting its most relevant facets at the most opportune time, they could be achieved more effectively by basing decisions not on assumptions or generalisations, but on what is actually known: the individual device, platform, and participant’s preferences, signals, and historical and behavioural data. Together, these can inform highly personalised, contextually relevant decisions and experiences.
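To make the distinction concrete, here is a minimal sketch of preference-first localisation: consult what the participant has actually told us before falling back to assumptions inferred from location. All names and values here are illustrative, not any real site’s implementation.

```python
def choose_language(saved_preference, accept_language_header, geo_default):
    """Pick a site language, trusting explicit signals over inferred ones."""
    # 1. An explicit, saved preference always wins.
    if saved_preference:
        return saved_preference
    # 2. The browser's Accept-Language header reflects the device's
    #    configured language, a stronger signal than physical location.
    if accept_language_header:
        # Take the first (highest-priority) language tag, e.g. "en-GB".
        return accept_language_header.split(",")[0].strip().split(";")[0]
    # 3. Only assume from location as a last resort.
    return geo_default

# A visitor in Japan with an English-configured browser gets English:
print(choose_language(None, "en-GB,en;q=0.8,ja;q=0.5", "ja"))  # prints "en-GB"
```

The ordering is the point: the location-derived default is the weakest signal, so it is consulted last rather than first.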
Automating some of these adjustments makes sense when there’s an opportunity to simplify or remove manual input while providing the desired information, but there will be times when manual override is desired or required. At MIT Media Lab, Matthew Aldrich and team from the Responsive Environments Group have been testing an experimental control device about the size of a credit card that monitors and automatically adapts light in a workspace, both saving energy and providing an ideal, pre-programmed source of work light. The device contains sensors that measure direct light cast from overhead light-emitting diodes (LEDs), ambient light from neighbouring workstations, and natural light from nearby windows. The device also features simple controls to manually adjust the light intensity and colour balance of the overhead LEDs. Aldrich explains, “It opens up the space for user controls, so you can adapt the lighting to what suits you best.”
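The shape of such a system can be sketched in a few lines: hold the total light at the workspace near a target level by adjusting the overhead LEDs, while letting a manual override win outright. The numbers and names below are hypothetical, not taken from the actual MIT device.

```python
def led_level(target_lux, ambient_lux, natural_lux, manual_override=None):
    """Return LED output (0.0-1.0) needed to bring total light to target_lux."""
    if manual_override is not None:
        # The participant's explicit setting always takes precedence.
        return max(0.0, min(1.0, manual_override))
    MAX_LED_LUX = 500.0  # assumed full-brightness contribution of the LEDs
    # Only supply the light the environment isn't already providing.
    shortfall = target_lux - ambient_lux - natural_lux
    return max(0.0, min(1.0, shortfall / MAX_LED_LUX))
```

Run in a loop against fresh sensor readings, this both saves energy (the LEDs dim as daylight rises) and keeps the manual controls meaningful.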
Context is More Than Location
“As marketers mature in handling context, they will come to know that Mrs. Smith isn’t interested in the store nearest her home, where the area has poor lighting and bad parking, but the one you have in the mall 10 miles further out. Why? Because she can visit your shop, along with six others, in the stress-free mall; leave her infant in the crèche; and pick up her husband from work on the way back.”
—Anthony Mullen, The Future of Marketing is (Better) Context
The sophistication of today’s smart devices and the amount of data we store on the Internet about ourselves and our participants have reached an all-time high and show no sign of slowing. On the contrary, if companies like Google, Facebook, Amazon, Apple, Microsoft, Twitter, Foursquare, Netflix, and Spotify (to name a few) continue on their current path, we’ll be giving up more and more about ourselves in exchange for more personalised products, services, and value—provided such companies deliver the right kind of value as deemed by their participants.
Recently, I cancelled our Netflix subscription. I wanted more time to write, play music, and hang out with my family. I may restart my subscription one day; the service is not going away any time soon, and I know they’ll be pleased to see me. But I wanted a break. Netflix, however, does not want to let me go. They know a little bit about my viewing habits, too: which movies and shows I’ve watched; which genres I prefer; how many times I’ve watched particular movies or episodes; whether I’ve watched until the end or only partway through; how long I took to watch all available episodes of a show; whether I’ve watched similar or related shows; and so on. For Netflix, reaching out with an email designed to pique my personal interest in restarting my subscription is easy. I devoured every episode of Portlandia seasons 1 and 2 because it is some funny hipster tease. When season 3 became available on Netflix, they simply tailored an email inviting me to watch the new season. I’ll be honest, I’m tempted. They already have my payment details on file. I could stop writing about it, restart my subscription at the tap of a link, and be watching new episodes within seamless seconds. For now, I’ll at least try to finish this post.
Alas, I digress.
“Today your phone doesn’t really know [whether] you’re walking, running, skiing, shopping, driving, or biking, but in the future, Google will know that and will be able to build wild new kinds of systems that can serve you when doing each of those things.”
—Robert Scoble, Google, The Freaky Line and Why Moto X is a Game-Changer
This year, search giant Google has brought several projects together in an emerging new ecosystem of contextual relevance. Google Now presents timely information either explicitly, via participant-initiated search queries, or implicitly, without any manual input at all.
Currently, there are 38 different “cards” of information available based on location, date and time, email content, calendar events, search history, and historical and behavioural patterns over time. Google Now also employs the Google Knowledge Graph of semantic search information to enhance results with a deeper meaning behind connections and relationships between data. Google Now is personalised search and contextually relevant information based on known patterns and relationships between data, without the participant needing to search for anything. Participants can fine-tune the results and the experience, defining which cards are most and least relevant to them at any given time.
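One way to picture how such cards might surface is as a simple relevance ranking: score each card from the contextual signals it depends on, then let the participant’s own fine-tuning weights override the defaults. This is purely a hypothetical sketch; none of it reflects Google’s actual internals.

```python
def rank_cards(cards, signals, user_weights=None):
    """Order cards by relevance score; higher-scoring cards surface first."""
    user_weights = user_weights or {}

    def score(card):
        # Sum the strength of every signal this card draws on...
        base = sum(signals.get(s, 0.0) for s in card["signals"])
        # ...then scale by the participant's fine-tuning (default 1.0).
        return base * user_weights.get(card["name"], 1.0)

    return sorted(cards, key=score, reverse=True)

cards = [
    {"name": "traffic", "signals": ["location", "calendar"]},
    {"name": "weather", "signals": ["location"]},
]
signals = {"location": 0.6, "calendar": 0.5}

# Traffic scores 1.1 against weather's 0.6, so it surfaces first...
print([c["name"] for c in rank_cards(cards, signals)])
# ...unless the participant has dialled traffic down:
print([c["name"] for c in rank_cards(cards, signals, {"traffic": 0.1})])
```

The second call shows the fine-tuning described above: the same signals produce a different ordering once the participant has said what matters to them.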
Last month, Google released its new flagship Android device, Moto X. The device is ‘always listening, always on’, boasting innovative features such as Active Display, Quick Capture, and Touchless Control.
With Active Display, the device instantly displays the time and any new notifications just by pulling the device out of your pocket, flipping it face-up on the table or desk, or nudging it to attention. Tap and hold the notification for a quick preview, drag to unlock and view in full, or simply let go to return the device to sleep.
Quick Capture provides fast and easy access to your camera so that you don’t miss the moment. With the device in hand, flick your wrist twice to engage the camera, and tap record to capture 10MP stills or 1080p video.
With Touchless Control, Google combines a voice-activated and controlled natural user interface with the informational power of Google Now. Speak, “OK, Google Now”, and your Moto X wakes ready to respond to your query, read or write an email or text, provide directions to your destination, give updates on the big game, and surface a growing range of other timely information, all based on contextually relevant information from apps, search, and sensors. For those without the new Moto X device, there is a Touchless Control app in the Google Play Store, enabling similar control for other devices running Android OS.
Google Glass, while still in its infant “Explorer Program” strictly for early adopters, already boasts several innovative features such as voice control and multiple sensors, including a gyroscope, accelerometer, magnetic field, orientation, rotation vector, linear acceleration, gravity, and light. As with Touchless Control, speak, “OK, Glass”, and search with all the power of the Google Knowledge Graph. Google Glass also displays information in context via Google Now. With Glass, you can get relevant Google Now information on the go and in the moment, without needing to pull out your phone.
“There is potential on both sides of the equation, both for using physiological signals to quantify an emotional state while people are playing the game, and getting an idea of how people are emotionally experiencing your game.”
—Mike Ambinder, Valve Software
For the past few years, Valve Software’s resident experimental psychologist Mike Ambinder has been exploring the possibilities of contextual entertainment in game design based on biofeedback. Mike and team have conducted a number of experiments with players wearing specially designed hardware to measure physiological signals such as heart rate, skin conductance levels (the electrical conductivity of the skin, which rises with perspiration), facial expressions, eye movements, EEGs (electroencephalography, the recording of electrical activity along the scalp), pupil dilation, body temperature, and posture. By measuring these signals, Mike and team were able to better understand and begin to quantify new data such as the arousal and valence levels of players as they progress through the game. The next challenge for Valve game designers and developers will be to change the game based on this new data to heighten the experience in real time. While the hardware required to measure such signals might not be consumer-ready just yet, the findings of these tests are intriguing for the future of experience and responsive design.
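A toy sketch of what “changing the game based on this new data” could look like: map two measured signals onto a crude arousal estimate, then nudge the difficulty toward an “engaged but not overwhelmed” band. The thresholds and scaling below are invented for illustration only, not Valve’s method.

```python
def arousal(heart_rate_bpm, skin_conductance_us):
    """Crude arousal score in [0, 1] from two physiological signals."""
    # Normalise heart rate over an assumed 60-140 bpm active range.
    hr = min(max((heart_rate_bpm - 60) / 80.0, 0.0), 1.0)
    # Normalise skin conductance over an assumed 0-20 microsiemens range.
    sc = min(max(skin_conductance_us / 20.0, 0.0), 1.0)
    return (hr + sc) / 2.0

def adjust_difficulty(current, heart_rate_bpm, skin_conductance_us):
    """Nudge difficulty (0.0-1.0) to keep the player in a moderate-arousal band."""
    a = arousal(heart_rate_bpm, skin_conductance_us)
    if a > 0.7:        # player stressed: ease off
        return max(0.0, current - 0.1)
    if a < 0.3:        # player under-stimulated: ramp up
        return min(1.0, current + 0.1)
    return current     # in the sweet spot: leave it alone
```

Called once per game tick against live sensor readings, a loop like this would respond dynamically to the player’s emotional state rather than to on-screen behaviour alone.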
Ambinder proclaims, “Right now, for every game on the market, you map player intent to on-screen behaviour. If you think about it, we’re missing a whole other axis of player experience. What is a player’s emotional state while they’re playing the game? How are they enjoying the game? How challenged are they by the game? How frustrated are they by the game? We’re very curious about what we can do when we take that data and feed it into a game, and have that game respond dynamically to a player’s emotional state.”
Whether it be improved conditions via sensors and controls, improved productivity via knowledge graphs and timely information, or improved gameplay and entertainment via biofeedback, these are just some of the recent examples of how responsive design is becoming much richer, more engaging, useful, meaningful, and powerful.