ambient autonomous interfaces
The original intent of technology was to augment human intellect. Artificial intelligence represents one of the biggest leaps forward in this regard. However, as technology becomes more human, what will augment technology?
I am convinced that “unartificial” intelligence, which I consider to be intelligence gained from a device interacting with the real world, is the answer. Just as I predicted the spoken future a decade ago, and more recently in my 2018 predictions for Voicebot.ai, I now predict the rise of autonomous interfaces powered by sensors. I’d like to take this opportunity to elaborate further on this idea.
There is nothing artificial about a physical object, such as a vehicle or person, moving through space and time in the real world. An ambient autonomous interface is a concept and approach to personalization that seeks to leverage device sensors as external ‘cues’ to power live interfaces that morph automatically to fit changes in physical context. Because ‘ambient autonomous interface’ is a mouthful, I am simply calling this concept Cue.
Sensor technologies in smartphones (e.g. accelerometer, gyroscope, proximity, ambient light, microphone, biometrics) have advanced dramatically over the past 10 years, yet developers have failed to exploit their potential to power truly personal app experiences.
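To make this concrete: reading one of these sensors takes only a few lines of code today. Below is a minimal Swift sketch, assuming an iOS target, that samples the accelerometer with Core Motion; the 30 Hz update rate is illustrative, not a recommendation.

```swift
import CoreMotion

// Minimal sketch: sampling the accelerometer with Core Motion.
// The 30 Hz rate is illustrative; pick what the experience needs.
let motionManager = CMMotionManager()

func startSensing() {
    guard motionManager.isAccelerometerAvailable else { return }
    motionManager.accelerometerUpdateInterval = 1.0 / 30.0
    motionManager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let data = data else { return }
        // Raw acceleration in g-units per axis; a real app would feed
        // this into a gesture or activity classifier rather than print it.
        print("x: \(data.acceleration.x), y: \(data.acceleration.y), z: \(data.acceleration.z)")
    }
}
```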
In today’s world, sensors power self-driving cars, automatic heating and cooling systems, and even escalators and traffic signals, all exerting some degree of autonomy.
“Tesla’s self-driving cars, for example, feed data into a central unit that periodically updates the decentralized software.” - Competing in the Age of Artificial Intelligence
Yet smartphone apps remain quite dumb in this regard, and product designers continue to crank out rigid, fixed screen states for an increasingly dynamic and stateless world.
Personalization was supposed to solve this problem by tailoring app content to the perceived needs of the individual user. Until now, however, personalization has centered on tracking human activity inside a digital product and building algorithms that attempt to infer and anticipate a user’s future wants and needs, often manifesting as a list of suggested links to click and read.
An ambient autonomous interface is a highly social interface. It is in constant conversation with its surrounding environment through ‘dark interactions’.
“Dark interactions can help designers create a world in which computers serve up well-timed experiences.” - Dark Interactions Are Invading Our Lives. Where’s The Off Button?
This type of ‘ambient’ experience quietly follows alongside human activity in the real world and adapts itself automatically to each changing physical context. The table below is a sample of over 100 moments throughout the day that could trigger context switching in an autonomous interface. It maps a day in the life of an office worker who gets dressed, commutes to work, attends meetings, goes out for lunch, returns to the office, heads home, and ends the day with an evening run.
These concepts represent a true companion ‘app’ experience and will usher in the next great wave of personalization. The user experience is autonomous in that it leverages device sensors as inputs to infer and decide between a range of designed intents and actions, constantly morphing to fit each changing moment of context. Inputs that can trigger actions include Wi-Fi connection states, fluctuations in bandwidth, entering and exiting buildings, boarding vehicles or trains, gestures and body motions, facial expressions, and activities such as driving, walking, running, and cycling.
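As a sketch of how such an intent mapping might be wired up, the Swift snippet below uses Core Motion’s activity classifier to switch between interface contexts. CMMotionActivityManager is real API; the ContextState enum and applyContext function are hypothetical names invented for this example.

```swift
import CoreMotion

// Hypothetical mapping from motion-activity classifications to
// interface "contexts". ContextState and applyContext(_:) are
// illustrative names, not part of any framework.
enum ContextState { case stationary, walking, running, cycling, driving, unknown }

let activityManager = CMMotionActivityManager()

func startContextUpdates() {
    guard CMMotionActivityManager.isActivityAvailable() else { return }
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        let state: ContextState
        if activity.automotive { state = .driving }
        else if activity.cycling { state = .cycling }
        else if activity.running { state = .running }
        else if activity.walking { state = .walking }
        else if activity.stationary { state = .stationary }
        else { state = .unknown }
        applyContext(state)
    }
}

func applyContext(_ state: ContextState) {
    // A real app would morph its screen state here; this just logs the switch.
    print("Context switched to \(state)")
}
```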
The realization of what’s possible with this type of autonomous interface surfaced in my mind a couple of years ago, when I co-founded a startup in Qatar that leverages proximity-based sensors to perform autonomous searches for nearby restaurants. Each time a person’s physical context changes by crossing a beacon or polygon geofence, in-app actions fire that automatically change the screen state on iPhone and Apple Watch.
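The trigger mechanics can be approximated with Core Location region monitoring, as in the hedged sketch below. Note that Core Location natively monitors circular regions rather than arbitrary polygons, and the identifier, coordinates, and radius here are placeholders, not values from the actual product.

```swift
import CoreLocation

// Sketch of geofence-triggered context switching with Core Location.
// The identifier, coordinates, and radius are made-up placeholders.
class GeofenceWatcher: NSObject, CLLocationManagerDelegate {
    let locationManager = CLLocationManager()

    func startMonitoring() {
        locationManager.delegate = self
        locationManager.requestAlwaysAuthorization() // region monitoring needs Always
        let center = CLLocationCoordinate2D(latitude: 25.2854, longitude: 51.5310)
        let region = CLCircularRegion(center: center, radius: 100, identifier: "restaurant-district")
        region.notifyOnEntry = true
        region.notifyOnExit = true
        locationManager.startMonitoring(for: region)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        // Crossing the geofence becomes an in-app action, e.g. kicking off
        // a nearby-restaurant search and updating the screen state.
        print("Entered \(region.identifier): trigger autonomous search")
    }

    func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
        print("Exited \(region.identifier): revert screen state")
    }
}
```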
In the coming months, I plan to carry these learnings forward and push the boundaries of what’s possible with an autonomous interface powered by an even broader range of sensors.