Interaction zero

By Jon Seneger, Vadim Dubrov, Aric Dromi

– The future of interactions has no interaction!

“Can zero interaction models support both states of user certainty and uncertainty?”

• Certainty: Command/Event/Place/Time driven interaction. (Need/Want)

• Uncertainty: Desire/Exploratory/Impulse driven interaction. (Wish)

Many user experience and interaction design discussions are still focused on expanding established, but outdated, interaction models. Ever since the first dream of a calculus machine, there has been a desire for a verbal interaction model. Yet the efforts of the last 30 years have been mostly focused on the “Glass Window”, enhancing the looking-glass effect. Initially, this window wasn’t portable, and the content existing behind the Glass could only be created, activated and manipulated by physical interaction with mechanical devices like a mouse, keyboard or joystick. Once the “Window” was made portable, and the Glass touchable, the digital natives’ primary access to the digital world became tapping on the glass.

Gyroscopes and accelerometers, providing six operational axes, made these devices more sensitive and responsive to alternative interactions. By adding a compass and GPS, the devices became “aware” of our location and extended our old event-driven interaction model into the time/location/information space. It was quite a natural progression for this one-way Glass to become more “transparent” and transform into wearables, capable of sensing information about the states of our physical bodies, thus accelerating the fusion of our physical and digital selves.

This inevitably leads to the transplanting and outsourcing of our cognitive functions, seen by some as technological extension and by others as amputation: an inevitable process of us merging with technology, resulting in an intersection of shared functionality close to that of conjoined twins.

Moving the Glass close to our eyes and tapping into these added sensors, we started to immerse ourselves (at least visually and aurally) within the digital world of VR, or to create a ghostly mixed reality in AR. We remain preoccupied with recreating simple geometry, physics and natural interaction models, hypnotized by the idea of being close to making this world outside the Window similar to our physical one. We constantly strive for near-real virtual reality, adding haptic feedback to amplify the sensation of physical presence. Transformable environments with rapidly changing context and the morphing abilities of virtual space allow us to explore different forms of interaction, but we still can’t break through the boundaries of our old event-driven interaction model: skillful hands, magic wands, and magic words.

Yet with all this visually driven interaction, one can say that ever since the first dream of a calculus machine there has been a desire for a purely verbal interaction model. The punch card, the keyboard and even the graphical user interface with mice, touch surfaces and air/space interaction are intermediary solutions on the way to verbal interaction. One can say the hindrance to achieving the needed two-way communication via verbal interaction is one of machine intellect. The processing power, and the synaptic connections if you will (machine learning), required for meaningful two-way communication with a program or device is still at the stage of a conversation with a toddler. The initial computer-aided vocal systems were ones that talked at you, reading text aloud. To this was added simple recording and learning of your voice, converting it to writing, which over time built up various solutions and systems for simple, vocal command-driven interactions. This command-based oral communication fell short on usability, as most users fell into the trap of expecting the system to work on the first or second try. These early systems were a bit like asking your dog to fetch the paper without putting any effort into the training it needs to understand what you mean by “paper” and “fetch”.
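
To make that limitation concrete, here is a minimal sketch of the kind of command-driven matching those early systems relied on. The command table and phrases are illustrative placeholders, not any particular product’s vocabulary or API: an utterance either matches a known command verbatim or the system fails outright, with no notion of intent or context.

```python
# A minimal sketch of an early command-driven voice interface:
# the recognised utterance must match a known command verbatim,
# otherwise the system simply gives up. Commands and phrases are
# illustrative placeholders, not a real product's vocabulary.

COMMANDS = {
    "call home": lambda: print("Dialing home..."),
    "play music": lambda: print("Playing music..."),
    "set alarm": lambda: print("Alarm set."),
}

def handle_utterance(utterance: str) -> None:
    action = COMMANDS.get(utterance.strip().lower())
    if action:
        action()
    else:
        # No notion of intent, paraphrase or context: anything outside
        # the command table fails, just like asking the untrained dog
        # to "fetch the paper".
        print("Sorry, I didn't understand that.")

handle_utterance("play music")          # exact match: works
handle_utterance("put some music on")   # same intent, different words: fails
```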

With the emergence of the voice-aided helper Apple Siri, command-driven interaction got one step closer to a “working” model. Even though the communication was very command-driven and limited to requests, the ordinary user largely accepted its shortcomings. This was mainly due to the introduction of some sense of intellect (the interaction added some machine learning); the communication had some empathy, although randomly dispersed and not always relevant. Later systems like Google Now (on Android/TV/Mobile/Chrome and Windows platforms), Amazon Echo (Alexa) and Microsoft Cortana (running on Windows 10/Xbox/Microsoft Mobile/iOS and Android) are all approaching the desired interaction model of a digital companion. One can say that these solutions are at an impasse today: we want to talk to our devices like humans and have them respond like humans, but they still act like toddlers being directed by demanding grown-up voices.

“The services understand words, but they don’t understand you!”

Defining the ‘perfect’ universal, out-of-the-box interaction model might be impossible, as any model only applies to a given use case. There are just our needs, wants and content; within that context, we can choose how to express our intention. We expect a predictable, reliable yet unpredictable and spontaneous response from our fellow humans, and interaction is often an effort from both parties. One can say that the success of great experiences depends on the ability to minimise friction at the various touch points between the user and the solution, the execution of a wish, want or need.

When a system can determine uncertainty and respond with a suggestive acknowledgment, we might be close to what is expected of a companion, and to some extent the introduction of a “companion” can be seen as the step before total understanding, or pre-cognitive interaction (no interaction, just understanding). But first, in the coming years, we will undoubtedly see the rise of companies resembling Douglas Adams’ fantasy of the Sirius Cybernetics Corporation (HHGTG), offering standalone models for domestic appliances. The new interactions will undoubtedly be based on current models, with the “if we can solve it for tomorrow, today, it might be good enough” approach, and every company offering rival solutions that will live on various platforms. It’s easy to imagine the kitchen being filled with voices from numerous bots, competing for attention.
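
One way to picture the certainty/uncertainty split from the bullets at the top is the rough sketch below: score a few candidate intents, act directly when one clearly wins, and fall back to a suggestive acknowledgment when confidence is spread out. The intents, keyword scoring and threshold are invented for illustration, not any shipping assistant’s logic.

```python
# A rough sketch of "suggestive acknowledgment": when the system cannot
# confidently resolve the user's intent (the "uncertainty" state above),
# it offers suggestions instead of executing a command. The intents,
# keyword scoring and threshold are illustrative assumptions only.

from typing import Dict

INTENT_KEYWORDS: Dict[str, set] = {
    "navigate_home": {"home", "drive", "route"},
    "play_music": {"music", "play", "song"},
    "order_food": {"hungry", "food", "order", "eat"},
}

CONFIDENCE_THRESHOLD = 0.6  # arbitrary cut-off between certainty and uncertainty

def respond(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Score each intent by keyword overlap, normalised to 0..1.
    scores = {
        intent: len(words & keywords) / len(keywords)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])

    if best_score >= CONFIDENCE_THRESHOLD:
        # Certainty: command/event-driven, act directly.
        return f"Executing: {best_intent}"
    # Uncertainty: desire/impulse-driven, acknowledge and suggest.
    ranked = sorted(scores, key=scores.get, reverse=True)[:2]
    return "I'm not sure what you need. Would you like me to " + " or ".join(
        intent.replace("_", " ") for intent in ranked
    ) + "?"

print(respond("play that song again"))   # confident enough: executes
print(respond("I feel like something"))  # ambiguous: suggests options
```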

The present is not sustainable!

Taking one step away from the immediate current trends and looking into the near future, technology will enable us to hack our bodies entirely: improving our code (or hacking our brain), performing hardware and software upgrades, exporting our cognitive functions and consolidating our very existence (outsourcing and enhancing our intellect). It won’t be merely an extension of ourselves; it will define us as humans. That is, unless we end up with a handful of “gods” and the rest of us, although that is beside the point.

While the current work of our brain is done independently (with crude ways of gathering information), nanotechnology, biotechnology, information technology and cognitive science will join together (NBIC) into one field that will undoubtedly shape the rise of the one trillion identities, with a hive/cluster source of information, giving us the opportunity to bridge the human-machine connection.

This places us, humanity, as an integral part of the machine, while at the same time letting the machine become an extension of our natural interaction model, enabling abilities like “telekinesis” and “telepathy” with other connected entities and items.

In the coming years, we will use sensors not only to augment ourselves but also to augment technology (AI), tapping into every aspect of our perception, quantifying our self-awareness and uploading it to the global-human-cognitive-cloud (and maybe not only the cognitive).

Shared humanity is next, with the quantified self?

While the integration and enhancement of sensors will undoubtedly happen, the current state of ecosystems cannot sustain it. The road forward has to pave the way for a baseline integration; simply put, we are heading down a path of too many ecosystems. With every manufacturer, content and service provider wanting to capitalize on the growing decentralized delivery models, we are falling into a quagmire of ecosystems demanding too much scattered attention, maintenance and payment from us.

Moving into a world where everything is connected and everything has an ID (yes, including our brain) will force us to rethink the way we interact with content. We are not only consumers but also content providers, and a new model of value exchange will take shape: “We are now part of the information.” Most users are satisfied with the current value exchange if the value is high enough or fills an immediate need for them; the danger providers face in the coming years, with a dispersed ecosystem, is a failure to consolidate.

With the emergence and growing popularity of solutions like IFTTT, it is evident that consumers of services want to group their services for personal and efficiency needs. By integrating this human-assisted machine learning (yes, it is a crude form of implementation), companies like IFTTT can over time create simple models that accommodate users’ wishes, the unforeseen desires. Eventually, being part of the information (the Internet of my IDs) can be a model that dissolves the need for multiple ecosystems and positions our intention, rather than the action, at the centre of a context ecology.
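
For readers unfamiliar with the pattern, here is a simplified sketch of the trigger-action (“if this then that”) model behind services like IFTTT. The rule format, triggers and actions are illustrative assumptions, not IFTTT’s actual API: each rule pairs a condition over an incoming event with an action, and a dispatcher runs every rule whose trigger matches.

```python
# A simplified sketch of the trigger-action ("if this then that") rule
# model behind services like IFTTT. The triggers, actions and rule
# format are illustrative; this is not IFTTT's actual API.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Rule:
    name: str
    trigger: Callable[[Dict[str, Any]], bool]   # predicate over an incoming event
    action: Callable[[Dict[str, Any]], None]    # side effect to run when it fires

rules: List[Rule] = [
    Rule(
        name="arriving home turns on the lights",
        trigger=lambda e: e.get("type") == "location" and e.get("place") == "home",
        action=lambda e: print("Lights on."),
    ),
    Rule(
        name="rain forecast sends a reminder",
        trigger=lambda e: e.get("type") == "weather" and e.get("forecast") == "rain",
        action=lambda e: print("Reminder: take an umbrella."),
    ),
]

def dispatch(event: Dict[str, Any]) -> None:
    """Run every rule whose trigger matches the incoming event."""
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

dispatch({"type": "location", "place": "home"})
dispatch({"type": "weather", "forecast": "rain"})
```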

Uploading ourselves, our sensory data, into a personal digital space will mostly create a state of mirrored intelligence, our brain in the cloud, which will lead to the birth of a new identity model: the digital alter ego? That will enable a whole set of new, previously unthought-of interactions between your digital and biological selves.