Many decisions will be made on the basis of intuition. Although this is only the beginning of the AI revolution, all of us, including the developers of artificial intelligence themselves, struggle to understand it.
Understanding Artificial Intelligence
Explainable AI aims to explain to the end user how AI works, or more precisely, how the model makes decisions. In short, the idea is to answer the question of what happens inside the AI black box.
As part of the ongoing debate, a number of terms essential to understanding how humans perceive AI processes are currently being defined. These are:
- Interpretability: a situation where the actions of the model can be described in terms that people can understand.
- Explainability: the interface between people and the system, which makes the system’s decisions understandable to us.
- Transparency: occurs when a person can understand the function of the model without the need for explanation.
- Non-transparency: the opposite of transparency; interpreting such a model requires explanation.
- Comprehensibility: the ability of a learning algorithm to represent its learned knowledge in a way that is understandable to humans.
Read more about Explainable AI for UX
It is easy to see that each of these terms refers to an attempt to understand the model and describe how it works.
Where can we look for a solution? The discussion is about communicating with AI, so we should focus on creating intelligent human-AI interfaces: in effect, building a platform capable of understanding the actions of artificial intelligence, giving it the space to express itself and giving us the opportunity for accurate interpretation.
This is obviously a challenge for AI specialists, researchers, and interaction designers. The problem is that the recipients of these processes are not just mathematicians or specialists, but whole societies: children, adults, people less familiar with technology, people with disabilities.
A very diverse group of users with different experiences, preferences, and expectations. In this situation, in addition to the mysterious nature of AI, the complexity of human nature begins to pose a challenge.
Everyday life with AI
Despite the surge in generative AI’s popularity in 2023, we still don’t have enough research to clearly define which heuristics govern human interaction with large language models (LLMs).
One of the few studies in this area was conducted by Jakob Nielsen’s team, who interviewed users while they were using ChatGPT. The results, published by the Nielsen Norman Group, suggest that our main technique for working with generative AI is called “apple picking”: a pattern of interaction in which we start a dialogue with the AI with a general question and then, depending on how adequate the first answer is, follow up with more specific questions.
The effectiveness of this pattern leaves much to be desired, and the results are not always satisfactory. Asking takes time and can lead to considerable frustration after successive failed attempts. In addition, the model sometimes hallucinates or declines to respond because of its service rules.
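To make the pattern concrete, here is a minimal sketch of “apple picking” as a looping dialogue, written against the OpenAI Python client. The model name and the example questions are illustrative assumptions, not part of the NN/g study.

```python
# A minimal sketch of the "apple picking" pattern: start broad, then follow up
# with narrower questions in the same conversation. Assumes the `openai`
# package and an OPENAI_API_KEY in the environment; the model name and the
# questions are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()
history = []

def ask(question: str) -> str:
    """Send one follow-up, keeping the whole dialogue as context."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any chat model works
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Start with a general question, then narrow down based on each answer.
print(ask("What should I consider when choosing a heat pump?"))
print(ask("Which of those factors matter most for an old, poorly insulated house?"))
print(ask("Roughly what running costs should I expect in that case?"))
```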
Another way to get an answer from ChatGPT is to create a structured prompt. In a sample construction, the first part defines the role the AI is to play, the second the context in which the question is asked, the third poses the actual queries, and the final part defines the expected form of the answer.
I encourage you to see how much more accurate an answer you can get using the second method.
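As an illustration only, the four-part structure described above can be captured in a small helper; the template wording and the example values are my assumptions, not a canonical recipe.

```python
# A hedged sketch of a four-part structured prompt:
# role, context, the actual queries, and the expected form of the answer.
# All texts are invented placeholders.
def build_prompt(role: str, context: str, queries: list[str], answer_form: str) -> str:
    questions = "\n".join(f"- {q}" for q in queries)
    return (
        f"You are {role}.\n\n"
        f"Context: {context}\n\n"
        f"Questions:\n{questions}\n\n"
        f"Answer format: {answer_form}"
    )

prompt = build_prompt(
    role="an experienced energy consultant",
    context="I own a 1970s single-family house with poor insulation.",
    queries=["Is a heat pump a sensible choice for me?",
             "What alternatives should I compare it with?"],
    answer_form="a short bulleted comparison with estimated costs",
)
print(prompt)
```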
Of course, working with ChatGPT is exciting and you can be overwhelmed by the model’s intelligence at first, but for interaction designers this is not enough.
Context changes everything
Three variables are key in the interface evaluation process:
- The quality of the user experience,
- The intelligibility of the interaction,
- The effectiveness of the interaction.
Interaction through the ChatGPT interface is imperfect in each of these respects. It is also worth mentioning an unusual problem that affects some users: if we are unable to type efficiently, the whole interaction falls apart like a house of cards.
Find out more about AI testing 👉
Let’s take a moment to think about interacting with voice assistants. The latest developments allow us to speak freely using the natural interface that is our voice.
However, it only takes a change in the context in which we interact to create other problems. For example, using voice in public places is often not possible, and working with confidential data is definitely not recommended.
AI as a game changer
A team from New York University Dubai is working on an interesting project to help people with Alzheimer’s disease in their daily lives. The project is called MemoryCompanion.
You can find out more about the MemoryCompanion project here
In short, researchers at New York University Dubai are working on a solution in which the Large Language Model (after a suitably structured learning process) is personalised to the patient’s needs and provides prompts in specific disease-related situations.
As it works, the model gets to know the user more and more closely; through the hyper-personalisation it creates, it becomes a companion that alleviates feelings of loneliness and social isolation.
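The researchers have not published their implementation here, so purely as an illustration: a patient profile could, for example, be folded into the model’s system prompt. Every field and text below is hypothetical, not the MemoryCompanion design.

```python
# Purely illustrative: one way a patient profile *could* be folded into an
# LLM system prompt to personalise its reminders. This is not the
# MemoryCompanion implementation; all fields and texts are hypothetical.
from dataclasses import dataclass

@dataclass
class PatientProfile:
    name: str
    caregivers: list[str]
    daily_routine: list[str]     # e.g. "breakfast at 8:00"
    sensitive_topics: list[str]  # things the assistant should avoid

def system_prompt(p: PatientProfile) -> str:
    return (
        f"You are a gentle daily companion for {p.name}, who lives with "
        f"Alzheimer's disease. Their caregivers are {', '.join(p.caregivers)}. "
        f"Known routine: {'; '.join(p.daily_routine)}. "
        f"Speak in short, calm sentences, repeat key facts patiently, and "
        f"never raise these topics: {', '.join(p.sensitive_topics)}."
    )

profile = PatientProfile(
    name="Maria",
    caregivers=["her daughter Anna"],
    daily_routine=["breakfast at 8:00", "a walk after lunch"],
    sensitive_topics=["her late husband"],
)
print(system_prompt(profile))
```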
The challenge the developers have taken on is extremely complex. On the one hand, it requires an understanding of how people with dementia function, how the disease progresses and the role of family and carers. On the other hand, it is about building a solution that uses different forms of interaction with artificial intelligence.
In this case, the ethical context of the AI’s interaction with the patient and the security of the sensitive medical data exchanged during the interaction are extremely important.
Can we imagine the interface of such a product? How would the patient interact with the assistant?
Should the AI speak in the voice of the loved one, retrieve images and analyse the routes taken by the patient? Should the interface be the same for both patient and caregiver? These are just some of the challenges faced by the developers of MemoryCompanion.
We already know that in situations like this, the user needs more than a simple GUI. What is needed is an Intelligent User Interface (IUI) that can adapt the optimal form of interaction to the context.
What are intelligent interfaces?
The term Intelligent User Interface (IUI) has been coined to describe the integration of artificial intelligence with the User-Centred Design (UCD) approach. An IUI is an interface that attempts to adapt to the nature of the interaction, the context, and the profile of the user involved.
Smart interfaces can be described as being designed to support real-world collaboration between humans and AI. They are, in a sense, ‘situation-aware’: responsive to the context in which they are used.
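As a thought experiment, that situation awareness can be reduced to a few lines of logic: choose the form of interaction from the context of use. The context fields and rules below are invented assumptions, not a production heuristic.

```python
# A minimal sketch of a 'situation-aware' interface: picking the interaction
# modality from the context of use. Fields and rules are illustrative only.
from dataclasses import dataclass

@dataclass
class Context:
    in_public: bool
    hands_free: bool        # e.g. the user is driving
    data_confidential: bool
    can_type: bool

def choose_modality(ctx: Context) -> str:
    if ctx.hands_free and not ctx.in_public:
        return "voice"
    if ctx.data_confidential or ctx.in_public:
        return "text" if ctx.can_type else "touch"
    return "text" if ctx.can_type else "voice"

# While driving alone, voice wins; on a crowded train with bank data, text does.
print(choose_modality(Context(in_public=False, hands_free=True,
                              data_confidential=False, can_type=True)))
print(choose_modality(Context(in_public=True, hands_free=False,
                              data_confidential=True, can_type=True)))
```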
One of the most famous intelligent interfaces that has revolutionised the way we drive is… navigation.
Can any of us imagine travelling without it? What is it about it that makes us instinctively reach for it when we get into the car? That most of us can use it with ease, even though none of us are master cartographers?
The intuitive, intelligent interface, powered by artificial intelligence, is responsible for the high usability and efficiency of car navigation.
- The application uses a Graphical User Interface (GUI) to display the map and our position.
- The GUI is supported by animations to show the variability of the data in real time.
- We can navigate the map by touch, voice and text.
It is the combination of these three forms of interaction in a single interface that makes the experience of communicating with AI comprehensible and useful.
Describing data, time, distance or even space with the right parameters allows us to understand fully and accurately what the AI is communicating.
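One way to picture this combination: whatever channel the input arrives through (touch, voice, or text), it can be normalised into a single internal intent that the map layer understands. The event names and intent shape below are assumptions for illustration only.

```python
# A sketch of how three input forms of a navigation app might converge on one
# internal intent. Names and shapes are invented, not any vendor's API.
from dataclasses import dataclass

@dataclass
class NavIntent:
    action: str   # e.g. "set_destination", "zoom"
    payload: str

def from_touch(gesture: str) -> NavIntent:
    return NavIntent("zoom", gesture)  # e.g. a pinch gesture becomes a zoom

def from_voice(utterance: str) -> NavIntent:
    # A real system would do speech-to-text plus proper intent parsing here.
    return NavIntent("set_destination", utterance.removeprefix("navigate to "))

def from_text(query: str) -> NavIntent:
    return NavIntent("set_destination", query)

# Whatever the channel, the map layer receives the same intent type.
for intent in (from_touch("pinch_out"),
               from_voice("navigate to Krakow"),
               from_text("Krakow")):
    print(intent)
```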
Designing such a polished interface requires a psychological understanding of human attention, a series of preliminary studies monitoring driver behaviour, and the testing of prototypes in the driving context: with different driver types, at different times of day, and in different cars.
Find out how EDISONDA investigated car navigation 👉
Other examples of IUIs are all kinds of dashboards that monitor actions taken by AI algorithms running in the background.
These work particularly well where rapid decision-making or 24/7 process monitoring is required.
Awareness of possibilities
Intelligent interfaces are making great strides in the financial industry. In the investment sector, artificial intelligence algorithms have long been used to respond to market changes. This usually works by the AI algorithm rebalancing our investment portfolio based on knowledge of our risk tolerance and financial capacity.
The intelligence of the interface lies in communicating data in real time and selecting forms of communication to match the context. The interface informs us of the decisions made by the AI and allows us to take immediate control of the investment process. Transparency is achieved by telling the client on what basis the algorithm makes its decisions, as well as by providing quick access to information on the application’s investment dashboard.
The interface is contextual and ‘understands the situation’ in the sense that when the market reacts quickly, the AI is able to keep the user informed of its actions via app notifications. It can also ask for confirmation of the execution of a purchase or sale transaction.
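A hedged sketch of that interaction loop follows; the thresholds, the portfolio model, and the messages are all invented for illustration. The algorithm detects drift from the target allocation, and the interface asks the user to confirm before anything is executed.

```python
# The AI proposes a rebalancing trade; the interface notifies the user and
# asks for confirmation before executing. All numbers are illustrative.
TARGET = {"stocks": 0.60, "bonds": 0.40}  # from the user's risk profile
DRIFT_LIMIT = 0.05                        # rebalance when 5 pp off target

def check_rebalance(holdings: dict[str, float]) -> list[str]:
    total = sum(holdings.values())
    proposals = []
    for asset, target_share in TARGET.items():
        share = holdings[asset] / total
        if abs(share - target_share) > DRIFT_LIMIT:
            verb = "sell" if share > target_share else "buy"
            proposals.append(f"{verb} {asset}: {share:.0%} held vs {target_share:.0%} target")
    return proposals

portfolio = {"stocks": 7_000.0, "bonds": 3_000.0}
for proposal in check_rebalance(portfolio):
    # In the real interface this would be a push notification explaining
    # *why* the algorithm proposes the trade, with a confirm/reject action.
    answer = input(f"AI proposes: {proposal}. Execute? [y/n] ")
    print("Executed." if answer.lower() == "y" else "Left for manual review.")
```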
Intelligent interfaces are also applicable to specialised tools. The key is to ensure perfect cooperation between the automatic mode, controlled by artificial intelligence algorithms, and the manual mode, in which a human makes decisions. A good example of this type of solution is the specialised systems that control snowmaking on ski slopes.
We all know how fickle the weather can be and how it can ruin the plans of ski event organisers. This is where intelligent interfaces come to the rescue. Depending on your needs, they can control the entire snowmaking process: from managing the performance of the machines, operating the snow cannons, and selecting the snow quality, to taking current weather conditions into account. A specially designed interface gives the user access to real-time information on the operation of the system. The system can also be managed from anywhere in the world!
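In miniature, the cooperation between the automatic and manual modes could look like the sketch below; the weather rule and the device API are invented assumptions, not Snowmatic’s actual logic.

```python
# A minimal sketch of auto/manual mode cooperation in a snowmaking
# controller. Rules and the API are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Weather:
    temp_c: float
    humidity: float  # 0..1

class SnowCannonController:
    def __init__(self) -> None:
        self.mode = "auto"
        self.running = False

    def set_manual(self, run: bool) -> None:
        """A human override always wins over the algorithm."""
        self.mode = "manual"
        self.running = run

    def tick(self, w: Weather) -> None:
        if self.mode != "auto":
            return
        # Rule of thumb for the sketch: cold and dry enough to make snow.
        self.running = w.temp_c <= -2.0 and w.humidity <= 0.8

ctrl = SnowCannonController()
ctrl.tick(Weather(temp_c=-5.0, humidity=0.6))
print(ctrl.running)   # True: conditions are right, auto mode fires the cannons
ctrl.set_manual(run=False)
ctrl.tick(Weather(temp_c=-5.0, humidity=0.6))
print(ctrl.running)   # False: the operator's manual decision is respected
```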
Read more about the Snowmatic system designed by EDISONDA
These are just a few examples of the interfaces that already exist around us, created before the so-called Gen AI era.
Now that artificial intelligence is inexorably entering our lives, intelligent interfaces are still a subconscious user need!
If we are to make full, informed, and effective use of the opportunities offered by AI, we need to start by communicating with it through appropriate means. Interfaces need to be understandable, accessible and should not require prior training to use. We need to create a language for communicating with AI that is resistant to digital exclusion.
What is the design process for intelligent user interfaces?
The process starts with a thorough understanding of the user’s context and the problems they face in a given situation. Understanding the context is very important because it determines the type of AI we choose and what it will be used for in the product.
What kind of AI will we use for car navigation? Will an LLM support it as well as it would a browser for the visually impaired?
When deciding to create an AI-enabled product, it is important to be patient and do your research. There are many aspects of working with AI that are unlikely to be uncovered without a thorough discovery phase; skip it, and the development of the intelligent product is sure to hold unpleasant surprises.
It is the context and the nature of the task that determines whether we use a voice interface, a text interface, speech-to-text conversion, or facial recognition. When working with artificial intelligence, we need to confirm our assumptions before we proceed.
We should never do the opposite and try to make the situation fit the technology. This can lead to a number of problems and unnecessary costs.
Read more about the AI Discovery 👉
The next step is the choice of technology: the scope of its application, the assessment of its complexity, and the time required for a proof of concept (POC). These decisions are made based on hard findings from the discovery phase and a carefully mapped user journey (Customer Journey AI).
We need to be aware that at this stage of AI development, we are still pioneers in working with artificial intelligence.
Most of the cases that project teams work on are being built for the first time, and there are few examples of how the technology can be applied to a specific case.
Prototyping is therefore a necessary step in artificial intelligence projects. The POC stage makes it possible to test, at relatively low cost, the extent to which the chosen technology supports the performance of the task.
The next important element is the design of the user interface. Depending on the nature of the problem being solved, this may be as simple as a screen with notifications.
However, for complex products that can be used in different contexts, it is the interface that will have a critical impact on how the product is used and how effectively the model can be worked with. It will be responsible for the communication between the user and the solution built on top of the model.
Both the model prototype and the interface should be tested with users during the POC phase. This is the only way to ensure that the AI adapts appropriately to the context of use and is personalised to the user’s needs.
Going back to the example of an assistant for people with dementia, prototyping with users may reveal that text or voice prompts alone are not sufficient because the user cannot remember the person’s appearance.
With the knowledge gained from testing, the model’s intelligence can be increased, and the interface can be better adapted.
Read our report – Rocket Ideas: Healthcare
The future is written in interaction
The process of developing AI-powered products is a fascinating and highly creative one. The more we know about our users, their problems, and the technology itself, the easier it is to make the right decisions. The more we test, the clearer the interfaces of our products become and the less the AI needs to be interpreted.
The coming years will see a leap in the development of intelligent user interfaces powered by artificial intelligence. Within five years, interfaces as we know them will be a thing of the past, replaced by interfaces of the future that adapt to the context of use and query.
Everyday tasks and complex processes will become dramatically more efficient, and humans will regain time for self-development and rest. Before this happens, however, developers of intelligent products should start collaborating with designers and researchers of human-AI interaction as soon as possible.
Business funders, meanwhile, need to be aware of the risk: without that collaboration, the sometimes huge sums they invest may produce a product unsuited to the human-AI relationship, one that fails to meet audience expectations and buries business objectives.
As we create a new category of intelligent digital products, let us remember that it is our responsibility to make them understandable, safe, and accessible.
Let’s collaborate, experiment, and share knowledge from AI research, to reduce the risk of failure and create products that are not only useful but also rewarding to use and that ensure user equality.