The core idea behind a graphical user interface has remained the same since its inception: put the user in a position where they can comfortably navigate the available functionality and pick out the bits relevant to the task at hand.
All of this happens in a two-dimensional space over time, with information, buttons and other controls spread across different screens. Even VR, for all its spatial computing, sticks to the same principles.
Human interface guidelines offer different design patterns for different problems. Some help us distribute information and features across an app to avoid overwhelming the user and to present UI more contextually, but also to break functionality up into manageable, self-contained pieces of code and UI fragments that can be maintained, tested and evolved over time. Examples include:
However, as the feature count grows, it gets harder to compose these fragments and to find places from which this UI can be accessed.
As a product becomes more complex, the chance of yet another feature being relevant to a user diminishes. The end result: potential feature bloat.
Product managers and UI designers are in the delicate position of having to understand and anticipate how a product is going to be used. It is their responsibility to make difficult decisions about when and how to present features to users, or to deny the introduction of a feature altogether. Whether and how to add features is an entirely separate topic, and one covered well in this interview with Linear’s Head of Product, Nan Yu.
Back to dealing with a richer feature set: there are concepts we can leverage to make interaction with the product more contextual. These include ways to parachute the user into the right place, or to allow a UI fragment to be presented outside the main app, as a widget on your home screen for instance.
The hyperlink, not new but still effective, lets us jump to a specific screen or state of an app. Rooted in the web, it is now widely used by native apps for deep linking. GitHub’s iOS app is a great example, taking you directly to issues, pull requests and so on whenever you open a link on your phone.
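To make that concrete, here is a minimal SwiftUI sketch of deep link handling. This is not how GitHub’s app does it; the `/project/issues/42` URL shape and the `Route` type are made up for illustration. The point is that an incoming link lands the user on the relevant screen instead of the app’s start screen.

```swift
import SwiftUI

// Hypothetical route type: maps a URL such as
// https://example.com/project/issues/42 straight to the matching screen.
enum Route {
    case issue(Int)
    case pullRequest(Int)

    init?(url: URL) {
        // pathComponents for the URL above: ["/", "project", "issues", "42"]
        let parts = url.pathComponents.filter { $0 != "/" }
        guard parts.count >= 3, let number = Int(parts[2]) else { return nil }
        switch parts[1] {
        case "issues": self = .issue(number)
        case "pulls":  self = .pullRequest(number)
        default:       return nil
        }
    }
}

struct ContentView: View {
    @Binding var route: Route?

    var body: some View {
        if let route = route {
            switch route {
            case .issue(let number):       Text("Issue #\(number)")
            case .pullRequest(let number): Text("Pull request #\(number)")
            }
        } else {
            Text("Home")
        }
    }
}

@main
struct DeepLinkedApp: App {
    @State private var route: Route? = nil

    var body: some Scene {
        WindowGroup {
            ContentView(route: $route)
                // Called for universal links and custom URL schemes alike;
                // the user lands on the linked screen, not the start screen.
                .onOpenURL { url in
                    route = Route(url: url)
                }
        }
    }
}
```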
For the opposite scenario, breaking UI out and presenting it elsewhere as a widget, embed or otherwise, various platform-specific technologies exist. They also help make parts of the UI reusable, so the same interface can be presented in different contexts, ultimately keeping the maintenance cost down.
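On Apple platforms, WidgetKit is one such technology: the same kind of small SwiftUI fragment the app uses can be rendered on the home screen, driven by a timeline provider rather than by the app’s navigation. The `OpenIssuesWidget` below is a hypothetical sketch, not a real widget.

```swift
import WidgetKit
import SwiftUI

// A single timeline entry: the data the widget needs at one point in time.
struct OpenIssuesEntry: TimelineEntry {
    let date: Date
    let openIssues: Int
}

// Supplies entries to the system. A real app would read shared state
// written by the main app (e.g. via an app group) instead of a constant.
struct OpenIssuesProvider: TimelineProvider {
    func placeholder(in context: Context) -> OpenIssuesEntry {
        OpenIssuesEntry(date: Date(), openIssues: 3)
    }

    func getSnapshot(in context: Context, completion: @escaping (OpenIssuesEntry) -> Void) {
        completion(placeholder(in: context))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<OpenIssuesEntry>) -> Void) {
        let timeline = Timeline(entries: [placeholder(in: context)], policy: .atEnd)
        completion(timeline)
    }
}

// The same kind of small SwiftUI fragment the app itself could reuse.
struct OpenIssuesWidgetView: View {
    let entry: OpenIssuesEntry

    var body: some View {
        Text("\(entry.openIssues) open issues")
    }
}

// Entry point of the widget extension target.
@main
struct OpenIssuesWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "OpenIssuesWidget", provider: OpenIssuesProvider()) { entry in
            OpenIssuesWidgetView(entry: entry)
        }
        .configurationDisplayName("Open Issues")
        .description("Shows how many issues are currently open.")
    }
}
```

Because the widget’s view is just another SwiftUI fragment, the same code can back an in-app card and the home screen widget, which is exactly where the reuse and maintenance benefit comes from.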
But I can’t help thinking that these concepts are often mitigation strategies for the ever-growing complexity of digital products. Software may be more prone to this than other products, simply because it is easier to add things.
Enter AI. It may not always be able to count correctly, but LLMs are exceptionally good at working out intent, turning unstructured input into structured data and a set of instructions.
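As a rough sketch of what that can look like in code: the `Intent` shape and the JSON below are invented for illustration, and assume the model has been instructed, or constrained via structured output, to reply in exactly this schema.

```swift
import Foundation

// Hypothetical structured intent an LLM could be asked to produce from
// free-form input such as "remind me to reply to the billing issue tomorrow".
struct Intent: Codable {
    enum Action: String, Codable {
        case createReminder
        case openIssue
    }

    let action: Action
    let subject: String
    let due: Date?
}

// In practice this JSON would come back from the model; here it is inlined.
let llmResponse = Data("""
{
    "action": "createReminder",
    "subject": "Reply to the billing issue",
    "due": "2025-06-03T09:00:00Z"
}
""".utf8)

do {
    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601
    let intent = try decoder.decode(Intent.self, from: llmResponse)
    // From here on, ordinary code takes over: no screens were navigated
    // to get to this point.
    print(intent.action, "-", intent.subject)
} catch {
    print("Model reply did not match the expected schema:", error)
}
```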
Knowing the intent means understanding which bits of a product have to be involved to perform a task, and, by extension, knowing what is irrelevant for that task. Based on the model context, the AI can decide which of the available capabilities and tools are required, and plan the order in which they have to be used.
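Continuing the sketch, the model’s view of those capabilities could be a plain list of tool descriptions, similar in spirit to the tool or function definitions most LLM APIs accept. The `search_issues` and `create_reminder` tools here are made up.

```swift
import Foundation

// Hypothetical capability descriptions the app exposes to the model.
struct ToolDescription: Codable {
    let name: String
    let description: String
    let parameters: [String: String]   // parameter name -> short description
}

let availableTools = [
    ToolDescription(
        name: "search_issues",
        description: "Finds issues matching a free-text query.",
        parameters: ["query": "Search terms"]
    ),
    ToolDescription(
        name: "create_reminder",
        description: "Schedules a reminder for the current user.",
        parameters: ["subject": "What to remind about", "due": "ISO 8601 date"]
    ),
]

// Serialised into the model's context, so it can pick the relevant tools
// for a given intent and propose the order in which to call them.
let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
if let data = try? encoder.encode(availableTools),
   let json = String(data: data, encoding: .utf8) {
    print(json)
}
```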
What used to be a question of the user navigating countless screens, popovers and menus turns into performing a sequence of steps. These steps can even come from multiple tools. Each one does a very specific job: ask for or submit input data, talk to an API, convert some data.
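The resulting plan is then just data: an ordered list of small tool calls the app executes one by one, with each result fed back into the context for the next step. Again a hypothetical sketch, reusing the made-up tool names from above.

```swift
// Hypothetical plan a model might propose for a request like
// "remind me about the oldest open billing issue".
struct PlannedStep {
    let tool: String
    let arguments: [String: String]
}

let plan = [
    PlannedStep(tool: "search_issues",   arguments: ["query": "billing"]),
    PlannedStep(tool: "create_reminder", arguments: ["subject": "Oldest open billing issue"]),
]

// Each tool does one narrow job; the app simply executes the steps in order.
func run(_ plan: [PlannedStep]) {
    for step in plan {
        switch step.tool {
        case "search_issues":
            print("Searching issues for:", step.arguments["query"] ?? "")
        case "create_reminder":
            print("Creating reminder:", step.arguments["subject"] ?? "")
        default:
            print("Unknown tool:", step.tool)
        }
    }
}

run(plan)
```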