For years, designers have been coming up with novel ways to handle the increasing complexity of apps and services. How AI might put an end to that.

The core idea behind a graphical user interface has remained the same since its inception: put the user in a position that allows them to comfortably navigate the available functionality to carry out a specific task. Functionality gets presented in a two-dimensional space over time; VR utilises spatial computing but sticks to the same principles.

Human interface guidelines offer different design patterns for different problems. Some help us distribute information and features across an app to avoid overwhelming the user and to present UI more contextually; they also let us break functionality up into manageable, self-contained pieces of code and UI fragments that can be maintained, tested, and evolved over time.

However, once the feature count increases, it gets harder to compose these fragments and to find places from which this UI can be accessed.

As a product becomes more complex, the chance of yet another feature being relevant to a user diminishes. The end result: potential feature bloat.

Product managers and UI designers are in the delicate position of having to understand and anticipate how a product is going to be used. It is their responsibility to make difficult decisions about when and how to present a feature to users, or to deny the introduction of a feature altogether. Whether and how to add features is an entirely separate topic, and one covered well in this interview with Linear’s Head of Product, Nan Yu.

Back to dealing with a richer feature set: there are concepts we can leverage to make interaction with the product more contextual. These include ways to parachute the user into the right place, or to present a UI fragment outside the main app, as a widget on your home screen for instance.

The hyperlink, not new but still effective, allows users to jump to a specific screen or state of an app. Though rooted in the web, hyperlinks are widely used by native apps today for deep linking. GitHub’s iOS app is a great example, taking you directly to issues, pull requests, etc. whenever you open a link on your phone.
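To make the idea concrete, here is a minimal SwiftUI sketch of deep linking. The URL shape, route and screen names are hypothetical, not GitHub’s actual scheme: an incoming link is parsed into a route, and the app jumps straight to the matching screen instead of making the user navigate there.

```swift
import SwiftUI

// Hypothetical route derived from an incoming URL such as
// https://example.com/octocat/issues/7
enum Route {
    case issue(repo: String, number: Int)

    init?(url: URL) {
        // Expecting a path like /<repo>/issues/<number>
        let parts = url.pathComponents.filter { $0 != "/" }
        guard parts.count == 3, parts[1] == "issues", let number = Int(parts[2]) else {
            return nil
        }
        self = .issue(repo: parts[0], number: number)
    }
}

@main
struct DeepLinkApp: App {
    @State private var route: Route?

    var body: some Scene {
        WindowGroup {
            ContentView(route: route)
                .onOpenURL { url in
                    route = Route(url: url)   // jump to a specific screen or state
                }
        }
    }
}

struct ContentView: View {
    let route: Route?

    var body: some View {
        switch route {
        case .issue(let repo, let number)?:
            Text("Issue #\(number) in \(repo)")   // stand-in for the real issue screen
        case nil:
            Text("Home")
        }
    }
}
```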

For the opposite scenario, breaking UI out and presenting it elsewhere as a widget, embed or otherwise, various platform-specific technologies exist to facilitate it. They can also really help to make parts of the UI reusable, so the same interface can be presented in different contexts, ultimately keeping the maintenance cost down.
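On iOS, WidgetKit is one such technology. The sketch below, with purely illustrative names and content, shows how a small, self-contained piece of UI can live on the home screen; the same SwiftUI view could equally be reused inside the main app.

```swift
import WidgetKit
import SwiftUI

// A timeline entry: the snapshot of data the widget renders at a point in time.
struct LatestEntry: TimelineEntry {
    let date: Date
    let title: String
}

// Supplies placeholder, snapshot and timeline data to the system.
struct LatestProvider: TimelineProvider {
    func placeholder(in context: Context) -> LatestEntry {
        LatestEntry(date: Date(), title: "Loading…")
    }
    func getSnapshot(in context: Context, completion: @escaping (LatestEntry) -> Void) {
        completion(LatestEntry(date: Date(), title: "Latest item"))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<LatestEntry>) -> Void) {
        // In a real widget this data would come from the app's shared store.
        let entry = LatestEntry(date: Date(), title: "Latest item")
        completion(Timeline(entries: [entry], policy: .atEnd))
    }
}

// The view itself: a UI fragment that is reusable inside and outside the app.
struct LatestWidgetView: View {
    let entry: LatestEntry
    var body: some View {
        Text(entry.title)
    }
}

@main
struct LatestWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "LatestWidget", provider: LatestProvider()) { entry in
            LatestWidgetView(entry: entry)
        }
        .configurationDisplayName("Latest")
        .description("Shows the most recent item outside the main app.")
    }
}
```

The timeline provider is the contract that keeps the widget’s data in sync with the system, while sharing the view between app and widget is what keeps the maintenance cost down.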

But I can’t help thinking that these concepts are often mitigation strategies for dealing with the ever-growing complexity of digital products. Software may be more prone to this because it’s easier to add things to software than to other kinds of products.

Deciphering intent and planning with AI

Enter AI. It may not always be able to count correctly, but LLMs are exceptionally good at working out intent, turning unstructured input into structured data and a set of instructions.

Knowing the intent means understanding which functionality is needed to perform a task, and by extension, what is irrelevant to it. Based on the model context, AI can decide which of the available capabilities and tools are required, and plan the order in which they have to be used.
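As a rough illustration, here is a minimal sketch of the kind of structured output a model could be asked to produce. The schema, the tool names and the JSON shape are assumptions made for this example, not any particular vendor’s format.

```swift
import Foundation

// Hypothetical schema for what a model might return after deciphering
// the user's intent: an intent label plus an ordered tool plan.
struct ToolCall: Codable {
    let tool: String                 // e.g. "calendar.createEvent" (made-up name)
    let arguments: [String: String]
}

struct Plan: Codable {
    let intent: String               // short label for the recognised intent
    let steps: [ToolCall]            // the order in which tools should be used
}

// "Schedule a catch-up with Alex on Friday" could come back as:
let json = """
{
  "intent": "schedule_meeting",
  "steps": [
    { "tool": "contacts.lookup",      "arguments": { "name": "Alex" } },
    { "tool": "calendar.createEvent", "arguments": { "title": "Catch-up", "day": "Friday" } }
  ]
}
"""

do {
    let plan = try JSONDecoder().decode(Plan.self, from: Data(json.utf8))
    print(plan.steps.map(\.tool))    // ["contacts.lookup", "calendar.createEvent"]
} catch {
    print("Model output did not match the expected schema:", error)
}
```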

What used to be a question of the user navigating the available functionality within an app turns into performing a sequence of steps, each of which can be pulled from a different app or service: input data, present data, or interact with another service’s API.
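Continuing the sketch above, and still purely hypothetical, a thin runtime could then walk the planned steps and hand each one to whichever app or service exposes that capability. It reuses the `Plan` and `ToolCall` types from the previous example.

```swift
// Builds on the Plan and ToolCall types defined in the previous sketch.
enum PlanError: Error {
    case unknownTool(String)
}

// A handler is whatever bridges a step to an app's or service's API.
typealias ToolHandler = ([String: String]) async throws -> Void

func execute(_ plan: Plan, using tools: [String: ToolHandler]) async throws {
    for step in plan.steps {
        guard let handler = tools[step.tool] else {
            throw PlanError.unknownTool(step.tool)
        }
        try await handler(step.arguments)   // input data, present data, or call another API
    }
}
```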

What is the impact on UI design?

How will we design AI-first products, where AI is not just a feature bolted onto an existing product?