Each week we find a new topic for our readers to learn about in our AI Education column.
Welcome to another edition of AI Education. This week, we're going to tackle generative UI, or generative user interfaces. By now, you're hopefully at least somewhat familiar with generative artificial intelligence: AI software, like Microsoft's Copilot and OpenAI's ChatGPT, that can create content seemingly out of thin air. While most users think of this content as media such as images, video, audio and text, generative AI can also produce code and be brought to bear on software development.
That leads us to the user interface: the elements of software that users, meaning human beings like you and me, interact with as we work, play, socialize, create, or do whatever else we might be doing with our technology. Think of your computer as a toolbox. The programs on your computer are your tools: a hammer, a wrench, a saw. The user interface is the handle of each tool, the lock and hinge on the toolbox, and the way the tools are organized for easy access.
Having a good user interface, or creating the best possible user experience, is usually necessary for any technological advancement to take hold. Take the personal computer, which became broadly available in the early 1980s but didn't really take off until the late 1980s, when desktop graphical user interfaces (like Microsoft Windows and the precursors to Apple's macOS) became the norm. Having a new, fancy solution to a problem is only half the battle for software developers: users tend to cling to obsolete technologies with excellent interfaces and familiar experiences, as both Microsoft and Apple have found out with operating system upgrades in decades past.
What Is Generative UI?
Imagine applying generative AI's coding ability not just to design a user interface for a particular piece of software, but to build one on the fly for each user, letting the interface evolve and respond every time they use the software. The result is a dynamic, intuitive and optimized experience. That is generative UI: a combination of artificial intelligence technologies used to create personalized user experiences.
To return to our toolbox analogy: as you work over time, the tools you use most migrate to the most accessible spots, while the ones you rarely touch sink toward the bottom. The same thing happens in a workshop, where the most-used spaces and tools end up within arm's reach. Or think of an office desk in decades past: gradually the phone moved to one side, the monitor was adjusted to just the right height, and the most-used materials ended up at the top of the top drawer. As user and workspace evolved together, workers became more efficient in their environments.
Generative UI enables software to evolve to suit the user, or the enterprise, just as our physical workspaces do over time. With software, however, our options for improving the user interface aren't limited to merely moving our tools around. The ways we can interact with software have exploded over the decades. Through most of the 1980s we were limited to command prompts; through the 1990s and most of the 2000s, to button clicks and mouse-guided pointers. Today we can interact with our technology through voice commands, movement, light, touch and text. Given the options available, and that every user is different in some way, dynamic interfaces make sense.
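For readers who like to peek under the hood, here is a minimal sketch of what usage-driven adaptation might look like in practice. The names and thresholds (UserSignals, buildLayout, the four toolbar slots) are our own illustrative assumptions, not any particular product's API; a real generative UI would lean on AI models rather than a simple sort.

```typescript
// A minimal, illustrative sketch of usage-driven interface adaptation.
// The names here (UserSignals, Layout, buildLayout) are hypothetical, not a real product API.

interface UserSignals {
  toolUsage: Record<string, number>;                        // how often each feature gets used
  preferredInput: "mouse" | "touch" | "voice" | "keyboard"; // the input the user reaches for most
}

interface Layout {
  toolbar: string[];       // the "top of the toolbox": most-used features surface first
  overflowMenu: string[];  // everything else sinks toward the bottom
  inputMode: UserSignals["preferredInput"];
}

function buildLayout(signals: UserSignals, toolbarSlots = 4): Layout {
  // Rank features by observed usage, most-used first.
  const ranked = Object.entries(signals.toolUsage)
    .sort(([, a], [, b]) => b - a)
    .map(([tool]) => tool);

  return {
    toolbar: ranked.slice(0, toolbarSlots),
    overflowMenu: ranked.slice(toolbarSlots),
    inputMode: signals.preferredInput,
  };
}

// Example: a user who lives in "search" and "export" sees those tools up front.
console.log(buildLayout({
  toolUsage: { search: 120, export: 85, share: 40, settings: 3, help: 1 },
  preferredInput: "voice",
}));
```

The point of the sketch is simply that the interface is computed from the user's behavior rather than fixed at design time; generative UI takes that idea much further by generating the components themselves.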
What Generative UI Means
For much of the history of the computer, the user interface was an afterthought. What mattered was that we were actually building computers that worked, and that a few lucky people could access them to solve problems and further our understanding of computing. Then miniaturization brought the advent of personal computing and the need to make computers usable for more people, even average people, and attention started to be paid to the user interface.
Until now, however, user interfaces have been designed with utility in mind. In fact, until very recently, they were mostly static, and customizable at best. Software designers were trained to create interfaces aimed at the average user: consumer software is usually built for the average joe, while professional software is designed for professionals of varying education levels. And the interface itself was usually limited to the keyboard and the mouse; we still mostly click and type our way through everything.
But what if someone can't see? What if they can't type? What if they can't speak or hear? What if they can't read, or don't understand the language the program is in? What if they only need to access one particular part of the software, or use different parts of it in a particular sequence, repetitively, all day long, five days a week? A generative UI can pick up on these issues and design an interface that enables the user not only to access the benefits of the software, but also to enjoy an experience that becomes more optimized and efficient over time.
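Here is an equally rough sketch of how accessibility needs might be translated into an interface plan. Again, the profile fields and rules (AccessibilityProfile, planInterface) are hypothetical illustrations rather than a real standard; an actual generative UI would infer much of this from behavior instead of asking for it outright.

```typescript
// A rough sketch of mapping accessibility needs to an interface plan.
// AccessibilityProfile and planInterface are illustrative assumptions, not a standard.

interface AccessibilityProfile {
  canSee: boolean;
  canHear: boolean;
  canType: boolean;
  language: string; // e.g. "es" for a Spanish-speaking user
}

interface InterfacePlan {
  output: ("screen" | "audio")[];
  input: ("keyboard" | "touch" | "voice")[];
  translateTo?: string; // translate the interface if the user's language differs
}

function planInterface(profile: AccessibilityProfile, appLanguage = "en"): InterfacePlan {
  const output: InterfacePlan["output"] = [];
  if (profile.canSee) output.push("screen");
  if (profile.canHear) output.push("audio"); // read content aloud for users who can't see

  const input: InterfacePlan["input"] = [];
  if (profile.canType) input.push("keyboard");
  if (profile.canSee) input.push("touch");
  input.push("voice"); // voice input as a broadly available fallback

  return {
    output,
    input,
    translateTo: profile.language !== appLanguage ? profile.language : undefined,
  };
}

// Example: a user who can't see or type gets an audio-and-voice interface, translated to Spanish.
console.log(planInterface({ canSee: false, canHear: true, canType: false, language: "es" }));
```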
A Paradigm Shift
Okay, so from a real-world impact perspective, we're talking about accessibility and efficiency, both of which may translate to tangible benefits for society, which is nice. But behind the scenes, the real change is going to come in software design and engineering. Because of the increased emphasis on user interface, software design has become a huge part of the IT ecosystem, and, as noted above, designers have long been trained to aim at the average.
With a dynamic, generative UI, it's no longer necessary to design for some non-existent median or mean user of your technology. If the software has to be simplified for some users, that simplification no longer prevents more advanced users from enjoying a higher-level experience. For software designers, if they're still needed moving forward, targeting use cases and outcomes becomes more important than ensuring ease of use.
At the same time, and almost paradoxically, ease of use should improve across the spectrum of technology because, to extend our toolbox metaphor one more time, we're all going to be using tools that shape-shift to fit perfectly into our hands.