As generative AI tools like ChatGPT redefine how we interact with machines, it’s more important than ever to ask: Are the foundational principles of UX in AI still relevant? And how should they evolve?
That question sits at the heart of this article.
AI’s Vicious Cycle
Even in its early stages, AI’s potential has been touted as near limitless, at least in theory. When faced with the limiting factors of time, money, and human capability, however, the promises of AI’s endless capabilities often ring hollow. As Lew and Schumacher expound in their book, AI and UX, over-hyping a breakthrough in AI innovation results in a loss of faith in the technology, and a loss of the funds and energy needed to keep developing it toward that hyped-up potential. Such a period of cooldown, or total freeze, in AI development is called an “AI winter.”
The book argues in no small part that failure to properly account for the end user in the development of new AI is the biggest predictor of AI winters, and that multiple harsh winters make it exponentially more difficult to spring back. UX, then, is the support necessary to keep AI development progressing and to meet the key goal of providing value to users. Let’s take a look at some of the key guidance AI and UX provides for developing a successful AI-enabled product and, where pertinent, put that advice in conversation with a Japanese context.
AI and UX’s Intertwining Fate
The birth of “artificial intelligence” as we know it today can be credited to the theory of computation developed by Alan Turing, the basis for a computer’s memory and processing unit. This “electric brain,” able to solve increasingly complex problems at an ever-faster pace, would be the justification for assigning intelligence to an inanimate object. While definitions of what encompasses AI vary, Lew and Schumacher define AI as “any technology that appears to adapt its knowledge or learns from experiences in a way that would be considered intelligent.”
UX involvement in computer and AI development isn’t new. In fact, psychologists have been deeply invested in AI improvements from the very beginning. The field of human-computer interaction (HCI) sits at the crossroads of computer science, design, and behavioral studies. Because computers, and by extension AI, have a far wider range of capabilities than earlier machines, the interaction between the human and the computer’s interface requires more complex methods of giving input and receiving feedback.
3 AI-UX Principles
Lew and Schumacher lay out three main principles in their UX framework for AI. This framework is rooted in user-centered design and emphasizes the user, not the technology, as the key focal point.

Context
Context, within this framework, is defined as the objectives, meaning, and expectations behind an AI’s output. If the first question is “can an AI do X?”, context prompts the follow-up question, “why do X with an AI?” Context applies primarily to an AI’s data input, or learning, phase, and it also colors the evaluation of its output.
One of the dangers of ignoring context is failing to clarify an AI’s purpose, and as a consequence failing to evaluate it against accurate parameters. Is the AI being designed to replace the manual (human) approach, provide extra support alongside it, or augment it as an invaluable tool? Furthermore, has that value been properly communicated to users? Sticking generative AI or voice assistants into every corner of a product doesn’t create “added value” if users don’t use them or, worse, actively avoid them. See the comparison of Siri’s fall to Alexa’s rise in the book:
“After Siri was released in beta form, the assistant’s limited capabilities led to public frustration with the service and with virtual assistants in general… Yet the emergence of Amazon’s Alexa opened the door to try voice again.”
Alexa on the Echo understood its context of use. It didn’t need to be prepared for every situation imaginable while traveling in the user’s pocket, because ambient noise and the stigma of talking to the computer in your phone pushed against that behavior. Alexa found her niche at home: setting timers and offering recipes in the kitchen, setting the mood for a party or romantic dinner, and giving morning debriefs on the way out the door.
There are more variations of context that we’ll touch upon in tandem with the other principles.
Interaction
Interaction within AI is defined as engagement with the user in a way that allows them to confirm and respond. It answers the question “how will AI do X?” and rests on the user’s knowledge of the AI’s functions and their right to refuse them. To use voice assistants as an example again, if you ask your in-car assistant to call your best friend Brandon to chat about last night’s game, you’d be pretty upset if it started dialing your ex Brenda without first checking with you.
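To make “confirm and respond” concrete, here is a minimal sketch, our own illustration rather than anything from the book, of an assistant that surfaces its interpretation and waits for approval before acting. The contact-matching result and the dialing function are hypothetical stand-ins.

```python
# A minimal sketch of the confirm-and-respond pattern: the assistant states
# what it is about to do and gives the user the right to refuse before any
# action is taken. All names here are illustrative, not a real assistant API.

def confirm(prompt: str) -> bool:
    """Ask the user to approve an action before executing it."""
    answer = input(f"{prompt} [y/n] ").strip().lower()
    return answer in ("y", "yes")

def place_call(contact: str) -> None:
    # Hypothetical stand-in for a real dialing API.
    print(f"Calling {contact}...")

def handle_call_request(matched_contact: str) -> None:
    # Surface the interpretation ("Brenda", not "Brandon") so the user can
    # catch a mismatch before the call goes out.
    if confirm(f"Did you mean to call {matched_contact}?"):
        place_call(matched_contact)
    else:
        print("Okay, cancelling.")

handle_call_request("Brenda")  # the user can refuse before the wrong call is made
```

The design choice is the point: the affordance to refuse exists before the action, not after the damage is done.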
The points of interaction between a user and an object like AI are called affordances. Affordances allow the user to understand an object’s features and functions. If a door has a long, bar-like handle, that affordance tells you the door can (or at least should) be pulled. When functions outnumber affordances, that’s poor design: you haven’t made the object’s full capability clear to the user. See our blog here for an example of unclear affordances with the Japanese toilet. When affordances outnumber the intended functions, that can be an opportunity. A popular example is Listerine, which started as a surgical antiseptic before ending up as a mouthwash. Keeping in touch with users to see how they use a product over time can shed light on improvements that encourage certain uses and make the initial functions more “user-affordable.”
Accounting for the interactions and affordances of not just one group, but across demographics, is key to marketing strategy both domestically and internationally. If an AI is developed only within a certain cultural context, it may fail to transfer to different groups. An easy example is language. The English used by AI is usually polite in tone, but does not differ much from casual speech. Japanese has much more rigidly defined levels of formality, and users would expect an AI to function and understand any given query regardless of the register spoken, as in the sketch below. That isn’t even accounting for the daunting learning task required for an AI to handle a different sentence structure, mixed-language input, and vague contextual clues. Not only in Japan’s case, but for almost any country, an AI expected to be used globally must account for context and interaction globally in its development.
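As a rough illustration of that register sensitivity, the sketch below, our own example with deliberately crude heuristics rather than a production approach, matches the formality of a canned Japanese response to the formality detected in the user’s request.

```python
# A toy register-matcher for Japanese input. The polite-form markers below are
# illustrative heuristics only; a real system would need far more nuanced
# detection of formality levels (and everything else).

POLITE_MARKERS = ("です", "ます", "ください")

def detect_register(utterance: str) -> str:
    """Crudely classify a Japanese utterance as polite (teineigo) or plain."""
    stripped = utterance.rstrip("。？?！!")
    return "polite" if stripped.endswith(POLITE_MARKERS) else "plain"

RESPONSES = {
    # Hypothetical canned answers for one intent ("set a timer"),
    # phrased once per register.
    "polite": "タイマーをセットしました。",  # "I have set the timer." (polite)
    "plain": "タイマーをセットしたよ。",      # "Timer's set." (casual)
}

def respond(utterance: str) -> str:
    """Answer in the same register the user spoke in."""
    return RESPONSES[detect_register(utterance)]

print(respond("タイマーをセットしてください。"))  # polite in, polite out
print(respond("タイマーセットして。"))            # casual in, casual out
```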
Trust
The final principle of the AI-UX framework asks the final question: “should a user do X with AI?” Lew and Schumacher define the principle as performing a task as intended “without any unexpected outcomes.” If an AI is designed to be an assistant, it should be held to the expectations of a trusted personal assistant. In other words, it must be competent at its job (accuracy) and it must keep your business to itself (privacy).
AI is the purest example of “you are what you eat.” An AI that consumes quality, varied data that accounts for its own biases will output quality calculations and information. The opposite is, of course, also true. The solution isn’t necessarily more data (more alcohol doesn’t cure a hangover), and if the AI in question is a true black box, the least that can be done is to ensure the input data for initial learning is as clean and free of bias as possible. A simple check of representation in the training data, sketched below, is one place to start.
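The sketch below is our own illustration with invented data, not a method from the book: a quick representation audit of a training set before learning begins, when the dataset is still cheap to fix. The field names and records are hypothetical.

```python
# A minimal representation audit: more data doesn't help if the new data
# repeats the same skew, so measure the skew first.

from collections import Counter

def representation_report(records: list[dict], field: str) -> dict[str, float]:
    """Return the share of records per value of `field` (e.g., a demographic label)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical voice-assistant training snippets labeled by speaker region.
training_data = [
    {"text": "set a timer", "region": "US"},
    {"text": "set a timer", "region": "US"},
    {"text": "set a timer", "region": "US"},
    {"text": "タイマーをセットして", "region": "JP"},
]

for region, share in representation_report(training_data, "region").items():
    print(f"{region}: {share:.0%}")
# A 75/25 split like this flags that Japanese speech is underrepresented
# before training begins, not after users hit the gap.
```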
Privacy falls on the company to manage. If a user has trusted a company with their schedule, their location, or their credit card number, whoever handles that sensitive information is bound, often legally, to protect it. Trust is hard to build and easy to break. However, if users, especially Japanese users, trust a company or product, they will accept, even willingly, some sacrifice of privacy in exchange for added value.

Conclusion
To briefly summarize: user experience is vital to the development and propagation of AI because of the implicit idea that AI should be created for, and properly serve, its users. By first understanding the needs, use cases, and concerns of a user base, AI can be developed positively from the start and throughout its lifecycle.
It’s worth noting that Lew and Schumacher proposed this framework before the emergence of ChatGPT and other generative AI. This doesn’t render the framework obsolete. On the contrary, the post-GPT landscape has expanded the role of UX. Interaction is no longer confined to screens or taps; it now includes prompt design, tone calibration, and real-time user adaptation. This makes it even more essential to revisit these principles with a critical, updated lens. Our role as UX researchers is not just to follow frameworks, but to update and challenge them as the world changes.
We at Uism are experts in creating and orchestrating user-based studies on AI, as well as in other fields. We even work closely with the book’s two authors through our connection and active involvement in the UX alliance ReSight Global. If your company is asking those questions, “why do X with AI?”, “how will AI do X?”, or “should a user do X with AI?”, reach out to us and let us help you find the answers.
About the Author

Ross Miller
Ross has utilized his background in languages and East Asian cultures to improve academic and business practices across Japan. He recently earned his MBA in Business Leadership and Innovation, with a focus on systems thinking, design, and social business. A professional “bridge builder” in diversity management, he aims to apply those talents as the perfect liaison in international UX research. Though he has spent only five years in the Japanese city of Fukuoka, he’s picked up the habits and accent enough to be mistaken for half-Kyushuan on several occasions.