{"id":4885,"date":"2025-06-02T02:23:37","date_gmt":"2025-06-01T17:23:37","guid":{"rendered":"https:\/\/uism.co.jp\/?p=4885"},"modified":"2026-04-13T14:08:28","modified_gmt":"2026-04-13T05:08:28","slug":"inciting-an-ai-spring-the-importance-of-ux-in-ai-development","status":"publish","type":"post","link":"https:\/\/uism.co.jp\/en\/blog\/inciting-an-ai-spring-the-importance-of-ux-in-ai-development\/","title":{"rendered":"Inciting an AI Spring: The Importance of UX in AI Development"},"content":{"rendered":"\n<p>As generative AI tools like ChatGPT redefine how we interact with machines, it\u2019s more important than ever to ask: Are the foundational principles of UX in AI still relevant? And how should they evolve?&nbsp;<\/p>\n\n\n\n<p>That question sits at the heart of this article.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"color: #34775c\" class=\"sme-text-color\">AI\u2019s Vicious Cycle&nbsp;<\/span><\/h2>\n\n\n\n<p>Even at its early stages, AI\u2019s potential capability has been touted as near limitless, at least in theory. When faced with the limiting factors of time, money, and human capability, the promises of AI\u2019s endless capabilities often ring hollow. As Lew and Schumacher expound on through their book, <strong><em><span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">AI and UX<\/span><\/em><\/strong>, the consequence of over-hyping a breakthrough in AI innovation results in loss of faith in the tech, and loss of the funds and energy to continue developing it to <em>meet<\/em> that hyped up potential. That period of a cooldown or total freeze of AI development can be called an \u201cAI winter.\u201d&nbsp;&nbsp;<\/p>\n\n\n\n<p>The book argues in no small part that failure to properly account for the end-user in the development of new AI is the biggest predictor of AI winters, and multiple harsh winters make it exponentially more difficult to spring back. 
UX, then, is the support necessary to keep AI development progressing and meeting that key goal of providing value to users. Let\u2019s take a look at some of the key guidance <em>AI and UX<\/em> provides for developing a successful AI-enabled product and, where pertinent, put that advice in conversation with a Japanese context.&nbsp;&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"color: #34775c\" class=\"sme-text-color\">AI and UX\u2019s Intertwining Fate&nbsp;<\/span><\/h2>\n\n\n\n<p>The birth of \u201cartificial intelligence\u201d as we know it today can be credited to the theory of computation developed by Alan Turing, the basis for a computer\u2019s memory and processing unit. This \u201celectric brain,\u201d with an ability to solve increasingly complex problems at an ever-faster pace, would be the justification for assigning intelligence to an inanimate object. While the definition of what encompasses AI varies, Lew and Schumacher define AI as <span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">\u201cany technology that appears to adapt its knowledge or learns from experiences in a way that would be considered intelligent\u201d.<\/span>&nbsp;<\/p>\n\n\n\n<p>UX involvement in computer and AI development isn\u2019t new. In fact, psychologists have been deeply invested in AI improvements from the very beginning. The field of human-computer interaction (HCI) sits at the crossroads of computer science, design, and behavioral studies. 
Because computers, and by extension AI, have a far wider range of capabilities than earlier tools, the process of interaction between the human and the computer\u2019s interface requires more complex methods of giving information and receiving feedback.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"color: #34775c\" class=\"sme-text-color\">3 AI-UX Principles&nbsp;<\/span><\/h2>\n\n\n\n<p>Lew and Schumacher lay out three main principles in their UX framework for AI. This framework is rooted in user-centered design and emphasizes the user as the key focal point, not the technology. &nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"500\" src=\"https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/image1.jpg\" alt=\"3D framework diagram illustrating the three core principles of AI UX design. The vertical axis represents 'Context', the horizontal axis shows 'Interaction', and the depth axis indicates 'Trust'.\" class=\"wp-image-4875\" style=\"width:422px;height:auto\" srcset=\"https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/image1.jpg 500w, https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/image1-300x300.jpg 300w, https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/image1-150x150.jpg 150w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><span style=\"color: #34775c\" class=\"sme-text-color\">Context&nbsp;<\/span><\/h3>\n\n\n\n<p>Context, within this framework, is defined as the objectives, meaning, and expectations of an AI\u2019s output. 
If the first question is \u201ccan an AI do X?\u201d, context prompts the follow-up question, \u201cwhy do X with an AI?\u201d Context primarily applies to the data input or learning phase of an AI, and also colors the evaluation of its output.&nbsp;&nbsp;<\/p>\n\n\n\n<p>One of the dangers of ignoring context is failing to clarify an AI\u2019s purpose and, as a consequence, failing to evaluate it against accurate parameters. Is the AI being designed to replace the manual (human) approach, provide extra support alongside that approach, or augment it as an invaluable tool? Furthermore, has that value been properly expressed to users? Sticking generative AI or voice assistants into any and every place doesn\u2019t create \u201cadded value\u201d if users don\u2019t use them or, worse, actively avoid them. See the comparison of Siri\u2019s fall to Alexa\u2019s rise in the book:&nbsp;<\/p>\n\n\n\n<p><span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">\u201cAfter Siri was released in beta form, the assistant\u2019s limited capabilities led to public frustration with the service and with virtual assistants in general\u2026 Yet the emergence of Amazon\u2019s Alexa opened the door to try voice again.\u201d&nbsp;<\/span><\/p>\n\n\n\n<p>Alexa on the Echo understood its <span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">context of use.<\/span> It didn\u2019t need to be prepared for every situation imaginable while traveling with the user on their phone in their pocket, because ambient noise and the external stigma of talking to a computer in your phone pushed against that behavior. 
Alexa found her niche at home, setting timers and offering recipes in the kitchen, setting the mood for a party or romantic dinner, and giving morning debriefs on the way out the door.&nbsp;<\/p>\n\n\n\n<p>There are more variations of context that we\u2019ll touch upon in tandem with the other principles.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><span style=\"color: #34775c\" class=\"sme-text-color\">Interaction&nbsp;<\/span><\/h3>\n\n\n\n<p>Interaction within AI is defined as engagement with the user in a way that allows them to confirm and respond. It is the answer to the question \u201chow will AI do X?\u201d and is based on a user\u2019s knowledge of the functions and their right to refuse them. To use voice assistants as an example again, if you ask your in-car assistant to call your best friend Brandon to chat about last night\u2019s game, you\u2019d be pretty upset if it started dialing your ex Brenda without first checking with you.&nbsp;<\/p>\n\n\n\n<p>The points of interaction between a user and an object like AI are called <span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">affordances.<\/span> These affordances allow the user to understand the object\u2019s features and functions. If a door has a long, bar-like handle, that affordance informs you that the door can (or at least should) be pulled. When functions outnumber affordances, that\u2019s poor design, as you haven\u2019t made the product\u2019s full capabilities clear to the user. See our blog here for an example of unclear affordances with the Japanese toilet. When affordances outnumber the intended functions, that can be an area of opportunity. A popular example is the story of Listerine, which started as a surgical antiseptic before ending up as a mouthwash. 
Keeping in touch with users to see how they use a product over time can shed light on improvements that encourage intended use and make initial functions more \u201cuser-affordable.\u201d&nbsp;<\/p>\n\n\n\n<p>Accounting for the interaction and affordances of not just one group, but of users across demographics, is key to marketing strategy both domestically and internationally. If AI is only developed within a certain <span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">cultural context<\/span>, it may fail to be transferable to different groups. An easy example is language. The English used by AI is usually polite in tone, but does not differ much from more casual speech. The Japanese language has much more rigidly defined levels of formality, and users would expect an AI to function and understand any given query regardless of the level spoken. This isn\u2019t even accounting for the daunting task of learning required for an AI to understand a different sentence structure, mixed-language input, and vague contextual clues. Not only in Japan\u2019s case, but for most any country, <span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">an AI expected to be used globally must account for context and interaction globally in its development.<\/span>&nbsp;&nbsp;&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><span style=\"color: #34775c\" class=\"sme-text-color\">Trust&nbsp;<\/span><\/h3>\n\n\n\n<p>The final principle of the AI-UX framework asks the final question, \u201cshould a user do X with AI?\u201d Lew and Schumacher define the principle as performing a task as intended \u201cwithout any unexpected outcomes.\u201d If an AI is designed to be an assistant, it should similarly be held to the expectations of a trusted personal assistant. 
In other words, it must be competent enough at its job (accuracy) and it must keep personal business private (privacy).&nbsp;<\/p>\n\n\n\n<p>AI is the purest example of \u201cyou are what you eat.\u201d An AI that consumes quality, varied data that accounts for biases within the data will output quality calculations and information. The opposite is, of course, also true. The solution isn\u2019t necessarily more data (more alcohol doesn\u2019t cure a hangover), and if the AI in question is a true enigma of a black box, the least that can be done is to ensure the input data for initial learning is as clean and free of bias as possible.&nbsp;<\/p>\n\n\n\n<p>Privacy falls on the company to manage. If a user has trusted a company with the data of their schedule, their location, or their credit card number, the party handling that sensitive information is bound, often legally, to the protection of that data. Trust is hard to build and easy to break. However, if users, especially Japanese users, trust a company or product, they will be willing to sacrifice some level of privacy for added value.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"796\" src=\"https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/Firefly_A-cozy-home-interior-scene-with-an-Amazon-Echo-device-glowing-softly-on-a-kitchen-cou-371729-1024x796.jpg\" alt=\"A family of four gathered around a meal in a home kitchen. Dishes line the table, fairy lights glow in the background, and a blue-lit smart speaker sits on the counter, depicting AI blending into everyday life.\" class=\"wp-image-4872\" 
style=\"width:696px;height:auto\" srcset=\"https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/Firefly_A-cozy-home-interior-scene-with-an-Amazon-Echo-device-glowing-softly-on-a-kitchen-cou-371729-1024x796.jpg 1024w, https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/Firefly_A-cozy-home-interior-scene-with-an-Amazon-Echo-device-glowing-softly-on-a-kitchen-cou-371729-300x233.jpg 300w, https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/Firefly_A-cozy-home-interior-scene-with-an-Amazon-Echo-device-glowing-softly-on-a-kitchen-cou-371729-768x597.jpg 768w, https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/Firefly_A-cozy-home-interior-scene-with-an-Amazon-Echo-device-glowing-softly-on-a-kitchen-cou-371729-1536x1195.jpg 1536w, https:\/\/uism.co.jp\/wp-content\/uploads\/2025\/06\/Firefly_A-cozy-home-interior-scene-with-an-Amazon-Echo-device-glowing-softly-on-a-kitchen-cou-371729-1920x1493.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><span style=\"color: #34775c\" class=\"sme-text-color\">Conclusion&nbsp;<\/span><\/h3>\n\n\n\n<p>To briefly summarize, the reason why user experience is vital in the development and propagation of AI is the implicit idea that <span style=\"background-image: linear-gradient(transparent 60%, rgba(255, 240, 151, 0.5) 60%)\" class=\"sme-highlighter\">AI should be created for and properly serve its users.<\/span> By first understanding the needs, use cases, and concerns of a user base, AI can be developed positively from the start and throughout its cycle.&nbsp;&nbsp;<\/p>\n\n\n\n<p>It\u2019s worth noting that this framework by Lew and Schumacher was proposed before the emergence of ChatGPT and other generative AI. This doesn\u2019t render the framework obsolete. On the contrary, the post-GPT landscape has expanded the role of UX. Interaction is no longer confined to screens or taps, it now includes prompt design, tone calibration, and user adaptation in real-time. 
This makes it even more essential to revisit these principles with a critical, updated lens. Our role as UX researchers is not just to follow frameworks, but to update and challenge them as the world changes.&nbsp;<\/p>\n\n\n\n<p>We at Uism are experts in creating and orchestrating user-based studies on AI, as well as in other fields. We even work closely with the two authors of the book quoted throughout, via our active involvement in the UX alliance ReSight Global. If your company is asking those questions, \u201cwhy do X with AI?,\u201d \u201chow will AI do X?,\u201d or \u201cshould a user do X with AI?,\u201d reach out to us and let us help you find the answer.&nbsp;<\/p>\n\n\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI innovation alone is not enough. Discover why context, affordances, and user trust determine whether AI products create value or trigger frustration and abandonment.<\/p>\n","protected":false},"author":30,"featured_media":4873,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_locale":"en_US","_original_post":"https:\/\/uism.co.jp\/?p=4883","footnotes":"","wp-seo-meta-description":"Explore how user-centered design can make or break AI development. This article introduces the three UX principles\u2014Context, Interaction, and Trust\u2014outlined in AI and UX by Lew and Schumacher. 
Learn how these principles help avoid AI winters, build user trust, and ensure AI tools like ChatGPT and Alexa deliver real value in diverse cultural contexts, including Japan.","wp-seo-meta-robots":[]},"categories":[374],"tags":[237,312,311,313,314],"class_list":{"0":"post-4885","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ux-thinking","8":"tag-uxdesign","9":"tag-generativeai","10":"tag-usercenteredai","11":"tag-aiuxframework","12":"tag-humancomputerinteraction","13":"en-US","14":"c-entry"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/posts\/4885","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/users\/30"}],"replies":[{"embeddable":true,"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/comments?post=4885"}],"version-history":[{"count":7,"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/posts\/4885\/revisions"}],"predecessor-version":[{"id":9383,"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/posts\/4885\/revisions\/9383"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/media\/4873"}],"wp:attachment":[{"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/media?parent=4885"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/categories?post=4885"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/uism.co.jp\/wp-json\/wp\/v2\/tags?post=4885"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}