155th NOTE on Human-Centered AI
- Event
- CHIIR Conference
- Host
- daniel.s.schiff@gmail.com
- Location
- Unknown
- Year
- 2026
Title: 155th NOTE on Human-Centered AI
Subject: Fwd: Digest for human-centered-ai@googlegroups.com - 1 update in 1 topic
From: daniel.s.schiff@gmail.com
Date: Tue, 31 Mar 2026 09:09:37 -0400

Forwarding for academic-record intake testing.

---------- Forwarded message ---------
From: human-centered-ai@googlegroups.com
Date: Mon, 30 Mar 2026 15:57:41 -0700
Subject: Digest for human-centered-ai@googlegroups.com - 1 update in 1 topic
To: Digest recipients human-centered-ai@googlegroups.com

Topic: 155th NOTE on Human-Centered AI
Url: http://groups.google.com/group/human-centered-ai/t/a0874fc5c4239327

From: Ben Shneiderman ben.shneiderman@gmail.com
Date: Mar 30 01:51PM -0700
Url: http://groups.google.com/group/human-centered-ai/msg/2f5f1639919ed

Dear HCAI Google Group,

My keynote https://chiir2026.github.io/program.html#keynotes for the ACM CHIIR (Human Information Interaction and Retrieval) https://chiir2026.github.io/index.html conference (March 23, 2026) returned to fundamental questions about how people come to seek information, refine their understanding, express their intent, extend their knowledge, and then use the new knowledge to take action and possibly seek further information.
A widely held current belief is that Agentic AI will facilitate human epistemic exploration and resolve users’ information needs by way of natural language dialogs through Large Language Models (LLMs). Many people consider products such as OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot, or Google’s Gemini to be strong examples of how AI can respond to user needs. Definitions of agents vary, from broad ones that include any process with an input and an output to narrower ones that suggest agents are like travel agents or butlers who respond to requests from people. Ryen White wrote about the widely held belief that “Natural language is an expressive and powerful means of communicating intentions and preferences with search systems. … The ability of agents to better understand intentions and provide assistance beyond fact finding … will advance the search frontier” (Advancing the search frontier with AI agents https://dl.acm.org/doi/abs/10.1145/3655615 Communications of the ACM, September 2024). He argues that agents will relieve users of the need to decompose their high-level tasks into specific subtasks (“By decomposing tasks into subtasks and assigning them to specialized agents, we can manage complexity more effectively”) and concludes confidently that “AI agents will transform how we search.” A follow-up article, co-authored with Chirag Shah, carried the theme further: “An agent … is an autonomous entity or program that takes preferences, instructions, or other forms of inputs from a user to accomplish specific tasks on that user’s behalf” (Agents Are Not Enough https://ieeexplore.ieee.org/abstract/document/11178155/ IEEE Computer, October 2025).
They review the half century of work on agents, identify the strengths and weaknesses of current agents, and propose a new ecosystem that will “represent the next age of evolution for capable AI systems… that go beyond information retrieval and generation to perform deep reasoning and take actions on a user’s behalf.” I encouraged attendees to read these two well-organized and well-written papers. A few months later, Shah and Lynda Tamine wrote: “we argue that this metaphor [human-AI collaboration] fundamentally misrepresents these interactions and obscures critical issues of agency, accountability, and labor” (Why “human-AI collaboration” obscures what actually happens in information seeking https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/asi.70059 Journal of the Association for Information Science and Technology, December 2025). At the CHIIR conference, Shah presented his latest comments (Rethinking Human-AI Collaboration in Information Seeking: Why Epistemic Incompatibility Demands New Design Paradigms https://dl.acm.org/doi/full/10.1145/3786304.3787946 March 2026). He positions AI systems as educational scaffolds that support learning rather than bypass it. The paper ends well with “we must advocate for human information seeking as a valuable cognitive activity – not merely a means to acquire information but a process through which understanding emerges.” I wrote to Shah before the conference: “For me, humans have agency, goals, intentions, desires, and abilities, while AI, IR, and any computing machine is just a tool (or a supertool). The use of the term ‘agent’ has elevated computers to be like a human, but that is the source of the many misconceptions, which you describe. … I think we are largely in agreement that people and computers are different, that only people have intentions, and that human learning is a desirable product of search.
I would add that humans take pleasure from learning and are delighted by new-found understandings.” Shah wrote back: “Where I would gently push back: I think the ‘just a tool’ framing, while correct at the ontological level, may understate the practical stakes. A hammer does not reshape what users believe they know, alleviate their motivation to learn, or redistribute epistemic credit in ways that obscure accountability. These AI systems do all three — not because they have agency, but precisely because they are deployed as if they do. The ‘supertool’ framing you offer actually captures this well, and I wish I had used it.”

My talk described this debate and argued that instead of natural language exchange alone, users could express their intents by starting with a natural language request, accompanied by a control panel that reminds users of other possibilities. Intent refinement would be facilitated by revisions to the natural language statements and further selections from additional control panels, customized to users’ needs. Compact visual control panels that favor recognition over recall and present abundant choices facilitate understanding and the formation of mental models. Control panels could include buttons, pulldowns, check boxes, sliders, form fill-in, and other direct manipulation controls, which support exploration by way of easily reversible actions and feedback that clarifies possibilities. AI technologies can be used to set defaults, adjust for context, and make recommendations. I used examples from restaurant reservation systems (OpenTable, Resy), airlines (United, Air Canada, Delta), hotels (Hilton, Marriott, Holiday Inn), and shopping (Amazon). Then I turned to the current designs for LLMs (ChatGPT, Copilot, Claude). There are 25 styles built into OpenAI’s ChatGPT, but most users are unaware of these hidden features, so they are unlikely to specify these possibilities.
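[Editor's illustration] One way to picture the control-panel idea is as a small program that composes a user's natural language request with structured, visible selections. This is a hypothetical sketch, not the interface presented in the keynote; the option names (style, tone, format, and their values) are invented for illustration:

```python
# Hypothetical sketch of a prompt "control panel": structured selections
# are composed with a natural-language request, favoring recognition over
# recall. Option names and values are illustrative, not from any product.

from dataclasses import dataclass

STYLES = ["concise", "detailed", "step-by-step"]
TONES = ["neutral", "friendly", "formal"]
FORMATS = ["paragraphs", "bulleted list", "table"]

@dataclass
class ControlPanel:
    style: str = "concise"
    tone: str = "neutral"
    format: str = "paragraphs"

    def validate(self) -> None:
        # Each selection must come from the enumerated, visible choices,
        # so users recognize options rather than recall hidden features.
        if self.style not in STYLES:
            raise ValueError(f"unknown style: {self.style}")
        if self.tone not in TONES:
            raise ValueError(f"unknown tone: {self.tone}")
        if self.format not in FORMATS:
            raise ValueError(f"unknown format: {self.format}")

def compose_prompt(request: str, panel: ControlPanel) -> str:
    """Merge the natural-language request with the panel selections."""
    panel.validate()
    return (f"{request}\n"
            f"[style: {panel.style}; tone: {panel.tone}; "
            f"format: {panel.format}]")

# Easily reversible: changing a selection simply re-composes the prompt.
prompt = compose_prompt("Compare three hotels near the venue.",
                        ControlPanel(style="detailed", format="table"))
```

Because the selections are separate from the request, revising an intent means changing one control and re-composing, rather than rewriting the whole prompt, which mirrors the easily reversible actions the talk emphasized.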
I proposed an interface that would give users greater control by emphasizing recognition over recall (Figure 1, see attached file), providing some examples, and putting the emphasis on the user.

Figure 1: Proposed interface for ChatGPT that exposes some of the goals, styles, tones, and formats that users might choose.

This interface removes the suggestion that there is a human-like agent, giving users greater control, which supports their self-efficacy, enables creative exploration, and clarifies their responsibility for the results. I believe that this style of interface would result in more users taking on more ambitious tasks and experts becoming SuperExperts. Shah wrote that “Perhaps the most useful next step is not another framework paper, but a design competition: who can build a search interface that is both genuinely useful and measurably better for learning than a blank prompt window?” After the conference, White wrote: “you made an interesting point about people wanting information to perform action (the information is rarely an end in itself). … A key question is clearly how to keep people at the center of these AI advances: empowered, in control, learning, and able to benefit from what automation makes possible. I find the control/agency/engagement aspects especially interesting given AI’s emerging abilities to tackle long-running, complex tasks and new mechanisms for coordinating agent work.” I believe that thinking about user interfaces helps clarify theories of information seeking and learning, as well as benefiting users.

Best Wishes, Ben

Other Items:

The U.S. National Science Foundation announces its AI‑Ready America program https://www.nsf.gov/funding/opportunities/techaccess-ai-ready-america which is a “national-scale initiative to accelerate Artificial Intelligence (AI) readiness and adoption across the U.S.
by strengthening coordination, leveraging partnerships and resources, filling gaps, and scaling what works—so local and state priorities can lead in shaping an AI-driven economy that benefits all Americans.” This seems like a valuable approach to promoting human-centered thinking in the teaching and use of AI tools. The required Letter of Intent is due on June 16, 2026, and the full proposal submission deadline is July 16, 2026.

Serena Oduro, Briana Vecchione, Meryl Ye, and Livia Garofalo wrote about Protecting the Public from Chatbot Harms: Aligning State Policy with Research https://datasociety.net/points/protecting-the-public-from-chatbot-harms-aligning-state-policy-with-research/ (March 25, 2026). “In response to mounting cases https://techjusticelaw.org/2025/11/06/social-media-victims-law-center-and-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/ of users harmed by their interactions with chatbots, including those who have died by suicide, state legislators have been spurred to action. California’s companion chatbot legislation (SB 243) and New York’s AI companion law (A6767), for example, require disclosures that notify users that chatbot responses are not human, and protocols to recognize suicidal ideation and refer users to crisis hotlines. In Illinois, the Wellness and Oversight for Psychological Resources Act bans chatbots that are specifically designed to provide therapeutic or mental health services and also prohibits companies from positioning or marketing chatbots as providing that kind of support.” In a related piece, Vecchione sought to understand What Happens When People Turn to Chatbots for Therapy? https://datasociety.net/points/what-happens-when-people-turn-to-chatbots-for-therapy/ (August 6, 2025).
Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, and Dan Jurafsky reported that Sycophantic AI decreases prosocial intentions and promotes dependence https://www.science.org/doi/10.1126/science.aec8352 (Science, March 26, 2026). They wrote that “High-profile incidents have linked sycophancy to psychological harms such as delusions, self-harm, and suicide. Beyond these cases, research in social and moral psychology suggests that unwarranted affirmation can produce subtler but still consequential effects: reinforcing maladaptive beliefs, reducing responsibility-taking, and discouraging behavioral repair after wrongdoing.” I was particularly interested in the result that sycophantic AI responses reduced the human user’s willingness to accept responsibility for their actions. The authors continue, reporting that users “became more convinced they were ‘in the right’ and less willing to take initiative to apologize or repair relationships.” Users “rated sycophantic responses as higher quality, trusted these models more, and