# thinking-together
I wrote an article discussing the future of user interfaces and the role of conversational AI: https://www.linkedin.com/pulse/future-conversational-crafting-next-gen-user-ai-salmen-hichri-lpw3e/ I'm curious to know what the community thinks! 🧠💡
• Do you foresee a world where chatbots and voice are the dominant user interfaces?
• Or will the mouse/screen always maintain a leading role in human/machine interactions?
Thanks for sharing this, it was a really interesting read. To your questions (and I'd love your insights from thinking in this space): my take is that it'll never be one or the other, but probably a big mix of both. As it is, I, and I'd wager many other folks, talk a lot at work, just not to computers; the talking happens in conversations with other people, and generally it leads to mouse-and-keyboard work. To me, conversational interfaces are less about a novel input method and more about shifting the mode of collaboration: when you can talk with a computer, it becomes less a tool you use to complete some work and more a collaborator in that task.
Conversational UX is inherently 1D, so it's generally less efficient when the search space is small and the classification scheme is relatively obvious to the audience. Additionally, when it's done via voice and spoken response, it's inherently public, which is usually a detriment rather than a feature. Lastly, people generally expect a conversational reply in a shorter time than other UI feedback, so it feels slower even if it isn't.
That being said, I think the current generation is very good at rewording a message to optimize for both a goal tone and an easily measurable outcome, like the tone of the reply it gets or the action taken (though the outcome part can be a monkey's paw situation, so it requires supervision).
One thought on the "integrated AI companion": coding assistants may have been low-hanging fruit, because it's relatively easy to extract context to provide to the AI system. For other systems, I suspect event-driven architectures may become more popular: they may allow for building that context on the fly to provide to the AI.
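Roughly what I have in mind, as a sketch only; the event sources and buffer here are made up for illustration, not taken from any particular system:

```python
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Event:
    source: str      # e.g. "editor", "crm" (hypothetical sources)
    summary: str     # short human-readable description of what happened
    timestamp: float

class ContextBuffer:
    """Accumulates recent application events so they can be handed
    to an AI assistant as conversational context."""

    def __init__(self, max_events: int = 50):
        self.events = deque(maxlen=max_events)

    def record(self, source: str, summary: str) -> None:
        self.events.append(Event(source, summary, time.time()))

    def as_prompt_context(self) -> str:
        # Flatten recent events into a plain-text block the model can read.
        return "\n".join(f"[{e.source}] {e.summary}" for e in self.events)

# Each part of the app publishes events as they happen...
buffer = ContextBuffer()
buffer.record("editor", "User opened invoice_2024_03.pdf")
buffer.record("crm", "Customer ACME flagged as overdue")

# ...and the assistant call gets the accumulated context "for free".
prompt = f"Context:\n{buffer.as_prompt_context()}\n\nUser: draft a payment reminder"
print(prompt)
```

The point being that the events the system already emits become the context, rather than the assistant having to scrape it out of the UI after the fact.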
If you can mine GitHub for it, it's in a coding assistant today; otherwise the amount of labelled training data you need is pretty massive (let alone the copy/paste and Stack Overflow bias). Having a ton of event data helps, but unless there's some clever way to auto-label it, 🤷
I agree that conversational interfaces are not relevant for everything (the 1D aspect, @Don Abrams). Text or voice won't replace the efficiency of visual data formats like charts or tables.
They can replace complex menus, though. I find it easier to type a command in natural language through an AI assistant than to figure out how to do the task myself via a complex UI. But when I'm using a familiar tool with a customised UI, I can simply reach for keyboard shortcuts.
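A rough sketch of what "replacing complex menus" could look like; the command registry and the keyword matching below are hypothetical stand-ins, and a real assistant would let the model pick the command (e.g. via function calling) rather than using word overlap:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    name: str
    description: str
    run: Callable[[], None]

# The registry is whatever the "complex UI" already knows how to do.
COMMANDS = [
    Command("export_pdf", "Export the current document as a PDF",
            lambda: print("exporting as PDF...")),
    Command("toggle_dark_mode", "Switch between light and dark theme",
            lambda: print("toggling theme...")),
]

def dispatch(user_request: str) -> None:
    # Pick the command whose description best overlaps the request.
    # (Keyword overlap stands in for the model's intent matching.)
    words = set(user_request.lower().split())
    best = max(COMMANDS,
               key=lambda c: len(words & set(c.description.lower().split())))
    best.run()

dispatch("please export this as a pdf")   # -> exporting as PDF...
```

The interesting part is that the assistant doesn't need a new UI surface; it just needs access to the actions the tool already exposes behind its menus.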
Another example of the discontinuity between a UX optimized for discoverability by non-experts, and one optimized for proficient use by experts. “The unreasonable effectiveness of text”
also see: visual programming (70% serious, 30% trolling) … once you understand the details of the solution space well enough, the high-level semantic representation needs to get out of the way, and something you can break on purpose with search & replace becomes more productive
also see: IBM VisualAge, which was written in Smalltalk for Smalltalk, and later extended for other languages. Keeping your namespaces, classes, method signatures, and method bodies in separate windows kinda makes sense from an exploratory perspective, but it’s incredibly useful to occasionally do impolite layer-breaking things like rename files, global search & replace, etc.
Synthesizer keyboard nerds have an evocative term, "menu diving", for when the tweak you need isn't available on, or even assignable to, a physical control surface.
One extreme of that spectrum is modular synths, where nearly every discrete function is a physical module, connected with physical wires… Unbounded creativity, not at all approachable for someone who wants to solve a simple problem quickly, and also rigid in the sense that your choice of which modules to use, and how you wire them up, is a special IRL artifact that you can't just save as a preset and try something wildly different for a while.
Somewhere close to the opposite extreme, you have things like smartphone/tablet apps that can do surprisingly powerful synthesis, but due to limited physical control surfaces you end up menu diving all the time; on the plus side, you can load a new preset in a second or two.
I can't help but think of the last time chat bots were a UX paradigm receiving significant attention; this has been an ongoing cycle since ELIZA. To be fair, this time it genuinely is different. It's interesting what this article prioritizes: it talks a lot about having the context of the conversation, which we kind of get for free now. What's fascinating to me at the moment is how people are modifying things like ChatGPT with custom directives or domain-specific GPTs. I don't love that they're building on a closed ecosystem (or is it _Open_AI?), but that's another story. These customizations are more in line with UX concerns like the ones illustrated in this pre-consumer-LLM article on chat bots: https://chatbotsmagazine.com/the-future-healthcare-and-conversational-ui-87182d9045e0 Here we have the healthcare sector, and the author does mention context but spends most of her time talking about empathy and what constitutes simplicity in such an interface.