As a Designer
A core consideration for the design of interfaces that expose LLMs is how unstructured the inputs to AI-assisted features should be.
Chatbot UIs are more unstructured: users freely go back and forth with the AI in natural language and expect it to remember some previous context.

Button UIs are more structured: they accept only specific inputs, which your SaaS application turns into a command that embeds a prompt sent to the LLM (sketched below).
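To make the contrast concrete, here is a minimal sketch of the two input styles, assuming a generic chat-completion-style LLM client. The `callLLM` helper, the "explain this query" button, and the prompt template are hypothetical illustrations, not LogicLoop's actual API.

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// Placeholder for whatever LLM client the app uses (e.g. a chat completion
// call); assumed here for illustration.
declare function callLLM(messages: ChatMessage[]): Promise<string>;

// Chatbot UI: unstructured. The user types anything, and prior turns are
// replayed so the model keeps conversational context.
async function handleChatTurn(
  history: ChatMessage[],
  userInput: string
): Promise<string> {
  const messages: ChatMessage[] = [
    ...history,
    { role: "user", content: userInput },
  ];
  return callLLM(messages);
}

// Button UI: structured. The app, not the user, builds the prompt from a
// fixed template, so the task the LLM receives is narrowly scoped.
async function handleExplainQueryButton(sqlQuery: string): Promise<string> {
  const prompt = `Explain what the following SQL query does in plain English:\n\n${sqlQuery}`;
  return callLLM([{ role: "user", content: prompt }]);
}
```

The key design difference is where the prompt is authored: in the chatbot case the user writes it, while in the button case the application does, which is what makes the latter easier to constrain.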
There are pros and cons to both, but the choice really comes down to three main factors: use case, complexity of inputs and outputs, and abuse potential.
For instance, if the inputs are very complex and you expect the user to want to guide the outputs, you might prefer a chatbot-style interface. If the abuse potential is high, however, you might want to restrict how open-ended a task your user can give your LLM.
Note that you don't need a universal policy on this. At LogicLoop, it made sense for us to have both Chatbot and Button UI modes, just in different areas of the product.