Guide to building your first LLM-powered app

As a Product Manager


Last updated 1 year ago

Once we had identified a business problem to solve with LLMs, a few natural integration points emerged. We then had to make product decisions about which parts of the user journey to enhance with LLMs.

Onboarding

LLM-generated output is a great way to solve the cold-start problem users face on your platform.

For us, that cold-start problem was “what SQL query can I write on my data?” By adding a tab where users could ask our AI to suggest queries and data trends to explore during onboarding, we improved conversion and the stickiness of downstream features.
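To make the onboarding idea concrete, here is a minimal sketch of what the backend of such a suggestion tab could do: build a prompt from the user's connected schema and ask the model for starter queries. The function name, schema format, and wording are all hypothetical (not our production code), and the resulting string would be sent to whichever chat-completion API you use.

```python
def build_suggestion_prompt(table_name, columns, n_suggestions=3):
    """Build a prompt asking an LLM to suggest starter SQL queries
    for a user who has just connected a table.

    Hypothetical helper for illustration: `columns` is assumed to be
    a list of (name, type) pairs pulled from the user's schema.
    """
    schema = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    return (
        f"You are a SQL assistant. A new user has connected the table "
        f"`{table_name}` with columns: {schema}.\n"
        f"Suggest {n_suggestions} short SQL queries that reveal "
        f"interesting trends in this data. "
        f"Return only the queries, one per line."
    )


# Example: a freshly connected "orders" table.
prompt = build_suggestion_prompt(
    "orders",
    [("order_id", "INTEGER"), ("amount", "NUMERIC"), ("created_at", "TIMESTAMP")],
)
```

Grounding the prompt in the actual schema is what makes the suggestions feel personalized rather than generic, which is where the conversion lift came from.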

10x Asset Creation

LLM-generated output is also great at helping your users work through your core product quickly and efficiently.

For maximum impact, we added our AI SQL Copilot directly on the page where users create, modify, and save their SQL queries.

Debugging

The more constrained the inputs to your LLM are, the fewer hallucinations and other typical failures you run into. This makes LLMs good debugging assistants.

We've had success integrating LLMs into workflows where we expect the user to be debugging SQL queries, as part of the AI SQL Copilot. It's a perfect fit: spotting where you missed a semicolon is frustrating for a human but easy for a machine.
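To illustrate what "constrained inputs" means in the debugging case, here is a hedged sketch: the model never gets free-form chat, only the failing query, the engine's error message, and a narrow instruction. The helper name and wording are illustrative, not our actual implementation.

```python
def build_debug_prompt(sql, error_message):
    """Constrain the LLM's input to exactly two things -- the failing
    query and the database engine's error -- and ask only for a fix.

    Hypothetical helper for illustration; the returned string would be
    sent to your chat-completion API of choice.
    """
    return (
        "The following SQL query failed.\n"
        f"Query:\n{sql}\n"
        f"Error: {error_message}\n"
        "Reply with the corrected query only. Do not explain."
    )


# Example: a query the engine rejected with a syntax error.
prompt = build_debug_prompt(
    "SELECT id, name, FROM users WHERE active = 1",
    'syntax error at or near "FROM"',
)
```

Because the model only ever sees the query and the error, there is little room for it to hallucinate tables or wander off task, which is exactly why this workflow behaves well.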