64% of support tickets were answerable with existing knowledge — but reps couldn't find it in time.
In late 2024, Sprouts.ai expanded into the CX space — applying the same account intelligence infrastructure to customer support workflows. The problem: enterprise support teams had knowledge bases that were too large to navigate effectively and too unstructured to surface the right answer at the right moment.
64% of support tickets were answerable with information that already existed in the knowledge base. The problem wasn't that the knowledge didn't exist — it was that reps couldn't find it in the 2–3 minutes they had before a customer expected a response.
Scout was a new product built on Sprouts' existing intelligence infrastructure. It needed a design that could surface existing knowledge as a real-time signal during an active support conversation — without requiring the rep to interrupt their workflow to search for it.
The hardest design problem in Scout wasn't the search — it was the context. A support rep dealing with a frustrated customer doesn't have time to formulate a good search query. They need the right answer to appear before they've finished reading the ticket.
The challenge was designing a system that could do the cognitive work the rep didn't have time to do — scanning the knowledge base, ranking by relevance, and surfacing the top 3 results — all without the rep having to ask.
The 4-minute knowledge gap: by the time a rep found the right article manually, the customer had escalated.
The knowledge base indexing pipeline had latency — signals were available within 3–5 seconds of ticket creation, not in real time. The design had to account for a loading state that felt useful rather than empty.
Scout was a zero-to-one product. There was no existing design language for it — I had to define what a good CX intelligence interface looked like from scratch while staying within Sprouts' existing design system.
Surface answers before the rep asks for them
Scout analyzed the ticket text and surfaced the 3 most relevant knowledge base articles in a persistent side panel — no query required. Articles were ranked by relevance score and recency. Reps could expand, copy a formatted response, or mark an article as "not helpful" to improve the model over time.
Scout's side panel: proactive, not query-driven. The rep doesn't have to ask.
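The ranking logic behind the panel can be sketched in a few lines. This is a minimal illustration, not Scout's actual implementation: the `Article` fields, the `recency_weight` blend, and the 90-day decay constant are all assumptions chosen to show how relevance and recency might combine into a single top-3 ranking.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Article:
    title: str
    relevance: float        # hypothetical 0..1 similarity of article to ticket text
    updated_at: datetime

def rank_articles(articles, now=None, recency_weight=0.2, top_n=3):
    """Blend relevance with a recency bonus and return the top N articles."""
    now = now or datetime.now(timezone.utc)

    def score(a):
        age_days = (now - a.updated_at).days
        recency = 1.0 / (1.0 + age_days / 90.0)  # decays over roughly a quarter
        return (1.0 - recency_weight) * a.relevance + recency_weight * recency

    return sorted(articles, key=score, reverse=True)[:top_n]
```

The key design property is that the rep never sees this computation: it runs on ticket creation, and only the ranked top 3 surface in the panel.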
Design the 3-second wait as a feature, not a failure
The 3–5 second indexing latency was a technical constraint. We designed it as a deliberate moment: a progress indicator that showed the rep that Scout was "reading the ticket". This set accurate expectations and prevented reps from assuming the panel was broken.
"Reading ticket..." — language that creates trust. Not "Loading..." which creates anxiety.
The language on the loading state mattered more than the animation. "Loading..." created anxiety. "Reading ticket..." created the sense that the system was doing work on the rep's behalf — which it was.
We almost built a "suggested reply" feature — a one-click button that composed a draft response using the surfaced knowledge. We held back. Not because it wasn't useful, but because auto-composed responses have a quality floor problem: 80% great, 20% confidently wrong. We didn't want Scout to become the tool that sent the wrong answer at speed.
First-response resolution rate (within first quarter)
48% reduction in ticket escalation time (90 days post-launch)
3.1x increase in knowledge base utilization (vs. pre-launch baseline)
Scout shipped in Q2 2025 to a select group of enterprise CX teams. First-response resolution — tickets resolved without escalation — improved significantly within the first quarter. Escalation time dropped 48%. Knowledge base utilization — articles being accessed from the platform — increased 3.1x.
The feedback that mattered most came from reps: "I stopped dreading complex tickets." That's the signal we were designing for.
The insight that changed my approach: support reps aren't researchers, they're executors under time pressure. Every second I spent designing a "better search" was a second spent on the wrong problem.
The right design had to make the search unnecessary — which meant the system had to do the cognitive work, not the rep. That reframe — from "how do we help them find it faster" to "how do we make finding it unnecessary" — unlocked the whole design.