SPROUTS.AI · 2024 · B2B WEB APP

Building account intelligence to equip sales teams with buying signals, relationships, and AI-driven insights to win and expand deals.

ROLE: Product Designer
TIMELINE: 2 months
STATUS: Shipped · 2024
PLATFORM: B2B Web App
CONTEXT

Sprouts.ai is an Accel-backed GTM intelligence platform. Enterprise sales teams use it to research accounts, surface buying signals, and coordinate outreach. The core market problem: sales reps spend 60–70% of their time on accounts they'll never close — not because they don't work hard, but because they're working without the right intelligence. They don't know the people, can't reach the right decision-makers, and can't predict when an account is ready to buy.

The platform had the raw data. What it lacked was a unified surface for it. Reps were toggling between Sprouts, Sales Navigator, Apollo, Google, and ChatGPT in a single prospecting session — each tool answering one piece of the puzzle, none of them talking to each other. Deals fell through not from lack of effort, but from lack of context.

We ran research sessions with 10+ Sales Development Representatives — people already using Sprouts daily to source and enrich data. We shadowed live prospecting sessions, watching reps navigate Sales Navigator, Apollo, Outreach, and Sprouts simultaneously. Workarounds show you exactly where a product is failing. From this we mapped the full account journey: every step from account assignment to first meeting, tracking where time gets wasted, where confidence drops, and where deals die.

The task was to redesign the core account page into a single destination: one URL, every piece of intelligence a rep needs to research an account, prioritize outreach, and book the meeting.

THE PROBLEM

Enterprise reps spend 60–70% of their time on accounts they'll never close. The tool they needed didn't exist yet.

The root problem wasn't data quality. Sprouts had the signals — buying intent, org structure, funding activity, competitor tech stacks. The problem was presentation and fragmentation. Without hierarchy, without workflow context, a page with eight data types at equal visual weight isn't intelligence — it's noise.

Reps were doing a five-tab open every time they researched an account: Sales Navigator for org chart, Apollo for contacts, LinkedIn for recent activity, Google for news, ChatGPT to synthesize it all. Sprouts was one of those five tabs — not the hub.

The real challenge: could we collapse that five-tab workflow into a single page without making the page feel like five apps stapled together?

Diagram showing the five-tab workflow reps used before — Sales Navigator, Apollo, LinkedIn, Google, and ChatGPT — versus the single Sprouts account page after

The five-tab open: every account researched across five tools. No single source of truth.

CONSTRAINTS
TECHNICAL

Cortex — the AI co-pilot — couldn't generate relevant output without knowing what the rep was selling. The data pipeline aggregated signals from multiple sources (job postings, funding rounds, news), but without product context, every insight was generic. A Knowledge Base layer had to be designed and built before Cortex could be useful.

ORGANIZATIONAL

Two months. Team of five. HP and AT&T as early adopters meant we were designing for enterprise-scale workflows before we had enterprise-scale research. Usability testing had to be embedded inside the build cycle, not run before it.

THE WORK
THE ACCOUNT PAGE

One URL for every piece of intelligence a rep needs

We designed the account page as a layered system — not a flat list of every available data point, but a page that surfaces critical information first and reveals depth on demand. It opens with an Overview (firmographics + top signals + strategic talking points), then covers eight intelligence types: White Spaces (where your product solves a real problem), Signals (buying intent activity over time), Buyer Committee (decision-makers mapped to your ICP), Activity Feed (outreach history), Org Chart (department structure + open roles), Workforce Analytics (hiring trends + headcount growth), Technographics (full tech stack split by relevant vs. competitor tools), and Competitive Intelligence (SWOT + battle cards).

The redesigned account page showing the layered information hierarchy — Overview at top, then White Spaces, Signals, Buyer Committee, and deeper intelligence modules below

One page. Eight intelligence types. Progressive depth — critical information loads first, full detail on demand.

THE INFORMATION ARCHITECTURE

Eight data types on one page without overwhelming the rep

The design challenge was preventing information density from becoming a liability. We used progressive disclosure throughout: each module surfaces its headline view first, with full depth accessible via expand. The Overview — firmographics plus top signals — loads immediately and gives a rep enough to start a conversation. Everything else is available without leaving the page, but only surfaces when the rep is ready for it. We tested three information hierarchies with the SDR team. The one that landed matched the rep's workflow, not the data taxonomy: Signal → People → History → Tech.

Information architecture diagram showing the four-layer hierarchy: Signal, People, History, Tech — each collapsible and individually expandable

The four-layer hierarchy: Signal → People → History → Tech. Follows the rep's workflow, not the data source.
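The layered, progressive-disclosure structure can be sketched as a simple module config. This is an illustrative sketch only — the type names, module IDs, and fields are assumptions for explanation, not the shipped Sprouts code:

```typescript
// Hypothetical sketch of the layered account page. Each module declares its
// workflow layer and whether its headline view renders on initial load;
// full depth is fetched on demand when the rep expands it.
type Layer = "signal" | "people" | "history" | "tech";

interface AccountModule {
  id: string;
  layer: Layer;
  headline: boolean; // rendered immediately on page load
}

const modules: AccountModule[] = [
  { id: "overview", layer: "signal", headline: true },
  { id: "whiteSpaces", layer: "signal", headline: true },
  { id: "buyerCommittee", layer: "people", headline: false },
  { id: "activityFeed", layer: "history", headline: false },
  { id: "technographics", layer: "tech", headline: false },
];

// Ordering follows the rep's workflow, not the data taxonomy.
const layerOrder: Layer[] = ["signal", "people", "history", "tech"];

function renderOrder(mods: AccountModule[]): string[] {
  return [...mods]
    .sort((a, b) => layerOrder.indexOf(a.layer) - layerOrder.indexOf(b.layer))
    .map((m) => m.id);
}
```

The point of the sketch: ordering is a property of the workflow layer, not of where the data came from, so adding a new data source never reshuffles the page.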

White Spaces was the highest-engagement section in early testing. Each entry surfaces a specific gap where the product can solve a real problem — with a rationale so the rep understands the logic, not just the recommendation.

THE KNOWLEDGE BASE

Cortex couldn't be useful without context

Before we built the Knowledge Base, reps were pasting company summaries into ChatGPT before every research session just to get answers relevant to what they were selling. The AI wasn't producing bad output — it just didn't know the product, the persona, or the sales motion. We built a Knowledge Base where teams can upload product catalogs, sales collateral, demo videos, and release notes. This fed Cortex — the AI co-pilot — with the right context permanently. No more session-by-session setup. No more toggling to Google or ChatGPT mid-research.

The Knowledge Base interface showing file upload for product catalogs, collateral, and release notes — connected to the Cortex AI co-pilot

The Knowledge Base: permanent context for Cortex. Upload once, relevant insights forever.
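Mechanically, the Knowledge Base replaces the per-session copy-paste into ChatGPT with product context prepended to every Cortex request. A minimal sketch, assuming hypothetical names — this is not the actual Cortex pipeline:

```typescript
// Hedged sketch: assemble a Cortex prompt from persistent Knowledge Base
// context plus the current account's signals. Field names are assumptions.
interface KnowledgeBase {
  productSummary: string; // from uploaded catalogs and collateral
  personas: string[];     // target buyer personas
}

function buildPrompt(kb: KnowledgeBase, accountSignals: string[]): string {
  return [
    `Product context: ${kb.productSummary}`,
    `Target personas: ${kb.personas.join(", ")}`,
    `Account signals: ${accountSignals.join("; ")}`,
    "Generate talking points relevant to this product.",
  ].join("\n");
}

const prompt = buildPrompt(
  { productSummary: "GTM intelligence platform", personas: ["VP Sales"] },
  ["hiring 12 SDRs", "Series C funding"]
);
```

Because the context lives with the workspace rather than the session, every rep on the team gets the same grounding without any setup.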

TRUST IN THE DATA

Every insight surfaces its source

Reps questioned AI-generated insights — and they were right to. We noticed it early in testing: a rep would read a Cortex recommendation, then immediately open a browser tab to verify it. The fix wasn't better accuracy — it was better transparency. Every piece of intelligence now surfaces its source so reps know where the information comes from and can decide how much weight to give it. We also added a feedback mechanism so reps can flag when insights feel off. Trust, in this case, was a design problem before it was a data problem.

Signal card showing the source attribution layer — each insight displays its origin data point and a feedback flag option

Source attribution on every insight. Reps know where the data comes from — and can push back when it's wrong.
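As a data structure, the attribution-plus-feedback design amounts to carrying the source with every insight and making the flag non-destructive. A sketch under assumed names, not the shipped schema:

```typescript
// Illustrative only: every insight carries its origin so the rep can weigh
// it, and flagging marks it for model feedback without hiding it.
interface Insight {
  text: string;
  source: { kind: "job_posting" | "funding_round" | "news"; url: string };
  flagged: boolean;
}

function flagInsight(insight: Insight): Insight {
  // Non-destructive: returns a flagged copy; the flag is later fed back
  // to the model as a relevance signal.
  return { ...insight, flagged: true };
}

const insight: Insight = {
  text: "Hiring surge in data engineering",
  source: { kind: "job_posting", url: "https://example.com/job" },
  flagged: false,
};
const flagged = flagInsight(insight);
```

Keeping flagged insights visible (rather than deleting them) is what makes the feedback loop auditable — the rep can see what they pushed back on.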

The feedback mechanism wasn't just a UX feature. It fed signal back to the model. After one month of active flagging from HP and AT&T reps, insight relevance improved 80%.

WHAT I CHOSE NOT TO DO

We designed — and prototyped — a dynamic territory reordering feature: the rep's account list would automatically reprioritize based on which accounts were showing the strongest buying intent signals that week.

We paused it. Not because it wasn't useful — it was the most-requested feature during testing. But we hadn't solved the false positive problem. An AI that confidently de-prioritizes an account a rep has been cultivating for months creates a trust collapse that's hard to recover from. The signal card design had to be validated at scale before we gave the AI authority over territory decisions.

OUTCOME
30% increase in demos booked and deals progressing (early trial estimate)

80% improvement in insight relevance and accuracy (after Knowledge Base introduction)

3 external tools eliminated from the daily rep workflow (Google, ChatGPT, Sales Navigator)

The account page launched to HP and AT&T first. Demos booked and deals progressing increased 30% over the trial period — early signal, but directionally clear. After the Knowledge Base was introduced, insight relevance improved 80% — reps stopped asking "why is this relevant to me" and started asking "how do I act on this".

The metric we didn't anticipate: reps stopped opening other tabs. The multi-tool workflow — five tabs per account — collapsed to one. That wasn't a number we tracked. But it was the outcome the whole design was built for.

WHAT I LEARNED

AI tools aren't judged by how often they're right — they're judged by how they handle being wrong. The source attribution, the feedback mechanism, the confidence signals: none of these were features reps asked for. They were design solutions to the question of how a rep continues to trust a tool after it gives them a bad recommendation.

I also learned that consolidation has a cost. When we collapsed five tools into one page, we made Sprouts responsible for all five. When the page is slow, reps can't open another tab anymore. That's a different kind of trust obligation — and it raises the bar on reliability in ways that a supplementary tool never has to meet.