From Messy Data and Complex Workflows, I Build Products That Drive Outcomes
I turn complex, multi-stakeholder operational systems into products teams can use to make strategic decisions. My focus: answer what's actually working, validate quickly with data, and ship practical products grounded in real user behavior.
I've driven 5× growth at Max Retail, led AI-driven trial experiences at FormAssembly, and built a 0-to-1 marketplace at Opportunity@Work.
FormAssembly · Head of Product-Led Growth
Trial users were churning before finding value. A 3-day AI prototype collapsed time-to-value and became the #1 roadmap priority.
FormAssembly's value proposition was clear: build a form, connect it to Salesforce, watch it work. But most trial users churned before reaching that moment. The problem was time. Building a workflow and connecting it to a form took days. By the time they got there, they'd already left.
Long Time-to-Value: Setup took days; users weren't finding value quickly enough and weren't converting to paid
Unknown User Preferences: Unclear how much AI guidance vs. user control was needed
Need for Speed: Had to prove the concept with a working prototype fast
01. Designed backwards from the aha moment
Rather than asking users to build from scratch, what if AI generated the workflow and form based on what they wanted to accomplish? Built a functional prototype with Cursor in a few days, showing AI generating the form and wiring it to Salesforce.
02. Validated with real users in Week 2
Got feedback from 10 customers. It confirmed the concept, particularly around how much guidance users needed versus where they wanted control.
03. Delivered working prototype and validation that drove roadmap prioritization
Delivered a working prototype backed by customer validation. The concept became the prioritized initiative for the 2024 roadmap.
Users weren't failing because the product was bad. They were churning because time-to-value was too long. Reducing the path to value beat improving the value itself. Interactive prototypes reveal what users want control over vs. where they want to be led. Walking into the retreat with a working prototype and user feedback spoke louder than a deck.
Max Retail · Head of Product
Retailers were churning before making their first sale. A 3-week manual test with Excel spreadsheets proved a new model. The business pivoted from B2B to B2B2C.
Max Retail ran a B2B marketplace for surplus inventory. Retailers were churning before making their first sale. The data was clear: sellers who made a first sale stuck around. Everyone else left.
Retention Problem: Time-to-first-sale was the churn predictor
Three Competing Solutions: Optimize B2B, build marketing tools, or pivot to B2B2C
Unclear Which Would Work: No data to validate which path was right
01. Validated cheaply first
Ran a 3-week manual test: uploaded 1,000 items from the top 20 retailers to a third-party marketplace via Excel. Sales spiked immediately. Proved the model with zero engineering investment.
02. Built repeatable integration pattern
Engineered one integration while manually testing the next marketplace, validating demand before committing engineering time. Scaled to 12 integrations.
03. Focused on the metric that mattered
Too many metrics were being tracked initially. Cut through the noise to focus on time-to-first-sale because that metric predicted retention.
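The cohort check behind this call can be sketched in a few lines. This is an illustrative sketch, not Max Retail's actual pipeline: the records, field names, and the 14-day/90-day windows are all hypothetical.

```python
from datetime import date

# Hypothetical retailer records: signup date, first-sale date (None = never sold),
# and whether the retailer was still active 90 days after signup.
retailers = [
    {"signup": date(2021, 1, 5), "first_sale": date(2021, 1, 9),  "retained_90d": True},
    {"signup": date(2021, 1, 7), "first_sale": None,              "retained_90d": False},
    {"signup": date(2021, 2, 1), "first_sale": date(2021, 2, 3),  "retained_90d": True},
    {"signup": date(2021, 2, 4), "first_sale": date(2021, 3, 20), "retained_90d": False},
    {"signup": date(2021, 2, 9), "first_sale": None,              "retained_90d": False},
]

def retention_rate(group):
    """Share of a cohort still active at 90 days."""
    return sum(r["retained_90d"] for r in group) / len(group) if group else 0.0

# Split cohorts on whether a first sale happened within 14 days of signup.
fast = [r for r in retailers if r["first_sale"] and (r["first_sale"] - r["signup"]).days <= 14]
slow = [r for r in retailers if r not in fast]

print(f"fast first sale: {retention_rate(fast):.0%} retained")  # → 100%
print(f"slow or no sale: {retention_rate(slow):.0%} retained")  # → 0%
```

When the gap between the two cohorts is this stark, every other metric is noise: the product question reduces to shrinking time-to-first-sale.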
The liquidity problem was a supply-demand imbalance. Rather than build buyer demand from scratch, the solution plugged into existing marketplaces where consumer demand already existed. A 3-week manual experiment validated a major strategic pivot and saved months of debate.
"Test before you build. A 3-week manual experiment can save months of debate."
Opportunity@Work · Director of Product
People without degrees were being filtered out before humans saw their profiles. A skills-based marketplace placed 100 hires in 6 months.
Opportunity@Work had a mission (help people without degrees get better jobs) but no product. A skills-based hiring marketplace needed to be built from scratch. HR teams expected full ATS functionality: interview scheduling, offer management. Telling them those things weren't being built required a clear answer: what are we building, and why is that enough to make a hire?
Feature Expectations vs. Reality: HR teams wanted a full ATS; had to justify building a simplified product with less flair but the same value
Limited Measurement Options: Needed to prove confirmed hires without building full ATS integrations
Three-Sided Marketplace: Candidates, employers, and training providers all needed working funnels
01. Built only what proved the value proposition
Search by skills, rich candidate profiles, direct messaging. Everything else got deprioritized: interview scheduling, offer management, ATS integrations.
02. Used mission-aligned partners for measurement
Skipping ATS integration was a constraint. Mission-aligned employer partners manually reported confirmed hires. Not scalable long-term, but validated the model in months rather than years.
03. Built tracking across all three sides to understand real behavior
Mixpanel tracking across all three sides revealed what actually happened. Expected: skills-first search. Reality: location and availability came first. Filters were updated to match real behavior.
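The kind of analysis that surfaced the skills-vs-location surprise can be sketched from raw events. This is a minimal illustration of first-action analysis on a Mixpanel-style event stream; the event and property names are hypothetical, not the actual schema.

```python
from collections import Counter

# Hypothetical event stream, ordered by timestamp per user.
events = [
    {"user": "u1", "event": "filter_applied", "filter": "location"},
    {"user": "u1", "event": "filter_applied", "filter": "skills"},
    {"user": "u2", "event": "filter_applied", "filter": "availability"},
    {"user": "u3", "event": "filter_applied", "filter": "location"},
    {"user": "u3", "event": "filter_applied", "filter": "skills"},
]

# The FIRST filter each user reaches for reveals what they actually prioritize,
# regardless of what the product assumed they would lead with.
first_filter = {}
for e in events:
    first_filter.setdefault(e["user"], e["filter"])

print(Counter(first_filter.values()))  # → Counter({'location': 2, 'availability': 1})
```

Counting first actions rather than total actions is the key design choice: totals would have hidden that skills search, while used, was never the entry point.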
Zero to one is about what you say no to. Assumptions about user behavior need testing against real usage. Mission alignment creates measurement infrastructure when technology cannot. Three-sided marketplaces require a funnel for every participant.
Max Retail · Head of Product
Operations was drowning in manual processing across 12 marketplaces. Shipped automation two weeks before Black Friday with 0 major issues.
Growth was exposing infrastructure fragility. Orders were flowing in from 12 marketplace integrations, and every channel depended on the same small operations team. With Black Friday approaching, the model was unsustainable: more sales meant more headcount, and costs were growing linearly with revenue.
Different Integration Standards: Each marketplace had different integrations and exception scenarios
Immovable Deadline: Black Friday set a ship date that could not slip
High-Stakes Failure Risk: If automation failed during peak season, operations would be overwhelmed with no fallback
01. Ran parallel workstreams for speed and trust
Changing tools a team relies on requires trust. The product track built a no-code workflow so the operations team had hands-on experience before full automation shipped; the engineering track built Master SKU infrastructure for standardized order ingestion.
02. Built exception framework to focus human effort
The goal was not to eliminate human involvement but to focus it. Automate the common path. Route exceptions (malformed addresses, inventory mismatches, API downtime) to operations where human judgment adds value.
03. Made scope cuts to hit deadline
Multi-item orders had been underestimated, so they were de-scoped. The remaining scope covered 86% of Black Friday revenue and shipped on time; multi-item support followed in the next release.
04. Launched with two-week buffer before peak season
The automation went live in the second week of November with 0 major issues. The system held.
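The exception framework in step 02 boils down to a chain of checks: an order that passes them all flows through automation, and any failure routes it to the operations queue with a reason attached. A minimal sketch of that idea, with hypothetical check names, order shape, and stock data rather than the production rules:

```python
def has_valid_address(order):
    """Illustrative check for the 'malformed address' exception."""
    addr = order.get("address", {})
    return bool(addr.get("street")) and bool(addr.get("zip"))

def inventory_matches(order, stock):
    """Illustrative check for the 'inventory mismatch' exception."""
    return stock.get(order["sku"], 0) >= order["qty"]

def route_order(order, stock):
    """Automate the common path; route exceptions to humans with a reason."""
    if not has_valid_address(order):
        return ("ops_queue", "malformed address")
    if not inventory_matches(order, stock):
        return ("ops_queue", "inventory mismatch")
    return ("auto", None)

stock = {"SKU-1": 10, "SKU-2": 0}
orders = [
    {"sku": "SKU-1", "qty": 2, "address": {"street": "1 Main St", "zip": "10001"}},
    {"sku": "SKU-2", "qty": 1, "address": {"street": "9 Oak Ave", "zip": "60601"}},
    {"sku": "SKU-1", "qty": 1, "address": {"street": "", "zip": "94103"}},
]
for o in orders:
    print(o["sku"], "->", route_order(o, stock))
```

Attaching the failure reason is what makes the queue useful: operations sees why an order needs judgment, not just that automation gave up.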
The no-code workflow wasn't a stopgap. It was deliberate trust-building with operations before flipping to full automation. The near-miss on multi-item scope was a planning lesson. Scope cuts were uncomfortable but necessary. The two-week buffer before Black Friday wasn't accidental. It gave the team real time to absorb what went wrong in QA and adjust mid-execution.
Open to product leadership, senior IC roles, or interesting conversations.