Jason Duke, Founder, Kronaxis
Tag: Tutorial
The lean startup methodology has a structural flaw that nobody talks about. Build, measure, learn. It sounds elegant. But the first word is "build", and building costs time, money, and the emotional bandwidth of a founding team that has a finite supply of all three.
The standard workaround is the landing page test. Put up a page describing a product that does not exist, drive paid traffic to it, measure signups or click through rates. This has been best practice for a decade. It also tells you almost nothing useful.
A landing page test tells you that some percentage of people, exposed to a specific headline, on a specific traffic source, at a specific time, were curious enough to enter an email address. It does not tell you whether they would pay. It does not tell you how much. It does not tell you which feature matters, which competitor they would compare you against, or what personality type finds your value proposition compelling versus confusing.
You are measuring curiosity. What you need to measure is purchase intent, price sensitivity, and competitive positioning, segmented by the personality types that actually drive consumer behaviour. You can do that before writing a single line of product code.
The old validation playbook
Before Panel Studio, a startup founder who wanted proper validation had three options.
Customer interviews. Find ten people who match your target market, ask them whether they would use a budgeting app, and watch them say yes because they are sitting in front of you and saying no feels rude. Even skilled interviewers cannot eliminate courtesy bias. The people who agree to talk to you are not representative: they are the subset who respond to cold outreach, which skews heavily towards high Sociability and high Yielding personality types. You are sampling a personality cluster and calling it a market.
Surveys. Write twenty questions, distribute via social media or a panel provider, get three hundred responses. The framing of every question shapes the answer. "Would you pay for a tool that saves you money?" is doing the work for the respondent. Even well designed surveys measure stated preference, not revealed preference, and the gap between those two is where most startup failures live.
Gut feeling. The founder uses their product themselves, asks a few friends, and decides that the market wants what they want. This works occasionally and fails catastrophically the rest of the time.
All three share the same weakness: they cannot tell you why different people respond differently. They give you averages when what you need is segmentation by personality.
The Panel Studio approach
Build a synthetic consumer panel that matches your target market. One hundred personas, census weighted for the country you are targeting, each carrying a full DYNAMICS-8 personality profile. Then test your product hypothesis against that panel the way you would interrogate a room full of potential customers, except these customers respond based on personality driven decision mechanics rather than social desirability or question framing.
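To make that concrete, a single persona record might look something like the sketch below. The field names and trait labels are illustrative assumptions for this article, not Panel Studio's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one synthetic persona record.
# Field names and trait labels are assumptions for illustration,
# not Panel Studio's real data model.
@dataclass
class Persona:
    persona_id: int
    age: int                 # census weighted demographic attributes
    region: str
    income_band: str
    traits: dict = field(default_factory=dict)  # DYNAMICS-8 scores, 0.0 to 1.0

example = Persona(
    persona_id=17,
    age=31,
    region="Greater Manchester",
    income_band="£30k to £40k",
    traits={"Discipline": 0.82, "Impulsivity": 0.21, "Sociability": 0.55},
)
```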
Here is what this looks like in practice.
Worked example: a budgeting app
A SaaS founder is considering a personal budgeting app for the UK market. Target demographic: 25 to 40 year olds, mixed income levels, urban and suburban. Three value propositions are in contention:
A. "Automatic categorisation: we sort your spending so you do not have to."
B. "Savings goals with streaks: turn saving into a game."
C. "Bill negotiation: we find cheaper deals on your recurring bills."
Three price points: £4.99, £7.99, and £9.99 per month.
The founder builds a 100 persona UK panel on the free tier, then upgrades to Starter to access reasoning traces and the larger panel. The entire experiment takes thirty minutes.
Value proposition test. Each persona receives all three propositions and indicates which one would make them most likely to subscribe, and why.
Proposition A wins overall with 43% first choice. But the reasoning traces reveal something the headline number hides. High Discipline personas (scores above 0.7) prefer A at a rate of 71%. They already categorise their spending manually or want to. Automation saves them time on something they value. Their traces describe relief: "I spend twenty minutes a week sorting transactions into spreadsheet categories. If this does it accurately, that time back is worth real money."
Proposition B wins among high Impulsivity personas at 58%. These personas do not currently budget at all. Categorisation is not appealing because they do not want to look at their spending in that much detail. Gamification gives them a reason to engage. Their traces describe motivation: "I have tried budgeting apps before and abandoned them within a week. Streaks might keep me coming back."
Proposition C performs evenly across personality types at around 20%. Bill negotiation is a tangible benefit, but it reads as a one off win rather than a reason to pay a monthly subscription.
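If the founder exports the raw responses for their own analysis, the segmentation above is a simple group by. Here is a sketch in Python with a handful of invented rows, purely to show the mechanics; the column names are assumptions, not Panel Studio's export format.

```python
import pandas as pd

# Hypothetical export of the value proposition test: one row per persona,
# DYNAMICS-8 scores plus the proposition chosen. Rows are invented.
responses = pd.DataFrame({
    "persona_id":   [1, 2, 3, 4, 5, 6],
    "discipline":   [0.82, 0.76, 0.31, 0.22, 0.55, 0.74],
    "impulsivity":  [0.15, 0.20, 0.81, 0.88, 0.42, 0.25],
    "first_choice": ["A", "A", "B", "B", "C", "A"],
})

# Segment the way the worked example does: trait score above 0.7.
responses["segment"] = "other"
responses.loc[responses["discipline"] > 0.7, "segment"] = "high_discipline"
responses.loc[responses["impulsivity"] > 0.7, "segment"] = "high_impulsivity"

# First choice share overall, then within each personality segment.
print(responses["first_choice"].value_counts(normalize=True))
print(
    responses.groupby("segment")["first_choice"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)
```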
Price sensitivity test. The same panel receives a pricing stimulus at each of the three price points, with each price presented in a fresh conversation so that earlier prices do not anchor later responses.
At £9.99, high Discipline personas convert at 34%. Their traces show explicit value calculations: "Automated categorisation saves me twenty minutes per week. That is 80 minutes per month. Even at minimum wage, that is worth more than ten pounds." High Impulsivity personas convert at 11% at the same price point. Their traces show sticker shock without analysis: "Ten quid a month for an app? No chance."
At £4.99, the gap closes. High Impulsivity personas convert at 38%, more than triple their rate at £9.99. High Discipline personas convert at 52%, but their reasoning traces flag a concern: "At this price, I wonder whether the product is any good. Serious financial tools cost more."
The optimal price depends on which segment you are building for. If you target high Discipline users with Proposition A, £9.99 is defensible and signals quality. If you target high Impulsivity users with Proposition B, £4.99 is the ceiling and annual billing will not work because commitment aversion is built into the personality profile.
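Those trade offs are easy to sanity check with the conversion rates above: expected monthly revenue per targeted persona is simply price multiplied by the segment's conversion rate. A quick sketch, ignoring churn, trials, and acquisition cost:

```python
# Back of the envelope check using the conversion rates from the worked
# example above. Expected monthly revenue per targeted persona is
# price x conversion rate for that segment.
conversion = {
    ("high_discipline", 9.99): 0.34,
    ("high_discipline", 4.99): 0.52,
    ("high_impulsivity", 9.99): 0.11,
    ("high_impulsivity", 4.99): 0.38,
}

for (segment, price), rate in sorted(conversion.items()):
    print(f"{segment} at £{price}: £{price * rate:.2f} expected per persona per month")

# high_discipline: £9.99 x 0.34 = £3.40 beats £4.99 x 0.52 = £2.59
# high_impulsivity: £4.99 x 0.38 = £1.90 beats £9.99 x 0.11 = £1.10
```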
Conjoint analysis. Panel Studio's conjoint engine tests feature and price combinations simultaneously. The founder discovers that "automatic categorisation + £9.99 + no contract" is the highest utility combination for Discipline dominant personas, while "savings gamification + £4.99 + one month free trial" wins for Impulsivity dominant personas.
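For readers who want to see the mechanics rather than the tool, a conjoint result of this kind boils down to additive part-worth utilities summed per combination and compared across segments. The sketch below uses invented utility numbers, chosen only to mirror the pattern described above; they are not Panel Studio outputs.

```python
# Minimal additive conjoint sketch with invented part-worth utilities,
# only to illustrate how feature and price combinations are scored per
# segment. None of these numbers come from Panel Studio.
from itertools import product

part_worths = {
    "high_discipline": {
        ("feature", "auto_categorisation"): 1.8,
        ("feature", "savings_streaks"): 0.4,
        ("price", "£4.99"): 0.3,   # cheap price raises quality doubts
        ("price", "£9.99"): 0.6,
        ("terms", "no_contract"): 0.7,
        ("terms", "one_month_trial"): 0.3,
    },
    "high_impulsivity": {
        ("feature", "auto_categorisation"): 0.3,
        ("feature", "savings_streaks"): 1.5,
        ("price", "£4.99"): 1.1,
        ("price", "£9.99"): -0.8,  # sticker shock
        ("terms", "no_contract"): 0.4,
        ("terms", "one_month_trial"): 0.9,
    },
}

features = ["auto_categorisation", "savings_streaks"]
prices = ["£4.99", "£9.99"]
terms = ["no_contract", "one_month_trial"]

for segment, worths in part_worths.items():
    combos = []
    for f, p, t in product(features, prices, terms):
        utility = worths[("feature", f)] + worths[("price", p)] + worths[("terms", t)]
        combos.append((utility, f, p, t))
    best = max(combos)
    print(segment, "->", best[1:], f"utility={best[0]:.1f}")
```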
In thirty minutes, the founder knows that the market splits along a personality axis, not a demographic one. Age and income do not predict conversion. Discipline and Impulsivity do. This shapes everything: the landing page, the onboarding flow, the pricing page, and the feature roadmap.
What this replaces and what it does not
Ten customer interviews through a recruiter: £500 to £2,000 in fees, two weeks of calendar negotiation, and a sample biased towards the personality types who agree to interviews. Panel Studio free tier: nothing, ten minutes.
A survey of 300 respondents via a panel provider: £1,000 to £3,000, one to two weeks for fieldwork, and results that tell you what people said they would do rather than what their personality predicts they will do.
A landing page test with paid traffic: £500 to £5,000 in ad spend, two to four weeks of data collection, and a signal that measures curiosity rather than purchase intent.
Panel Studio does not replace all of these permanently. Once you have identified your strongest hypothesis, validate it with real users. Talk to ten actual high Discipline budgeters and confirm that automated categorisation matters as much as the synthetic panel predicts. Run the landing page test on the winning proposition, not all three.
The difference is sequence. Instead of testing three propositions against real traffic and burning weeks and budget on the two that lose, you screen against a synthetic panel first, identify the winner with personality level explanation, and then spend your validation budget on confirming a single strong hypothesis rather than exploring a broad one.
Honest limitations
Synthetic panels tell you what personality types would do. They do not tell you what specific individuals will do. They are a population level tool, not a crystal ball.
If your product depends on network effects (it only works if your friends use it), synthetic panels cannot model adoption cascades. If your product depends on a habit loop that takes weeks to form, a single stimulus cannot test long term retention. If your competitive advantage is execution quality rather than concept, no amount of concept testing predicts whether your app will actually be good.
Use synthetic panels for what they are good at: rapid hypothesis screening, personality segmented analysis, and understanding the decision mechanics that drive different consumer types. Then take the best hypothesis and test it the old fashioned way, with real people spending real money.
The difference is that you arrive at the real people stage with a hypothesis that has been stress tested against a hundred personality profiles, not one that survived a team brainstorm and a conversation with your co-founder's partner.