The pain point that every single B2B company shares is pricing.
Countless philosophies flood the web on how to price your product, and more than 500 pricing products and services are offered in the tech space. All of this points to the fact that, when it comes to pricing, most people don’t know what they’re doing.
For a long time, neither did we.
Pricing is inherently risky because your revenue depends on getting it right. Yet there isn’t one de facto strategy that works. Mailchimp has had great success with its freemium plan, which businesses can stay on forever, while CampaignMonitor—a functionally similar tool—has no free tier and scales its pricing by email volume and features.
We learned the hard way that there’s no silver bullet for pricing. More importantly, we found a way to constantly improve our pricing: here’s how we de-risked our pricing strategy.
When we were first developing our pricing strategy, we were very concerned about getting it right. We didn’t want to price ourselves out of the market, but we didn’t want to undervalue the tool we were so proud of building. Like many other companies, we didn’t want to make the wrong decision and be pigeon-holed as a certain type of brand.
After much deliberation, we ended up starting with a freemium plan, hoping to get our name out there and get some good feedback on the product.
After getting some initial traction, we continued on a similar trajectory to many other freemium SaaS companies—we went upmarket. We dropped our free plan and added a “tailored” enterprise plan, hoping to catch a few big fish.
Over the course of two years, we changed our pricing only three times. Each time, we factored in all the details we were told were important:
LTV/CAC. That golden ratio of 3:1 was a great starting benchmark, but as we grew we had bigger aspirations.
Market position. We knew there were a lot of customer support solutions in the market, so we made sure our pricing was competitive with existing solutions.
Buyer personas. We made sure we were packaging and pricing appropriately for each tier, and the progression made sense for upsells and upgrades.
Price vs. value. Pricing experts say you should price on value—a price that depends less on the cost of building the product and acquiring customers, and more on the product’s value to your customers.
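To make the LTV/CAC benchmark above concrete, here is a minimal sketch of the arithmetic. All figures are hypothetical, purely for illustration:

```python
# Illustrative LTV/CAC check. Every number below is made up;
# plug in your own ARPA, margin, churn, and acquisition cost.

def ltv(arpa_monthly: float, gross_margin: float, monthly_churn: float) -> float:
    """Customer lifetime value: monthly gross profit divided by the churn rate."""
    return arpa_monthly * gross_margin / monthly_churn

def ltv_cac_ratio(lifetime_value: float, cac: float) -> float:
    """The 'golden ratio' benchmark compares this result against 3:1."""
    return lifetime_value / cac

customer_ltv = ltv(arpa_monthly=100, gross_margin=0.8, monthly_churn=0.02)
ratio = ltv_cac_ratio(customer_ltv, cac=1000)
print(customer_ltv)      # 4000.0
print(ratio)             # 4.0 -> above the 3:1 starting benchmark
```

A ratio well above 3:1 can also suggest you are underpricing or underspending on acquisition, which is part of why we treated it as a starting benchmark rather than a target.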
We were constantly second-guessing ourselves—did we want to go cheap for the masses, or expensive for the few? Were we delivering more value than our pricing indicated? Should we focus more on the top 20% of customers that contribute 80% of our revenue? These are questions that gnaw at practically every startup team.
We weren’t wrong to be considering all these questions, but every change in our pricing felt like a huge risk. Because we changed pricing so infrequently, we could never tell how customers might react. Once or twice we tried to make a big change, but when the numbers dropped we’d quickly change it back. We needed to find a way to improve pricing more quickly and with less risk.
It was after a chat with the team at Intercom about eight months ago that we realized the obvious: iterating on pricing more frequently would give us more data on what works and what doesn’t. So instead of changing prices once a year, we decided to get experimental.
We set aside three weeks for our developers to make some internal adjustments to make pricing changes easier. We needed our software to give us the flexibility to experiment quickly and often. Naturally it cost us resources up front, but the investment was well worth it.
We went from iterating once a year to once every three weeks. Now when we make changes, we compare the new cohort to the previous ones: if the cohorts behave similarly or the new one is even better, we stick with the new pricing plan. If not, we just roll back to a pricing structure that’s worked well previously.
The best part about the experimental approach is that it’s safe. Because we test on small cohorts, there are fewer repercussions for totally screwing up. So instead of making big, infrequent jumps in our pricing, we evolve our pricing little by little at a steady cadence.
With every iteration, we have a better sense of how our customers value our product. Our experiments are also what led us to discover that even enterprise clients want to see a number on our pricing page.
A lot of people fear changing pricing too often because they think it will scare away their customers. And for some—that might be true. But *never* experimenting with your pricing means you may never learn the value of your product and its potential for growth.
We don’t have pricing all figured out. But we have found a low-risk way to experiment and get closer to our optimal pricing point—and we thought that alone was worth sharing. Here’s what our pricing page looks like now.
To build your own experimental pricing strategy, you need more than just numbers and spreadsheets. You need to focus on behavioral and revenue data and find a way to group and compare that data over time.
Start by nailing down your “value metric”—what your customers are paying for. This can be something like number of videos hosted, inboxes created, or reports processed. You can gather that info from simple surveys sent out to your customers, either through email or in-app.
From there, decide how often you want to experiment (we recommend waiting no more than two months between iterations). At each iteration, pay attention to how usage of your value metric has grown. Maybe people were hosting 10 videos on your platform but are now hosting 100—that growth means you should raise your pricing accordingly.
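The value-metric check above can be sketched in a few lines. The metric name and figures here are hypothetical:

```python
# Hypothetical value-metric growth check between pricing iterations.
# "videos hosted" and the numbers are made up for illustration.

def value_metric_growth(previous_median: float, current_median: float) -> float:
    """Multiplier by which typical usage of the value metric grew."""
    return current_median / previous_median

growth = value_metric_growth(previous_median=10, current_median=100)
print(growth)  # 10.0
if growth > 1.0:
    print(f"Usage grew {growth:.0f}x; pricing may be lagging the value delivered.")
```

Using the median (rather than the mean) keeps a handful of power users from skewing the signal.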
At the end of each period, compare the cohorts and pay particular attention to:
Revenue: If you raise prices, your revenue should increase. If your price raise comes at a cost to your acquisition numbers, then your revenue will reflect that.
Engagement: Your price changes should positively impact user activity—you want to be attracting users that are engaged with your product. If your pricing changes cause engagement to drop, revert to a previous iteration.
Retention: It’s important to look at whether you’re acquiring users that stick around. Make sure that your acquisition numbers aren’t masking a churn problem.
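One way to sketch the keep-or-rollback decision across those three metrics is below. The metric definitions, numbers, and 5% tolerance are assumptions for illustration, not a prescribed methodology:

```python
# A minimal sketch of comparing a new pricing cohort against a baseline.
# All thresholds and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Cohort:
    revenue: float     # e.g. MRR added by the cohort
    engagement: float  # e.g. weekly active rate, 0..1
    retention: float   # share still subscribed at period end, 0..1

def keep_new_pricing(baseline: Cohort, new: Cohort, tolerance: float = 0.05) -> bool:
    """Keep the new plan only if no key metric drops more than
    `tolerance` relative to the baseline cohort; otherwise roll back."""
    return all([
        new.revenue    >= baseline.revenue    * (1 - tolerance),
        new.engagement >= baseline.engagement * (1 - tolerance),
        new.retention  >= baseline.retention  * (1 - tolerance),
    ])

baseline   = Cohort(revenue=10_000, engagement=0.40, retention=0.85)
experiment = Cohort(revenue=11_500, engagement=0.41, retention=0.84)
print(keep_new_pricing(baseline, experiment))  # True
```

Revenue is up and the small retention dip sits within tolerance, so this cohort would keep the new pricing; a larger drop in any one metric would trigger a rollback to the previous structure.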
If you don’t have the time or dev resources to build an analytics tool in-house, consider hooking up to a third-party tool like Amplitude. You can group users based on cohort and look at views that compare each cohort’s behaviors. Every test you perform will point you toward better pricing choices without risking your entire revenue for the month.
Header image credit goes to our friends at Proxyclick.