How to define your pricing strategy
In the online world, marketing places a strong emphasis on data-driven evaluation. The same should apply to pricing. Every change and pricing tactic should be analyzed with the same level of scrutiny.
This chapter covers the fundamentals of evaluation and analytics. Our goal is not to replace formal statistics education, so we will simplify concepts where needed, even at the cost of minor statistical inaccuracies.
The first and most important principle of evaluation is understanding what you are actually measuring. As we’ve defined earlier, pricing is a set of activities that determine product prices. When testing whether your pricing approach is effective, you are actually testing a specific strategy or, more often, a specific tactic—not pricing as a whole.
Testing and evaluation are not just add-ons to pricing; they are an integral part of the process. The ideal approach is to implement a structured optimization cycle, which consists of three key steps: formulate a hypothesis, test it, and evaluate the results.
Let’s use a simple example:
As product managers, we oversee the “Pet Food – Standard Canned Food” product group. Today, we price all products with a fixed 25% margin. We frequently receive customer service calls whenever prices change slightly, suggesting that customers in this segment are extremely price-sensitive. Additionally, we know that this is a highly competitive market.
Our hypothesis: If we match the lowest competitor’s price, while maintaining a minimum 20% margin, we will increase total revenue, leading to a 10% increase in absolute margin due to higher sales volume and additional items purchased.
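The tactic in this hypothesis can be sketched as a simple pricing rule. The function below is illustrative, not part of any library, and assumes margin is expressed as a share of the selling price:

```python
def tactic_price(cost: float, competitor_prices: list[float],
                 min_margin: float = 0.20) -> float:
    """Match the lowest competitor price, but never price below the
    floor implied by the minimum margin (margin as a share of price):
    price * (1 - min_margin) = cost  =>  floor = cost / (1 - min_margin)."""
    floor_price = cost / (1 - min_margin)
    return max(min(competitor_prices), floor_price)

# A can costing 0.80 with competitors at 1.05, 1.10 and 1.20:
print(tactic_price(0.80, [1.05, 1.10, 1.20]))  # → 1.05 (competitor match)
print(tactic_price(0.80, [0.90, 1.10]))        # → 1.0 (margin floor binds)
```

Note that the rule never simply undercuts: when the cheapest competitor is below your margin floor, you hold the floor price and accept losing the price comparison.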
Once your hypothesis is ready, you can set up a new tactic for the group and start testing. Hypothesis testing is a branch of statistics in its own right and involves more variables than we need for our purposes.
That's why we recommend simplifying it at the start: test until you reach at least 250 sales (the more, the better), and always test in whole weeks, at least 14 days but no longer than 63.
At least 250 sales ensures basic statistical relevance. Testing in whole weeks (at least 14 days) smooths out the impact of weekends. Testing for no longer than about two months (63 days) limits data noise: the longer the test runs, the more external factors (competitor activity, discounts, seasonality, and so on) distort the results.
If you don't reach 250 sales within 63 days, the group is too small to test, and we recommend merging it with another group.
In an ideal world, evaluation is simple. You measure the performance (most often revenue, profit, or margin %), compare it to the same period before the test, and determine which option performed better.
But we don't live in an ideal world. In pricing, we evaluate the impact on revenue and profit, but these metrics are influenced by many other factors. Your data may easily become "noisy", meaning it contains external influences that affect sales and therefore distort the test results.
In the worst case, this could lead to choosing a pricing strategy that would have lost in a clean test, ultimately harming your revenue and margin.
What should you compare your results against?
Classic A/B test
Show one price variant to half of your visitors and the other variant to the rest. Statistically, this is the most accurate method. However, if you operate in the EU, this technique is prohibited. Worse yet, customers tend to react negatively to this type of testing.
Comparison with another time period
Measure your group's performance and compare it to a different time frame, either the previous period or the same period from the previous year. The advantage is that this is easy to implement and analyze. The disadvantage is that seasonal trends distort the data. For example, if your industry and store grew 15% year over year, a tested group that merely kept pace would falsely appear to have improved by 15%, even though the tactic contributed nothing.
Comparison with the previous period while accounting for seasonality
A more accurate approach is to adjust for seasonal trends when analyzing results. For example, if your tested group's margin grew by 10% but overall store performance increased by 15% in the same period, the actual impact of the pricing tactic would be -5%. The advantage is that this is relatively simple and quick. However, store-wide performance isn't always a perfect benchmark, especially if performance differs widely across product groups.
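The adjustment in this example is a simple difference of growth rates. A minimal sketch (the function name is illustrative):

```python
def seasonally_adjusted_impact(group_growth: float, store_growth: float) -> float:
    """Estimate the tactic's own effect by subtracting store-wide
    growth (the seasonal benchmark) from the tested group's growth.
    This is the simple difference used in the text, not a formal model."""
    return group_growth - store_growth

# Group margin grew +10%, the whole store grew +15% in the same period:
print(round(seasonally_adjusted_impact(0.10, 0.15), 2))  # → -0.05
```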
Comparison with a similar, non-competing product group
A strong option is to compare your tested group with another group over the same time period. Because both groups are measured simultaneously, this eliminates seasonality and other distortions caused by comparing across different time periods.
However, finding the right control group is difficult: it should follow the same seasonality but not contain substitute products that could affect sales. For example, comparing Huawei phones to Sony phones wouldn't work; if Huawei prices dropped significantly, Sony sales would decline, because customers typically buy only one phone.
Better comparisons might be: women’s running shoes vs. men’s running shoes or Canon laser toners vs. Epson laser toners.
Which method should you choose?
It depends on your business specifics, market conditions, product mix, and team capabilities. These factors are also the biggest sources of data noise, so choosing the right evaluation method helps filter out irrelevant signals.
Most clients compare results against the previous period while adjusting for seasonality. However, the cleanest method is testing against another product group—though this isn’t always possible if no comparable product group exists.
What causes data noise?
Try to minimize data noise as much as possible. You can do this by choosing the right evaluation method or by actively reducing external influences: for example, avoid running discounts or marketing campaigns on the tested group during the test, and avoid scheduling tests across major seasonal peaks.
Don’t be discouraged if both test variants yield similar results—this is the most common outcome. It simply means that customers were not sensitive to this pricing change—either the price adjustment was too small, or the increase in profit from a higher price was offset by a decline in sales volume (i.e., demand was unit-elastic).
Given the inherent imprecision of small sample sizes (fewer than 1,000 sales) and data noise, consider whether differences smaller than 5% are worth acting on. If your tests consistently show differences below 5%, try running more aggressive pricing experiments.
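These two rules of thumb, that similar results usually mean low price sensitivity and that sub-5% differences on small samples are likely noise, can be captured in a small check. The function name and thresholds simply restate the text; this is not a formal significance test:

```python
def is_actionable(metric_a: float, metric_b: float, n_sales: int) -> bool:
    """Rule of thumb from the text: with fewer than 1,000 sales,
    ignore relative differences smaller than 5%; they are likely
    noise rather than a real effect of the pricing change."""
    rel_diff = abs(metric_a - metric_b) / max(metric_a, metric_b)
    return rel_diff >= 0.05 or n_sales >= 1000

# Variant A earned 10,300 in margin vs. 10,000 for B, on 600 sales:
print(is_actionable(10_300, 10_000, 600))  # → False: ~3% on a small sample
```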
Congratulations—you’ve completed your first pricing test! You formulated a hypothesis, tested it, and evaluated the results. The goal was to determine which pricing tactic had a greater impact on revenue and profit.
Testing is an ongoing cycle that continuously explores hypotheses about price sensitivity, competitor actions, price elasticity, and so on.
By systematically testing and refining your approach, you can continuously optimize your pricing strategy.
This doesn’t mean you need to test all product groups forever. According to the exploration/exploitation dilemma, a system can only focus on one activity at a time—either searching for a new optimal strategy (exploration) or maximizing the benefits of an existing successful strategy (exploitation).
In strategic pricing, this translates into the dilemma of whether to launch another test with uncertain results or stop testing and capitalize on proven pricing tactics.
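The exploration/exploitation tradeoff described above is studied formally in the multi-armed-bandit literature; one common toy illustration is the epsilon-greedy rule sketched below. This is only an illustration of the concept, not a suggestion to automate the decision:

```python
import random

def choose_phase(epsilon: float = 0.2) -> str:
    """Epsilon-greedy: explore (run a new pricing test) with
    probability epsilon, otherwise exploit the best known tactic."""
    return "explore" if random.random() < epsilon else "exploit"

random.seed(1)  # seed only to make the demo reproducible
phases = [choose_phase() for _ in range(10)]
print(phases.count("exploit"), "exploit phases,",
      phases.count("explore"), "explore phases")
```

With a low epsilon, most periods exploit the proven tactic while a minority are reserved for new experiments, which mirrors the alternating phases described above.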
The decision to continue testing is entirely up to you. It can depend on how close you are to achieving your strategic goals, the seasonality of your business, or the workload of your product team. In practice, exploration and exploitation phases tend to alternate regularly. You also don’t need to test your entire product range at once—you can focus on selected segments.
That said, long-term testing remains essential. As the market and demand evolve, the effectiveness of pricing tactics may change. We recommend scheduling test periods periodically to ensure your pricing strategy stays optimized.
Who is responsible for evaluating pricing? What method do they use?
Which product groups in your business are suitable for evaluation using homogeneous product groups?
What were the biggest market disruptions in your industry over the past 12 months? What disruptions do you expect in the future?