
A Bad Metric for Good Pricing

December 2017 | Pricing, Selling

How do you measure the effectiveness of a pricing transformation? If you invest in improving pricing policy, how will you know that you succeeded in driving improvement?  More specifically, how will you prove that a pricing improvement drove profitability?  Moreover, how will you demonstrate that an improvement in profit was driven by improving pricing and not some other exogenous or endogenous factor?  

“If you can’t measure it, you can’t manage it” is a common quip.  In response, we put metrics on the aspects of a business that matter. At the same time, the KISS principle (Keep It Simple, Silly) drives us towards commonly understood metrics. Unfortunately, simple metrics applied broadly lead to everything from a clear mandate to a perverse incentive. This is as true for corporate governance as it is for sales incentive plans, and even for metrics of pricing transformation effectiveness.

In this article, we look at how I have often seen pricing improvements measured and why I have serious reservations about this common metric. I do so in the hope of generating responses about how you have accurately measured the effectiveness of pricing at your company.

Profit Drivers

Starting with the standard profit equation of the firm, the profitability in two different periods is defined as:

\(R_i=Q_i\cdot(P_i-V_i)-F_i\)
\(R_f=Q_f\cdot(P_f-V_f)-F_f\)

where R is profit, Q is quantity sold, P is price, V is variable cost, and F is fixed cost.  The subscripts i and f denote two different periods, perhaps i for the year before a pricing transformation and f for the year after the pricing transformation.  

The change in profit between any two periods, \(\Delta R\) would be simply the difference in profits between those periods.

\(\Delta R=R_f-R_i\)

Inserting the definitions of the standard profit for the two periods in the change of profit equation, and making some notation changes, we find

\(\Delta R=\Delta Q\cdot(\overline{P}-\overline{V})+\overline{Q}\cdot\Delta P-\overline{Q}\cdot\Delta V-\Delta F\)

We have used \(\Delta\) to represent the difference in a business factor between the two years and the upper bar to represent the average of that business factor for those two years.

 

Factor | Change | Average
Quantity | \(\Delta Q=Q_f-Q_i\) | \(\overline{Q} = \frac{Q_f+Q_i}{2}\)
Price | \(\Delta P=P_f-P_i\) | \(\overline{P} = \frac{P_f+P_i}{2}\)
Variable Cost | \(\Delta V=V_f-V_i\) | \(\overline{V} = \frac{V_f+V_i}{2}\)
Fixed Cost | \(\Delta F=F_f-F_i\) |
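
For readers who want the intermediate algebra, the decomposition follows from the product-difference identity \(a_f b_f - a_i b_i = \Delta a\cdot\overline{b} + \overline{a}\cdot\Delta b\), applied with \(a = Q\) and \(b = P - V\):

\(\Delta R = Q_f\cdot(P_f-V_f) - Q_i\cdot(P_i-V_i) - \Delta F\)
\(\Delta R = \Delta Q\cdot(\overline{P}-\overline{V}) + \overline{Q}\cdot(\Delta P-\Delta V) - \Delta F\)

Distributing \(\overline{Q}\) over \(\Delta P - \Delta V\) yields the four-term expression above.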

 

The above expression looks deceptively like a simple way to disaggregate the impact of improving a business function from the overall change in profits.  One might be tempted to attribute the profit impact of an increase or decrease in sales volume to the first term \(\Delta Q\cdot(\overline{P}-\overline{V})\), that of a change in pricing to the second term \(\overline{Q}\cdot\Delta P\), that of a change in variable cost to the third term \(-\overline{Q}\cdot\Delta V\), and that of a change in overhead to the fourth term \(-\Delta F\).  But this assignment is just as often a deception as it is an insight, which makes it suspect.

Don’t Be Deceived  

Consider the impact of an improvement in pricing. Yes, the term \(\overline{Q}\cdot\Delta P\) does include the difference in prices over the two periods—and hence may seem like a good way to measure part of the impact of a new pricing policy—but it is woefully incomplete and misleading.

As long as prices went up, this metric would imply that the new pricing policy improved profits.  As such, it perversely provides an incentive for every pricing professional, consultant, and pricing software vendor to always raise prices and decrease discounts, even when higher prices are not justified.  While I am all for raising prices when they can be raised, measuring pricing effectiveness strictly around raising prices oversimplifies and misrepresents what good pricing is about.

Good pricing does not always mean higher prices. It means more accurate pricing.  By accurate, I mean prices that reflect the value of the offering to the customer relative to their alternatives. An accurate pricing policy provides discounts and rebates where warranted and withholds them where unwarranted; it sets prices that capture the firm’s fair share of the value delivered to customers while still encouraging the right target market to purchase the good or service.

Sometimes, accurate pricing leads to lower prices.  If we always define the profit impact of a pricing improvement as \(\overline{Q}\cdot\Delta P\), then we would perversely conclude that more accurate prices are always bad whenever they happen to be lower.  This is nonsense.

We also know very well that price increases are generally associated with volume decreases.  If prices went up, driving volumes down and in turn driving overall profits down, the simple metric of \(\overline{Q}\cdot\Delta P\) would imply that the pricing project was a success even though it harmed profits.  Conversely, if more accurate pricing led to lower prices but significantly higher volumes, which in turn drove profits up, the simple metric of \(\overline{Q}\cdot\Delta P\) would imply that the pricing project was a failure even though it improved profit.
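
To make this concrete, here is a minimal sketch in Python using purely hypothetical figures (every quantity, price, and cost below is invented for illustration, not drawn from any real engagement).  It constructs the first scenario: prices rise, volume falls, and overall profit falls, yet the \(\overline{Q}\cdot\Delta P\) term alone would score the pricing effort as a success.

```python
# Hypothetical before/after figures -- illustrative only.
Q_i, P_i, V_i, F_i = 10_000, 100.0, 60.0, 250_000.0   # period i (before)
Q_f, P_f, V_f, F_f = 7_500, 110.0, 60.0, 250_000.0    # period f (after)

# Changes and two-period averages, per the definitions in the table above.
dQ, dP, dV, dF = Q_f - Q_i, P_f - P_i, V_f - V_i, F_f - F_i
Q_bar, P_bar, V_bar = (Q_f + Q_i) / 2, (P_f + P_i) / 2, (V_f + V_i) / 2

volume_term = dQ * (P_bar - V_bar)   # impact attributed to the volume change
price_term  = Q_bar * dP             # the metric under discussion
cost_term   = -Q_bar * dV
fixed_term  = -dF
delta_R     = volume_term + price_term + cost_term + fixed_term

# Cross-check the decomposition against the direct profit calculation.
R_i = Q_i * (P_i - V_i) - F_i
R_f = Q_f * (P_f - V_f) - F_f
assert abs(delta_R - (R_f - R_i)) < 1e-6

print(f"Price term (Q_bar * dP): {price_term:+,.0f}")   # +87,500 -> looks like success
print(f"Volume term:             {volume_term:+,.0f}")  # -112,500
print(f"Actual change in profit: {delta_R:+,.0f}")      # -25,000
```

In this invented case the price term reads +87,500 while profit actually fell by 25,000, because the volume term (-112,500) is larger in magnitude.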

Hence, we must consider the term that reflects the impact of a change in volume on profits, \(\Delta Q\cdot(\overline{P}-\overline{V})\). If all we have are the business factors in the two different periods, we cannot objectively say how much of the change in volume was due to changes in market conditions, selling effort, or pricing. From basic microeconomics, we strongly suspect that at least some of the change in volume reflects the impact of the pricing improvement effort.  How much?  We would need to know the price elasticity precisely, and this is rarely measurable with any useful accuracy at the firm level.
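
As a rough illustration of why the elasticity matters (a back-of-the-envelope relation under an assumed constant price elasticity \(\epsilon\), not something the decomposition itself provides), the portion of the volume change attributable to the price change could be approximated as:

\(\Delta Q_{price} \approx \epsilon\cdot Q_i\cdot\frac{\Delta P}{P_i}\)

The remainder of \(\Delta Q\) would then be attributed to other factors; without a credible firm-level estimate of \(\epsilon\), however, that split remains guesswork.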

We might be tempted to ignore this term when evaluating a pricing improvement effort, since the moderating impact of price changes on volume is a secondary effect. Unfortunately, this would quite frequently be a large error.  Numerically, I have observed that the term describing the impact of volume changes on profits often dwarfs that of the impact of price changes on profits.  Without considering the volume changes driven by price changes, any measurement of a pricing improvement effort will be erroneous.

Similarly, we could make arguments for including some of the term \(-\overline{Q}\cdot\Delta V\) in the profit impact of a pricing effort, especially if the pricing effort drove changes in portfolio mix and therefore variable costs.  And, if the pricing improvement effort added headcount or software, we would also need to include a portion of the term \(-\Delta F\). How much?  We wouldn’t know from just the standard business factors that are commonly measured.

That is, measuring the impact of a pricing improvement effort by \(\overline{Q}\cdot\Delta P\) is deceptive, misleading, oversimplified, and perverse.

Ok, Now What?

If we can’t simply measure the impact of a pricing improvement effort between two different periods as \(\overline{Q}\cdot\Delta P\), then what can we do?  What objective metric should we use to measure the impact of all pricing improvement efforts?  I can accept that this is a useful metric some of the time, but I also know full well it is a bad metric at other times. And by bad, I mean misleading at best and deceptive at worst.  So, what should be used?  Another common metric, margin percentage, has even greater challenges.

Why do I care?  Increasingly I am hearing from pricing professionals that they have been driving pricing improvement efforts for a number of years and are hitting a ceiling.  They say they can’t raise prices any more and expect customers to buy.  When prices can’t be raised, a skeptic on the management team will suggest reducing the overhead associated with the pricing department.  That is, someone says “I can’t see how pricing is driving profitability anymore.  Why not downsize the pricing department and put resources in a better place?”  Rather than let such a misguided approach to management become engrained, I would like to head it off with a good, universally applicable, metric.  Unfortunately, I don’t have one yet.  

(Yes, I could suggest using whatever metric looks positive, but that seems disingenuous. Only the simple-minded can evade the challenge of being intellectually honest while also being effective.)

Given the desire to be truthful, I find myself reverting to management testimonials and storytelling.  This may be more truthful and less deceptive, but it is hard to convince a numbers-driven skeptic when they apply a bad metric as justification for their skepticism. Bad metrics of good pricing can lead to disaster.



About the author

Tim J. Smith, PhD is the Managing Principal of Wiglaf Pricing, and an Adjunct Professor of Marketing and Economics at DePaul University. His most recent book is Pricing Strategy: Setting Price Levels, Managing Price Discounts, & Establishing Price Structures.
