May 1, 2007
strategic insight | Metrics Survey

Performance by the numbers

There's no need to rely on guesswork when it comes to measuring DC performance. Our fourth annual survey provides a full set of benchmarks collected from the best … and the rest.

By Karl B. Manrodt and Kate Vitasek

They can quote their facilities' order fulfillment rates off the top of their heads. They can recite line fill rates out to the hundredth of a percentage point. They can reel off stats for worker turnover, order cycle times, and distribution costs as a percentage of sales. But when it comes to gauging what really matters—how their customers view their performance— America's DC managers seem to rely more on guesswork than on the numbers.

As part of our fourth annual warehouse metrics survey, we asked more than 1,000 DC professionals from across the country how well their operations performed last year against several key customer-oriented metrics—on-time delivery, percentage of "perfect orders," and the like. The responses were a model of consistency: In nearly every case, the majority (roughly 80 percent) of the respondents reported that in their customers' eyes, they were doing an average or above average job.

But a closer look at the numbers throws that assumption into question. As part of our analysis, we also used the figures the respondents supplied to run some calculations of our own—specifically, to determine just how many of their shipments were actually "perfect orders" (that is, on time, complete, damage free, and with the correct invoice). What we found was that DCs don't always perform to even this basic standard. To put it another way, a surprising percentage of orders shipped by the respondents' DCs appear to be decidedly less than perfect.

Customer service was just one of the aspects of DC performance covered in the fourth annual warehouse metrics survey, which was conducted in January among members of the Warehousing Education and Research Council (WERC) and readers of DC VELOCITY. Along with service levels, the survey also looked at how DCs are performing against a number of operational, financial, and employee metrics—45 measures in all.

Exhibit 1

More than 1,000 managers participated in this year's survey, which was conducted by Georgia Southern University and consultant Supply Chain Visions. (See the accompanying sidebar for demographic data on the survey respondents.) Respondents were asked to provide performance data for 2006, which the research team used to create a set of benchmarks across the distribution profession. As part of the study, the researchers also analyzed the survey results by industry, type of operation (pallet picking, broken case picking, etc.), business strategy, type of customer served, and company size. (Download the full results of the 2007 survey.)

Exhibits 2–8

In stable condition
So how well are America's DCs performing these days? The latest survey results indicate that they're holding their own. As Exhibit 1 shows, overall DC performance against the 10 most commonly used metrics showed little change from the previous year's levels. When we looked at the best-performing operations, however, the picture was a bit brighter. The best performers (defined as the top 20 percent of all responses) actually recorded improvements in six of the nine metrics for which we had data for comparison.
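The survey's benchmarks pair a median figure with a best-in-class figure defined by that top-20-percent cutoff. As a rough sketch of how such benchmarks could be derived from raw responses (the data below are invented, and the researchers' actual calculation method is not described in the article), the median and the 20-percent cutoff are just quantiles:

```python
import statistics

def benchmarks(responses, higher_is_better=True):
    """Return (median, best-in-class cutoff) for a list of raw survey responses.
    Best-in-class is taken as the top 20 percent of responses; for metrics
    where lower is better (costs, cycle times), that is the 20th rather
    than the 80th percentile."""
    med = statistics.median(responses)
    # quantiles(n=5) returns the 20th/40th/60th/80th percentile cut points
    q20, _, _, q80 = statistics.quantiles(responses, n=5)
    return med, (q80 if higher_is_better else q20)

# Invented on-time delivery percentages from ten respondents:
print(benchmarks([91, 93, 94, 95, 96, 96, 97, 98, 99, 99.5]))

# Invented dock-to-stock hours (lower is better):
print(benchmarks([1.3, 2.0, 3.5, 4.5, 6.0, 8.0, 12.0, 24.0],
                 higher_is_better=False))
```

The direction flag matters: for a service-level metric the best performers sit at the top of the distribution, while for a cost or cycle-time metric they sit at the bottom.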

Though the numbers say a lot about how DCs are performing, they're still just one part of the story. The true measure of a DC's service is how its customers see it. For data on this important matter, we had to rely on the respondents' reports. In addition to asking for statistics on their performance against several customer-oriented metrics (on-time delivery, percentage of orders shipped complete, and so forth), the survey asked respondents outright how their customers viewed their performance. As indicated above, roughly 80 percent of the respondents replied that their customers would rate their performance as average or above average.

If that seems statistically improbable, it is. Basic math tells us that no more than half of a given population can perform above the median. But we would caution against dismissing these results out of hand. It's important to note that the study's respondent base does not represent a cross-section of the industry. In fact, it's more than likely that the respondent pool—members of a leading professional association like WERC and/or regular readers of professional journals like DC VELOCITY—is skewed toward the highest-performing segment of the industry.

Of course, that's just one possible explanation. Another is that some of the respondents have overestimated the quality of the service their DCs provide. In our experience, it's rare that an organization perceives itself as performing at a below-average level.

In hopes of getting a fuller picture of the situation, we approached it from another direction. As noted above, we took the performance data the respondents had provided and calculated their composite score on the Perfect Order Index. (We should note here that although two grocery industry trade groups recently proposed an updated definition of the "perfect order," for purposes of this discussion, we will use the traditional definition, which considers order completeness, timeliness, condition, and documentation.)

As for how the respondents' performance stacked up against the Perfect Order Index, the composite score turned out to be a not-so-perfect 84.99 percent. That's a slight improvement over last year's score (84.46 percent), but it nonetheless indicates that a full 15 percent of orders still fell short of expectations in one way or another.
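For reference, the traditional perfect order index is simply the product of the four component rates. A minimal sketch in Python (the component figures below are invented for illustration; they are not the survey's actual inputs):

```python
def perfect_order_index(on_time, complete, damage_free, correct_invoice):
    """Traditional perfect order index: the product of the four component
    rates (each given as a fraction), expressed as a percentage."""
    return on_time * complete * damage_free * correct_invoice * 100

# Invented component rates, not the survey's actual figures:
poi = perfect_order_index(on_time=0.96, complete=0.95,
                          damage_free=0.97, correct_invoice=0.96)
print(round(poi, 2))  # roughly 84.9
```

Note how four components each at or above 95 percent still compound to an index near 85 percent: small failure rates multiply.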

The case for benchmarking
In recent years, the quest for a more objective way to compare their DCs' performance to others has led many companies to pursue benchmarking. It's not hard to understand why. Benchmarking (comparing performance data) gives companies an alternative to guesswork for identifying who's best at something. It allows them to compare their performance against other companies' best practices—and ideally, to determine how those companies achieved their results and use their findings to improve their own performance.

Trouble is, benchmark data haven't always been easy to come by. To help fill that gap, we've taken the data collected in this year's survey and used it to create a benchmark of key measures for the distribution profession.

Exhibits 2 through 8 present the latest DC benchmark data—performance numbers (both median and best practice) across the full range of metrics. Because of the large number of metrics included in the survey, we have divided them into the following groups based on type of measurement: customer, operations, financial, capacity/quality, employee, perfect order, and cash to cash.

Objections overruled
For all the talk about the benefits of benchmarking, there are plenty of companies that are still stuck on the sidelines. What's holding them back? Oftentimes it's a lack of data for their particular industry. Companies typically don't see much value in benchmarking with a company in another industry because of the dissimilarities in their operations.

At first glance, the survey's results would appear to confirm that assumption. Take dock-to-stock cycle time, for example. Among consumer product manufacturers, the median dock-to-stock time was 4.5 hours (2.0 hours for best-in-class companies). But for companies in the life sciences sector, the median was 6.0 hours (1.3 hours for best-in-class companies).

But we decided to dig a little deeper. The idea that there are inherent variations among industries—or to be precise, variations significant enough to rule out cross-industry benchmarking—runs contrary to our experience. Combined, we have been in hundreds of facilities—and our position has always been that aside from costs, performance is performance.

In fact, we would argue that dock-to-stock time is no exception, despite the figures cited above. We have seen good companies—in all industries—that make it a standard practice to clear their docks in four hours or less. And we consistently see companies—in all industries—that take 24 to 48 hours to clear their docks.

To settle the question, we used statistical software to analyze the survey data with the aim of separating statistically significant results from those that could be attributed to normal variation. The result: With three exceptions, there were no statistically significant differences in performance among industries. Disparities like the different dock-to-stock times reported by consumer product manufacturers and their life sciences counterparts were due to normal variation.
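The article does not say which test the researchers ran, so as an illustration only, here is one simple way to ask whether a gap like the dock-to-stock difference exceeds normal variation: a permutation test on the difference in group medians. The data below are invented, and the authors' actual software and method are unknown.

```python
import random
import statistics

def permutation_test_medians(group_a, group_b, n_perm=5000, seed=42):
    """Approximate p-value for the observed difference in group medians:
    the share of random relabelings of the pooled data that produce a
    difference at least as large as the one observed."""
    rng = random.Random(seed)
    observed = abs(statistics.median(group_a) - statistics.median(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.median(pooled[:n_a]) -
                   statistics.median(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Invented dock-to-stock hours for two hypothetical industries:
consumer = [2.0, 3.5, 4.5, 5.0, 6.5, 8.0, 12.0, 24.0]
life_sciences = [1.3, 2.5, 4.0, 6.0, 7.0, 9.0, 16.0, 30.0]
p = permutation_test_medians(consumer, life_sciences)
# A p-value well above 0.05 would mean the gap in medians is consistent
# with normal variation rather than a genuine industry effect.
```

With spreads this wide relative to the gap in medians, a test along these lines would typically fail to find a significant industry difference, which is the pattern the researchers report for most metrics.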

What were the three exceptions? They were: 1) distribution costs as a percentage of sales and as a percentage of COGS (cost of goods sold); 2) percentage of orders sent with correct invoice; and 3) days of supply – forward coverage.

As for why these metrics stand out, we offer the following hypotheses:

  • Distribution costs as percentage of sales and cost of goods sold. Costs are really a derivative of the product type. For example, a company moving large plates of glass on a double-drop trailer will necessarily have higher costs than a company moving a truckload's worth of standard pallets in a standard 53-foot trailer.
  • Correct invoices. Industry variations in performance against this metric may well reflect the retail industry's crackdown on suppliers that had gotten sloppy with their paperwork. In the past five years, more and more retailers have begun imposing penalties on suppliers that provide inaccurate invoices. The survey results likely reflect efforts among those suppliers to get their accounts in order to avoid penalties.
  • Days of supply – forward coverage. Simply put, some industries are more "inventory oriented" than others. Take the high-tech industry, for example. Given its products' high costs and short life cycles, it stands to reason that these manufacturers would focus on keeping inventories to a minimum.

Overall, we believe this is great news for companies interested in benchmarking their DCs' performance. As we see it, the finding opens up the field of potential benchmarking partners. Would-be benchmarkers have long lamented the difficulty of finding suitable partners. Even if they could find a high-performing company of a similar size, strategy, etc., they often couldn't find one in their own industry that wasn't scared off by potential competitive concerns.

These new findings, however, suggest that they don't have to limit their search. They can set their sights high, seek out the best, and use what they learn to stretch performance to previously unimaginable heights.

Authors' note: We invite readers' comments, suggestions, and insights into the research and their own use of measures. We can be reached by e-mail.

About the Authors

Karl B. Manrodt
Karl Manrodt is a professor of logistics and supply chain management at Georgia College.

Kate Vitasek
Kate Vitasek is a member of the faculty at the Center for Executive Education, University of Tennessee and founder of the consulting firm Supply Chain Visions. She is a regular blogger for DC Velocity on the topic "You might have a bad warehouse if ..."

