They can quote their facilities' order fulfillment rates off the top of their heads. They can recite line fill rates out to the hundredth of a percentage point. They can reel off stats for worker turnover, order cycle times, and distribution costs as a percentage of sales. But when it comes to gauging what really matters—how their customers view their performance—America's DC managers seem to rely more on guesswork than on the numbers.
As part of our fourth annual warehouse metrics survey, we asked more than 1,000 DC professionals from across the country how well their operations performed last year against several key customer-oriented metrics—on-time delivery, percentage of "perfect orders," and the like. The responses were a model of consistency: In nearly every case, the majority (roughly 80 percent) of the respondents reported that in their customers' eyes, they were doing an average or above average job.
But a closer look at the numbers throws that assumption into question. As part of our analysis, we also used the figures the respondents supplied to run some calculations of our own—specifically, to determine just how many of their shipments were actually "perfect orders" (that is, on time, complete, damage free, and with the correct invoice). What we found was that DCs don't always perform to even this basic standard. To put it another way, a surprising percentage of orders shipped by the respondents' DCs appear to be decidedly less than perfect.
Customer service was just one of the aspects of DC performance covered in the fourth annual warehouse metrics survey, which was conducted in January among members of the Warehousing Education and Research Council (WERC) and readers of DC VELOCITY. Along with service levels, the survey also looked at how DCs are performing against a number of operational, financial, and employee metrics—45 measures in all.
More than 1,000 managers participated in this year's survey, which was conducted by Georgia Southern University and consultant Supply Chain Visions. (See the accompanying sidebar for demographic data on the survey respondents.) Respondents were asked to provide performance data for 2006, which the research team used to create a set of benchmarks across the distribution profession. As part of the study, the researchers also analyzed the survey results by industry, type of operation (pallet picking, broken case picking, etc.), business strategy, type of customer served, and company size. (Download the full results of the 2007 survey.)
In stable condition
So how well are America's DCs performing these days? The latest survey results indicate that they're holding their own. As Exhibit 1 shows, overall DC performance against the 10 most commonly used metrics showed little change from the previous year's levels. When we looked at the best-performing operations, however, the picture was a bit brighter. The best performers (defined as the top 20 percent of all responses) actually recorded improvements in six of the nine metrics for which we had comparable data.
Though the numbers say a lot about how DCs are performing, they're still just one part of the story. The true measure of a DC's service is how its customers see it. For data on this important matter, we had to rely on the respondents' reports. In addition to asking for statistics on their performance against several customer-oriented metrics (on-time delivery, percentage of orders shipped complete, and so forth), the survey asked respondents outright how their customers viewed their performance. As indicated above, roughly 80 percent of the respondents replied that their customers would rate their performance as average or above average.
If that seems statistically improbable, it is. Basic math tells us that only half of a given population can perform at or above the median—the usual yardstick for "average." But we would caution against dismissing these results out of hand. It's important to note that the study's respondent base does not represent a cross-section of the industry. In fact, it's more than likely that the respondent pool—members of a leading professional association like WERC and/or regular readers of professional journals like DC VELOCITY—is skewed toward the highest-performing segment of the industry.
Of course, that's just one possible explanation. Another is that some of the respondents have overestimated the quality of the service their DCs provide. In our experience, it's rare that an organization perceives itself as performing at a below-average level.
In hopes of getting a fuller picture of the situation, we approached it from another direction. As noted above, we took the performance data the respondents had provided and calculated their composite score on the Perfect Order Index. (We should note here that although two grocery industry trade groups recently proposed an updated definition of the "perfect order," for purposes of this discussion, we will use the traditional definition, which considers order completeness, timeliness, condition, and documentation.)
As for how the respondents' performance stacked up against the Perfect Order Index, the composite score turned out to be a not-so-perfect 84.99 percent. That's a slight improvement over last year's score (84.46 percent), but it nonetheless indicates that a full 15 percent of orders still fell short of expectations in one way or another.
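The arithmetic behind the index is worth spelling out: because an order must clear every hurdle to count as perfect, the composite is the product of the four component rates, not their average. The component rates below are hypothetical—the survey's individual components aren't broken out here—and were chosen only to show how four individually strong rates compound into a score near the survey's composite:

```python
def perfect_order_index(on_time, complete, damage_free, correct_invoice):
    """Composite Perfect Order Index: the product of the four component
    rates, because an order must pass every check to count as perfect."""
    return on_time * complete * damage_free * correct_invoice

# Hypothetical rates: even with each component running at 96 percent,
# the composite lands in the mid-80s, close to the survey's 84.99 percent.
poi = perfect_order_index(0.96, 0.96, 0.96, 0.96)
print(f"{poi:.2%}")  # 84.93%
```

This multiplicative structure is why a DC can look solid on each individual measure and still see roughly 15 percent of its orders fall short in one way or another.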
The case for benchmarking
In recent years, the quest for a more objective way to compare their DCs' performance to others has led many companies to pursue benchmarking. It's not hard to understand why. Benchmarking (comparing performance data) gives companies an alternative to guesswork for identifying who's best at something. It allows them to compare their performance against other companies' best practices—and ideally, to determine how those companies achieved their results and use their findings to improve their own performance.
Trouble is, benchmark data haven't always been easy to come by. To help fill that gap, we've taken the data collected in this year's survey and used it to create a benchmark of key measures for the distribution profession.
Exhibits 2 through 8 present the latest DC benchmark data—performance numbers (both median and best practice) across the full range of metrics. Because of the large number of metrics included in the survey, we have divided them into the following groups based on type of measurement: customer, operations, financial, capacity/quality, employee, perfect order, and cash to cash.
For all the talk about the benefits of benchmarking, there are plenty of companies that are still stuck on the sidelines. What's holding them back? Oftentimes it's a lack of data for their particular industry. Companies typically don't see much value in benchmarking with a company in another industry because of the dissimilarities in their operations.
At first glance, the survey's results would appear to confirm that assumption. Take dock-to-stock cycle time, for example. Among consumer product manufacturers, the median dock-to-stock time was 4.5 hours (2.0 hours for best-in-class companies). But for companies in the life sciences sector, the median was 6.0 hours (1.3 hours for best-in-class companies).
But we decided to dig a little deeper. The idea that there are inherent variations among industries—or to be precise, variations significant enough to rule out cross-industry benchmarking—runs contrary to our experience. Combined, we have been in hundreds of facilities—and our position has always been that aside from costs, performance is performance.
In fact, we would argue that dock-to-stock time is no exception, despite the figures cited above. We have seen good companies—in all industries—that make it a standard practice to clear their docks in four hours or less. And we consistently see companies—in all industries—that take 24 to 48 hours to clear their docks.
To settle the question, we used statistical software to analyze the survey data with the aim of separating statistically significant results from those that could be attributed to normal variation. The result: With three exceptions, there were no statistically significant differences in performance among industries. Disparities like the different dock-to-stock times reported by consumer product manufacturers and their life sciences counterparts were due to normal variation.
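The article doesn't name the exact test the researchers ran, but the approach can be sketched with a one-way ANOVA, which compares the variation between industry groups to the variation within them. The dock-to-stock samples below are hypothetical (the survey's raw responses aren't published here); they illustrate how a visible gap between two industry medians can still fail the significance test when the spread within each industry is wide:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group variance divided by
    within-group variance. A large F suggests a real industry difference;
    a small one points to normal variation."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical dock-to-stock times in hours for two industries.
consumer_products = [2.0, 3.5, 4.5, 5.0, 6.5, 8.0]
life_sciences = [1.3, 4.0, 6.0, 7.0, 9.0, 12.0]

f = one_way_anova_f(consumer_products, life_sciences)
# F comes out around 0.86, far below the 5 percent critical value of
# roughly 4.96 for (1, 10) degrees of freedom: normal variation.
print(round(f, 2))
```

The same logic scales to all the industries in the survey at once—each industry's responses become one group—which is presumably how the researchers screened all 45 metrics and found only three with differences too large to be chance.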
What were the three exceptions? They were: 1) distribution costs as a percentage of sales and as a percentage of COGS (cost of goods sold); 2) percentage of orders sent with correct invoice; and 3) days of supply – forward coverage.
As for why these metrics stand out, we offer the following hypotheses:
Overall, we believe this is great news for companies interested in benchmarking their DCs' performance. As we see it, the finding opens up the field of potential benchmarking partners. Would-be benchmarkers have long lamented the difficulty of finding suitable partners. Even if they could find a high-performing company of a similar size, strategy, etc., they often couldn't find one in their own industry that wasn't scared off by potential competitive concerns.
These new findings, however, suggest that they don't have to limit their search. They can set their sights high, seek out the best, and use what they learn to stretch performance to previously unimaginable heights.
Authors' note: We invite readers' comments, suggestions, and insights into the research and their own use of measures. We can be reached by e-mail: Karl B. Manrodt at ; Kate L. Vitasek at .
Even a diehard survey junkie might be deterred by a 23-question survey that asks respondents for hundreds of detailed numbers. But the nation's DC managers were undaunted. More than 1,000 companies participated in our annual warehouse metrics survey, which was conducted online in January. That was the highest response to date.
Who were these intrepid souls? Nearly 20 percent identified themselves as C-level executives: chief executive officers, chief operating officers, and senior vice presidents. The remainder indicated that they were managers, supervisors, or directors.
The survey questionnaire also asked respondents to identify the industry they worked in. As might be expected, the respondent pool reflected the makeup of WERC's membership and DC VELOCITY's readership. A majority of respondents (52 percent) said they worked in manufacturing/distribution. Another 16 percent said they worked for third-party warehousing companies, and 10 percent worked for retailers. The remainder were scattered across a variety of other sectors: utilities, government, carriers, and life sciences/medical devices.
The survey also asked respondents to indicate their "location" in the supply chain—that is, whether their direct customers were end users, retailers, wholesalers/distributors, or manufacturers. In this case, the respondents were fairly equally distributed across the supply chain. Twenty-five percent indicated that their customers were retailers, 26 percent end customers, 20 percent manufacturers, and 29 percent wholesalers/distributors.
As for company size, about half of the respondents indicated that they worked for corporations with revenues above $1 billion, and half below. But that's not to suggest that most worked for the giants of industry. Just over 30 percent of the respondents said they worked at companies with revenues of under $100 million.