To hear their managers tell it, America's DCs are getting better all the time. Asked how well their DCs are doing today, respondents to our third annual metrics survey offered an upbeat picture of facilities where the commitment to service is strong, the operating stats look good, and performance is above average, to borrow a phrase from a popular public radio program.
The numbers appear to bear them out. The results of the study, which was conducted among members of the Warehousing Education and Research Council (WERC) and readers of DC VELOCITY, did indeed indicate improvement over the 2005 study's findings. When asked how their customers rated their DCs' performance in five key service areas—including percentage of "perfect" orders—respondents overwhelmingly reported that their clients thought they were doing an average or above average job.
But when we examined the data more closely, a somewhat different picture emerged. For example, we ran some calculations to see how closely respondents' perceptions matched their actual performance against the Perfect Order Index. The results showed that some of those "perfect orders" weren't so perfect after all.
DC performance was only one of many subjects covered in the 2006 survey, which was conducted by Georgia Southern University and Supply Chain Visions. The study also collected data on what activities DC managers measure and how they measure them, which we then analyzed by type of industry, supply chain structure, and overall corporate strategy. (Download a copy of the full results of the 2006 survey.)
Annual performance review
So how well are America's DCs performing these days? It appears that they're continuing to make strides. Operating statistics provided by the survey respondents confirmed that DC performance in 2006 compared favorably to 2005's. As Exhibit 1 shows, performance (as measured against 14 key metrics) either held steady or improved. In only one case (units picked per hour) did performance dip. More encouraging still, the areas where improvements were made all centered on customer service: percentage of orders shipped complete, average time from order placement to order shipment, fill rate per line, order fill rate, and order picking accuracy. But that's only part of the story. Comparing a DC's performance against benchmarks—whether industry averages or "best practices"—provides an incomplete picture of its service at best. The true test is how the customer perceives the service.
To learn as much as possible about how well DCs are serving their customers, we used two different data-gathering approaches—one direct and one not so direct. The first was to simply ask respondents how their customers rated their DCs' performance. To be precise, the survey asked them to indicate how their customers viewed their performance in five key customer-focused areas—fill rate, on-time delivery, percentage of orders shipped complete, percentage of accurate invoices, and percentage of perfect orders. The responses proved to be a model of consistency. In every case, close to 80 percent of the respondents said their customers considered their performance to be either "average" or "above average."
Statistically, of course, it's highly improbable that 80 percent are actually performing at an average or above-average level. But it's not impossible. It seems safe to assume that the study's respondent base—members of a professional association like WERC and/or regular readers of professional journals like DC VELOCITY—is skewed toward the highest-performing segment of the industry.
It's also possible, however, that some of the respondents have stumbled into what we call the 50-percent trap. To explain the 50-percent trap, we like to use the analogy of how parents interpret their children's grades.
When presented with an all-Bs report card, most parents assume that their offspring are average, if not above-average, students. But that's not a realistic assumption.
In any given class, half of all students perform above the median and half below it. It's unlikely, however, that half the class is receiving As or Bs and the other half Cs or Fs; the grades are far more likely to be clustered in the middle. So while parents may think their B students are average, the reality is that the B, and particularly the B-minus, students are actually performing well below the 50th percentile mark.
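A quick, entirely hypothetical illustration of the trap, sketched in Python (the class size and grade counts below are made up for the example, not drawn from the survey):

# A made-up grade distribution: grades cluster in the middle, so the
# lowest-ranked B student can sit well below the 50th percentile.
grades = ["A"] * 3 + ["B"] * 9 + ["C"] * 6 + ["D"] * 2   # 20 students, best first
lowest_b_rank = 3 + 9                                    # the 12th-best student
students_below = len(grades) - lowest_b_rank             # 8 students rank lower
percentile = 100 * students_below / len(grades)
print(f"The lowest B student outranks only {percentile:.0f} percent of the class")  # 40 percent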
And so it may be with warehouse or DC performance. Managers may not be aware of it (or willing to admit it), but the fact remains that nationwide, 50 percent of all facilities—though perhaps not those represented in our survey—are performing below the midpoint level.
The POI doesn't lie
Recognizing that the respondents might have difficulty providing an objective assessment, we also tried a second, less direct approach to determining how well DCs are serving their customers. Using the performance data respondents had provided, we calculated their composite score on what's known as the Perfect Order Index (POI).
The Perfect Order Index is a widely recognized measure that incorporates four critical customer service elements: order completeness, timeliness, condition and documentation. In other words, to be considered perfect, an order must arrive complete, be delivered on time, arrive free of damage, and be accompanied by the correct invoice and other documentation.
To calculate a given company's score on the Perfect Order Index, you simply take those four metrics (expressed as percentages) and multiply them together. For instance, a supplier that ships 95 percent of its orders complete, 95 percent on time, 95 percent damage-free, and with the correct documentation 95 percent of the time would earn a score of roughly 81.5 percent (0.95 × 0.95 × 0.95 × 0.95 ≈ 0.8145).
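For readers who want to run the arithmetic themselves, here is a minimal sketch in Python; the function name and variable names are our own illustration, not part of any standard or of the survey instrument:

def perfect_order_index(complete, on_time, damage_free, documented):
    # Each argument is a component rate expressed as a fraction between 0 and 1.
    # The index is simply the product of the four rates.
    return complete * on_time * damage_free * documented

# The example above: a supplier hitting 95 percent on all four components.
poi = perfect_order_index(0.95, 0.95, 0.95, 0.95)
print(f"Perfect Order Index: {poi:.2%}")                    # 81.45%
print(f"Orders with at least one failure: {1 - poi:.2%}")   # 18.55%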
So how did the respondents' DCs score on the Perfect Order Index? As Exhibit 2 shows, the composite results were a less-than-perfect 84.46 percent. What this means is that slightly more than 15 percent of all orders shipped are marred by some sort of failure.
What's "on time" anyway?
As for what accounts for those "failures," the survey data suggest that part of the problem may be confusion (or disagreement) between DCs and their customers about whether orders are "on time." It may sound like a simple enough determination, but our survey indicates otherwise.
To begin with, there's the distinction between shipped on time and delivered on time. DCs are much more likely to interpret "on time" as meaning shipped on time than delivered on time. That stands to reason. It's far easier for a DC to document when an order leaves its dock than to obtain reliable data on when it's delivered.
Customers, however, look at it differently. They're not so much interested in when an order leaves the supplier's dock or what happens to it along the way as in when it arrives. For most customers, "on time" means delivered on time. But even if DCs could be persuaded to abandon the "on-time shipment" metric in favor of "on-time delivery," there's still another problem. Even the customers themselves don't agree on what constitutes an "on-time" delivery. Nearly 69 percent of the survey respondents reported that their various customers defined "on-time delivery" differently.
How much variation could there be? Quite a bit, it seems. As Exhibit 3 shows, customers define "on time" in at least six different ways. The majority of the respondents (63.1 percent) reported that their customers considered a shipment to be on time if it arrived on the requested or agreed-upon day. But other clients seem to be much more exacting. For example, for 26.9 percent, "on time" means delivery within 30 minutes of the appointed time. And for 4.3 percent, it means delivery within 15 minutes of the appointed time. Still others define "on time" as "no line down time" or "by 4:00 p.m." All this variation may go a long way toward explaining why suppliers sometimes have difficulty delivering "on time."
Room for improvement
Overall, what we see from this survey is encouraging. It appears that more companies are concentrating on their performance against customer-focused metrics than in the past, and that their performance against those metrics is improving.
But the survey also indicates that some DCs, at least, may not be performing as well as they think they are. To those DCs, we recommend taking the following three steps to improve performance:
Our second call to action is to urge industry groups to get involved. Associations can provide a valuable service to their members by working with their constituencies to gather and disseminate benchmark data.
In the meantime, we invite readers' comments, suggestions, and insights into the research and their own use of performance metrics. We can be reached by e-mail: Karl B. Manrodt at Kmanrodt@georgiasouthern.edu or Kate L. Vitasek at kate@scvisions.com.
A look at the survey respondents
Talk about a study in contrasts. Last year, 380 DC executives responded to our annual metrics survey. This year, the total was a whopping 900. Almost as soon as the survey invitations went out, replies began pouring in from DC executives across the country. The response was particularly strong among C-level executives. The percentage of top executives (senior vice presidents, CEOs, CFOs and presidents) participating in the survey soared to more than 27 percent this year, compared with 11.4 percent last year.
What accounted for the difference? The survey's length may have been a factor. Last year's questionnaire asked respondents to rate their DCs' performance against a set of 55 metrics. This year, we cut the number of metrics to a more manageable 35, and the number of responses more than doubled. Coincidence? We think not.
As for the respondents themselves, they came from companies of all sizes across a wide range of industries. Exactly half said they worked in manufacturing/distribution. Just over a quarter (26 percent) came from the third-party warehousing industry, and 13 percent reported that they worked in the retail industry. The remainder were scattered across other sectors: carriers, utilities, life sciences and the government.
The survey also asked respondents to indicate their "location" in the supply chain—that is, whether their direct customers were end users, retailers, distributors/wholesalers, or manufacturers. As it turned out, most were either at or very close to the end of the chain. Roughly 60 percent indicated that their customers were either retailers or the products' end users. Some 20.1 percent reported that their primary customers were manufacturers, and the remaining 21.6 percent sold to distributors.
In terms of company size, the respondents' businesses turned out to be evenly distributed among the survey's size categories. About one-third worked for businesses reporting annual sales of less than $100 million, about one-third reported that their companies' sales fell into the $100 million to $500 million range, and the remaining third reported sales in excess of $500 million.