Archive for December, 2009

Improving Product Robustness 101

Improving product robustness is straightforward and difficult. Here’s how to do it.

Identify specific failure modes, prioritize them, and go after the biggest ones first. Failure modes can be identified through multiple sources. Warranty data is sometimes coded by failure mode (more precisely, symptom type), so start there. The number one failure mode in this type of data is typically “no problem found”, so be ready for it. Analysis of the actual products that come back is another good way: returned product is routed to the appropriate engineer, who analyzes it and enters the failure mode into a database.

A formal design FMEA generates a list of prioritized failure modes through the risk priority number (RPN), where larger is more important. To do this, engineers are hauled into a room and a facilitator helps them come up with potential failure modes. One caution – the process can generate many failure modes, more than you can fix, so make the top five or ten go away and don’t argue about the bottom fifty. It makes no sense to even talk about number eleven if you haven’t fixed the top ten.

But the best way I have found to identify failure modes (problems) that are meaningful to the customer is to ask the technical services group for their top five things to fix. They will give you the right answer because they interact daily with customers who have broken product. They won’t expect you to listen to them (you never listened before), so surprise them by fixing one or two things on their list. They will be grateful you listened (they’ll likely want to buy you coffee for the rest of your career) and your customers will notice.
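If you go the FMEA route, the prioritization itself is simple arithmetic. Here’s a minimal sketch – RPN is the product of the severity, occurrence, and detection scores (typically 1–10), and the failure modes and scores below are hypothetical:

```python
# Minimal FMEA prioritization sketch: RPN = severity x occurrence x detection,
# each scored 1-10 by the team. Failure modes and scores here are made up.
failure_modes = [
    # (name, severity, occurrence, detection)
    ("seal leaks under thermal cycling", 8, 6, 4),
    ("connector fatigue at strain relief", 7, 7, 5),
    ("firmware lockup on brownout", 9, 3, 7),
    ("bearing wear from contamination", 6, 5, 3),
]

# Rank by RPN, largest first.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)

TOP_N = 2  # fix the top of the list; don't argue about the bottom fifty
for name, s, o, d in ranked[:TOP_N]:
    print(f"RPN {s * o * d:4d}  {name}")
```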

Once failure modes are identified, define the physics of failure – why the product breaks. This is tough work and requires focused thought and analysis. If, when you break the product, it “looks like” the ones coming back from the field, you have defined the physics of failure. This is the same thing as replicating the problem in the lab. Once that’s defined, create an automated test rig or experimental setup that breaks the product in a way that captures the physics of failure. I call this test rig a robustness surrogate because it stands in for the actual failure mode seen in the field. The robustness surrogate should break the product as fast as possible while retaining the physics of failure – within minutes, not hours or days – so you can break it and fix it many times before product launch.
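The post doesn’t prescribe how to get from field time to rig time, but one common way to reason about “minutes, not hours” is an acceleration-factor model borrowed from accelerated life testing. A minimal sketch, assuming an inverse power law and made-up numbers:

```python
# Sketch: sizing an accelerated test with the inverse power law model,
# AF = (S_accel / S_use) ** n. The model choice and all numbers are
# assumptions for illustration, not from the post.
field_life_hours = 2000.0   # hypothetical time-to-failure at use-level stress
stress_use = 1.0            # normalized use-level stress (load, voltage, etc.)
n = 3.0                     # assumed stress-life exponent for this failure mode

def rig_minutes(stress_accel: float) -> float:
    """Expected rig time-to-failure at elevated stress, in minutes."""
    af = (stress_accel / stress_use) ** n  # acceleration factor
    return field_life_hours * 60.0 / af

for s in (2.0, 4.0, 8.0):
    print(f"stress x{s:.0f}: ~{rig_minutes(s):,.0f} min to failure")
```

The trade-off is built into the requirement above: push the stress too far and you change the failure mechanism, and the rig no longer captures the physics of failure seen in the field.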

To know if product robustness is improved, the baseline (or existing) design is broken on the robustness surrogate. The new design must survive longer on the robustness surrogate than the baseline design. The result is A/B data (baseline design / new design) that is presented at the design review using a simple bar graph format I call big-bar-little-bar. Keep improving the robustness of the new design even if it outperforms the baseline design by a factor of ten – that’s not good enough for your customers.
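A big-bar-little-bar chart is nothing fancy – one bar per design, survival on the surrogate as the height. A minimal sketch with hypothetical numbers:

```python
# Big-bar-little-bar sketch: cycles-to-failure on the robustness surrogate
# for the baseline design vs. the new design. Numbers are made up.
import matplotlib.pyplot as plt

designs = ["Baseline design", "New design"]
cycles_to_failure = [1200, 9600]  # hypothetical A/B results from the surrogate

fig, ax = plt.subplots()
ax.bar(designs, cycles_to_failure, color=["gray", "steelblue"])
ax.set_ylabel("Cycles to failure on robustness surrogate")
ax.set_title("Big-bar-little-bar: new design must outlast baseline")
plt.savefig("big_bar_little_bar.png")
```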

Don’t stop improving robustness until you run out of time, and don’t stop if you meet the arbitrary MTBF specification. Customers like improved robustness, and in this case too much of a good thing is wonderful.

Using this method, I reduced warranty cost per unit by 75% over a five-year period. It worked.

Improve Product Robustness at the Expense of Predicting It

In a previous post I defined the term brand-damaging threshold and said I’d talk about how to improve product robustness. So, here goes.

Every company is at a different stage in its formalized product robustness efforts, so it’s challenging to talk meaningfully to everyone. But there are two principles that have served me well through the years.

I had the privilege of working with Don Clausing – Total Quality Development, The House of Quality, Enhanced QFD, and Robust Quality. I vividly remember the conversation where Don shared one of his secrets. As we watched a robustness test run, Don, in his terse way, barked out a guiding principle of improving product robustness. He said:

“Improve robustness at the expense of predicting it.”

I asked Don what the hell he meant (he liked to make his students work for it), and after some prodding, he went on to explain why it’s so important. He said people spend far too much time running tests to predict robustness and then spend even more time calculating mean time between failures (MTBF). If that’s not enough, they then spend time arguing about the MTBFs and their confidence intervals. He said companies should dedicate all their time and energy to improving robustness. “That’s what matters to the customer,” he said. And then he continued with something like: “Predicting robustness is worse than a simple waste of time.” (He wasn’t that polite.) But I still didn’t get it. What’s the big deal about predicting robustness?
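To make Don’s target concrete, here is the kind of calculation he was calling worse than a waste of time: a textbook chi-squared confidence interval for MTBF. This sketch assumes exponential failure times, a failure-truncated test, and made-up numbers:

```python
# Textbook MTBF confidence interval, assuming exponentially distributed
# failure times and a failure-truncated test. All numbers are hypothetical.
from scipy.stats import chi2

total_test_hours = 10_000.0  # total unit-hours accumulated on test (assumed)
failures = 4                 # observed failures (assumed)
alpha = 0.10                 # 90% two-sided interval

mtbf_point = total_test_hours / failures
df = 2 * failures  # degrees of freedom for the failure-truncated case
lower = 2 * total_test_hours / chi2.ppf(1 - alpha / 2, df)
upper = 2 * total_test_hours / chi2.ppf(alpha / 2, df)

print(f"MTBF point estimate: {mtbf_point:,.0f} h")
print(f"90% CI: {lower:,.0f} h to {upper:,.0f} h")  # wide enough to argue about
```

An interval that wide invites exactly the arguments Don described, and none of the arguing makes the product survive any longer.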

Lack of product robustness can damage your brand

There are many definitions of product robustness and just as many formally trained specialists willing to argue about them. I get confused by all that complexity, I don’t like to argue, and I am not a specialist; I am a generalist. I like simplicity, so I use operational definitions every chance I get. Here’s one for product robustness:

A customer walks up to your product, turns it on, and it works without breaking or getting in its own way.

Bad product robustness is bad for your brand. Very bad. Customers do not like it when they pay money for a product and it doesn’t work, especially when they rely on that product to make money for themselves. And they remember the experience in a visceral way.

You can’t fix bad product robustness with great marketing; you can’t fix it with spin selling; you can’t tell customers you fixed it when you didn’t (since they use your product, they know the truth); and you can’t hide it because customers talk (so do competitors). There is no quick fix – it takes tools, time, training, and new thinking to improve product robustness. And when you do manage to fix it, customers won’t believe you until they see it for themselves. They don’t want to get burned again.

No product is infinitely robust, nor should it be. It doesn’t make financial sense. The product would be infinitely expensive and would take an infinite amount of time to develop. But how much robustness is enough? An easier, and possibly more important, question to answer is – how much is too little? Or, stated another way, what is the minimum level of product robustness?

The specialists won’t agree with my assertion that there is a minimum threshold for product robustness, but I don’t care. I think there is one. I call this minimum value the brand-damaging threshold. Here’s an operational definition of product robustness that’s below the brand-damaging threshold:

Customers don’t buy your product because they know it breaks or gets in its own way, and they go out of their way to tell others about it.

It is difficult to know when customers don’t buy, never mind why they don’t. But there are some tell-tale signs that product robustness is below the brand-damaging threshold. Here are a few.

The CEO takes enough direct calls about products that don’t work to feel obligated to send you a thoughtfully crafted, four-word email saying something like “Fix that @#&% thing!” Customers have to be really pissed off to call the CEO directly, so the situation is bad. It’s also bad for a reason that’s closer to home – the CEO sent the email to you.

You get a little sick to your stomach when sales increase. You know you should be happy, but you’re not. Deep down you know you’ll see many of those products again because they’ll be sent back by angry customers, in pieces.

The volume of returns is so significant you create a refurbishment department. Or you create a new group to scavenge the reusable stuff off the piles of returned product. Not good signs.

Your product’s lack of robustness is the headline message in your customers’ marketing literature.

Now that the brand-damaging threshold is defined, the next logical topic is how to improve product robustness so it’s above the threshold. But that’s for another post.

Product Design – the most powerful (and missing) element of lean

Lean has been beneficial for many companies, helping improve competitiveness and profitability. But lean has not been nearly as effective as it could be because there is a missing ingredient – product design. Where lean can reduce the waste of making and moving parts, product design can eliminate the parts altogether; where lean can reduce setup times for big machines, product design can change the parts so they no longer need the big machines; where lean can reduce inventory, product design can eliminate it by designing out parts; where lean can make the supply chain more efficient, product design can radically shorten it by designing out the long lead time elements.

The power of product design is even more evident when considering the breakdown of product cost. Here is some data from Nick Dewhurst, taken from several hundred DFMA analyses, showing the typical cost breakdown of products.

Nick's Cost Buckets

Of the three buckets of cost, material cost is by far the largest at 74%, and this is where product design shines. Product design can eliminate 40 to 50% of material cost, resulting in radical cost savings. Lean cannot. I will go a bit further and say that material cost reductions are largely off limits to the lean folks since they require fundamental product changes.

Side note – probably the most surprising thing about the cost breakdown data is that labor cost is only 4%. Why we move our manufacturing to “low cost countries” to chase 50% labor reductions that net a whopping 2% cost reduction is beyond me, but that’s for a different post.
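The arithmetic behind that side note, using the cost buckets above:

```python
# The levers compared, using the cost buckets above
# (material 74%, labor 4%; overhead makes up the rest).
material_share, labor_share = 0.74, 0.04

offshore_labor_savings = 0.50 * labor_share          # 50% labor reduction
dfma_material_savings = (0.40 * material_share,      # 40-50% material reduction
                         0.50 * material_share)

print(f"Chasing labor offshore: {offshore_labor_savings:.0%} of total cost")
print(f"Designing out material: {dfma_material_savings[0]:.0%} to "
      f"{dfma_material_savings[1]:.0%} of total cost")
# -> 2% vs. roughly 30-37%: the product-design lever dwarfs the labor lever.
```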

Let’s face it – material cost reduction is where it’s at, and lean does not have the toolbox to reduce material cost. There’s no mystery here. What is mysterious, however, is that companies looking to survive at all costs are not pulling the biggest lever at their disposal – product design. Here is a bit of old data from Ford showing that product design has the biggest lever on cost. We’ve known this for a long time, but we still don’t do it.

Nick's design lever on cost

Clearly, the best approach is to combine the power of product design with lean. It goes like this: the engineers design a low cost, low waste product that is introduced to the production line, and the lean folks improve efficiency and reduce cost from there. We’ve got the lean part down, but not the product design part.

There are two things in the way of designing low cost, low waste products in a way that takes lean to the next level. First, product development teams don’t know how to do the work. To overcome this, train them in DFMA. Second, and more important, company leaders don’t give the product development teams the tools, time, and training to do the work. Leaders won’t take the time because they think it will delay product launches, and they don’t want to invest in the tools and training because the cost seems too high, even though a little math shows the investment is more than paid back with the first product launch. To fix that, educate them on the methods, the resource needs, and the savings.
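Here’s that little math as a sketch – every number below is an assumption for illustration, so plug in your own:

```python
# Payback sketch for the DFMA investment. All numbers are assumptions
# for illustration; substitute your own costs and volumes.
tools_and_training = 50_000.0   # DFMA tools + team training (assumed)
unit_material_cost = 200.0      # current material cost per unit (assumed)
annual_volume = 10_000          # units shipped in the first year (assumed)
material_reduction = 0.20       # conservative vs. the 40-50% cited above

first_year_savings = unit_material_cost * material_reduction * annual_volume
print(f"First-year savings: ${first_year_savings:,.0f} "
      f"vs. ${tools_and_training:,.0f} invested")
# -> $400,000 saved against $50,000 invested: paid back on the first launch.
```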

Good luck.

Mike Shipulski