Saturday, April 9, 2022

"The Science of Production"

From Construction Physics, March 31:

Book review of the Western Electric Statistical Quality Control Handbook

We've previously talked about the negative effects that variability has on any production process. In a basic queuing model, where a product moves through a process step by step, variability is what prevents material from flowing smoothly between workstations, and it forces expensive buffering (in the form of extra inventory, extra time, or extra capacity) to achieve a given production rate. Variability reduction is a key component of things like the Toyota Production System, as well as the Japanese method of ship production.
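
The post doesn't quantify this relationship, but queuing theory does: Kingman's approximation puts the expected wait at a workstation at roughly (c_a^2 + c_s^2)/2 × u/(1 − u) × t, where c_a and c_s are the coefficients of variation of arrivals and of service, u is utilization, and t is the mean process time, so wait grows with the square of variability at any given utilization. As a minimal sketch (not from the post; the function name and parameters are illustrative), the simulation below uses Lindley's recursion for a single workstation to show the average wait climbing as service-time variability rises, with utilization held fixed:

```python
import random

def avg_wait(cv, utilization=0.9, n=200_000, seed=1):
    """Average wait at a single workstation via Lindley's recursion.

    Arrivals are Poisson with rate 1; service times are gamma-distributed
    with mean `utilization` and coefficient of variation `cv`. The setup
    is illustrative, not taken from the post.
    """
    rng = random.Random(seed)
    mean_service = utilization      # arrival rate normalized to 1
    shape = 1.0 / (cv * cv)         # for a gamma distribution, cv^2 = 1/shape
    scale = mean_service / shape
    wait, total = 0.0, 0.0
    for _ in range(n):
        service = rng.gammavariate(shape, scale)
        interarrival = rng.expovariate(1.0)
        # Lindley's recursion: the next job waits for any leftover work
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n

for cv in (0.25, 0.5, 1.0, 2.0):
    print(f"service-time CV {cv:4.2f} -> avg wait ~ {avg_wait(cv):6.2f}")
```

Running it shows the average wait rising steeply with the service-time CV even though utilization never changes - that queue time is exactly the buffering (inventory, time, or capacity) the post describes having to pay for.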

Because of its importance, it's worth looking more closely at the idea of variability in a production process, and at what it actually means to reduce it. This concept - that variability in a process can be understood and controlled - forms the basis of what’s known as statistical process control.

Shewhart and the origins of statistical process control
Statistical process control traces its origins to the 1920s and the work of physicist and engineer Walter Shewhart. Shewhart worked for Western Electric, the company that made telephone equipment for Bell Telephone, and was one of the early employees of Bell Labs [0]. Western Electric was one of the largest manufacturing operations in the US (by the 1930s, their Hawthorne Works [1] employed around 45,000 people), and it was while working there that Shewhart developed his ideas about process control.

These ideas would diffuse into company practice, and were eventually summarized in “The Western Electric Statistical Quality Control Handbook” in 1956 [2]. This book apparently became something of a bible in the field of quality control - it remains referenced almost 70 years later (despite being out of print), and copies that show up on Amazon invariably collect 5-star reviews. Though the book is about applying statistical methods to manufacturing processes, it was written as a guide for practitioners (factory personnel charged with keeping quality high), and as a result it was, and is, an exceptionally lucid explanation of how process control works [3].

The basics of statistical process control
The basic idea behind process control is that all processes have some amount of variation in them - not even the most carefully controlled process can produce a completely uniform output. The example the handbook gives is writing the letter 'a': if you write the letter repeatedly, no two 'a's will be exactly alike, no matter how good your penmanship. Slight variations in the paper, in the lead composition, or in the thoughts running through your head will all have tiny impacts on the path of the pencil across the page, making each ‘a’ slightly different. Every process - whether it’s writing the letter ‘a’ or manufacturing engine parts - has some limit to its fidelity, beyond which the output will vary.

The point to be made in this simple illustration is that we are limited in doing what we want to do; that to do what we set out to do, even in so simple a thing as making a’s that are alike, requires almost infinite knowledge compared with that which we now possess. It follows, therefore, since we are thus willing to accept as axiomatic that we cannot do what we want to do and cannot hope to understand why we cannot, that we must also accept as axiomatic that a controlled quality will not be a constant quality. Instead, a controlled quality must be a variable quality. - Walter Shewhart, Economic Control of Quality of Manufactured Product

But not all variation is alike. We can break variation into two types - what Shewhart calls ‘natural’ and ‘unnatural’ variation....
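
The excerpt ends here, but the handbook's central tool for telling the two apart is the control chart: plot measurements in time order, set limits three standard deviations either side of the process average, and treat points beyond those limits (or suspicious runs within them, per the handbook's famous zone rules) as unnatural variation with an assignable cause. A minimal sketch of the idea follows - the data and names are hypothetical, and for brevity it estimates dispersion from the sample standard deviation rather than from the subgroup ranges the handbook prescribes:

```python
import statistics

def control_limits(baseline):
    """Shewhart-style 3-sigma limits computed from baseline measurements.

    Sketch only: uses the sample standard deviation, where the handbook
    estimates dispersion from subgroup ranges.
    """
    center = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return center - 3 * sigma, center, center + 3 * sigma

def flag_unnatural(samples, lcl, ucl):
    """Flag points outside the control limits - the simplest of the
    handbook's tests; the zone rules add run-based tests on top."""
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Hypothetical measurements: a stable baseline, then new production data.
baseline = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02, 9.96, 10.01]
lcl, center, ucl = control_limits(baseline)
new_data = [10.01, 9.99, 10.02, 10.31, 10.00]  # 10.31: assignable cause?
print(f"limits: {lcl:.3f} .. {ucl:.3f}")
print("out-of-control points:", flag_unnatural(new_data, lcl, ucl))
```

Points inside the limits are treated as the process's natural variation and left alone; a point like the 10.31 above falls outside them and signals something worth investigating.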

....MUCH MORE