Some of the earliest widespread uses of data analytics were in operations and manufacturing, where techniques were applied to help businesses improve efficiency and effectiveness. Statistical process control, for example, has been in use since the 1920s. Originally applied to manufacturing processes, the basic idea behind statistical process control is that we establish a process, measure the natural variability in that process, and then set up a monitoring mechanism that alerts us if the process changes for some reason. Often this technique is used as part of a larger objective of process improvement: we understand a process, reduce its variation, and then work to improve its overall level of performance.
Let’s say we wanted to measure the percentage of defects at a particular stage in an assembly line, and we wanted to be alerted if there is a spike in defects or if the average percentage moves up or down over time.
We could use statistical process control to help detect these changes in pursuit of quality control. Similarly, we could apply the same technique to quantity measures, like the amount of material required to accomplish a job, or even to time, like how long it takes to accomplish a task or to complete a process as a whole. Again, while this technique originated in manufacturing, it can really be applied to any repeatable process, and there are applications in just about every industry and functional area. As far as methods go, those used in statistical process control are actually pretty straightforward. Normally it involves the application of classical statistics and sampling methods, coupled with some specific ways of looking at the data.
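As a rough illustration of the defect-monitoring idea, here is a minimal sketch of a p-chart, one of the classic control charts used in statistical process control. It assumes hypothetical daily defect counts drawn from samples of a fixed size; the center line is the average defect proportion, and any sample falling outside three-sigma control limits is flagged as a possible process change.

```python
import math

def p_chart_limits(defect_counts, sample_size):
    """Compute the center line and 3-sigma control limits for a p-chart
    (proportion defective), given per-sample defect counts."""
    proportions = [d / sample_size for d in defect_counts]
    p_bar = sum(proportions) / len(proportions)        # center line
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    ucl = p_bar + 3 * sigma                            # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)                  # lower limit, floored at zero
    return p_bar, lcl, ucl

def out_of_control(defect_counts, sample_size):
    """Return the indices of samples whose defect proportion falls
    outside the control limits -- the points that trigger an alert."""
    _, lcl, ucl = p_chart_limits(defect_counts, sample_size)
    return [i for i, d in enumerate(defect_counts)
            if not (lcl <= d / sample_size <= ucl)]

# Hypothetical data: defects found in daily samples of 200 units.
# Day 6 has a spike (20 defects) that the chart should flag.
counts = [6, 5, 7, 4, 6, 5, 20, 6]
print(out_of_control(counts, 200))   # index of the out-of-control sample
```

In practice, SPC software also applies run rules (for example, several consecutive points on one side of the center line) to catch gradual drifts in the average, not just single-point spikes; this sketch covers only the simplest out-of-limits test.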