The world of DOE

I recently returned from an overseas trip and passed some of the flight time by reading additional documentation for the software package JMP. As a bit of background, to date I have focused my use of JMP on the "Analyze" menu — standard X-by-Y analyses, distributions, regression, partitioning, etc. For my graphing needs, I currently prefer Tableau to JMP's graph builder. As for the "DOE" menu? I had never used it. With the dedicated flight time, I dug into the 350+ page PDF "Design of Experiments Guide." I worked through a few of the samples, step by step.

Wrapping my head around the DOE process and theory gave me numerous ideas for injecting DOE into my team's testing efforts. Not just "next year" or "when we get the new servers," but right now.

We already apply at least some semblance of the scientific method in our testing, as I would expect many in our situation do. When we are optimizing a server for a new app, we'll set up different RAM/CPU/worker combinations, testing hypothesis after hypothesis. That approach, however, limits the strength of our conclusions: all we can say after that type of testing is that our configuration is the "best we've tried." With proper DOE, we can plan our infrastructure much more effectively, and we can return meaningful server performance feedback to the vendor. In the course of the flight, I wrote down several other potential use cases where my team could apply DOE immediately.
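To make that contrast concrete, here is a minimal sketch of what even the simplest designed experiment looks like compared to ad hoc, one-hypothesis-at-a-time tuning: a full-factorial layout over a few server factors, with the run order randomized. The factor names and levels are hypothetical, not our actual configuration, and the measurement step is just a placeholder.

```python
# A minimal sketch (hypothetical factors and levels) of a full-factorial
# design for a server-tuning experiment.
import itertools
import random

factors = {
    "ram_gb": [8, 16, 32],     # hypothetical RAM levels
    "cpu_cores": [2, 4],       # hypothetical CPU levels
    "workers": [4, 8, 16],     # hypothetical worker-process levels
}

# Every combination of factor levels: 3 x 2 x 3 = 18 runs.
runs = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]

# Randomize run order so unknown drift (time of day, warm caches, etc.)
# isn't confounded with any single factor.
random.shuffle(runs)

for i, run in enumerate(runs, start=1):
    print(f"Run {i:2d}: {run}")
    # measure_throughput(run)  # placeholder: record the response for each run
```

Because every level of every factor appears with every level of the others, this kind of design supports statements about main effects and interactions, rather than just "the best configuration we happened to try."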

What made the JMP documentation so insightful on this particular occasion? I was able to immediately begin setting up experiments using data I had with me. I'm very much a visual learner, so having the documentation and data to work with was a powerful (dangerous?) combination for me. 

The bottom line? Even a modest injection of DOE into our environment will add not only true value, but also credibility with those who rely on our work.