In my recent reading of “Theory of Constraints,” the book claims that “Science moves from categorization, to correlation, to effect-cause-effect” as it advances. It describes the early days of star gazing as an activity of categorization – stars were placed into groups that we now call constellations, and given names based on their resemblance to natural or imagined things. The next generation of star gazers attempted to capture, numerically, the movements of the stars and correlate that information. They even invented mathematical tools, such as calculus, to predict the motion of the stars. This generation gave us the modern understanding of the solar system: the sun at the center, planets as the answer to “wandering stars,” and the equation that models the gravitational force between two objects as a function of their masses and the distance between them. Today, having observed those phenomena, we cause the very effects we once observed: we have thousands of objects in orbit around our planet and sun, and we have even used those correlations to land spacecraft on one of those distant “wandering stars.”
As engineers, we harness science to achieve a desired outcome. In many cases, we open our textbooks and apply these very equations (or “models”) to achieve our goals. As our objectives have grown in scale and complexity, so have the systems we build to meet them. These complex systems, with their interdependent and emergent properties, are often beyond the human capacity to predict (and are certainly beyond any one human’s ability to hold in their head at a given moment). We might have a qualitative feel, as they are our creation. But sometimes, even our own creations surprise us. We often need to study, and reduce to mathematical and statistical form, the very systems we build. We must build a model. By the time we get to Verification & Validation, we certainly need a firm grasp on the cause-and-effect phase of this process.
My first job as a software engineer was in a modeling, simulation, & analysis (MS&A) group. We were paired with a systems engineering department – together we were responsible for operations analysis, trade studies, training systems, design optimization, and 3D visualizations. Our immediate response to most problems was to model it – create a software module that interfaces with our library of other models to understand and characterize how solutions would benefit a specific mission. The answer was always “model it.” Being on this team, contributing as a junior engineer, caused me to throw off the “I can’t, I don’t know enough” mentality for a “learn it and figure it out” one. Along the way, I was given this empowering statement.
No model is perfect. Some models are useful.
Combine this with Winston Churchill’s line:
Perfection is the enemy of progress.
Was Kepler’s original equation for planetary motion perfect? Nope. Hundreds of years later, Einstein’s theory of relativity improved on it, giving us the ability to correctly predict the motion of the planet Mercury. Should Kepler, having failed to perfectly model planetary motion, not have given us his correlation model? No – it was still tremendously useful for hundreds of years. Another proverb to live by:
Start Simply. Simply Start.
I was working on a location estimation problem. I had sensors detecting a moving object at considerable range. Those observations were fed into a Kalman filter and compared to telemetry from the object itself, which carried an onboard GPS/inertial system. I trusted the telemetry. The data didn’t line up.
I broke out Excel and my SOHCAHTOA skills and wrote a simple model – it behaved exactly as we expected (our mental model), but not as the system was behaving. We then added all the problems we knew about – measurement errors for certain parts, for example – and aspects of the data we were seeing became explained. When we “subtracted” this new understanding from the actual data, other issues became obvious. More corrections and calibrations followed.
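That first pass can be sketched in a few lines. This is a hypothetical reconstruction in Python rather than the original Excel sheet: a flat 2-D trig model predicts bearing and range from sensor to object, and subtracting the prediction from the measurement leaves a residual that points at the next unexplained problem.

```python
import math

def predicted_bearing_range(sensor_xy, object_xy):
    """Flat-plane line-of-sight prediction from basic trigonometry
    (the 'SOHCAHTOA' level of modeling described above)."""
    dx = object_xy[0] - sensor_xy[0]
    dy = object_xy[1] - sensor_xy[1]
    bearing = math.atan2(dy, dx)   # radians, math convention
    rng = math.hypot(dx, dy)       # straight-line range
    return bearing, rng

def residual(measured, predicted):
    """Subtract the model from the data; what remains is the
    error the model does not yet explain."""
    return (measured[0] - predicted[0], measured[1] - predicted[1])
```

Each known problem (a sensor mount offset, a timing skew) gets folded into `predicted_bearing_range`, and the residual shrinks until something new sticks out.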
The Excel model eventually became unreasonably complex, so we re-coded it as a software application that could rapidly generate dozens of graphs and views from multiple data collects, comparing theoretical predictions to actual results. We developed techniques to calibrate away certain problems, re-ran the test, and new issues were uncovered, modeled, and corrected for. We found and fixed bugs in my math, to my shame. We eventually had a calibration routine that solved for all the unknowns we knew about. We understood the impact of, triaged, and mitigated asynchronous, non-real-time (as Computer Science defines the term) software processing issues.
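A calibration routine like the one described can be as plain as an ordinary least-squares fit. The sketch below is illustrative (assumed unknowns and invented names, not the original code): it solves for a scale factor and a constant bias from paired truth and measured values, then applies the correction to new measurements.

```python
def solve_bias_scale(truth, measured):
    """Fit measured ≈ scale * truth + bias by ordinary least squares."""
    n = len(truth)
    mean_t = sum(truth) / n
    mean_m = sum(measured) / n
    sxx = sum((t - mean_t) ** 2 for t in truth)
    sxy = sum((t - mean_t) * (m - mean_m) for t, m in zip(truth, measured))
    scale = sxy / sxx
    bias = mean_m - scale * mean_t
    return scale, bias

def calibrate(measured, scale, bias):
    """Invert the fitted error model to recover calibrated values."""
    return [(m - bias) / scale for m in measured]
```

Each newly discovered unknown adds a term to the error model; the fit stays the same in spirit, just with more parameters.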
The result: we reached the theoretical performance limits of the sensors in our system. Those limits were lower than we had hoped, but we only knew them because of our model. We continued to adapt to meet the requirements of the mission.
Our model was not a fundamental improvement to science (I am no Kepler) – it was a model of the system that we designed and built with our own hands, and yet didn’t understand. The model was a necessary and useful tool for us to understand the very thing that we built and needed to improve.
Our model started (simply) with a 9th grade understanding of math and physics. The end result fell short of rocket science, but it rationalized freshman physics, statistics, and a bunch of discrete concepts from the software.
Over my career of working in a modeling & simulation group, working proposals, and leading product development teams and rapid-response tiger teams, I have built dozens of these Excel models and a handful that matured to “software application” status. Rarely is the math or physics complicated – the cost models only ever use addition, subtraction, multiplication, and division. But each model is useful: a tool to capture, understand, and use what I know in ways beyond what my mind can immediately imagine.
That’s my point: we solve big problems with equally complex systems – beyond the immediate imagination of anyone on the team.