What does Richard Feynman have to do with Risk and Reliability analysis?
Having worked with both methodologies for more than 8 years now, I feel I can easily answer this question: we are not very good at explaining "things".
Richard Feynman talking with a teaching assistant after the lecture on The Dependence of Amplitudes on Time, Robert Leighton and Matthew Sands in background, April 29, 1963.
For those who don't know, Richard Phillips Feynman was one of the most famous American theoretical physicists, working mostly in the field of quantum mechanics. He worked on the Manhattan Project and received the Nobel Prize in Physics in 1965 for his contributions to the development of quantum electrodynamics, jointly with Julian Schwinger and Sin-Itiro Tomonaga. You can watch this extremely interesting BBC TV interview with Feynman on TED's website.
Or this TED talk from Leonard Susskind:
And you can even get a brief explanation in this episode of The Big Bang Theory (there is another one where they borrow Feynman's old van, but I couldn't find it).
Feynman is also well known for his ability to translate complex theories in physics into simple concepts. The work of "The Great Explainer", as he became known, has been put together in a collection of books – The Feynman Lectures on Physics – probably one of the most popular on the subject. The collection sold more than 1.5 million copies in English and is now available for free at Caltech's website, feynmanlectures.caltech.edu. The books were authored by Feynman, Robert B. Leighton, and Matthew Sands.
His work should be a great inspiration for all of us working with risk and reliability techniques.
How many times have you been in a meeting, mentioned that the FN-curve reported X, and watched people disconnect at that moment? What about when you were explaining MTBF? Or describing the statistical distributions you were considering? Or why you chose a two-parameter Weibull instead of a three-parameter one?
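This is part of the point: even a term like MTBF hides a calculation most audiences never see. As a hedged illustration (the parameter values below are made up, not taken from any real study), the MTBF of a two-parameter Weibull is simply the distribution's mean, eta * gamma(1 + 1/beta):

```python
import math

def weibull_mtbf(beta: float, eta: float) -> float:
    """MTBF (mean) of a two-parameter Weibull distribution.

    beta: shape parameter (beta > 1 suggests wear-out failures,
          beta < 1 infant mortality, beta == 1 a constant failure rate)
    eta:  scale parameter (characteristic life, in the same time unit
          as the result)
    """
    return eta * math.gamma(1.0 + 1.0 / beta)

# Hypothetical example: beta = 2, characteristic life eta = 1000 hours
mtbf = weibull_mtbf(2.0, 1000.0)
print(round(mtbf, 1))  # prints 886.2
```

Showing a listener this one-liner, rather than quoting the gamma function at them, is exactly the kind of translation Feynman was so good at.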
We can't really simplify our concepts! We can't explain their direct benefits! It is very difficult for us.
It feels like most of the risk and reliability engineers I have come across in my career suffer from the Dunning–Kruger effect. And not because they were incompetent – quite the opposite – it was because they were too competent! The Dunning–Kruger effect says that high-ability individuals may underestimate their relative competence and erroneously assume that tasks which are easy for them are also easy for others. There is always a "miscalibration": for the incompetent it stems from an error about the self, whereas for the highly competent it stems from an error about others. I truly believe we sit in the latter group (unfortunately, myself included).
But not everything is lost!
New dashboard technologies (such as Power BI), powerful 3D visualisation (such as the new radiation shielding prototype available in the recently launched Safeti 7.2), and dedicated results-visualisation tools (such as the Safeti Offshore Viewer) are enabling new ways to display results, making it much easier to show the modelling approach, the dynamics of the simulation process, and how the calculations are typically performed.
We need to find better ways of explaining the basics of our discipline, which will allow us to expand the reach of our work. We shouldn't limit ourselves to talking to our peers (which is always fun and productive); we should be asking how to make safety (especially safety) and performance studies more understandable to the general public.
So the question is: are you improving risk communication in your company? How are you reporting risk to the community surrounding your facility? Are you up to the challenge of being the Feynmans of our discipline?