Human beings, in general, are not very good at predicting the future. The science fiction writer Arthur C. Clarke, in his remarkable little book "Profiles of the Future," observed that predictions fail for two reasons, which he called "failures of nerve" and "failures of imagination". The former is failing to see an inescapable conclusion even when all the facts are in hand. The latter is lacking some crucial facts and being unable to imagine them. Obviously, predicting the future is hazardous, especially the near-term future, where events often overtake our predictions. For scientists, the prediction game can be personally hazardous as well: they can be held accountable for failing to predict something that happened, or blamed when a prediction does not come true or comes true with less consequence than forecast. Most importantly, we as humans cannot fully internalize probabilistic predictions. Yet as engineers we are required to predict the future, because systems age and we need to know when to warn about, fix, or discard a system.
I believe the key is to combine insights from physics-based models, data-driven models, and various rules of thumb in a single predictive framework, and to continually test it against experience. Bayesian networks provide the right framework for this. However, Bayesian networks are complicated and difficult to communicate, and the models and data behind them are often not shared publicly. Despite these considerable limitations, Bayesian networks capture the essence of thinking about a system holistically.
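To make the idea concrete, here is a minimal sketch of how such a network combines the two kinds of knowledge. The structure and all of the probabilities are hypothetical, chosen only for illustration: a physics-based aging model supplies the prior on a hidden degradation state, a data-driven sensor model supplies the likelihood of an alarm, and the posterior failure probability is obtained by enumerating the hidden state.

```python
# Minimal Bayesian-network sketch: degradation -> {alarm, failure}.
# All probabilities below are illustrative assumptions, not real data.

# Prior on the hidden degradation state, e.g. from a physics-based aging model.
p_degraded = 0.05

# Data-driven sensor model: P(alarm | degradation state)
# (true-positive rate vs. false-positive rate).
p_alarm_given = {True: 0.90, False: 0.10}

# Failure model: P(failure | degradation state).
p_fail_given = {True: 0.60, False: 0.01}

def posterior_failure(alarm: bool) -> float:
    """P(failure | alarm) by enumerating out the hidden degradation state."""
    joint = 0.0     # P(alarm, failure)
    marginal = 0.0  # P(alarm)
    for degraded in (True, False):
        p_d = p_degraded if degraded else 1.0 - p_degraded
        p_a = p_alarm_given[degraded] if alarm else 1.0 - p_alarm_given[degraded]
        marginal += p_d * p_a
        joint += p_d * p_a * p_fail_given[degraded]
    return joint / marginal

# An alarm sharply raises the inferred failure risk; its absence lowers it.
print(posterior_failure(True))
print(posterior_failure(False))
```

The point of the sketch is not the numbers but the division of labor: the prior can be updated as the physics model improves, and the sensor likelihoods can be refit as operating data accumulates, without restructuring the network.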
What is needed is a benchmarking activity in which a known failure is modeled with a Bayesian network using only information available before the failure, and the result is compared against the known outcome. Even this is difficult: access to information is limited, and companies are concerned about legal implications. A known failure for which all legal aspects have been resolved would make a good benchmark.