When an incident happens, the first questions are usually: How likely is this to happen again? and How worried should we be? Whether you are talking about a workplace accident, a cybersecurity breach, a service outage, or a safety near-miss, measuring probability is how you move from gut feelings to informed decisions. (Aven, 2016)
Probability does not have to mean complicated math. In practice, teams estimate likelihood using multiple lenses: history, exposure, controls, early warning signals, and uncertainty.
Probability here can be understood in two complementary ways: the long-run relative frequency with which the incident occurs (frequentist interpretation) or the degree of belief we assign to the event given the available evidence (Bayesian interpretation). Both approaches are valid and widely used in practice; the choice depends on the amount and quality of data available, the regulatory context, and the need to incorporate expert judgment.
Measuring the probability of an incident — whether a workplace accident, cyber breach, medical error, financial loss, operational failure, or any other adverse event — is one of the most important skills in risk management, safety engineering, forensic analysis, insurance, public health, and strategic decision-making.
1. Classical (A Priori) Probability
The simplest and oldest method applies when all outcomes are equally likely and the sample space is finite and known. Probability is then simply the ratio of favourable outcomes to total outcomes. Although elementary, this principle underpins more complex probability models and is most at home in gambling, game theory, and simple decision problems.
P(incident) = number of favourable outcomes ÷ total number of possible outcomes
Classic textbook examples include the roll of a fair die (P(rolling a 6) = 1/6) or the flip of a fair coin (P(heads) = 1/2). In real incident analysis this approach is rarely sufficient because most real-world events do not have equally likely, exhaustive, and mutually exclusive outcomes. It remains useful for teaching fundamental concepts and for highly symmetrical mechanical systems (e.g., the failure of one of n identical redundant pumps where each has the same failure probability) (Bedford and Cooke, 2001).
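The ratio definition can be computed directly. A minimal sketch; the redundant-pump scenario and its counts are hypothetical:

```python
from fractions import Fraction

def classical_probability(favourable: int, total: int) -> Fraction:
    """Classical (a priori) probability: favourable / total outcomes.
    Valid only when all outcomes are equally likely."""
    if total <= 0 or not 0 <= favourable <= total:
        raise ValueError("need 0 <= favourable <= total and total > 0")
    return Fraction(favourable, total)

# Fair die: P(rolling a 6)
print(classical_probability(1, 6))
# One of 4 identical redundant pumps, each assumed equally likely to fail first
print(classical_probability(1, 4))
```

Using `Fraction` keeps the result exact, which matches the spirit of the a priori definition.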
2. Subjective (Bayesian) Probability
When historical data are sparse, unrepresentative, or entirely absent, we must rely on expert judgment.
Bayesian probability offers a formal framework for doing so: probability is treated not as a long-run frequency but as a degree of belief that is updated as new evidence arrives.
The updating rule is Bayes' theorem, which combines prior knowledge with the likelihood of the observed evidence to produce a revised (posterior) estimate. As data accumulate, the posterior converges towards what the evidence supports, which is why the approach is used throughout statistics, machine learning, and scientific research.
Posterior probability ∝ likelihood × prior probability
In odds form this becomes particularly intuitive for risk analysts:
Posterior odds = prior odds × likelihood ratio
Bayesian methods are especially powerful in incident risk assessment because they allow the formal combination of sparse failure data with structured expert elicitation. Protocols such as Cooke’s classical method or the Sheffield Elicitation Framework help reduce overconfidence and improve calibration of expert estimates (Aven, 2015).
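A minimal sketch of Bayesian updating using a Beta prior on the incident probability, a standard conjugate choice; the prior and observation counts below are illustrative, not drawn from any real dataset:

```python
def update_beta(alpha: float, beta: float, incidents: int, trials: int):
    """Conjugate Bayesian update: Beta(alpha, beta) prior on the incident
    probability, Binomial likelihood for `incidents` out of `trials`."""
    return alpha + incidents, beta + (trials - incidents)

def beta_mean(alpha: float, beta: float) -> float:
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Expert prior: roughly 1 incident per 100 opportunities -> Beta(1, 99)
a, b = 1.0, 99.0
# New evidence: 3 incidents observed in 500 opportunities
a, b = update_beta(a, b, incidents=3, trials=500)
print(f"posterior mean P(incident) = {beta_mean(a, b):.4f}")
```

The posterior mean sits between the prior mean (0.01) and the raw observed rate (0.006), weighted by how much data arrived, which is exactly the prior-times-likelihood behaviour described above.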
3. Empirical (Frequentist) Probability
When historical data exist, the most common practical method is the empirical (or relative-frequency) estimator:
P(incident) ≈ number of observed incidents ÷ total number of exposure opportunities
“Exposure opportunities” must be clearly defined and relevant — for example:
- operating hours for machinery
- number of flights or take-offs for aviation
- number of patients treated for medical procedures
- number of transactions processed for financial systems
- kilometres driven for road safety
This estimator is unbiased and converges to the true probability as the number of observations grows. When the incident is rare, however, the numerator is small and the estimate imprecise, so standard practice is to report the point estimate together with a 95% confidence interval, typically computed with the Wilson score or Clopper-Pearson method for binomial proportions. For very rare events, a Poisson model for the incident count is usually more appropriate (Vesely et al., 1981).
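The Wilson score interval can be computed without any statistics library; the incident and exposure counts below are hypothetical:

```python
import math

def wilson_interval(k: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion k/n
    (z = 1.96 gives an approximate 95% interval)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical: 2 incidents in 1,000 exposure opportunities
lo, hi = wilson_interval(2, 1000)
print(f"point estimate 0.002, 95% CI [{lo:.5f}, {hi:.5f}]")
```

Note how wide the interval is relative to the point estimate: with only two observed incidents, the data alone cannot pin the probability down tightly.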
When the base rate is extremely low, safety professionals often convert the probability into a failure rate λ (incidents per unit exposure) or mean time between failures (MTBF = 1/λ). For small probabilities, P(incident in time t) ≈ λ × t.
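A small worked example of the rate conversion, under an assumed constant-rate (Poisson process) model with made-up exposure figures:

```python
import math

# Hypothetical: 4 incidents observed over 50,000 operating hours
incidents, exposure_hours = 4, 50_000
lam = incidents / exposure_hours       # failure rate lambda, per hour
mtbf = 1 / lam                         # mean time between failures

t = 100                                # horizon of interest, in hours
p_approx = lam * t                     # small-probability approximation
p_exact = 1 - math.exp(-lam * t)       # exact, under the Poisson assumption

print(f"lambda = {lam:.1e}/h, MTBF = {mtbf:.0f} h")
print(f"P(incident in {t} h): approx {p_approx:.4f}, exact {p_exact:.4f}")
```

For small λt the two values are nearly identical, which is why the linear approximation is so widely used in practice.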
3a. Exposure-based probability (normalise by opportunity)
A raw count can mislead if activity levels change. Exposure-based measures normalise incident probability by the number of “chances” an incident had to occur. (Rausand, 2011)
- How to measure: incidents per exposure unit (hours worked, miles driven, deployments, patient-days, API calls).
- Example: “2 incidents per 1,000 deployments.”
Best for: environments where volume fluctuates.
Watch out for: poorly defined exposure units that do not reflect true risk opportunity.
4. Fault Tree Analysis (FTA) – Deductive Quantitative Modelling
Fault Tree Analysis begins with the undesired top event (the incident) and works backwards through logical gates (AND, OR, voting gates, etc.) to identify all combinations of basic events that can cause it. Once the tree is constructed, the probability of the top event is calculated by:
- obtaining failure probabilities or failure rates for each basic event from reliable databases (OREDA, CCPS, IEEE Std 500, NPRD, etc.)
- identifying the minimal cut sets (the smallest sets of basic events whose simultaneous occurrence causes the top event)
- applying the rare-event approximation for low-probability systems: Q(top) ≈ Σ Q(cut set)
FTA explicitly models redundancy, common-cause failures, and human error, making it the industry standard in aerospace, nuclear power, rail, and process safety (NASA, 2011; Rausand and Høyland, 2004).
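The rare-event approximation over minimal cut sets can be sketched as follows; the basic events, cut sets, and probabilities are all hypothetical:

```python
from math import prod

# Hypothetical basic-event failure probabilities (per demand)
q = {"pump_A": 1e-3, "pump_B": 1e-3, "power": 5e-4, "operator": 1e-2}

# Minimal cut sets for a hypothetical top event:
# both redundant pumps fail (AND), or power fails, or operator error AND pump_A
cut_sets = [{"pump_A", "pump_B"}, {"power"}, {"operator", "pump_A"}]

# Rare-event approximation: Q(top) ~ sum over cut sets of the product of
# basic-event probabilities (assumes independent events and small Q values)
q_top = sum(prod(q[event] for event in cut_set) for cut_set in cut_sets)
print(f"Q(top) ~ {q_top:.2e}")
```

In this toy tree the single-event cut set ("power") dominates the result, illustrating why minimal cut sets are also used to rank where risk reduction effort should go.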
5. Event Tree Analysis (ETA) – Inductive Forward Modelling
Event Tree Analysis starts from an initiating event (e.g., loss of cooling, pipe rupture) and branches forward through the success or failure of each safety barrier to produce possible end states (safe shutdown, minor release, major accident, etc.). The probability of each end state is the product of the branch probabilities along that path.
ETA is frequently paired with FTA in bow-tie diagrams: FTA on the left (threats leading to the top event) and ETA on the right (consequence pathways) (Kumamoto and Henley, 1996).
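A minimal event-tree sketch, assuming a made-up initiating frequency and two barriers with illustrative failure probabilities:

```python
# Hypothetical event tree: an initiating event, then two safety barriers
# in sequence, each with an assumed probability of failing on demand.
f_initiating = 0.1                              # loss of cooling, per year
p_fail = {"alarm": 0.05, "shutdown": 0.02}

# Each end state's probability is the product of branch probabilities
# along its path; if the alarm fails, shutdown is never demanded.
paths = {
    ("alarm works", "shutdown works"): (1 - p_fail["alarm"]) * (1 - p_fail["shutdown"]),
    ("alarm works", "shutdown fails"): (1 - p_fail["alarm"]) * p_fail["shutdown"],
    ("alarm fails",): p_fail["alarm"],
}

for end_state, p in paths.items():
    print(end_state, f"frequency = {f_initiating * p:.4f}/yr")

# Sanity check: branch probabilities over all end states are exhaustive
assert abs(sum(paths.values()) - 1.0) < 1e-12
```

Multiplying each path probability by the initiating frequency yields the end-state frequencies that feed the consequence side of a bow-tie.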
6. Bow-Tie Analysis
Bow-tie diagrams integrate FTA (left side: threats → top event) and ETA (right side: top event → consequences) with preventive and mitigative barriers on each side. Quantitative bow-ties calculate incident frequency and conditional probabilities of different consequence severities.
7. Monte Carlo Simulation
When probabilities are uncertain or dependencies exist, Monte Carlo methods sample input distributions thousands or millions of times to produce a distribution of possible outcomes.
In incident modelling, Monte Carlo is used to propagate uncertainty through fault trees, event trees, or system reliability block diagrams, yielding:
- distribution of incident frequency
- uncertainty bounds on risk metrics
- importance measures (e.g., Birnbaum, criticality) (Vose, 2008)
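A toy Monte Carlo propagation through the cut-set calculation, assuming (purely for illustration) that each basic-event probability is uncertain within a factor of two of its nominal value:

```python
import random

random.seed(42)  # reproducible draws

def sample_q_top() -> float:
    """One Monte Carlo draw: sample uncertain basic-event probabilities,
    then evaluate a two-cut-set rare-event approximation."""
    # Illustrative uncertainty: each probability scaled by a uniform
    # factor between 0.5x and 2x its nominal value
    q_pump = 1e-3 * random.uniform(0.5, 2.0)
    q_power = 5e-4 * random.uniform(0.5, 2.0)
    return q_pump * q_pump + q_power   # {pump_A, pump_B} AND-gate + {power}

samples = sorted(sample_q_top() for _ in range(100_000))
p5, p50, p95 = (samples[int(f * len(samples))] for f in (0.05, 0.50, 0.95))
print(f"Q(top): median {p50:.2e}, 90% interval [{p5:.2e}, {p95:.2e}]")
```

Instead of a single point value for the top-event probability, the output is a distribution, from which uncertainty bounds on the risk metric can be read directly.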
8. Layer of Protection Analysis (LOPA)
LOPA is a semi-quantitative method commonly used in process safety.
It estimates the frequency of a consequence by multiplying:
Mitigated consequence frequency = initiating event frequency × product of the probability of failure on demand (PFD) of each independent protection layer (IPL)
LOPA bridges qualitative HAZOP and full QRA (CCPS, 2008).
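The LOPA multiplication can be sketched directly; the initiating frequency and PFD values below are illustrative, not taken from any published database:

```python
from math import prod

# Hypothetical scenario: control loop failure with three independent
# protection layers (IPLs), each with an assumed PFD
f_initiating = 0.1                       # initiating events per year
pfd = {"high_level_trip": 0.1, "operator_response": 0.1, "relief_valve": 0.01}

# Consequence occurs only if every IPL fails on demand:
# mitigated frequency = initiating frequency x product of PFDs
f_consequence = f_initiating * prod(pfd.values())
print(f"mitigated consequence frequency = {f_consequence:.1e}/yr")
```

The result is compared against a tolerable frequency target; if it is too high, another IPL (or a more reliable one) is added.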
9. Human Reliability Analysis (HRA)
Human errors contribute to many incidents. Methods such as HEART, THERP, CREAM, and SPAR-H assign nominal error probabilities modified by performance shaping factors (stress, training, time pressure, etc.).
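A sketch in the spirit of SPAR-H-style adjustment; the nominal error probability and multipliers are invented for illustration and are not taken from any published table:

```python
# Illustrative HRA calculation: a nominal human error probability (HEP)
# adjusted by multiplicative performance shaping factors (PSFs).
nominal_hep = 1e-3   # assumed nominal error probability for a routine action

# Assumed PSF multipliers for degraded performance conditions
psf = {"high stress": 2.0, "inadequate training": 3.0, "time pressure": 10.0}

hep = nominal_hep
for factor in psf.values():
    hep *= factor
hep = min(hep, 1.0)  # a probability can never exceed 1
print(f"adjusted HEP = {hep:.3f}")
```

Note how quickly multiplicative factors compound: three plausible-looking adjustments raise the error probability by a factor of 60.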
10. Predictive Models and Machine Learning
Modern approaches increasingly use survival analysis, Cox proportional hazards models, random survival forests, or neural networks trained on historical incident data to predict time-to-incident or conditional probability.
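As a lightweight stand-in for full survival analysis, an exponential time-to-incident model with right-censoring can be fitted in a few lines; the records below are purely illustrative:

```python
import math

# Each record is (observed_time, event): event=True means an incident
# occurred at that time; event=False means observation was censored
# (the unit left service, or the study ended, before any incident).
records = [(120, True), (300, False), (45, True), (500, False), (210, True)]

events = sum(1 for _, occurred in records if occurred)
time_at_risk = sum(t for t, _ in records)
lam = events / time_at_risk        # MLE of the hazard rate under this model

t = 100
p_within_t = 1 - math.exp(-lam * t)   # predicted P(incident within time t)
print(f"lambda = {lam:.5f}, P(incident within {t}) = {p_within_t:.3f}")
```

Richer models (Cox regression, random survival forests) relax the constant-hazard assumption and let the hazard depend on covariates, but the censoring bookkeeping shown here is common to all of them.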
These data-driven methods require large datasets but can capture complex interactions that traditional fault trees miss.
11. Confidence and uncertainty scoring (how sure are you?)
Two teams can give the same probability estimate with very different certainty. Tracking confidence prevents false precision. (Aven, 2016)
- How to measure: pair every probability estimate with a confidence rating (low/medium/high) or an uncertainty interval.
- Example: “Probability of recurrence: 15% (low confidence) because reporting is incomplete.”
Best for: decision-making under uncertainty.
Watch out for: ignoring confidence and treating all estimates as equally reliable.
Putting it all together: a simple, practical approach
If you want a lightweight way to use these methods without building a full risk model, try this:
- Start with historical and exposure-based rates (Methods 1 to 3).
- Adjust for what has changed since the incident (controls, volume, environment), drawing on the causal-modelling methods (FTA, ETA, bow-tie, LOPA).
- Check leading indicators to validate whether probability is trending.
- Attach confidence and a range so leaders understand uncertainty.
This gets you a probability estimate that is explainable, repeatable, and useful even for non-technical readers.
Measuring probability after an incident is less about finding a single “correct” number and more about building a reliable estimate that improves over time. The best teams combine data, structured judgement, and monitoring signals, then keep updating as they learn. (Aven, 2016)
Conclusion
Measuring the probability of an incident is never exact — it is always an informed estimate bounded by uncertainty. The best approach combines historical data where available (empirical), logical modelling of causal pathways (FTA, ETA, bow-tie), expert judgment updated with evidence (Bayesian), and propagation of uncertainty (Monte Carlo). Validation against real outcomes remains essential.
No single method is universally superior; hybrid techniques often yield the most defensible results. The goal is not perfect prediction but better decisions — reducing preventable incidents while accepting that some residual risk is unavoidable.
References
Aven, T. (2015) Risk Analysis. 2nd edn. Wiley. Available at: https://onlinelibrary.wiley.com/doi/book/10.1002/9781119057802 (Accessed: 23 February 2026).
Bedford, T. and Cooke, R. (2001) Probabilistic Risk Analysis: Foundations and Methods. Cambridge University Press. Available at: https://www.cambridge.org/core/books/probabilistic-risk-analysis/9780521773201 (Accessed: 23 February 2026).
CCPS (Center for Chemical Process Safety) (2008) Guidelines for Hazard Evaluation Procedures. 3rd edn. Wiley-AIChE. Available at: https://www.wiley.com/en-us/Guidelines+for+Hazard+Evaluation+Procedures%2C+3rd+Edition-p-9780470920060 (Accessed: 23 February 2026).
Kahneman, D. (2011) Thinking, Fast and Slow. Farrar, Straus and Giroux.
Kroese, D.P., Taimre, T. and Botev, Z.I. (2014) Handbook of Monte Carlo Methods. Wiley.
Kumamoto, H. and Henley, E.J. (1996) Probabilistic Risk Assessment and Management for Engineers and Scientists. 2nd edn. IEEE Press. Available at: https://ieeexplore.ieee.org/book/6267380 (Accessed: 23 February 2026).
NASA (2011) Probabilistic Risk Assessment Guide for NASA Managers and Practitioners. NASA/SP-2011-3422. Available at: https://www.nasa.gov/sites/default/files/atoms/files/2011_prag_final_12-15-2011.pdf (Accessed: 23 February 2026).
Rausand, M. and Høyland, A. (2004) System Reliability Theory: Models, Statistical Methods, and Applications. 2nd edn. Wiley. Available at: https://onlinelibrary.wiley.com/doi/book/10.1002/9780470316900 (Accessed: 23 February 2026).
Rausand, M. (2011) Risk Assessment: Theory, Methods, and Applications. Wiley.
Reason, J. (1997) Managing the Risks of Organizational Accidents. Ashgate.
Vesely, W.E. et al. (1981) Fault Tree Handbook. U.S. Nuclear Regulatory Commission, NUREG-0492. Available at: https://www.nrc.gov/docs/ML1007/ML100780465.pdf (Accessed: 23 February 2026).
Vose, D. (2008) Risk Analysis: A Quantitative Guide. 3rd edn. Wiley. Available at: https://www.wiley.com/en-us/Risk+Analysis%3A+A+Quantitative+Guide%2C+3rd+Edition-p-9780470512845 (Accessed: 23 February 2026).