# Monte-Carlo simulations: Linking critical path schedules to project control

Monte-Carlo simulations can be used for various purposes to analyse the behaviour of projects in (fictitious) progress. They can be used to measure the sensitivity of project activities, as described in “Schedule Risk Analysis: How to measure your baseline schedule’s sensitivity?”, or to evaluate the accuracy of the forecasting methods used in Earned Value Management (see “Predicting project performance: Evaluating the forecasting accuracy”). In this article, a simple yet effective Monte-Carlo simulation approach is proposed, consisting of nine simulation scenarios that link critical path schedules to project control information.

A Monte-Carlo simulation run generates a duration for each project activity according to its predefined uncertainty profile. This article will not elaborate on the underlying principles of the distribution functions used in simulation studies to define activity uncertainty; the reader is referred to the “Project risk: Statistical distributions or single point estimates?” article for more information. Instead, it focuses on the general concept of the simulation scenarios and the interpretation of each of the nine scenarios.
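As a minimal sketch of what one simulation run produces, the snippet below draws a duration per activity from a triangular distribution around its planned value. The triangular distribution and the 0.8/1.4 optimistic/pessimistic factors are illustrative assumptions, not prescriptions from the article; any uncertainty profile could be substituted.

```python
import random

def sample_durations(planned, optimistic_f=0.8, pessimistic_f=1.4, seed=None):
    """One Monte-Carlo run: draw a simulated duration per activity from a
    triangular distribution around its planned duration (illustrative profile)."""
    rng = random.Random(seed)
    return {
        name: rng.triangular(optimistic_f * d, pessimistic_f * d, d)
        for name, d in planned.items()
    }

planned = {"A": 4, "B": 6, "C": 3}   # hypothetical planned durations (days)
run = sample_durations(planned, seed=42)
for name, dur in run.items():
    # -, 0, + deviation labels as used in figure 1
    label = "+" if dur > planned[name] else "-" if dur < planned[name] else "0"
    print(f"{name}: {dur:.1f} days ({label})")
```

Repeating such runs many times, each with a fresh set of sampled durations, yields the fictitious project progress analysed below.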

## Simulation approach

Figure 1 graphically summarizes the nine simulation scenarios of the simulation approach. The nine scenarios are constructed to connect the critical path schedule, the project performance information measured during project progress, and the final project status. This connection and the details of figure 1 are explained in the following paragraphs.

Figure 1. The nine scenarios of the simulation approach

Critical path schedule: Uncertainty in the activity durations can be defined differently for critical versus noncritical activities (see “Scheduling projects: How to determine the critical path using activity slack calculations?”). Both classes of activities can have simulated durations that are shorter than, equal to, or longer than their planned duration. In figure 1, this is displayed as follows:
• -: activity duration shorter than planned
• 0: activity on time
• +: activity duration longer than planned
Project performance (during simulation): Each simulation run imitates fictitious project progress during which the project performance can be measured periodically using Earned Value Management techniques. In the simulation scenarios, the time performance is measured at periodic time intervals by the Schedule Performance Index SPI(t) (see “Earned Value Management: Reliable time performance measurement”). This metric serves as a periodic warning signal that gives a project-based view on the project’s performance at the current time. The average of all SPI(t) values between the start and the finish of the project is displayed in the body of figure 1. It indicates the average signal reported by the EVM metric and has the following meaning:
• = 1: average ‘on time’ signal
• > 1: average positive signal (ahead of schedule)
• < 1: average negative signal (schedule delay)
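The averaging step above can be sketched as follows; this is a hypothetical helper, not code from the article, and the tolerance used to decide “exactly 1” is an assumption needed for floating-point comparison.

```python
def average_signal(spi_t_values, tol=1e-9):
    """Summarise periodic SPI(t) observations into the average warning
    signal of figure 1: '> 1' ahead, '< 1' delay, '= 1' on time."""
    avg = sum(spi_t_values) / len(spi_t_values)
    if abs(avg - 1.0) < tol:
        return "= 1 (on time)"
    return "> 1 (ahead of schedule)" if avg > 1.0 else "< 1 (schedule delay)"

# Four periodic SPI(t) observations along fictitious progress:
print(average_signal([1.05, 1.10, 0.95, 1.02]))  # → > 1 (ahead of schedule)
```

Note that the average is only a summary: a run can dip below 1 in some periods (as in the third observation above) and still report an average ‘ahead of schedule’ signal.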
Final project status (after simulation): After each simulation run, the real project duration (RD) can differ from the planned project duration (PD), leading to a project underrun or overrun:
• RD = PD: project on time
• RD > PD: late project
• RD < PD: early project
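In each run, RD follows from a forward (critical-path) pass over the simulated durations. A minimal sketch, using a hypothetical three-activity network that is not taken from the article:

```python
def project_duration(durations, predecessors):
    """Forward-pass CPM: earliest finish per activity; RD is the maximum."""
    finish = {}
    def earliest_finish(a):
        if a not in finish:
            start = max((earliest_finish(p) for p in predecessors.get(a, [])),
                        default=0.0)
            finish[a] = start + durations[a]
        return finish[a]
    return max(earliest_finish(a) for a in durations)

durations = {"A": 4, "B": 6, "C": 3}   # one simulated run (days)
preds = {"B": ["A"], "C": ["A"]}       # A precedes both B and C
pd_, rd = 10, project_duration(durations, preds)
print(rd, "late" if rd > pd_ else "early" if rd < pd_ else "on time")  # → 10 on time
```

Because only the critical chain (here A–B) determines RD, a shorter noncritical activity such as C leaves the final status unchanged, which is exactly what drives the misleading and false scenarios discussed next.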
## Interpretation

The nine simulation scenarios of figure 1 can be classified into three categories, each with a different meaning and purpose. The interpretation of the nine scenarios can be summarized as follows:
• True scenarios: Scenarios 1 and 2 report an average project ‘ahead of schedule’ progress where the project finishes earlier than planned. Scenarios 8 and 9 report an average ‘project delay’ progress and the project finishes later than planned. Scenario 5 reports an ‘on-time’ progress where the project finishes exactly on time. Consequently, these five scenarios report on average a true situation.
• Misleading scenarios: Scenario 4 reports an average project ‘ahead of schedule’ progress but the project finishes exactly on time. Likewise, scenario 6 reports an average ‘project delay’ progress but the project finishes exactly on time. Consequently, these two scenarios report, on average, a schedule deviation that does not materialize in the final project duration, and hence, they are called misleading simulation scenarios.
• False scenarios: Scenario 3 reports an average ‘project delay’ progress but the opposite is true: the project finishes earlier than planned. Scenario 7 reports an average project ‘ahead of schedule’ progress but the opposite is true: the project finishes later than planned. Consequently, these two scenarios report a false performance signal, and hence, they are called false simulation scenarios.
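The three categories can be determined mechanically from a run's average SPI(t) signal and its final RD-versus-PD status; the classifier below is a sketch of that mapping, not code from the article (a run whose average signal is exactly 1 but whose finish is not on time falls outside the nine cells of figure 1 and is lumped with the false category here).

```python
def scenario_category(avg_spi_t, rd, pd):
    """Map a run's average SPI(t) signal and its final status onto the
    true / misleading / false categories of figure 1."""
    signal = (avg_spi_t > 1) - (avg_spi_t < 1)   # +1 ahead, 0 on time, -1 delay
    status = (pd > rd) - (pd < rd)               # +1 early, 0 on time, -1 late
    if signal == status:
        return "true"        # e.g. scenarios 1/2, 5, 8/9
    if status == 0:
        return "misleading"  # scenarios 4 and 6
    return "false"           # scenarios 3 and 7: signal opposite to outcome

# Average delay signal, yet the project finishes 2 days early (cf. scenario 3):
print(scenario_category(0.93, rd=38, pd=40))  # → false
```

Running many simulations and counting how often each category occurs indicates how much the periodic EVM warning signals of a given project can be trusted.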