The Staged Site Trap: Why Static Phasing Fails in Uncertain Environments
Large infrastructure projects—whether data centers, industrial parks, or residential developments—are increasingly built in stages to manage capital exposure. The conventional approach uses pre-determined milestones: Phase 1 complete, then trigger Phase 2. This static gating assumes that the future is knowable, which it rarely is. Market demand shifts, regulatory timelines slip, and construction costs fluctuate. Teams often commit to Phase 2 too early, locking in capital before confirming demand, or too late, losing momentum and market position. The result is either stranded assets or missed windows.
Experienced practitioners recognize that phasing is not a binary go/no-go decision but a real option—the right, but not the obligation, to proceed. The value of this option depends on how uncertainty resolves over time. A static trigger ignores the information arriving between stages. Bayesian updating provides a structured way to incorporate new data—pre-leasing rates, soil test results, utility connection timelines—into the decision to exercise the option. This article details how to implement that framework.
Why Static Triggers Underperform
Consider a staged industrial park. The master plan calls for Phase 1 (infrastructure and first building) and Phase 2 (additional buildings) triggered 18 months after Phase 1 start. This fixed timeline ignores demand signals. If pre-leasing is weak, proceeding wastes capital. If demand surges early, waiting loses tenants. A static trigger cannot adapt.
The Information Gap
Between phases, data accumulates: construction cost indices, interest rate changes, environmental review outcomes. Static gating ignores this flow. Bayesian updating treats each new datum as evidence to revise the probability that proceeding is optimal. This transforms phasing from a calendar-driven to an evidence-driven process.
Real Options Thinking
In financial options, the holder waits to exercise until conditions are favorable. Infrastructure phasing is analogous: the developer holds an option to build Phase 2, and should only exercise when the net present value of doing so is sufficiently positive given updated beliefs. The strike price is the capital required; the underlying asset is the project's value.
Bayesian Updating as a Decision Engine
Bayesian updating mathematically incorporates prior beliefs (based on market research, historical data) and new evidence (site-specific data) to compute posterior probabilities. For phasing, the prior is the initial estimate of success probability (e.g., 60% chance Phase 2 will be viable). As evidence arrives, the posterior updates. When the posterior crosses a threshold (e.g., 80%), the trigger fires.
Common Misconceptions
Some believe Bayesian methods require vast data or complex software. In practice, a spreadsheet with a few scenarios suffices for many projects. Others think the method eliminates judgment; in reality it formalizes judgment, making assumptions explicit and testable.
When to Use Bayesian Phasing
This approach is most valuable when: (a) uncertainty is high, (b) learning can occur between phases, (c) the cost of being wrong is large, and (d) the option can be deferred without losing the opportunity. It is less useful for routine expansions with predictable demand.
Example: Data Center Campus
A data center developer plans a three-phase campus. Phase 1 builds one hall. Bayesian updating tracks pre-leasing commitments, energy costs, and fiber availability. If after 12 months, pre-leasing hits 70% of Phase 2 capacity, the posterior probability of success rises, triggering an early start. If pre-leasing stalls below 40%, the trigger defers, saving millions.
Benefits Over Traditional Gating
Bayesian phasing reduces capital at risk by deferring irreversible commitments until evidence supports them. It increases flexibility, aligns spending with actual demand, and provides a defensible rationale for decisions to stakeholders.
Limitations
It requires discipline to collect and update data regularly. It also demands clear thresholds and a willingness to accept that the trigger may never fire. Teams must avoid anchoring on the prior and must be honest about new evidence.
Core Frameworks: How Bayesian Updating Transforms Phasing Decisions
At its heart, Bayesian phasing applies Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E). In plain terms, the probability that a phase will be viable (H) given new evidence (E) equals the likelihood of observing that evidence if the phase is viable, multiplied by the prior probability of viability, divided by the overall probability of the evidence. This mathematical structure forces explicit assumptions and quantifies how beliefs should shift.
For a staged site, the hypothesis H is "Phase 2 will achieve target return." The prior P(H) is based on initial feasibility studies, market analysis, and comparable projects. The evidence E could be pre-sales, construction cost trends, or permitting progress. The likelihood P(E|H) is the probability of seeing that evidence if H is true. For example, if Phase 2 is viable, you'd expect high pre-sales. The marginal probability P(E) normalizes the result.
Setting the Prior
The prior should be informed but not dogmatic. Use historical data from similar projects in the region, adjusted for unique project features. A common starting point is a beta distribution with parameters alpha and beta representing prior successes and failures. If 7 out of 10 comparable phases succeeded, set alpha=7, beta=3, giving a prior mean of 70%.
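A minimal sketch of encoding that prior in Python, assuming SciPy is available; the alpha=7, beta=3 counts simply restate the 7-of-10 reference class above:

```python
from scipy import stats

# Reference class: 7 of 10 comparable phases achieved their target return
alpha, beta_param = 7, 3
prior = stats.beta(alpha, beta_param)

print(f"Prior mean: {prior.mean():.2f}")            # 0.70
lo, hi = prior.interval(0.80)                       # central 80% credible interval
print(f"80% credible interval: ({lo:.2f}, {hi:.2f})")
```

The interval is worth reporting alongside the mean: a prior built on ten observations is still wide, which is a useful reminder when presenting the model to an investment committee.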
Modeling Evidence Likelihoods
For each type of evidence, define the likelihood ratio: how much more likely is this evidence when H is true versus when it is false? For pre-sales, if the observed pre-leasing level is reached in 60% of viable phases but only 20% of non-viable ones, the likelihood ratio is 0.60 / 0.20 = 3. This ratio directly updates the odds of H.
Updating in Practice
Use the odds form of Bayes: posterior odds = prior odds * likelihood ratio. Convert the probability to odds (p/(1-p)), multiply by the ratio, then convert back. For example, a prior of 70% gives odds of 2.33. Evidence with a ratio of 2 gives posterior odds of 4.67, corresponding to a probability of about 82%.
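A minimal sketch of this update in plain Python; the 70% prior and ratio of 2 mirror the example above:

```python
def update(prob: float, likelihood_ratio: float) -> float:
    """Odds-form Bayesian update: convert to odds, multiply, convert back."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

print(round(update(0.70, 2.0), 3))  # 0.824, i.e. roughly 82%
```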
Multiple Evidence Streams
Real projects have multiple evidence streams: pre-leasing, cost indices, regulatory milestones, and so on. Update sequentially: the posterior from one piece of evidence becomes the prior for the next. This chaining is valid, and the order of updates does not matter, as long as the streams are conditionally independent given the hypothesis. If streams are correlated (say, pre-leasing and local employment growth), naive multiplication overstates the evidence, so model the overlap explicitly or down-weight one of the streams.
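Assuming conditional independence, the ratios simply chain through the same `update` function; a minimal sketch in which the three likelihood ratios are illustrative assumptions:

```python
def update(prob: float, likelihood_ratio: float) -> float:
    """Odds-form Bayesian update: convert to odds, multiply, convert back."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

# Illustrative likelihood ratios for three conditionally independent streams
evidence = {
    "pre-leasing above benchmark": 2.7,
    "construction costs on budget": 1.4,
    "permits granted on schedule": 1.2,
}

posterior = 0.65  # prior probability that Phase 2 meets its target return
for stream, lr in evidence.items():
    posterior = update(posterior, lr)
    print(f"After '{stream}': {posterior:.2f}")
```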
Setting the Trigger Threshold
The threshold at which to proceed depends on risk tolerance and opportunity cost. A conservative developer might require 85% posterior probability; an aggressive one might proceed at 65%. The threshold should be set before data collection begins to avoid bias.
Example: Mixed-Use Development
A developer plans Phase 2 (residential towers) after Phase 1 (retail podium). The prior for Phase 2 viability is 65%, based on market studies. Evidence: pre-sales reach 50% of target; that happens in 80% of viable projects versus 30% of non-viable ones. Likelihood ratio = 0.80/0.30 = 2.67. Posterior odds = (0.65/0.35) * 2.67 = 4.96, giving a posterior probability of 83%. With a threshold of 80%, the trigger fires.
Decision Trees vs. Bayesian Updating
Decision trees model sequential decisions but often use point estimates. Bayesian updating adds probabilistic learning, making the tree dynamic. Combining both is powerful: the decision tree structures the choices, and Bayesian updating provides the probabilities at each chance node.
Common Pitfalls in Framework Application
Teams often use an overly optimistic prior, ignore base rates, or treat evidence as perfectly informative. Calibration sessions with domain experts help. Also, likelihood ratios should be based on data, not guesses. Sensitivity analysis shows how robust the decision is to prior and likelihood assumptions.
Execution Workflows: A Repeatable Process for Bayesian Phasing
Implementing Bayesian phasing requires a structured workflow that integrates with existing project management processes. The goal is to make the updating routine, not a one-off analysis. Below is a step-by-step process used by teams that have successfully applied this approach.
The workflow comprises six steps: (1) model setup, (2) evidence collection planning, (3) threshold setting, (4) collection and updating, (5) decision review, and (6) post-mortem and calibration. Each step has specific deliverables and roles.
Step 1: Set Up the Bayesian Model
At project initiation, define the hypothesis (e.g., "Phase 2 NPV > 0"), the prior distribution, and the evidence streams. Document assumptions in a model charter. Choose a simple spreadsheet or dedicated software. Assign a team member to maintain the model.
Step 2: Define Evidence Collection Plan
For each evidence stream, specify what data to collect, how often (monthly, quarterly), who is responsible, and how to handle missing data. Create a data dictionary with likelihood ratios derived from historical benchmarks or expert elicitation.
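One way to capture the data dictionary is a small entry per evidence stream; the sketch below uses a plain Python structure, and every stream name, cadence, owner, and likelihood ratio shown is an illustrative assumption:

```python
# Hypothetical data dictionary: one entry per evidence stream
data_dictionary = [
    {
        "stream": "pre_leasing_rate",
        "definition": "Signed LOIs as % of Phase 2 leasable area",
        "cadence": "monthly",
        "owner": "leasing manager",
        "likelihood_ratio": {"above 50%": 2.7, "30-50%": 1.0, "below 30%": 0.4},
        "missing_data_rule": "carry forward last value, flag in decision log",
    },
    {
        "stream": "construction_cost_index",
        "definition": "Regional cost index vs. feasibility-study baseline",
        "cadence": "quarterly",
        "owner": "cost engineer",
        "likelihood_ratio": {"within 5%": 1.3, "5-15% over": 0.8, ">15% over": 0.3},
        "missing_data_rule": "treat as neutral (ratio 1.0)",
    },
]
```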
Step 3: Establish Trigger Thresholds
Set the posterior probability threshold for proceeding, as well as a lower threshold for abandonment. Also define a "wait and see" zone. These thresholds should be approved by the investment committee before data collection begins.
Step 4: Execute Collection and Updating
On a regular cadence (e.g., quarterly), collect the evidence, update the model, and compute the new posterior. The update should take less than a day. Flag any evidence that significantly changes the posterior.
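A minimal sketch of a quarterly update run that applies the quarter's likelihood ratios and flags large swings; the 10-percentage-point flag threshold and the example ratios are assumptions:

```python
def update(prob: float, likelihood_ratio: float) -> float:
    """Odds-form Bayesian update: convert to odds, multiply, convert back."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

def quarterly_update(prior: float, observed_ratios: dict[str, float],
                     flag_threshold: float = 0.10) -> float:
    """Apply the quarter's likelihood ratios and flag large posterior moves."""
    posterior = prior
    for stream, lr in observed_ratios.items():
        posterior = update(posterior, lr)
    if abs(posterior - prior) >= flag_threshold:
        print(f"FLAG: posterior moved {prior:.2f} -> {posterior:.2f}; convene review")
    return posterior

# Illustrative quarter: strong pre-leasing, costs roughly on budget
new_posterior = quarterly_update(0.65, {"pre_leasing": 2.7, "cost_index": 1.3})
print(f"Posterior after this quarter: {new_posterior:.2f}")
```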
Step 5: Decision Review Meeting
When the posterior crosses a threshold, convene a decision review. The meeting reviews the evidence, the update, and any qualitative factors not captured in the model. The decision to proceed, defer, or abandon is made.
Step 6: Post-Mortem and Calibration
After each trigger event (or after the project), compare the predicted probabilities with actual outcomes. Use this to recalibrate the prior and likelihood ratios for future phases. This learning loop improves the model over time.
Tooling for the Workflow
A simple spreadsheet with a few tabs suffices: one for the prior, one for evidence input, one for the posterior calculation. For multi-stream updates, a short R or Python script keeps the calculation reproducible and auditable. For enterprise use, integrate with project management software to automate data feeds.
Example: Pharmaceutical R&D Campus
A pharma company stages a new R&D campus. Phase 1 includes lab buildings. Phase 2 (pilot plant) is triggered by Bayesian updating using: (a) occupancy rate of Phase 1 labs, (b) number of new drug candidates entering pipeline, (c) regulatory approval timelines. The model updates monthly. After 18 months, the posterior reached 88%, triggering Phase 2.
Integration with Earned Value Management
Bayesian phasing can complement earned value management (EVM). While EVM tracks cost and schedule performance, Bayesian updating tracks market and demand performance. Both inform the go/no-go decision. Use EVM data as one evidence stream in the Bayesian model.
Maintaining Governance
The workflow must have clear governance: who updates the model, who reviews, who decides. Avoid having the same person collect evidence and make the decision. Use a decision log to document each trigger review.
Tools, Stack, and Economics: Practical Implementation Realities
Adopting Bayesian phasing does not require expensive software. Most teams start with Excel or Google Sheets, then graduate to specialized tools as the number of evidence streams grows. The economic case is compelling: even one avoided mis-timed phase can save millions.
The tooling stack typically includes: a calculation engine (spreadsheet or script), a data collection system (project management software, surveys), and a reporting dashboard. Below we compare three common approaches.
Comparison of Tooling Approaches
| Approach | Pros | Cons | Best For |
|---|---|---|---|
| Spreadsheet (Excel/Sheets) | Low cost, easy to audit, widely understood | Prone to error, limited for many streams, version control issues | Small projects, initial adoption |
| R/Python Script | Flexible, reproducible, handles complex models | Requires programming skill, harder to share with non-technical stakeholders | Teams with data science support |
| Commercial Decision Platform | Integrated data feeds, dashboards, audit trails | Costly, vendor lock-in, may be overkill | Large programs, enterprise-wide use |
Economic Justification
The value of Bayesian phasing comes from reducing the probability of two errors: proceeding when you should not (type I) and not proceeding when you should (type II). Each error has a cost. For a $50 million phase, a type I error wastes sunk costs; a type II error loses potential returns. Bayesian updating tilts the balance toward better decisions.
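A minimal sketch of how the two error costs imply a break-even posterior threshold; the $50M capital figure comes from the paragraph above, while the $20M margin assumed for a foregone phase is illustrative:

```python
# Costs of the two errors (illustrative figures)
wasted_capital = 50_000_000    # type I: proceed and the phase is not viable
foregone_return = 20_000_000   # type II: defer a phase that would have succeeded (assumed)

# Proceed when the expected loss of proceeding is below the expected loss of deferring:
# (1 - p) * wasted_capital < p * foregone_return
break_even = wasted_capital / (wasted_capital + foregone_return)
print(f"Break-even posterior threshold: {break_even:.2f}")  # ~0.71 with these numbers
```

This is one defensible way to anchor the threshold discussed earlier: the larger the downside of a mis-timed commitment relative to the foregone upside, the higher the posterior required before the trigger fires.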
Cost of Implementation
Setting up the model takes 2-5 days of a senior analyst's time. Ongoing data collection and updating might take 1-2 days per quarter. Training the team on the framework may require a half-day workshop. These costs are trivial compared to the capital at stake.
Data Infrastructure Requirements
To make updating efficient, data should be collected systematically. Use a simple database or spreadsheet to store evidence over time. If possible, link to existing systems (CRM for pre-sales, ERP for costs) to automate data feeds. But even manual entry works.
Skill Requirements
The team needs someone comfortable with probability and spreadsheets. A project manager or analyst can learn the basics in a few hours. For complex models, consult a data scientist or use a decision analysis consultant for the initial setup.
Common Tooling Mistakes
One mistake is over-complicating the model. Start with 2-3 evidence streams. Another is using black-box software; stakeholders need to understand the logic. A third is failing to update the model when assumptions change (e.g., market downturn).
Example: Industrial Park Developer
A developer used a spreadsheet to track three evidence streams: pre-leasing inquiries, construction cost index, and local employment growth. The model updated quarterly. After two years, the posterior for Phase 2 reached 92%, triggering the phase. The developer saved an estimated $8 million by not proceeding earlier when the posterior was below 70%.
Growth Mechanics: Scaling Bayesian Phasing Across a Portfolio
Once a team masters Bayesian phasing on one project, the next step is scaling across a portfolio. The key insight is that evidence from early phases can inform later phases, and lessons from one project update priors for others. This creates a learning organization that improves phasing decisions over time.
Portfolio-level Bayesian phasing treats each phase in each project as a separate option, but with correlated uncertainties. For example, if market demand is a common factor, a strong signal in one project raises the posterior for all projects in that market. Bayesian hierarchical models capture this.
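A minimal sketch of the simpler shared-evidence case (not a full hierarchical model, which would also pool the priors): a common market signal updates every project in that market, while project-specific signals update only their own posterior. All names and ratios are illustrative assumptions.

```python
def update(prob: float, likelihood_ratio: float) -> float:
    """Odds-form Bayesian update: convert to odds, multiply, convert back."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

# Project-specific priors for Phase 2 viability (illustrative)
posteriors = {"campus_a": 0.65, "campus_b": 0.60, "campus_c": 0.70}

# Shared market evidence (e.g., a strong regional demand signal) hits every project
shared_market_lr = 1.8
posteriors = {name: update(p, shared_market_lr) for name, p in posteriors.items()}

# Project-specific evidence updates only its own project
posteriors["campus_a"] = update(posteriors["campus_a"], 2.2)  # strong local pre-leasing

for name, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.2f}")
```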
Building a Cross-Project Data Repository
Collect data from all projects: priors, evidence, decisions, and outcomes. Use this repository to recalibrate likelihood ratios and prior distributions. For instance, if the prior for Phase 2 viability was 70% on average, but actual viability was 50%, adjust the prior downward for future projects.
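With a beta prior, recalibration from the repository is a direct count update; the outcome counts below are assumptions for illustration:

```python
# Original reference class behind the prior: 7 successes, 3 failures
alpha, beta_param = 7, 3

# Outcomes recorded in the cross-project repository since then (assumed)
new_successes, new_failures = 2, 4

alpha += new_successes
beta_param += new_failures
print(f"Recalibrated prior mean: {alpha / (alpha + beta_param):.2f}")  # 9/16 = 0.56
```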
Using Early Signals from Phase 1
Phase 1 of a staged site often provides the richest evidence for Phase 2. Metrics like construction cost overruns, permitting delays, and early tenant satisfaction are predictive. Build these into the Phase 2 model. The more data from Phase 1, the more precise the Phase 2 posterior.
Portfolio Optimization
With multiple staged sites, you can optimize the order of phases across projects. If one project's evidence suggests a high probability of success, prioritize its Phase 2. Use Bayesian updating to rank opportunities by expected value, adjusting for risk.
Example: Multi-Campus Technology Company
A tech company develops three data center campuses simultaneously, each with two phases. They implement a shared Bayesian model in which campus-specific evidence (local energy costs, fiber availability) is combined with global evidence (cloud demand trends, interest rates). The model updates monthly. When campus A's posterior for Phase 2 reached 85%, they accelerated that phase while deferring campus B, where the posterior remained at 60%.
Learning Rate Quantification
Track how the accuracy of your predictions improves over time. For each phase that is triggered, record the predicted probability and whether the phase ultimately achieved its target return. Use the Brier score to measure calibration. Over 10 phases, a well-calibrated team should have a Brier score below 0.15.
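A minimal sketch of the Brier calculation; the prediction/outcome pairs are illustrative assumptions:

```python
# (predicted posterior at trigger time, actual outcome: 1 = hit target return, 0 = missed)
track_record = [(0.85, 1), (0.80, 1), (0.70, 0), (0.90, 1), (0.65, 1)]

brier = sum((p - outcome) ** 2 for p, outcome in track_record) / len(track_record)
print(f"Brier score: {brier:.3f}")  # 0.137 with these numbers; lower is better, 0 is perfect
```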
Organizational Challenges
Scaling requires cultural change. Teams used to static gating may resist probabilistic thinking. Senior executives may demand certainty. To overcome this, start with a pilot project, document successes, and gradually expand. Training and clear communication of the framework's logic are essential.
Automation and Integration
As the portfolio grows, automate data collection and updating. Use APIs to pull data from project management systems, CRM, and financial models. Build a dashboard that shows the posterior probability for each phase in the portfolio, with alerts when thresholds are crossed.
Risks, Pitfalls, and Mistakes: What Can Go Wrong and How to Mitigate
Bayesian phasing is not a silver bullet. Misapplied, it can lead to false confidence, paralysis, or missed opportunities. Understanding common failure modes is essential to using the framework effectively. Below are the most frequent pitfalls and strategies to avoid them.
The first category of risks relates to model specification: setting an inappropriate prior, using inaccurate likelihood ratios, or ignoring dependencies between evidence streams. The second category relates to behavioral biases: anchoring on the prior, confirmation bias in evidence selection, or groupthink in threshold setting. The third category relates to execution: failing to update regularly, ignoring model results, or overriding the model with gut feel.
Pitfall 1: Overconfident Prior
Teams often set a prior that is too optimistic because they are emotionally invested. Mitigation: use a reference class of comparable projects and systematically adjust for differences. Conduct a pre-mortem: imagine the phase fails and work backward to assign probabilities to failure modes.
Pitfall 2: Cherry-Picking Evidence
When evidence is ambiguous, there is a temptation to select only evidence that supports proceeding. Mitigation: pre-specify all evidence streams and their likelihood ratios before data collection begins. Use a third party to collect and report evidence objectively.
Pitfall 3: Ignoring Base Rates
If the base rate of success for similar phases is low, even strong evidence may not push the posterior above threshold. Mitigation: always start with the base rate as the prior and adjust only with evidence. Use the Bayesian framework to compute how much evidence is needed to overcome a low base rate.
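A minimal sketch of that computation: the combined likelihood ratio needed to lift a low base rate past the threshold is simply the ratio of threshold odds to prior odds. The 30% base rate and 80% threshold below are illustrative.

```python
def required_likelihood_ratio(base_rate: float, threshold: float) -> float:
    """Combined LR needed to lift the posterior from base_rate to threshold."""
    prior_odds = base_rate / (1 - base_rate)
    target_odds = threshold / (1 - threshold)
    return target_odds / prior_odds

print(round(required_likelihood_ratio(0.30, 0.80), 1))  # 9.3
```

A required ratio near 10 is a sobering number: it tells the team up front how strong the evidence must be before a low-base-rate phase can justify proceeding.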
Pitfall 4: Threshold Drift
As the posterior approaches the threshold, decision-makers may lower the threshold to justify proceeding. Mitigation: set the threshold in advance and commit to it. If the threshold needs to change, do it as a separate governance process with full documentation.
Pitfall 5: Model Overcomplexity
Adding too many evidence streams can make the model opaque and hard to maintain. Mitigation: start with 2-3 streams. Add more only if they materially change the posterior. Use sensitivity analysis to identify which streams matter most.
Pitfall 6: Failure to Update
Busy teams may skip updates, letting the model become stale. Mitigation: assign a specific person to own the update cadence. Tie updates to existing project review meetings. Use automated reminders.
Pitfall 7: Overriding the Model
When the model says "wait" but intuition says "go," teams often override. Mitigation: require a written rationale for any override. Track overrides and compare with outcomes to learn whether intuition adds value.
Mitigation Strategies Summary
To reduce risks: (a) use a reference class for priors, (b) pre-register evidence and thresholds, (c) update regularly, (d) conduct sensitivity analyses, (e) document decisions and outcomes for learning. Bayesian phasing is a tool to support judgment, not replace it.
Decision Checklist and Common Questions
Before implementing Bayesian phasing, run through this decision checklist to assess readiness. It covers model setup, data availability, governance, and team capability. Use it as a diagnostic tool to identify gaps that need addressing.
The checklist is organized into four sections: (A) Model Readiness, (B) Data Readiness, (C) Governance Readiness, and (D) Team Readiness. Each item is a yes/no question. Aim for at least 80% "yes" before proceeding. For each "no," develop an action plan.
Model Readiness Checklist
- Have we defined the hypothesis (e.g., Phase 2 NPV > 0) in clear, measurable terms?
- Have we set a prior probability based on a reference class of comparable phases?
- Have we identified at least two independent evidence streams that are observable before the trigger decision?
- Have we estimated likelihood ratios for each evidence stream, preferably using historical data?
- Have we defined the posterior probability threshold for proceeding and for abandoning?
Data Readiness Checklist
- Do we have a system to collect evidence data on a regular cadence (monthly or quarterly)?
- Is the data quality sufficient (complete, accurate, timely)?
- Do we have a process to handle missing data (e.g., imputation or sensitivity analysis)?
- Have we identified who is responsible for data collection?
Governance Readiness Checklist
- Is there a clear decision-making authority (investment committee, steering group) for trigger decisions?
- Have we documented the model, assumptions, and thresholds?
- Is there a process for updating the model when assumptions change?
- Have we established a post-mortem process to learn from decisions?
Team Readiness Checklist
- Does at least one team member understand Bayesian probability?
- Has the team received training on the framework and its limitations?
- Is there buy-in from senior leadership to use the model as a decision support tool?
- Are team members aware of cognitive biases and how to mitigate them?
Frequently Asked Questions
Q: How do we handle evidence that is qualitative, like "regulatory climate seems favorable"? A: Convert qualitative assessments into probabilities using structured elicitation. Ask experts for both conditionals: "If Phase 2 is viable, how likely is a favorable regulatory climate?" and "If it is not viable, how likely?" Average the responses and take the ratio of the two averages as the likelihood ratio.
Q: What if we have only one evidence stream? A: A single stream still provides value. Ensure it is a strong predictor. Be cautious about overconfidence; consider reporting a credible interval around the posterior rather than a single point estimate.
Q: How do we update if evidence arrives at irregular intervals? A: Update when evidence is available, but document the timing. Use the most recent evidence as the current state. For irregular intervals, a Bayesian filtering approach (like a Kalman filter) can handle time-varying updates.
Q: Can we use Bayesian phasing for phases that are already underway? A: Yes, but the option is partially exercised. You can still use the model to decide on further investment within the phase or to accelerate/decelerate. The framework adapts to partial commitment.
Synthesis and Next Actions: Moving from Theory to Practice
Bayesian phasing offers a rigorous, evidence-based method to time infrastructure triggers on staged sites. By treating each phase as a real option and using Bayesian updating to incorporate new information, developers reduce capital at risk and increase the probability of successful outcomes. The framework is not complex to implement—a spreadsheet, a few evidence streams, and a disciplined review process are enough to start.
The key takeaways are: (1) static gating ignores the learning that occurs between phases; (2) Bayesian updating provides a structured way to incorporate that learning; (3) the framework requires explicit assumptions, regular data collection, and governance; (4) it scales from a single project to a portfolio; and (5) common pitfalls can be mitigated with proper setup and awareness.
Next steps for a team ready to adopt this approach: First, select a pilot project with a clear Phase 2 trigger decision. Second, assemble a small team including a project manager, an analyst, and a decision-maker. Third, follow the workflow in Section 3 to set up the model and data collection. Fourth, run the model for one or two update cycles before making a decision. Fifth, after the decision, conduct a post-mortem to capture lessons learned.
For teams that have already implemented basic Bayesian phasing, consider advancing to portfolio-level analysis, automated data feeds, and hierarchical models that borrow strength across projects. The ultimate goal is to embed probabilistic thinking into the organization's culture, turning uncertainty from a threat into a source of strategic advantage.
Remember that the framework is a decision aid, not a decision maker. It works best when combined with domain expertise, market intelligence, and good judgment. Start simply, iterate, and learn. The cost of inaction—continuing to use static triggers in an uncertain world—is far greater than the effort to adopt a more dynamic approach.