PMI-ACP Exam Prep (Part 2 of 7): Value-Driven Delivery



This is part 2 of 7 posts on PMI-ACP Exam Prep (link to part 1). In this post, my focus will be on maximizing business value, including prioritization, incremental delivery, testing, and validation.

The reason projects are undertaken is to generate business value, whether that means producing a benefit or improving a service. Delivering business value is a core component of agile methods, and it is embedded in the agile values ("working software over comprehensive documentation") and principles ("deliver working software frequently" and "working software is the primary measure of progress").

There are 2 reasons we need to deliver value early:

  1. The longer a project runs, the longer the horizon becomes for risks that can reduce value, such as failure, decreased benefits, and erosion of opportunities.
  2. Stakeholder satisfaction plays a huge role in project success.

In short, value-driven delivery means making decisions that prioritize the value-adding activities and risk-reducing efforts for the project and then executing based on these priorities.

Minimize Waste

Wasteful activities reduce value. There are 7 wastes to keep in mind and eliminate whenever you come across them:

  1. Partially done work
  2. Extra processes
  3. Extra features
  4. Task switching
  5. Waiting
  6. Motion
  7. Defects

Assessing Value via Financial Metrics

For business projects, value is commonly estimated using financial metrics such as ROI, IRR, and NPV. For the purposes of the PMI-ACP exam, you don't need to know how to calculate these, but you do need to understand what they are, what they represent, and how they differ from one another.

Return on Investment (ROI)

Definition: The ratio of the benefits received from an investment to the money invested in it, expressed as a percentage.

ROI is used to evaluate the money gained or lost in relation to the money invested in a project. ROI is also often referred to as gain/loss, profit/loss, or net income/loss. A Project Manager can use the ROI of two or more projects to determine which is the better investment. For example, if Project A has an ROI of 27%, Project B an ROI of 25%, and Project C an ROI of 30%, Project C would be the best investment since it has the largest ROI.
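
A minimal sketch of that comparison (benefit and cost figures are hypothetical, chosen to reproduce the ROIs above):

```python
def roi(gain, cost):
    """Return on investment as a percentage: (gain - cost) / cost * 100."""
    return (gain - cost) / cost * 100

# Hypothetical figures: each project costs 100 and returns the benefit shown.
projects = {"A": roi(127, 100), "B": roi(125, 100), "C": roi(130, 100)}
best = max(projects, key=projects.get)
print(best, projects[best])  # C 30.0
```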

Net Present Value (NPV)

Definition: The present value of a revenue stream (income minus costs) over a series of time periods.

Generally, any project with a positive NPV is a good investment. NPV is used as a capital project financial metric to analyze the profitability of an investment at the time of review. It compares the present value of cash inflows with the present value of cash outflows, resulting in the NPV. A Project Manager can compare the NPVs of two or more projects to determine which is the more profitable investment. For example, if Project A has an NPV of $2.3M, Project B an NPV of $2M, and Project C an NPV of $2.1M, Project A has the greatest NPV and is the best investment for the organization.

The drawback of calculating NPVs is that you have to estimate what inflation and interest rates will be in the future – and those guesses may not turn out to be correct.
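
A sketch of the discounting behind NPV, with hypothetical cash flows and discount rate:

```python
def npv(rate, cash_flows):
    """Net present value: sum of cash flows discounted back to today.
    cash_flows[0] is the initial outlay (negative) at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $1M outlay, then $500k per year for 3 years,
# discounted at an assumed 10% rate.
result = npv(0.10, [-1_000_000, 500_000, 500_000, 500_000])
print(round(result, 2))  # positive, so a good investment
```

Note how the result hinges on the 10% rate, which is exactly the guess the paragraph above warns about.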

Internal Rate of Return (IRR)

Definition: The discount rate at which the present value of a project's cash inflows equals its initial investment (i.e. the rate that makes NPV zero).

IRR is used as a capital budgeting metric to determine whether an investment should be made. It compares the present value of the cash flows to the initial investment, resulting in an IRR value. As a Project Manager, if you need to compare two or more projects to determine which would be the better investment for your organization, you can use IRR. Given the IRR for three projects (Project A IRR = 25%, Project B IRR = 30%, and Project C IRR = 12%), you can determine that Project B is the best investment for the organization because it has the largest IRR.
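
Since IRR is the rate at which NPV hits zero, it can be found numerically. A sketch using simple bisection over hypothetical cash flows (assuming the outlay is negative and NPV decreases as the rate rises):

```python
def irr(cash_flows, low=0.0, high=1.0, tol=1e-6):
    """Internal rate of return via bisection: the rate where NPV = 0."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid) > 0:
            low = mid   # NPV still positive: the zero lies at a higher rate
        else:
            high = mid
    return (low + high) / 2

# Hypothetical: $1,000 outlay returning $600 in each of two years.
rate = irr([-1000, 600, 600])
print(f"{rate:.1%}")  # 13.1%
```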

Earned Value Management

One tool commonly used to track project spending is an S-curve. The advantage is that S-curves are simple to interpret and can quickly tell us whether our project is over or under budget. However, they don't provide information on the schedule. Gantt charts can help with figuring out whether a project is on schedule, but there is still a gap, and there needs to be a way to combine the two. Earned value management (EVM) was created to address this gap. This approach combines spending and schedule data to produce a comprehensive set of project metrics, including planned value (PV), earned value (EV), schedule variance (SV), cost variance (CV), schedule performance index (SPI), and cost performance index (CPI).

Pros and Cons of Using EVM for Agile Projects

Earned value compares actual project performance to planned performance at a particular point in time. The quality of the baseline plan is a critical success factor in using this approach.

Another caution regarding earned value is that it doesn't truly indicate whether the project is successfully delivering value. We may be on budget and on schedule but building a horrible product that the customer does not need or want to use. Simply looking at the budget and schedule doesn't paint the entire picture.

Having said this, one of the key benefits of earned value metrics is that they are a leading indicator. EVM looks forward, trying to predict completion dates and final costs.

Cost Variance (CV)

CV is the Earned Value minus the Actual Cost (CV = EV - AC) of a project. This formula measures the cost performance of a project: is it on budget or not? To calculate CV you need two pieces of information: the earned value and the actual cost of the project. If the CV is negative, the project is over budget, which is bad. If the CV is positive, the project is under budget, which is good. If the CV is zero, the project is exactly on budget. For example, Project A has an earned value of $75.1M and an actual cost of $75.3M: CV = $75.1M - $75.3M = -$0.2M, so this project is over budget. Project B has an earned value of $15M and an actual cost of $14.5M: CV = $15M - $14.5M = $0.5M, so this project is under budget.

Cost Performance Index (CPI)

CPI is Earned Value divided by Actual Cost (CPI = EV / AC). CPI measures the cost performance of a project: is the budget being spent as planned? To calculate CPI you need two pieces of information: the earned value and the actual cost of the project. There are three possible results: CPI = 1 is good and means funds are being used as planned; CPI > 1 is also good and means funds are being used more efficiently than planned; and CPI < 1 is bad and means funds are being overspent.

Schedule Variance (SV)

SV is the Earned Value minus the Planned Value (SV = EV - PV) of a project. This formula measures the schedule performance of a project: is it behind or ahead of schedule? To calculate SV you need two pieces of information: the earned value and the planned value of the project. If the SV is negative, the project is behind schedule, which is bad. If the SV is positive, the project is ahead of schedule, which is good. If the SV is zero, the project is exactly on schedule. For example, Project A has an earned value of $75.1M and a planned value of $74.2M: SV = $75.1M - $74.2M = $0.9M, so this project is ahead of schedule.

Schedule Performance Index (SPI)

SPI is Earned Value divided by Planned Value (SPI = EV / PV). This formula measures the schedule performance of a project: is it progressing as planned? To calculate SPI you need two pieces of information: the earned value and the planned value of the project. There are three possible results: SPI = 1 is good and shows the project is progressing as planned; SPI > 1 is also good and shows the project is progressing faster than planned; and SPI < 1 is bad and shows the project is progressing slower than planned.
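
The four formulas can be computed together from one project snapshot. A sketch reusing the figures from the CV and SV examples above:

```python
# One hypothetical project snapshot, in $M.
EV, AC, PV = 75.1, 75.3, 74.2  # earned value, actual cost, planned value

CV = EV - AC    # cost variance: negative = over budget
SV = EV - PV    # schedule variance: positive = ahead of schedule
CPI = EV / AC   # cost performance index: < 1 = overspending
SPI = EV / PV   # schedule performance index: > 1 = ahead of plan

# This project is slightly over budget but ahead of schedule.
print(f"CV={CV:+.1f}M  SV={SV:+.1f}M  CPI={CPI:.3f}  SPI={SPI:.3f}")
```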

Although for the purpose of the PMI-ACP exam you don’t need to do any calculations or create S-curves and EVMs, here is a link to a great video on how to go about it in a project:

Key Performance Indicators (KPIs)

  • Rate of progress: How many user stories is the team completing and getting accepted by the product owner per week or month? Agile teams usually estimate their work items in story points, so a simple piece of work might be sized at 1 story point and a large one at 8, and the project's rate of progress KPI might be expressed as 20 points per week.
  • Remaining work: How much work is left in the backlog? This is again in story point units, e.g. 400 story points remaining.
  • Likely completion date: Take how much work is left to do and divide it by the current rate of progress. E.g. if we are getting 20 story points done per week and have 500 story points in our backlog, our likely completion date is 500 / 20 = 25 weeks out, assuming no change in scope or breaks in the schedule (vacations, etc.).
  • Likely costs remaining: The burn rate multiplied by the number of weeks remaining. Things to include in the burn rate: salaries, licenses, training costs, equipment, etc.
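
The last two KPIs are simple arithmetic. A sketch using the example figures above (the burn rate is an assumed number):

```python
velocity = 20            # story points completed per week
backlog = 500            # story points remaining
burn_rate = 30_000       # assumed cost per week (salaries, licenses, etc.)

weeks_remaining = backlog / velocity          # likely completion horizon
cost_remaining = burn_rate * weeks_remaining  # likely costs remaining

print(weeks_remaining, cost_remaining)  # 25.0 750000.0
```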

Managing Risk

To maximize value, we must minimize risk. The primary tools agile teams use to manage risk are the risk-adjusted backlog and risk burndown charts. More on these in later posts, but in short, they allow us to seamlessly integrate and prioritize our risk response actions into our backlog of development work.

Prioritizing Value

Prioritization is a fundamental agile process. If you recall, in my previous post I mentioned that agile teams must welcome changing requirements, even late in the game. This sounds great, but how do you manage it? You need to keep the customer in the loop every step of the way. If, at the end of each iteration or sprint, you sit with the customer, review the work that has been done, look at the backlog, and reconfirm the prioritization of work and user stories, then new requirements shouldn't affect you much, because they will arrive at the right time. Ask questions such as "has anything changed?" and "do we still want to move forward with feature B next?"


The MoSCoW prioritization scheme, which was popularized by DSDM, derives its name from the first letter of the following labels:

  • Must have: Requirements and features that are fundamental to the system. Without them, the system will not work or will have no value.
  • Should have: Features that are important and needed for the system to work correctly. If they are missing, the workaround will likely be costly or cumbersome.
  • Could have: Features that are useful additions that add tangible value.
  • Would like to have, but not this time: Features that are duly noted but will most likely not make the cut.

Monopoly Money

In this approach, you give the stakeholders Monopoly money equal to the amount of the project budget and ask them to distribute the funds amongst the system features. This approach is useful for identifying the general priority of system requirements. This technique is most effective when it’s limited to prioritizing business features.

100-Point Method

Stakeholders are given 100 points each to distribute among product features, and they can distribute them any way they like: e.g. 30 points here and 15 points there, or all 100 points to a single feature if that is the only priority they have.

Dot Voting or Multi-Voting

Similar to the Monopoly Money method above, with a small difference: each stakeholder is given a predetermined number of dots, stars, or checkmarks to distribute among the risks or features that need to be prioritized. Say you do a brainstorming exercise and come up with 40 different risks or features. You would typically give each stakeholder 20% of the number of items identified as dots (40 × 20% = 8) and have the stakeholders distribute them among the features or risks.
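
A quick sketch of the dot allocation and the tally (feature names and votes are hypothetical):

```python
from collections import Counter

def dots_per_stakeholder(num_items, fraction=0.20):
    """Rule of thumb from the text: ~20% of the item count as dots each."""
    return round(num_items * fraction)

print(dots_per_stakeholder(40))  # 8

# Each stakeholder spreads their 8 dots across features; tally the result.
votes = Counter()
votes.update({"feature-A": 5, "feature-B": 3})   # stakeholder 1
votes.update({"feature-B": 6, "feature-C": 2})   # stakeholder 2
ranked = votes.most_common()
print(ranked)  # [('feature-B', 9), ('feature-A', 5), ('feature-C', 2)]
```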

Kano Analysis

Kano says that a product or service is about much more than just functionality. It is also about customers’ emotions. For example, all customers who buy a new car expect it to stop when they hit the brakes, but many will be delighted by its voice-activated parking-assist system.

The model encourages you to think about how your products relate to your customers’ needs while moving from a “more is always better” approach to product development to a “less is more” approach.

Constantly introducing new features to a product can be expensive and may just add to its complexity without boosting customer satisfaction. On the other hand, adding one particularly attractive feature could delight customers and increase sales without costing significantly more.

For more on how Kano works have a look at this article:

Delivering Incrementally

Incremental delivery is another way that agile methods optimize the delivery of value. It reduces the amount of rework by finding issues earlier, thereby contributing to the delivery of value on the project. An example of incremental delivery is having the plain-vanilla version in production while the team works on features; once each feature is done, it is sent to the test environment for verification before it is passed to production. You could bypass testing and send new features straight to production, but the cost of fixing bugs in production is usually higher than fixing them in the test environment.

Minimal Viable Product (MVP)

MVP refers to a package of functionality that is complete enough to be useful to the end user or the market, yet still small enough that it does not represent the entire project. Keep in mind that the functionalities included need to be complete; e.g. in the case of an MVP for a phone, making calls may be the only feature we decide to ship, but that feature needs to work properly and completely. MVP is more of a process than a deliverable: the goal is to receive feedback and add or remove features as you go along.

Cumulative Flow Diagrams (CFDs)

CFDs are valuable tools for tracking and forecasting the delivery of value, giving you insight into project issues, cycle times, and likely completion dates. Basically, they are stacked area graphs that depict the features that are in progress, remaining, and completed over time. The video below is a good tutorial on how to create and read CFDs:

Here is another great article that can help you understand CFDs:

What's important to know is that the bottleneck is the activity that lies below the widening band; the widening band is the feeding activity, not the problem activity. Once you know where the problem is, you can start addressing it by applying the 5 focusing steps of Goldratt's Theory of Constraints:

  1. Identify the constraint
  2. Exploit the constraint
  3. Subordinate all other processes to exploit the constraint
  4. If, after steps 2 and 3, more capacity is still needed to meet demand, elevate the constraint
  5. If the constraint has not moved, go back to step 1, but don’t let inertia (complacency) become the system’s constraint
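
The "widening band" reading above can be sketched numerically: track each band's width (items sitting in an activity) over time; the band that keeps growing is the feeder, and the bottleneck is the activity just below it. All counts here are made up:

```python
# Cumulative counts of items that have ENTERED each activity, per week.
stages = ["analysis", "development", "testing", "done"]
entered = {
    "analysis":    [10, 20, 30, 40],
    "development": [8, 16, 24, 32],
    "testing":     [6, 9, 12, 15],   # testing accepts work slowly
    "done":        [5, 8, 11, 14],
}

def band_widths(stage, next_stage):
    """Band width each week: items in this activity not yet passed downstream."""
    return [a - b for a, b in zip(entered[stage], entered[next_stage])]

growth = {}
for s, nxt in zip(stages, stages[1:]):
    w = band_widths(s, nxt)
    growth[s] = w[-1] - w[0]  # how much the band widened over the period

widening = max(growth, key=growth.get)                # the feeding activity
bottleneck = stages[stages.index(widening) + 1]       # the activity below it
print(f"widening band: {widening}; bottleneck: {bottleneck}")
```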

Agile Contracts

There are a few ways you can create contracts for your agile project.

DSDM Contract

This contract focuses on work being "fit for business purpose" and passing tests, rather than matching a specification. It is mostly used in the UK and some parts of Europe.

Money for Nothing and Change for Free

"Money for Nothing and Change for Free" is a pair of clauses applied to a standard fixed-price contract. This type of contract fosters collaboration between the supplier and the customer, and collaboration, rather than contract negotiation, is the key to success in agile (per the Agile Manifesto). Risk is shared between the two parties, making it a win-win arrangement.

The contract is set up as fixed price, with the "Money for Nothing and Change for Free" clauses added. Typical terms:

  1. The customer may cancel the project after any sprint by paying 20% of the remaining contract fee.
    1. The supplier gets 20% for the work not done.
    2. Once the customer has met their ROI cutoff, they do not need to continue the project and pay for unnecessary features.
  2. The customer can add new user stories or features during any sprint.
  3. The customer agrees to prioritize the backlog in each iteration.
  4. The customer must accept that some work will not be done if the clause is exercised.
  5. Both parties need to agree on how work items are estimated and sized.

When is the clause revoked?

  1. When the customer is not prioritizing the backlog appropriately
  2. When the customer does not operate within the Scrum rules

Money for Nothing allows the customer to terminate the project early when they feel there is no longer sufficient ROI in the backlog to warrant further iterations.
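
The cancellation arithmetic is straightforward. A sketch with hypothetical figures (the 20% is the clause from the terms above):

```python
def early_termination_cost(total_fee, paid_so_far, cancel_fraction=0.20):
    """Customer cancels after a sprint: pays for the work done
    plus 20% of the remaining contract fee."""
    remaining = total_fee - paid_so_far
    return paid_so_far + cancel_fraction * remaining

# $1M contract, cancelled after delivering half:
# the customer pays $500k + 20% of the remaining $500k.
print(early_termination_cost(1_000_000, 500_000))  # 600000.0
```

The customer saves $400k on features they no longer need, and the supplier is compensated for freed-up capacity, which is why both sides can agree to it.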

Graduated Fixed-Price Contract

With this kind of contract, both parties share some of the risk and reward associated with schedule variance. See table below:

Graduated Fixed-Price Contract
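
The usual example of this contract type ties the supplier's hourly rate to the finish date. The rates below are assumed figures for illustration only:

```python
# Assumed rate schedule: the supplier earns a higher rate for finishing early
# and a lower rate for finishing late, so both parties share schedule risk.
RATES = {"early": 110, "on_time": 100, "late": 90}  # $ per hour (hypothetical)

def payout(hours_billed, finish):
    return hours_billed * RATES[finish]

print(payout(1000, "early"), payout(1000, "on_time"), payout(1000, "late"))
# 110000 100000 90000
```

Note that finishing early also means fewer hours billed, so the customer pays less in total even at the higher rate, while the supplier earns more per hour: the shared reward.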

Fixed Price Work Packages

Fixed-price work packages mitigate the risk of under- or overestimating a chunk of work by reducing the scope and cost of each piece being estimated. This allows the customer to reprioritize the remaining work as costs evolve, and the supplier to update their estimates as new details emerge, removing the need for the supplier to build an excess contingency fund into the project cost.

Customized Contracts

Basically, you create a contract by combining the contract types discussed above to meet your needs. On agile projects, procurement has always been particularly challenging, since the details of the scope can't be fully defined early in the project. The success of the project ultimately depends on the level of collaboration between the seller and the customer.

Verifying and Validating Value


What one person describes is often very different from how the listener interprets it. This semantic gap is called the "gulf of evaluation." On manufacturing projects the work is visible, tangible, and familiar, so the gulf is small and quickly crossed. In contrast, on knowledge work projects the work is often invisible, intangible, and new; this leads to a bigger gulf, and misunderstandings become more common. This is why agile uses frequent and regular testing, checkpoints, and reviews to address problems before they get bigger. This process is known as frequent verification and validation.

Frequent Verification and Validation


Exploratory and Usability Testing

  • Exploratory Testing: The purpose of this type of testing is to uncover unexpected behaviors and discover issues. It is a complement to scripted testing, not a replacement. The goal is to find system boundaries and unanticipated behavior outside of the regular functions being tested.
  • Usability Testing: This type of testing attempts to answer the question, "How will an end user respond to the system under realistic conditions?" The goal is to diagnose how easy the system is to use, and it typically involves observing users as they interact with the system for the first time. Data may be gathered by videotaping, using eye-tracking tools, and conducting post-test interviews.

Great article on functional and nonfunctional requirements:

Continuous Integration

Continuous integration is a practice used by software developers to frequently incorporate new and changed code into the project's code repository. This helps minimize the integration problems that result from multiple people making incompatible changes to the same code base.

Pros and Cons of Continuous Integration

Pros:

  • The team receives an early warning of broken, conflicting, or incompatible code.
  • Integration problems are fixed as they occur, rather than as the release date approaches.
  • The team receives immediate feedback on the system-wide impacts of the code they are writing.
  • The practice ensures frequent unit testing of the code, alerting the team to issues sooner rather than later.
  • If a problem is found, the code can be reverted.

Cons:

  • Setup time can be long.
  • Cost can be high.
  • Time is required to build a suite of automated, comprehensive tests that run whenever code is checked in.

Test-Driven Development (TDD)

The philosophy behind TDD is that tests should be written before code is written. So with TDD, developers begin a cycle of writing code and running the test until the code passes all the tests. Then, if necessary, they clean up the design to make it easier to understand and maintain without changing the code’s behavior. This last process is called “refactoring.”
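
A minimal sketch of the cycle using a hypothetical slugify() function: the test is written first (it fails until the function exists), then just enough code is written to make it pass, after which the code can be refactored with the test acting as a safety net:

```python
# Step 1: write the test FIRST. Running it now fails: slugify doesn't exist yet.
def test_slugify():
    assert slugify("Value Driven Delivery") == "value-driven-delivery"

# Step 2: write just enough code to make the test pass.
def slugify(title):
    """Turn a post title into a URL slug (hypothetical example)."""
    return "-".join(title.lower().split())

# Step 3: refactor freely; rerun the test to confirm behavior is unchanged.
test_slugify()
print("test passed")
```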

Pros and Cons of TDD

Pros:

  • Makes coders think about functionality first, before implementation
  • Ensures we at least have some tests in place
  • Helps us catch defects early
  • Helps with writing systems in a small, modular, flexible, and extendable way

Cons:

  • If the same coder writes both the test and the code, a misinterpretation of the requirements can end up baked into both
  • Some types of functionality, such as user interfaces, are difficult or time-consuming to reliably test via unit tests
  • As the project grows and changes, the sustainment load for test scripts goes up
  • As people see a higher number of passing tests, they may get a false sense of security about the code quality

Acceptance Test-Driven Development (ATDD)

ATDD moves the testing focus from the code to the business requirements. As with TDD, the tests are created before work starts on the code, and these tests represent how the functionality is expected to behave at an acceptance test level. ATDD has 4 steps: Discuss, Distill, Develop, and Demo.


  • Discuss the requirements: During the planning meeting, we ask the product owner or the customer questions that are designed to gather acceptance criteria.
  • Distill tests in a framework-friendly format: We get the tests ready to be entered into our acceptance test tool, usually in a table format.
  • Develop the code and hook up the tests: During development, the tests are hooked up to the code and the acceptance tests are run. Initially the tests fail, as they can't find the code corresponding to the test items, but as coding is completed we should see successful test runs.
  • Demo: The team does exploratory testing using the automated acceptance testing scripts and demos the software.

When we combine the tasks of defining acceptance criteria and discussing requirements, we are forced to come to a concrete agreement about the exact behavior the software should exhibit. In a way, this approach enforces the discussion of the “definition of done” at a very granular level for each requirement.
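
A sketch of the "distill" and "develop" steps: acceptance criteria captured in the table format mentioned above, then run against the code once it exists. The discount rules here are invented for illustration:

```python
# Distill: acceptance criteria as a table of (inputs -> expected outcome).
acceptance_table = [
    # (order_total, is_member, expected_discount)
    (100.0, False, 0.0),
    (100.0, True, 10.0),
    (1000.0, True, 150.0),
]

# Develop: code written to make the distilled acceptance tests pass.
def discount(order_total, is_member):
    if not is_member:
        return 0.0
    rate = 0.15 if order_total >= 1000 else 0.10
    return round(order_total * rate, 2)

for total, member, expected in acceptance_table:
    assert discount(total, member) == expected, (total, member)
print("all acceptance tests pass")
```

Because the table is agreed with the product owner during the "discuss" step, each row doubles as a granular definition of done for that requirement.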


Published October 2nd, 2018 | Agile Project Management, Product Management

