If you search the internet for project success rates, you will likely find some amazingly bad statistics. In a recent search, I found articles saying that 50% of projects over $1M fail to come in on budget, 60% of projects fail to reach their goals, and 78% reported the business was not aligned with the project goals. Wow! What have we gotten ourselves into, Project Managers!?

Here’s my problem, though, with those statistics. Have you met a project manager who has a 50% over-budget rate on their projects? Or a project manager who allows their projects to stray from business goals 75% of the time? If you haven’t, then you’re not alone. In fact, I often see those statistics published by companies that promote project management tools or services as a means of selling the “latest and greatest”. While it’s true that not all projects succeed, I wonder if any of us would have a project management position if projects were run as poorly as some of these statistics would lead us to believe. It stands to reason that projects with skilled and capable project management professionals stand a much greater chance of achieving stakeholder satisfaction than those without, so maybe those statistics are weighted towards projects executed without strong PM leadership in place.

But what do we mean by project failure? Is there a difference between failing on a project and project failure? I say YES, there is, and that’s the purpose of this next installment in our “10 things I’ve learned as a Project Manager” series – “Bad news doesn’t get better with time”.

Certainly, in both cases – a project failing and failure on a project – it’s the failing part that’s the bad news. No one likes to fail or be perceived as having contributed to something that failed. In my experience, though, bad news is inevitable in any and every project you will be assigned to participate in or manage. That is because of the nature of projects…they are unique endeavors. In other words, we may have done similar things in the past, but in a project situation we are delivering something that is unique, or new. It may be a different team, technology, business function or set of users, but regardless, something is different. And whenever we are doing something new, the probability increases that we will fail at something along the way.

We also know that changes that come late in the project have the greatest impact on our delivery schedule and costs. I was made aware of a project run by Lidl, the European discount grocery chain that has been investing in some US markets, that was never implemented despite seven years of work and around €500 million (~$750 million) in spending. They realized they couldn’t proceed when they discovered that the system configuration for valuing inventory could not meet their requirements. That may not seem like a big deal at first, but upsetting their entire balance sheet to adapt to a different system was not something they were willing to do, nor was there a reasonable path to the needed customization. Now that’s some bad news!

Since at least some bad news is highly likely, if not inevitable, on any project, we need to deal with this topic in all our projects. Many of your project team members may mistakenly equate bad news with their own bad performance. This may not be true at all! As project managers we can help ensure that our team members have a healthy team environment in which they can comfortably bring their concerns to light and ask for whatever help they need to “get it right”, so to speak. Part of that is helping them understand that failure is not only inevitable, it’s something we can plan for.

What I’d like to provide are some thoughts on how to manage failures as a project team, and how we might help ensure there is an environment where “bad news” can be communicated early and often. I call this following the fail early – fail often – fail gracefully paradigm.

Fail early – In the Lidl example, my suspicion is that the project team didn’t suddenly realize the inventory valuation was going to be problematic only after all the time and money had been spent. I would expect that several team members from all segments of the project team – business, IT and even consulting – raised those concerns early in the requirements and design stages (assuming they followed a traditional lifecycle). I can imagine there was probably a good project manager who logged it as a “risk” or an “issue” and carried it in the project discussion for some time – presumably for seven years.

Another thing I can presume, having experienced similar situations in my projects, is that there wasn’t full recognition of how “bad” that news truly was! There may have been some who called this out as a really important consideration, even to the point that it became contentious, but what we often hear is someone in a leadership role rolling out the phrase “failure is not an option”, or “we must adapt our processes”. I must be careful here as I don’t want to sit in judgment over their project execution, but you have probably experienced similar situations in your own time as a project manager. However, I doubt that anyone, project team or management, would have continued down the road if they’d really understood how critical that component was!

In Agile projects we often follow the principle of “fail early”. By failing early we can test high-risk areas such as the one Lidl encountered. We do so by experimenting, building prototypes and mockups, and introducing the more complex problems into our solution discussions as early as possible. There would certainly have been a case for modeling the effects of revenue recognition, inventory valuation and other considerations in a system prototype before investing that much time and money. I want to be careful to note I have no inside information on that project beyond what’s publicly available, but this is an approach that might have been warranted.

Fail often – or in other words, plan to fail. This may run counter to our sense of accomplishment and value as solution providers, but the underlying principle is that projects are learning organizations. Project teams learn by failing – because failing means that at least we were delivering! The other thing it implies is that we have a plan to test early and often.

Testing is a key aspect of Agile as well. One of the practices a team can choose in Agile is “test-driven development”. This practice takes the requirements, user stories or use cases, and turns them into test cases. Those test cases are run against the system to prove that a change is needed to fulfill the requirement. The development team then builds the product in an iterative fashion until it passes those test cases. There are several advantages to this approach, including a reduction in overall code, the elimination of unnecessary changes, and minimized disruption to the existing code.
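To make that cycle concrete, here is a minimal sketch in Python using pytest-style tests. The requirement, function name and numbers are hypothetical, purely for illustration – the point is that the tests are written from the requirement first, fail against a stub, and then drive the implementation.

```python
# Minimal, hypothetical test-driven development sketch (pytest-style).
# The requirement, names and numbers are invented for illustration only.

# Step 1: turn the requirement ("orders of 100 or more units get a 10%
# discount") into test cases. Run them against a stub with no discount
# logic; they fail, which proves a change is needed.
def test_no_discount_below_threshold():
    assert order_total(10, 5.00) == 50.00

def test_discount_at_threshold():
    assert order_total(100, 5.00) == 450.00

# Step 2: build just enough code to make the tests pass, iterating until
# they are green, then refactor with the tests as a safety net.
def order_total(quantity: int, unit_price: float) -> float:
    subtotal = quantity * unit_price
    if quantity >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal
```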

To illustrate this with a story, I was managing an enterprise project where we were combining two SAP instances following a merger/acquisition between companies. Our focus was on porting the PM (Plant Maintenance) module functionality from one SAP instance into another and combining PM and MM (Materials Management) records as well. As we were going through the PM functions it became clear there was a particularly pesky module that was going to be very problematic. It was neither functioning nor performing at an acceptable level. We were very fortunate to have a seasoned and wise ABAP development lead who decided to take this problem on himself while using it as an opportunity to train a less experienced ABAP programmer.

When we first analyzed the module there were some very interesting findings. First, the module had well over 7,000 lines of code. Reading through the commented code we found notes such as “ticket#xxx – problem with component performance, so added section xx”. There were several of these notations, all of which added code to fix the performance problem. As the review proceeded, I could tell our lead was getting more agitated and more committed to fixing this component. The irony was inescapable: how can you fix performance problems by adding code?

When I asked the lead how he would solve the problem, he responded that he was going to test on a step-by-step basis until he could weed out where the improvement opportunities were, and then move on. After a couple of weeks he and his protégé had the component working very smoothly, and the functionality was accepted by the customer on the first review. The component was down to just over 500 lines, and he could probably have improved it even more if I’d given him more time on the schedule, but the sponsor-mandated schedule constraint dictated we move on. “Refactoring” on a very large scale, indeed!
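For readers who haven’t worked this way, the pattern is worth sketching. The real work described above was done in ABAP inside SAP, so the following is only a hypothetical Python illustration with invented names and data: first pin the current behavior with “characterization” tests, then simplify step by step while the tests stay green.

```python
# Hypothetical Python sketch of test-guarded refactoring; names and data
# are invented for illustration (the real module was ABAP inside SAP).

# "Before": years of ticket-by-ticket patches, each one adding code.
def item_status_legacy(records, key):
    found = None
    for rec in records:                    # original lookup
        if rec["key"] == key:
            found = rec
    if found and not found.get("active"):  # ticket patch: prefer active rows
        for rec in records:
            if rec["key"] == key and rec.get("active"):
                found = rec
    return found["status"] if found else None

# Step 1: pin the current behavior with characterization tests.
RECORDS = [{"key": "A", "status": "OK", "active": True},
           {"key": "B", "status": "HOLD", "active": False}]

def test_refactored_matches_legacy():
    for key in ("A", "B", "missing"):
        assert item_status(RECORDS, key) == item_status_legacy(RECORDS, key)

# Step 2: refactor in small steps, re-running the tests after each change,
# until the accumulated patches collapse into one straightforward version.
def item_status(records, key):
    active = {r["key"]: r["status"] for r in records if r.get("active")}
    any_match = {r["key"]: r["status"] for r in records}
    return active.get(key, any_match.get(key))
```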

Another consideration of failing often is that, in order to do the type and frequency of testing these practices require, the team must have access to an automated testing capability. In fact, one of the key topics I like to introduce early in an engagement with a client wishing to move to an Agile approach is how much they are willing to invest in an existing automated testing tool, or in acquiring one if the organization does not already have that capability. If they’re unwilling to make that investment, it puts the success of the Agile transformation at risk.

Fail gracefully – if we plan to fail, then we also need to plan to fail in a way that improves the project and the confidence of the team, and results in a higher-quality deliverable to our customer. You may be thinking, “I’ve heard the saying that failure to plan is a plan to fail”, or something similar – but do you really mean plan to fail?

Yes! If we plan to experiment by using prototypes, test-driven development, or other similar means, then we are planning to have failure points. The plan is to have those failure points exposed early enough in our delivery cycles that the “bad news” can be understood before too much of the project has passed by. That is what “bad news doesn’t get better with time” means: there are likely to be additional costs and schedule disruptions if we find that bad news in the later stages of the delivery cycle.

To illustrate this point, let us look at a traditional delivery cycle that follows the gated requirements > design > build > test > deploy approach. The intent of this delivery cycle (plan) is to make sure you have a clear understanding of the requirements before you start your design, which then gives you certainty as to your build. When you enter your test cycle, though, you find that for some deliverables either a) the requirements were not fully understood, or b) the business situation no longer matches the requirement that was built. In either case we handle these as “defects” and take them back through the same process steps, starting with requirements.

We take the resulting defect back to the developer who fixes the problem (presumably) after getting clarification on the requirements and design, then resubmits for further testing. This cycle continues until the deliverable is accepted with or without defects.

If you think of the handoffs that occur in these steps, they require a lot of management by the project manager. We are primarily concerned with integrations on our projects, and each handoff is another integration to be managed. It would be great if each team member did their corresponding handoff naturally, wouldn’t it?

This is one of the primary benefits of Agile-style project delivery cycles. It solves two main problems that we experience in traditional lifecycles.

  • In performing short iterations we lessen the overall impact of a “missed requirement” and give the team a means to rapidly adjust to changes – whether from a misunderstanding of what is needed (bad news) or a shift in that need (bad news).
  • By building a cohesive team dynamic, the team more naturally integrates its work. Collaboration across all team members then becomes the focus of the team leadership, rather than managing each individual integration.

All these items require a plan to do well. We set the expectation with our sponsor that we are going to address key questions through experimentation and trials that expose failures early in the delivery cycle. We set the expectation with our teams that they are accountable for raising any questions, concerns or things that just do not work, as quickly as they arise. This allows the team to become a self-correcting team that naturally adjusts as it goes.

The human dynamic – I would be remiss if I didn’t include the human side of the equation in this discussion. People tend to hold tightly to things they believe they can fix before they are recognized as “bad news” by others. This may be a mistake they’ve made or something they know they don’t have the understanding or skills to do well. Given that, we are going to have to use our soft skills to ensure our team members and other stakeholders are encouraged to communicate anything and everything of value, whether it’s “bad news” or not.

To create an environment that fosters the real-time sharing of all information that contributes to either success or failure, we start by influencing the key stakeholders for our projects. These are people like our team members’ managers, business managers, customers, and sponsors. In doing so we want to cover topics such as:

  • How and why the quality of the deliverables will benefit from early experimentation and failure-based testing.
  • How experimentation and failure-based testing will help them make earlier and more informed decisions about what is required. Most importantly, how customers can better understand their priorities in the value they are trying to achieve.
  • How ensuring a safe environment, in which failure is incorporated into the plan, helps the team’s dynamic and confidence so that productivity improves.

And as we interact in our project leadership roles with the team members, we can stress things like:

  • The key stakeholders are on board with the concept of fail early and fail often, which allows us to fail gracefully as a team.
  • The team grows as an organization as each member supplies any key information that’s needed for decision making whether it sounds bad or not.
  • We, as leadership, are available to support whatever needs to be resolved in order to ensure we, as a team, fail gracefully – learning from every experiment and failure to influence the quality of the deliverables.

I know this may sound a bit “corny”, so to speak, but I often tell the team that they will get 100% of the glory and 0% of the blame. That is to say – it’s OK! There is room for experimentation, room for failure, and, yes, room for mistakes. My desire is for each team member to feel safe and free to share whatever information is helpful to the team and the deliverables.
Hopefully you found some useful information, and maybe even a bit of encouragement, in this article. “Bad news” is not necessarily bad; it’s just another piece of the puzzle as we endeavor to help our teams grow and learn while we produce the results that yield highly satisfied customers.