Estimating seems easy, but consistently getting estimates right in software development is incredibly hard. In this article, we explain why estimates often fail, how to identify the right process for your context, and how to produce better estimates.
Let’s start with a brief definition of the terms used in this article: estimation, estimate, and measurement.
An estimate is an approximate value of the property being estimated, e.g., the time it takes to finish a development task. The process of finding an estimate is called estimation, and the act of performing it is called estimating.
Depending on what we are interested in, the value may be expressed on different scales (e.g., ordinal, interval, or ratio).
With a statistical approach, we would use sampling to estimate a value for an underlying population from sample statistics. In software development, we usually lack such structured data: each development task is unique to some degree, so we can’t simply build statistics based on similarity to previous tasks.
But information is still available through the experience of software development experts: the expert decomposes the development task until they find familiar components of known size, then aggregates the component values to estimate the whole task, a process called Fermi decomposition.
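A Fermi decomposition can be sketched in a few lines: break the task into components whose sizes the expert knows from past work, then aggregate. The component names and numbers below are purely illustrative assumptions, not data from a real project.

```python
# Hypothetical Fermi decomposition of a development task:
# decompose into components of known size, then aggregate.
# All component names and sizes are illustrative.

components = {
    "REST endpoint": 2.0,        # estimated days, from similar past work
    "database migration": 1.0,
    "frontend form": 3.0,
    "tests and review": 2.0,
}

# The task estimate is the aggregate of the component estimates.
task_estimate = sum(components.values())
print(f"Estimated task size: {task_estimate} days")
```

In practice, each component value would itself carry uncertainty, so teams often use ranges or three-point estimates per component instead of single numbers.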
Estimates are always an approximation of the “real” parameter, since we derive them from incomplete information. If we can extract an exact value from available data, we call it a measurement.
Estimating in software development is a challenge, mostly because unclear targets and expectations result in an inappropriate choice or inconsistent execution of a methodology. Without clear requirements, we cannot assess the quality of the estimates, which creates project risk through wrong projections.
Several root causes lead to low-quality estimates. The following list draws on my own experience and enumerates what we perceive as the most frequent and damaging sources of wrong estimates. Proposals for additional root causes are very welcome!
Low-quality requirements inevitably lead to wrong estimates. Development teams often see themselves urged to estimate based on half-baked specifications early in the project, with the assertion that this is just a first sizing attempt. But usually, these low-quality estimates survive, causing delays and friction. Developers counteract by adding buffers, further decreasing the accuracy of their estimates.
There is an inherent conflict between the need to financially size a project and the planning of project milestones and delivery targets. The effort required to finish a work package is rarely equal to the time needed to deliver it, due to its dependencies. The work in progress (WIP), the sum of started but unfinished tasks, is a mix of items actively worked on and items blocked by dependencies. Only in the rare case where there are no dependencies at all will effort equal duration.
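The gap between effort and duration can be made visible with Little’s Law, which relates average time in process to WIP and throughput. The numbers below are invented for illustration and are not from the article:

```python
# Illustrative application of Little's Law:
#   average time in process = WIP / throughput
# Blocked items inflate WIP, and thus duration, even though the
# hands-on effort per item stays the same. All numbers are made up.

wip = 12                 # started but unfinished items (active + blocked)
throughput = 3.0         # items finished per week

avg_duration_weeks = wip / throughput          # 4.0 weeks per item in process

hands_on_effort_weeks = 1.0                    # actual work needed per item
waiting_share = 1 - hands_on_effort_weeks / avg_duration_weeks

print(f"Average duration: {avg_duration_weeks} weeks")
print(f"Share of time spent waiting: {waiting_share:.0%}")
```

Under these assumed numbers, an item that needs one week of work spends four weeks in the system, so effort-based estimates alone would badly misjudge delivery dates.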
Having the right composition of team members is key to high-quality estimates. But this is not trivial to solve in a real-world project. In the early phases, the team might not yet be fully available. We might even have changing teams, as different organizational functions take over throughout the stages of the project.
Especially in the early phases of a project, the information needed for accurate estimates is missing or incomplete. Furthermore, the team does not yet understand the problem domain and the technical challenges well enough.
Nevertheless: no matter how thin the available information is, every proper estimate increases the project’s predictability. The genuine risk is that early estimates persist unrevised, even though reevaluating them would reduce variance and increase predictive quality.
There are several different proposals on how to address estimation inaccuracy. Let’s first try to get an overview of the most prominent methodologies.
NoEstimates advocates argue that estimates in projects are meaningless, since projects always deal with something new that can’t be adequately estimated. In their view, project teams estimate because it is common practice, not because they believe in the estimates. The proposed solution is item slicing: tickets are sliced until they are all roughly equally sized, and project forecasts are then based on the delivery rate. Similarly, the Kanban community promotes project forecasts based on flow metrics such as throughput and cycle time.
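A minimal sketch of such a flow-based forecast: project the remaining delivery time from historical throughput alone, with no per-item estimates. The throughput history and backlog size below are assumptions for illustration only.

```python
# Illustrative flow-based forecast in the spirit of NoEstimates/Kanban:
# forecast from delivery rate, not from per-item estimates.
# Historical throughput and backlog size are invented numbers.
import math

weekly_throughput = [4, 6, 5, 7, 4, 4]  # items finished per week, historically
remaining_items = 48

avg_throughput = sum(weekly_throughput) / len(weekly_throughput)
weeks_to_finish = math.ceil(remaining_items / avg_throughput)

print(f"Average throughput: {avg_throughput:.1f} items/week")
print(f"Forecast: ~{weeks_to_finish} weeks for {remaining_items} items")
```

A real Kanban forecast would typically use Monte Carlo simulation over the throughput samples rather than a plain average, which also yields confidence intervals instead of a single number.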
The relative estimation approach drops estimates on a ratio scale (e.g., required person-months) in favor of estimates on an interval scale (relative to a reference element, e.g., story points) or an ordinal scale (t-shirt sizes). This approach supports trend analysis: by measuring how long work packages actually took to resolve, trends can be calculated without exact value estimates.
Process-oriented approaches attempt to increase the estimation quality by defining clear conditions and setups to guarantee proper execution.
From our experience, most organizations do not follow a structured approach to analyzing their estimation requirements and often alternate between methodologies depending on the loudest voice, hip trends, and changing decision-makers.
So, is the problem already solved? We don’t think so. All of the outlined approaches to the estimation conundrum come with unique tradeoffs. For example, I’m a big supporter of Kanban, and using Kanban flow metrics as project predictors is an excellent data-driven approach. Nevertheless, it is not very helpful for initial project sizing, planning, or project calculations. Treating all projects as agile projects that won’t need any upfront cost calculations or delivery milestones is, in my opinion, an oversimplification. Even in the extreme case of purely internal research and development projects without external deadlines, you need some data to decide on your innovation pipeline (e.g., sizing vs. available capacity, expected return on investment, complexity). You will always have to tailor the chosen solution to your context-specific needs.
It’s important to realize that, no matter which approach you choose, you will always estimate. NoEstimates suggests that there won’t be any estimates, but this is grossly misleading: to slice work into equally sized tickets, as the methodology promotes, you need to assess each ticket’s size, which is an act of estimating.
You first need to evaluate and define the forecasting and prediction capabilities your project requires and, second, design a process that delivers the required data.
The usefulness of estimates depends heavily on building a clear strategy.
We first need to define:
And establish a practice to:
This is the first of an upcoming series of articles in which my colleagues from Datarocks and I will do deep dives to establish the theoretical background and define an estimation strategy that supports gathering actionable data and metrics as the basis for sound project decisions.
We want to invite you to share your own experience, challenges, or success stories. Please don’t hesitate to ask questions or propose topics of interest to you. We will cover as many as possible in this series.