
Why software teams that are on time and on budget probably suck

Many people in the software industry are bad at their jobs. Probably more than in other industries. Plenty of data shows the abysmal results of software projects, but you can also simply ask your nearest software professional for horror stories. I have more than ten years of experience “on the inside” of the software industry, and I have found bad software teams everywhere I look.

Bad managers have a disproportionately high impact on software because bad managers build bad teams, but the individuals on the teams themselves—even the good ones—are, more often than not, complicit in and even the cause of bad software. But what is most surprising is that bad teams are almost always rewarded for their incompetence. They “meet deadlines” so they are considered to have delivered “results.”

What’s wrong with delivering results

I put the word “results” in quotes for a reason. Results are often subjective. When people talk about “delivery” and “results,” they are very often talking only about meeting a commitment: nothing more than some poorly defined functional requirements being met by a given date.

A common story plays out again and again in software. It looks like this:


Hey, look at Team Red! They added that new user notification feature we wanted by the end of Q1, just as promised! This is going to be a huge revenue boost for the company. Everyone give them a hand! This is huge! We are all so thankful for the work of Team Red! They get an award!

Meanwhile, a significant percentage of notifications are not arriving on time, or at all. The whole cluster has failed a couple of times since the feature was released. The team restarts nodes in the cluster every couple of hours just so the new “feature” doesn’t take the system down. That’s simply part of being “on call” now.

Users are noticing because if the system goes down, they can’t even do their basic tasks, let alone receive notifications. So now the support team is taking a higher volume of calls, and is also feeling overworked.

Unfortunately, the organization is not equipped to make this kind of thing visible. Based on all the information the decision-makers are seeing, Team Red is the highest-achieving team in the company. It was a big bet for the notification feature to be done in Q1! And they built it using lots of fancy technologies that are super-trendy on the Internet and at all the conferences.

So now the decision-makers have a serious business opportunity in mind. They decide to put Team Red onto this high-profile project starting right away. They will transition that great user notification feature over to Team Blue.

Sure, Team Blue is overworked, but what team in the company isn’t? And the user notification system is in “maintenance mode,” right? The only thing left now is some stabilization. Team Red’s manager just said so in her presentation of Team Red’s Q1 results.

Team Blue starts digging into the application code. What they find isn’t pretty. On top of the operational nightmare, the code is a mess. Any “stabilization” is going to require wide, cross-cutting changes because, as you might have guessed, the code is full of spaghetti: intertwined layers, unclear dependencies, high coupling, and low cohesion. No wonder it doesn’t run well in production!

But move along, nothing to see here. Team Red and Team Blue’s managers agree this is best for the company, telling the teams, “That’s why we’re Agile. We’ll put some items on the backlog.”


Unprofessional commitments

I have worked on good software teams in my career. I’ve also worked on bad ones. It is obvious to me whenever I am on a bad team, because bad teams make commitments mostly from gut feel and almost never from evidence.

Sure, it is often the case that the organization is coercive and dysfunctional, so the team feels that the deadlines are simply handed down from on high and that they have to go along with them. But this is the definition of unprofessional behavior. The team knows that meeting these arbitrary deadlines will force them to deliver bad software; bad software is all that can be built in that amount of time. Yet, more often than not, the lingering thought of end-of-year reviews overrides professional behavior.

A bad review is one thing, but incompetence is another. It is incompetent to agree to a deadline without professionally vetting both the project and its timeline.

Professional software developers…do not make promises that they can’t keep, and they don’t make commitments that they aren’t sure they can meet.
~ Robert C. Martin

Determining deadlines in a professional way

A professional software team (including the “coders”) earns its commitments. Such a team:

  • Elicits the functional and non-functional requirements the project needs before making a commitment, or even an estimate. This is true even, or especially, in Agile methods, where that process is constant and ongoing.
  • Maintains high communication with the project’s stakeholders to refine and filter those requirements, so that the requirements selected at the end of each iteration are the right (valid) requirements and contain the right (correct) information.
  • Identifies, after negotiating with the stakeholders, what is out of scope just as much as what is in scope.
  • Creates an estimated budget and timeline for delivering on those requirements. The estimate is evidence-based. It is not a SWAG (see the sketch after this list).
  • Works with the organization to source the right people and resources to give the project a real chance at success.
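To make “evidence-based” concrete, here is a minimal sketch of one common approach: a Monte Carlo forecast driven by the team’s own recorded throughput. Everything in it, from the numbers to the names, is hypothetical; the point is that the forecast comes from data the team actually collected, not from gut feel.

```python
import random

# Historical throughput: completed work items per week, taken from the
# team's own tracking data (numbers invented for illustration).
weekly_throughput_history = [4, 6, 3, 5, 7, 4, 5, 6, 2, 5]

BACKLOG_SIZE = 60     # remaining work items in the negotiated scope
SIMULATIONS = 10_000  # number of simulated project runs

def weeks_to_finish(history, backlog):
    """Simulate one project run by resampling historical weekly throughput."""
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= random.choice(history)  # one simulated week of work
        weeks += 1
    return weeks

runs = sorted(weeks_to_finish(weekly_throughput_history, BACKLOG_SIZE)
              for _ in range(SIMULATIONS))

# Report forecast percentiles instead of a single gut-feel date.
for pct in (50, 85, 95):
    weeks = runs[int(len(runs) * pct / 100) - 1]
    print(f"{pct}% of simulated runs finished within {weeks} weeks")
```

A team that forecasts this way can tell its stakeholders “we have an 85% chance of finishing within N weeks” and show exactly where that number came from.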

And finally, most importantly: once the stakeholders all agree, a professional software team delivers those requirements. All of them! Including non-functional requirements like maintainability, testability, security, and auditability, not just the functional requirements!

And this is verified through objective means, near the end of the project but before the deadline.
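To show what “objective means” can look like, here is a minimal sketch of verifying one SMART quality attribute: a hypothetical delivery-latency target for the notification feature from the story above. The threshold, sample data, and function are invented for illustration; real measurements would come from production telemetry or a load test.

```python
import statistics

# Hypothetical SMART quality attribute agreed with stakeholders up front:
# "95% of notifications are delivered within 2 seconds of the triggering event."
P95_TARGET_SECONDS = 2.0

def meets_latency_target(latencies_seconds):
    """Return True if the measured p95 latency meets the agreed target."""
    p95 = statistics.quantiles(latencies_seconds, n=100)[94]  # 95th percentile
    print(f"measured p95 = {p95:.2f}s (target <= {P95_TARGET_SECONDS}s)")
    return p95 <= P95_TARGET_SECONDS

# In a real verification these numbers would come from telemetry or a load
# test; this sample is invented for illustration.
sample_latencies = [0.4, 0.9, 1.1, 0.7, 1.8, 0.6, 1.9, 0.8, 1.2, 0.5, 1.0, 0.9]
assert meets_latency_target(sample_latencies), "quality attribute not met; do not ship"
```

A check like this only works if the quality attribute was written down as a measurable target in the first place, which is exactly the point.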

Unprofessional commitments are unethical

It isn’t just a mistake to drive software teams blindly toward deadlines while ignoring the real needs of users; it is also unethical. Item 3.4 of the ACM Code of Ethics and Professional Conduct specifically addresses this scenario:

3.4 Ensure that users and those who will be affected by a system have their needs clearly articulated during the assessment and design of requirements; later the system must be validated to meet requirements.

Current system users, potential users and other persons whose lives may be affected by a system must have their needs assessed and incorporated in the statement of requirements. System validation should ensure compliance with those requirements.

It can be easy to read over the lines “later the system must be validated to meet requirements” and “System validation should ensure compliance with those requirements,” but they are there. So when meeting time and budget takes focus to such a degree, it is a matter of ethics. In my view, it is close to both stealing and lying in its actual impact on the users: stealing, because it takes more time and money from the users (and, in the long run, from the company as well); lying, because “meeting the deadline” strongly implies that what was delivered is what should have been delivered. You may think I am stating this too strongly, but consider that impact and I think you will agree.

Measuring delivery against commitments is misguided

Time and budget are poor measures of software, yet, in the wild, they are often the primary measures of software project success. Our industry must change this. We should subject teams to greater scrutiny than their ability to meet a date and a budget, because a team that merely hits both may not have the public good, the end users, or even the company in mind.

We need to ask some real questions about teams that are on time and on budget:

  • What was the quality of their commitment?
  • Was the proposed timeline based on real evidence?
  • Was the thing that was delivered actually what was desired? Is it any good? How do we know?
  • Was anything measured to verify that all requirements were delivered? What was actually measured? What data or evidence do we have that the team met the project’s requirements? Even the quality attributes?
  • Wait a minute! Did the team even define quality attributes in the first place?
  • Were the quality attributes SMART (specific, measurable, achievable, relevant, time-bound)? Something we can objectively measure and prove?


In other words: who cares if you’re on time and on budget when what you delivered is probably detrimental to your company, your users, and society in general? Would you rather get software that sorta kinda works on the day you picked randomly three months ago, or do you want to know that a team of professionals put in the effort and care to make sure it was the right thing to build, it really works, and the team can prove it?

I know you’re thinking right now that all of what I’ve mentioned should be implicit in the definition of “on time, on budget.” It should be. But the fact is that it usually isn’t. In any project, whatever is specified, and then measured and verified, is what ends up being treated as important.

Does your software team measure software project success? How? What metrics does it use? Why those metrics? Does the team objectively measure and record them? Can you audit the results later? Can you repeat them? These might be the most important questions to ask about any software project, in my opinion. They should be asked in the first project meeting and continually after.
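As one illustration, a team can make its measurements auditable and repeatable simply by recording them in a versioned, machine-readable file. The schema, file name, and values below are hypothetical; the practice, not the format, is what matters.

```python
import datetime
import json
import pathlib

# Hypothetical record of a project's agreed metrics and measured results.
# Committing a file like this to version control lets anyone audit the
# verification later and repeat the measurements against the same targets.
record = {
    "project": "user-notifications",
    "measured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "metrics": [
        {"name": "p95_delivery_latency_seconds", "target": 2.0, "measured": 1.84, "met": True},
        {"name": "notification_loss_rate", "target": 0.001, "measured": 0.0004, "met": True},
    ],
}

report_path = pathlib.Path("quality-reports") / "q1-verification.json"
report_path.parent.mkdir(exist_ok=True)
report_path.write_text(json.dumps(record, indent=2))
print(f"wrote audit record to {report_path}")
```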

Maybe if our industry took steps toward a professional way of building software, we would finally reduce the software project failure rate. Here’s to a future where that is a widespread reality.
