My good friend Sam points out James Shore's comment on the analogy of software development as building construction (and architecture, I guess), and David Ing commented on it as well. I can only say I wholeheartedly agree with them: it is not just a poor analogy; it is a frighteningly bad analogy that does more damage than good to the profession. What's even scarier is that organizations such as the Worldwide Institute of Software Architects - WWISA - which really should know better, parade that analogy as a core principle of the institution. Disclaimer: I am not a member of the WWISA.
David and James do a pretty good job of summarizing what I think is wrong with the analogy, so I'd just like to point out a few extra things:
- Estimation in physical construction projects covers far more things than just time, which invariably seems to be the only thing we tend to estimate for software projects besides people (I particularly dislike the notion of talking about "resources" on projects). And many of the things that are estimated are physical in nature, where it's far easier to see if you went wrong. For example, you might estimate that you need 200 pounds of cement to build with; at the end it's easy to see whether you missed, and by how much, just by checking either how much extra cement was needed or how much was left over. Things rarely work like that in software; there are way too many intangible variables. Metrics help, but most metrics themselves are, again, fairly abstract and intangible, which in turn means they are hard to understand and easy to treat as statistics: you can make them appear to say whatever you want.
- Education is a key aspect as well. People are used to understanding, up to a point, what construction work is like. Sure, they don't understand all the intricate details, but most people do have a broad idea of what it takes to build something significant. So, as David says, people are not so reluctant to accept that building something is costly, and that the bigger it is, the higher the cost and the more time the initial architecture, prototyping, and planning will take.
People rarely think like this about software. In fact, most people have no idea what developing software is like, even people who are supposed to know. Now, I'm not saying everyone should be an expert. What I mean is that we should try to educate our customers, particularly when we are working on custom software solutions, about the complexities involved and, perhaps more importantly, about how cost and ROI work on software projects (and in particular on successful software projects).
- Cost of rollback: Customer involvement is key for most software projects, and agile teams seem to approach this topic, in my mind at least, in the most straightforward manner: by making customers first-class citizens in the software development game. Now, one key difference here, I believe, is that in construction the cost of rollback - that is, of changing your mind about something already done - is far more explicit: if you didn't like where the wall was erected, it's pretty obvious you'll have to tear it down and rebuild it, and it's fairly apparent that's going to cost, possibly quite a bit. With software, the cost of such changes is not always obvious, because while it should generally be easy to accommodate certain changes, there might be "hidden" costs to them. For example, a change might require a lot more testing than you had planned for, even if the change itself can be handled by the development team in stride.
The other part of this is that changes of this nature are far, far more frequent than in the construction profession once building [development] has actually started. Sure, people might jump in here and say: See! That's why you need lots of design up front, so that you can change it cheaply, just like changing the blueprint! Bullshit. One of the facts that, for some reason, most people seem to completely miss in software is that you can't really spec out an application just by writing documents and drawing diagrams and have it be coherent and flawless. The initial spec is always wrong, always incomplete. This observation, in my experience, goes both ways: from the developer's view, there are many facts about your problem domain you don't think about (or simply overlook) until you're actually writing code. From the customer's point of view, they really, really rarely understand their problem domain as well as they think they do, and there are a lot of business users out there who simply are not good at looking at the broad picture of the project, and thus fail to think about the consequences of choosing a certain option in one part of the application relative to the rest of it.
Hence, the power of starting code as soon as possible. Hence the power of being able to adapt to change.
Let me tell you a couple of stories about why I believe so strongly in the second point, about education. Both are true, btw :)
- A few years back, a manager where I worked had a team working on an ASP.NET web application for one of our customers. As part of that, the customer wanted a webform where email addresses could be entered with an autocomplete feature based on people's names and addresses as registered in the company's Active Directory, which had a few thousand users. The developers estimated it would take about X hours to develop something like that and test that it worked correctly. I don't remember the exact figure, but it certainly was X>>>1. The manager's answer? "It can't possibly take that long, this is really, really easy! After all, Outlook already does it, all you have to do is press a button!"
- On one project for customer X, it was decided that an external company would be brought in to do QA. Fine by us; we had no problem with that. About a month later, after we completed an iteration, some bugs were found (certainly not very many) and we had a nice little discussion because the customer was pissed off that bugs appeared in the first place: according to them, they were expecting perfect software from us. This was nonsense, since we had explicitly agreed per contract that we would not charge them for QA, as they were bringing in the external company to do it (and hence we would not assign people to that role). Now, of course, as developers we should strive to produce as much quality as we possibly can, and QA is a very important part of that. The problem here was, again, education: many customers (and many software development shops, for that matter) don't seem to distinguish QA (an integral, ongoing part of any software project) from final acceptance testing, which then leads to confusion and unmet expectations.
Then again, this is just my opinion, I may be wrong :)