Lucky you, you have to manage some computer programmers, as they develop a piece of software. If you are already a programmer, you probably don't know much about how to do this, but you're unlikely to listen to my opinion, so you're reading this just to scoff at how wrong I am. But, if you have not worked as a programmer, perhaps you would be interested to hear a programmer's take on how to do this well.
Before we talk about estimating how long it takes to write software, let's consider chopping a pile of logs. In particular, let's say you ask someone how long it will take them to chop a pile of logs. If you give them a few minutes to come up with the answer, they can give you a reasonably accurate response. The process would be something like this: look at the pile and estimate how many logs it contains, chop a log or two while timing how long each one takes, then multiply the two numbers together.
This doesn't mean their estimate will be spot on. They may get tired, they may get better with practice, and a few logs may be really knotty and hard to chop. But their estimate will be plausibly close to the real time, perhaps within +/- 50%. If you attempt the same with software, it is not unusual to find that the time required exceeds the estimate by so much that you have to cut features en masse to get the software out at all. In other words, you run out of time and never do complete the originally intended task. Why is this?
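The wood-chopping arithmetic is simple enough to write down; the numbers below are purely illustrative.

```python
# Toy wood-chopping estimate (all numbers illustrative).
def chop_estimate(num_logs, minutes_per_log, margin=0.5):
    """Return (low, expected, high) total minutes, with a +/- margin."""
    expected = num_logs * minutes_per_log
    return (expected * (1 - margin), expected, expected * (1 + margin))

# 200 logs, timed at about 3 minutes each:
chop_estimate(200, 3)   # -> (300.0, 600, 900.0)
```

The contrast with software is that no such function can be written for it: the size of the "pile" is not known until the work is nearly done.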
It is not just that writing software is harder than chopping wood; in many ways chopping wood is harder. It is not just that it is a mental task instead of a physical one; estimating how long it would take to edit a pile of documents would give results more like chopping wood than like estimating software development.
The reason that software development time estimates are worse than useless (worse, because they may be believed) is that by the time you know enough to give a reasonably good estimate, you are nearly done with the project. A programmer does not, after all, do the actual work; the computer does the work. Not even the typing is an important part of the work; perhaps 5% or less of the time required to develop a piece of software is the typing of it. Nor is it the converting of a plain English description of the task into a computer language; modern programming languages are not 1's and 0's, and are not really that far off from a very stylized form of English, and in any event a good IDE can often do a lot of the syntax for you. By far the majority of the time required is the time to figure out what needs doing. Note that this is not the same as figuring out the syntax; that is usually only 10% or so of the work. Rather, what takes most of the time in software development is figuring out, precisely, what we actually need to program the computer to do.
It is not at all unusual in software for something that seemed to be a trivial task to turn out to unexpectedly require a major rethinking of the system architecture. So the task of predicting how long software will take to develop is akin to estimating how long it will take to chop a pile of wood without being able to look at the pile first to see how big it is. You can come up with a number, but it doesn't really reflect anything but office politics.
As a manager of a software project, you will be required to give time estimates to your bosses; this is an inevitable stupidity, and there is likely nothing to be gained by explaining to them that they are asking for something that is intrinsically impossible. Managers are so deeply prejudiced against the idea that their job requires predicting something which is intrinsically unpredictable, that they will refuse to believe this regardless of how much evidence is presented. What you can do, however, is avoid spending extra time trying to make more accurate time estimates. Detailed planning of what's required will not result in significantly more accurate estimates, because once you actually start programming, your detailed plans will turn out not to have taken important factors into account, and you will have spent a lot of time planning how to do something which is not what you will actually end up doing. So make an estimate you think you can convince your bosses to live with, and get on with the programming.
What can you do instead? Well, if you cannot look at the task ahead and estimate the time required, you can do the reverse: estimate the time available, and then do as much of the task as you can in that time. Make sure that, at all times, you are working on the most important features, in case they turn out to be unexpectedly complicated and eat up all the remaining time. As you get past the halfway point of your time budget, lean more and more towards the lowest-complexity work (e.g. tweaking the CSS to improve the final appearance, rather than changing the database structure to improve load times), because that is least likely to "blow up" on you and break other functionality. Some time well before the halfway point of the allotted time, make sure you have a working version ready every day, and that you have practiced how to fall back to that version, in case the current improvement turns out not to be as simple as expected.
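As a sketch, this budget-first approach amounts to walking a priority-ordered backlog until the time runs out; the feature names and costs below are hypothetical.

```python
# Fill a fixed time budget from a backlog ordered by importance.
# Work stops at the first feature that no longer fits; everything
# after that point gets cut. (Names and costs are hypothetical.)
def plan(features, budget):
    """features: list of (name, estimated_days), most important first."""
    done, spent = [], 0
    for name, cost in features:
        if spent + cost > budget:
            break                      # deadline wins: cut the rest
        done.append(name)
        spent += cost
    return done

backlog = [("login", 5), ("search", 8), ("export", 4), ("themes", 3)]
plan(backlog, budget=15)   # -> ["login", "search"]
```

In reality the costs are discovered rather than known in advance, which is exactly why the most important items must come first.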
The worst mistake here, which is not at all uncommon, is to make "half a bridge": a bridge which is six lanes wide in both directions, but that only goes halfway across the river. That is, to spend a great deal of time making high-quality infrastructure to which, someday, the actual UI or API might get attached to provide some functionality for the end user, but that's not happening for this release because time is up. The fact that such excellent, and unused, underlying architecture has been made (often at great expense in terms of time, effort, and stress) will be unappreciated, for the very good reason that it is not at all useful, because you were never able to actually hook it up to the user interface. The same goes for excellent design and UI work for a feature whose back end there was never time to build. Perhaps, some day, additional resources will be allocated to help finish the second half of that bridge, but perhaps not, and by that time enough may have changed that the code done previously will need to be rewritten anyway.
You can't predict, but you can prioritize. Focus on prioritizing, and spend no more time on predicting than is necessary to satisfy your boss(es). When the deadline approaches, you will have to cut features, so make sure you will be cutting ones less important than what you've already finished.
In some ways this is just a refinement of the previous section, but it's worth spelling out in detail. There is a mini-cycle in software development: try, fail, learn. This cycle should be made as small as possible. That means that making pieces which can be used (preferably by someone representative of the end user population) is by far the best route. It is also the one which most developers will want to shy away from, and therefore the one which management needs to force them into.
Why do most developers shy away from this? Because it is the point at which they discover that they have, to some degree, failed. User testing, even of a small portion, will almost always turn up something which isn't working right. This means someone gets to feel bad about the work that they did. Programmers are, despite all appearances, humans too, and they will tend to shy away from opportunities to feel bad about their work.
The quicker such failures are discovered, however, the less painful they are to fix. "Try - fail - learn" is a circle which feeds back to "try" again. The more times you go around that cycle, the faster you learn, and the better the software will be. One of the primary tasks of good software management should be to push for more testing (especially user testing), earlier, than the developers are comfortable with. This will often require structuring the software in a way such that smaller pieces can be made in a modular fashion, in order to be tested by users. This is not only a reasonable cost to pay, it is actually a significant improvement in the software architecture for many reasons.
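One common way to make a piece user-testable before the back end exists is to wire the interface to fake data, leaving a seam where the real source can be swapped in later. A minimal sketch, with all names hypothetical:

```python
# Fake data lets users test a piece before the real backend exists.
def fake_orders():
    return [{"id": 1, "item": "widget", "qty": 2},
            {"id": 2, "item": "gadget", "qty": 1}]

def render_orders(fetch=fake_orders):
    """Render order lines; the real backend is swapped in via `fetch`."""
    return ["#{id}: {qty} x {item}".format(**row) for row in fetch()]

render_orders()   # -> ["#1: 2 x widget", "#2: 1 x gadget"]
```

The design choice is the `fetch` parameter: the testable piece never needs to know whether its data is real, which is what makes early, modular user testing cheap.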
It is important to note that a programmer testing their own software is NOT a substitute for this. Not because they would lie (rarely would that happen), but because they will not try to use it in the way that your actual users will. What you need is for each piece to be tested by someone as similar as possible to your real, eventual end users.
There are certain things that, if you ask a programmer, they will tell you are a good idea. Then, they won't do them, and they will tell you why right now, today, it's not a good idea. Some examples: writing unit tests, setting up automated regression testing, refactoring code that has accumulated technical debt, and writing documentation.
These are things which you need to do, in the same way you need to make a household budget, exercise more, clean your room, and have fire drills at work. They are important, and almost everybody knows it, but nobody particularly likes doing them. Therefore, they don't get done if there's any excuse not to. The odd thing is that, in software, the boss usually provides that excuse.
It's odd, because this really ought to be the sort of thing that management is forcing you to do. The more typical case, though, is that management will be the one telling you to stop writing unit tests (or whatever), and get back to writing code for the features the customer will see. This is probably because that's what their bosses are going to ask about, which is because that's what their boss is going to ask about, etc.
How do you know, though, which of these things are really important? Which things will do enough good to be worth spending time on, and which are just feel-good fads? You don't know. But, your programmers do. So, ask them. But don't ask them what needs doing today. Ask them what should be done in the future, and then hold them to it. On the spot, most programmers don't really want to clean their room, set up automated regression testing, or anything else like that, any more than you really want to make your household budget or eat your greens. If you ask them what needs to be done RIGHT NOW, it will always be something else.
But if you ask them, perhaps at the beginning of the project, what needs to be done along these lines, they will tell you. Get a team consensus early, and write it down. Then, be the Bad Guy (or fitness coach, or whatever analogy you wish to use) who reminds them later what they said needs to be done, and holds them to it.
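To make "hold them to it" concrete: the commitment in question is often as small as a regression test like the sketch below, where `discount` is a made-up stand-in for any function whose behavior must not silently drift.

```python
# A minimal automated regression test: the kind of thing everyone
# agrees is a good idea and nobody wants to write today. `discount`
# is a stand-in for any function whose output must not drift.
def discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_discount():
    assert discount(100, 10) == 90.0    # ordinary case
    assert discount(100, 0) == 100.0    # edge case: no discount
    assert discount(80, 25) == 60.0     # another known-good answer

test_discount()   # raises AssertionError the day a change breaks it
```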
In order to figure out how to tell a computer to do something, you have to tell it EXACTLY, in excruciating detail, how to do it (we are ignoring machine learning approaches for the moment, because that's not how most software is made and anyway the process turns out not to be as different as you'd think). But, your customer doesn't want to tell you what they want in excruciating detail. They want to tell you at a high level, as if they were talking to a person. But you're not the kind of person they work with (unless they work with programmers), so an explanation that is good enough for them will leave out a lot of detail that you need to know, but don't know enough to ask about.
The solution is, to sit next to your customer as they do whatever job your software is supposed to help them with. Don't just have them describe it; sit next to them as they do it for real. They don't know that they need to tell you about 573 different picky details of the job which they have long since learned so well that they aren't even consciously aware of them any more. You don't know enough to ask about them. Thus, you will not know what is truly required, and your customer won't know (or remember) that they need to tell you.
Sit next to them as they do whatever task it is. There is no substitute. Then have your programmers do it as well if that's possible, even if this means you are paying a highly-compensated programmer to do a task that is done by people earning just above minimum wage. If the programmer cannot actually do the work, have them at least watch somebody else do it.
There are two fairly well-known metaphors in software development, that you should know.
"Technical debt" is analogous to the regular, financial kind of debt. It's something technologically wrong with your code (it needs refactoring, its naming conventions don't make sense anymore, the database schema should be changed, or whatever). You should "pay down" that by spending time working on it instead of working on new features, but you won't (right now, anyway). This is analogous to when you should pay down your credit card, but you don't right now. There is a cost to be paid. Instead of interest, in the case of technical debt it is that everything you do, which interacts with this code, will be slightly harder to do until that technical debt is paid off. The analogy to debt is meant to indicate that having a little bit is ok, but if you let it accumulate it could eat you alive. Instead of pretending that it's not a problem, admit that it's wrong, even though you're not fixing it now. You will fix it later, just like you will pay down your credit card/mortgage/car payment/student load/etc. Don't let too much of it accumulate, though, or the problem could get unmanageable.
If it does become unmanageable, you have the "ball of mud". This is the unfortunate fate of most software in the real world. Imagine a large, dry ball of mud. You cannot change the inside of it in order to sculpt it into a better shape. It's too brittle, and any attempt to change the insides will result in the whole thing falling apart in your hands. You can, however, add another layer on the outside. Software which has become a "ball of mud" must be treated analogously. It has too many things wrong with it to be rearchitected in any timeframe you can afford, now or likely ever. It's too late. What you have to do now is treat the entire thing as a black box. You can pass inputs in, and when it produces output you can transform that in whatever way the rest of the system requires, but any attempt to really improve the state of the insides will likely result in things breaking in unpredictable ways. You need to know about the "ball of mud" for two reasons: first, so that you don't let your own codebase slide into this state by letting technical debt accumulate unchecked; and second, so that when you are handed one, you recognize it for what it is and wrap it, rather than wasting time trying to fix its insides.
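Treating the mud ball as a black box usually means hiding it behind a thin wrapper. A sketch, with all names hypothetical and `legacy_report` standing in for the untouchable code:

```python
# Wrap the "ball of mud" behind a thin facade: adapt its inputs and
# outputs, never its insides. (All names are hypothetical.)

def legacy_report(raw):
    """Stand-in for the untouchable legacy black box."""
    return "TOTAL:" + str(sum(raw)) + ";COUNT:" + str(len(raw))

def report_totals(values):
    """New-world interface: feed the black box, then reshape its output."""
    out = legacy_report(list(values))
    fields = dict(part.split(":") for part in out.split(";"))
    return {"total": int(fields["TOTAL"]), "count": int(fields["COUNT"])}

report_totals([2, 3, 5])   # -> {"total": 10, "count": 3}
```

The rest of the system talks only to `report_totals`; nothing else ever sees the mud.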
There is a general failing of humanity, that it thinks it knows what it's doing. Nearly everyone thinks they're an above average driver, and nearly everyone thinks they're an above average communicator, and nearly everyone is wrong about at least one of those. A certain amount of self-deception about your weaknesses may be tolerable, but in software development it can lead to disaster.
There is, I have read, a saying in the movie industry that "nobody knows anything". Whether or not it is true for movies, it certainly is applicable to software. You DO NOT KNOW whether what you've made so far is any good, even usable, if you haven't user tested it. You DO NOT KNOW. There is one, and only one, way to know, and that is to get some users and have them test it. If they find the interface confusing, then it is confusing, no matter how convinced you were previously that it was clear. If they want to use feature A a lot and don't have any interest in feature B, then you need to prioritize feature A over feature B, no matter how convinced you were previously that it was feature B that would make it a "must have" piece of software.
Fortunately for you, unlike a poll of popular opinion, you don't need hundreds or thousands of people. In fact, if you get three people, you have probably learned a lot of what you need to know. That is because you are, almost certainly, so WRONG in your beliefs about how users will interact with your software that just a few data points will already tell you a lot. Don't let user testing wait; do it as early as possible in the process. If you need to create the user interface first (with fake data running it, because the database and backend aren't ready yet), that's fine for a user test. Make small pieces which can be used on their own, so that you can test them earlier rather than later. Your user testing has a very good chance of bringing you bad news, so do it early, not late.
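There is a standard model from the usability literature (usually attributed to Nielsen and Landauer; it is not from this text) behind the claim that a handful of testers is enough: if a typical user hits a given problem with probability p, then at least one of n testers hits it with probability 1 - (1 - p)^n.

```python
# Standard problem-discovery model from the usability literature
# (not from the text above): probability that at least one of n
# testers encounters a problem that any single user hits with
# probability p.
def found(p, n):
    return 1 - (1 - p) ** n

# With the commonly cited average detection rate p = 0.31, three
# testers already surface about two thirds of the problems:
round(found(0.31, 3), 2)   # -> 0.67
```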
You don't know. Yes, even you. Do some user testing, early. If you don't have anything the user can test early, you are doing things in the wrong order. Yes, you are.
Often, when a non-programmer manager wants to know the state of their team's code, they will ask their Alpha programmer. This is the veteran programmer, perhaps a little abrasive but definitely a reliable source of an honest opinion. That is the person who knows the most about the software; they have dealt with every nook and cranny of it. They know it like no one else, and the reason they do is that they are the most knowledgeable about programming generally. What could be better than having someone honest, skilled, and informed tell you the real situation?
It turns out, almost anybody else on the team.
The problem with the Alpha, is that they have looked at this software so long they don't even see the complexity. We see complexity when we look at something we have trouble understanding, and the Alpha understands too much. He'll give you an honest opinion, all right, but he's not looking at what everyone else is looking at. Like a chess master who sees the chessboard, not as individual pieces, but as "chunks" of half a dozen pieces in a particular, recurring pattern, the Alpha knows the system so well it doesn't look all that complicated to him.
The programmers in the middle ranks are nearly as bad. In their case, it's not that they can't see the complexity; it's that they are (at some level) unwilling to admit that it looks complex. They are trying to emulate the Alpha (perhaps not consciously, but again, at some level), and the Alpha doesn't seem to think the codebase is too complex. Therefore, they are trying not to admit even to themselves that it is too complex, and thus they are unlikely to admit it to you. To the extent that they do understand it, they even LIKE the fact that it is complex, because it makes them feel highly skilled and knowledgeable to have at least some understanding of such a system.
The newest programmer, though, MIGHT be willing to tell you which parts are too complex. Whether or not they are willing to admit it, they will at least be unable to hide that fact from you, or themselves, because they don't have much knowledge of it yet. If the new programmer is floundering, this probably does NOT mean they aren't any good (unless your hiring process is broken). It probably means your codebase has gotten too complex.
Now, in a medium-to-large system there will be a few parts that have to be complex. These are the parts where only the Alpha, and a few others, should be allowed to make changes. But if you find that new, average programmers cannot get traction in a reasonable amount of time, then you have, in every part of your system, a level of complexity which should only be found in a few parts. Beware if the Alpha, or the mid-level programmers, want to tell you that this is because the new kid is slow. You interviewed this person, the team thought they were good enough, and you saw something on their resume that led you to believe they could program. If there really is no (or almost no) part of your codebase where new, relatively unskilled programmers can contribute, then you have a problem, even if the Alpha and his emulators don't agree.
There are even organizations that like to brag about how you have to be exceptionally talented with lots of programming experience to work there. This is like bragging that your health is so bad that most doctors cannot treat you, or that your car is so unreliable that very few mechanics can work on it. Anybody can make complex code. Making simple code that does a lot, is what requires good programmers, and a good programming culture. Part of developing that culture is that you have to make simple code the objective, not complex code, and the Alpha programmer is the worst person on the team for being able to tell the difference between the two.
There is a tendency to try to optimize for the best case, which means trying to accomplish as much as possible. Occasionally, as with a software startup for example, this might be a good idea, but it will almost always have the side effect of maximizing the chance of disaster as well. Trying to keep adding "killer" features to your software up until right before it is released (as for example when a website goes live, or a factory systems suite is turned over to production) is a good way to maximize the chance of your software not working.
This is because little is more common in programming than something which seemed harmless turning out to cause a major problem. Perhaps it is a reflection of the underlying binary nature of computers, but for whatever reason, software has an ability to go from working fine to completely broken in one code change, even if that change seems small. Worse yet, the part that breaks may not be related in any obvious way to the part that was changed.
Of course, one can minimize this risk by implementing things like a suite of automated tests, but respect the literal meaning of "minimize". It does not eliminate the risk. If you are close to the deadline, and you have just thoroughly tested every part of the software one more time, it is time for you, the manager, to announce a "code freeze". Each programmer will be tempted to want to get in one last fix to the part which he or she was working on last, and it is your job to forbid this, unless you have time to do all of the thorough testing again before the deadline. If you don't have time to test everything again, it is time for code freeze; spend the remaining time in documentation (see below).
In reality, it is almost always the manager who pushes for the opposite, and wants to try to squeeze in just one more added feature. The reasons why management is addicted to pathological levels of optimism are an interesting topic, but beyond the scope here. Suffice to say, if you want to minimize the chance of the software project being considered an embarrassing failure despite all your team's work on it, do not be the typical manager; the ones who got away with it were lucky, and they may not be lucky next time.
I mentioned earlier that software startups may be an exception; this is because they have little to lose, as most startups go bust. Since the whole venture is a gamble anyway, and shipping a reliable, dependable piece of software might still result in failure, it may be that they should go for broke and try to pack in as many features as possible. It will still probably blow up in their face, but that is what happens with most startups, and to give yourself any chance of success you have to try for a big win. Even here, if you think your existing software is good enough to make your launch a success, freeze the code, and do all further programming on a "new version", to be released at a much later date, after there is time for re-testing. If you don't have time to thoroughly re-test before launch, you don't have time to change the code before launch, either.
Some requirements are false objectives (such as lists needing to have 10 items): they matter to a stakeholder, but not to the people who will actually have to use the software.
There are, unfortunately, two kinds of people whose opinion of your software project you should be concerned about: the stakeholders who commissioned the software and will judge it, and the end users who will actually have to work with it.
It would be great if these were the same people, and every once in a while that will be true, but more often they are not. When they are not, it would be great if the stakeholders were most concerned about the actual users' opinions, but that is not always the case.
This isn't a problem you can make go away, but it is at least more manageable if you are aware of it. The worst case scenario, which you must avoid, is the one in which the stakeholder is more or less completely unaware of what their lower-ranking minions need. In this case, when it comes time to deploy the new system, it will be impossible to pretend that it is a success, because the actual users literally cannot do their job with this new system. In this case, the stakeholder who told you what was needed has two choices: admit that they did not actually understand what their own people needed, or blame you.
Guess which one is most likely?
Now, there will probably be at least a few features which you will have to add because the stakeholder wants them, even though the end users don't need or want them. This is often unavoidable, and nothing good happens if you push back on it. The stakeholder needs to know that they have made their mark on this project, so most of the time you should let them do it. But try to avoid a situation where you spend most of your time on the stakeholder's idea of what is needed, such that you don't have enough left to work on what the end users actually need. Remember, just because you deliver exactly what the stakeholder asks for doesn't mean they will be happy with you. If the system is unusable, they will not be happy with you, and reminding them that you delivered exactly what they asked for will not help (it will actually just make them angrier by embarrassing them).
Again, false objectives (things the stakeholder thinks are important to their organization, that really aren't) are not always avoidable. But make sure they don't keep you from discovering, and recognizing, what the end users really need in order to use your software, because if they cannot use it, there will be no way to pretend that things turned out well.
Having said that, it is not a good idea to label these features as "false objectives" or "only Bob wants this". Just let that be your own personal way of thinking about them, so that you will not accidentally let them chew up all of the time available.
Documentation rarely gets done, and when it is done it is rarely worth much, because the actual end user it is intended for almost never uses it. There is probably a solution to this problem, but if so I haven't found it yet.
What have we learned? Well, we have learned that software has its own set of unusual pitfalls and failure patterns. Expect there to be surprises, and expect at the end of the project to think something like, "if I knew then what I know now, we would do this totally differently." But, the upside is that managing a software project well is a skill that is both in demand and very rare, so if you become competent at it you will be a member of a truly elite group (most of whom, by the way, are not and never were programmers). Good luck.