Programming at the Speed of Trust
It is conventional wisdom that software development is complicated.
Triply so if you stick the word “enterprise” in front of it.
By complicated (and complicated x 3 for so-called “enterprise software”) I’m referring to the whole pomp and circumstance that surrounds the actual process of banging out code to solve a problem. Needs analyzing, requirements gathering, stakeholder interviewing, spec formalizing, storyboarding, resource planning, milestone setting, timeline estimating [1], wireframe designing, and prototype building are all usually part of the dance, and that’s just the front end. On the back end you have testing, quality assuring, requirements verifying, and acceptance reviewing. (Henceforth the sum of these activities will be referred to as “the formalities.”)
With so many disparate verbs involved, conventional wisdom accordingly asserts that it takes a team of at least half a dozen highly trained and specialized professionals to pull it off. This in turn gives rise to boatloads of meetings, coordination, consensus gaining, gaps in understanding, and other communication overhead. The formalities effectively act as a gatekeeper that keeps a new piece of software from getting out into the real world, lest it do something embarrassing, incorrect, or otherwise terrible. So you pay the price of all this work before a new software solution actually gets to do you any good in real-world use. (This, incidentally, is how million-dollar software gets built that actual users hate.)
Does this model have a place? Absolutely. Some projects are sufficiently complex/important/sensitive to justify that kind of slow, methodical approach. But many aren’t.
One of my roles during my tenure at MonsterCommerce was heading up development of the MonsterMarketplace, a shopping portal that aggregated product listings from several thousand of our e-commerce clients. The building of the MMP was originally outsourced to an international development firm, and the [accepted] proposal for the project contained line items of many (costly) hours for the formalities. It was a sizable system distributed over two web servers and one database server, and tied to its own [then newfangled] Google Search Appliance. It was a great site and they did a good job. But I don’t know that we got our money’s worth for all that supposed testing and quality assurance.
Like most sufficiently complicated systems, it had bugs, hiccups, gotchas, and performance issues. Not necessarily glaring ones, but critical nonetheless. When new campaigns were launched to drive more traffic, the system had to be carefully monitored because memory leaks and other inefficiencies would routinely crash a server. As traffic soared beyond 100,000 visitors a day, such reactive maintenance became the norm.
When I took over maintenance of the system these problems remained for me to solve, and there was also a fresh new look to be implemented. My calculated path of least resistance was to rewrite the entire front end myself, which was tantamount to breaking many rules of the development cycle and best practices as laid out by the outsourcing firm.
But it worked.
In four weeks I had built the new front end, polished by about a week of collaboration with the sharp-eyed spot checking of my boss. We launched and watched the CPU load under traffic: instead of dancing between 20% and 100% utilization as before, it was scarcely seen above 4%.
How did that happen? The first version of the MMP was built by a team of people and subjected to (presumably) all kinds of formal testing and quality assurance. The version I cooked up was written by me alone, with only the benefit of a boss who loved to click around and see to it that all was well with his brainchild. Granted, I had the benefit of an already laid-out foundation for the system, but my aim was much more to rip out and replace that foundation than to bootstrap off of it.
This was a matter of raw ability trumping best practices, both technically and management-wise. I knew a few things about the lower levels of computer architecture that best practices were apparently oblivious to, so I could get rid of what were (to me) glaring performance problems [2]. I was close enough to the vision and intended outcome of the project that I could be trusted to remain true to it. The objective win was a quicker end to the revenue hemorrhaging caused by sporadic downtime. The subjective win was the satisfaction my boss felt getting a major release out without all the painful fuss of formalities that plagued the first time around. (And the win for me was the satisfaction of delivering on what was viewed as a tall and unlikely promise, plus a $7,000 raise given to me the day after launch.)
What I’m curious about is how many projects are beholden to this heavier level of rigor out of a false sense that they should be (or need to be) in order to turn out well. In my experience the procedures and complexity of methodical collaboration ultimately become self-justifying: all the brain farts, delays, back-and-forth interactions, and so on make it harder for a quality developer to get actual work done. It’s then no wonder if he or she can’t be trusted to create a solid end product without micromanagement and support. It’s worth considering that the end quality level is a wash at best, except that you’ve spent a lot more time and high salaries to cobble it all together.
Programming at the Speed of Trust
To do away with all the formalities, I assert, is the programming equivalent of Covey’s maxim “The Speed of Trust” [3]. If you have a nimble, self-directed team of developers (even a team of one) that is intimately familiar with the aims and needs of a project and capable of doing it right, there exists the opportunity to sidestep the formalities in favor of trusting them to do their job responsibly. The reason is that a lot of best practices are above all designed to prevent screw-ups rather than promote excellence (having massive amounts of meticulous documentation and following someone else’s protocols to the letter counts in many ways as a big C.Y.A. and little else).
Cultivating and using this approach to software development needs to be done with care. No doubt many of our modern practices were born as reactions to past disasters, which leads me to think there’s a largely unrecognized dichotomy in approaches to software: one approach is formality as preemptive damage control, the other is running at the speed of trust. Nimble startups operate in the latter mode by default, and for it have created some of the most amazing software on shoestring budgets and in record time. Established software development houses and large companies with full-blown IT departments are almost inextricably married to the former, probably because cultivating the latter doesn’t seem like a reliably viable option.
What, then, does it take to cultivate the latter reliably? I think whoever cracks that code is going to have a major leg up in any business that calls for custom software. I do know of a few conditions that help:
- The ability to release early and release often. This is the era of cloud computing and the mobile app, so this isn’t as tall an order as it used to be [4]. Being able to fix flaws immediately when they’re discovered makes them much less harmful and much more palatable.
- Give developers at least a semi-regular taste of doing the job that the software is supposed to help with. This is important before, during, and after development. Before, because it lets them experience exactly what they’re solving for. During, to keep them true to what’s important as things progress. After, because a good developer can almost always improve software they’ve made if they’re saddled with using it.
- Break the project into individually useful chunks, and put each chunk to good use as it’s ready. Making a complex uber-system is dangerous business only when you have to see it through to the very end before finding out whether or not it sucks. Structure a project so that its fruits are apparent sooner rather than later, and so that the wins of the last release spur on the next one.
- Have a developer experience that it’s their baby. Whether intentional or not, nothing prompts a “get ‘er done and move on to the next one” mentality like a job that is set up in precisely those terms, and that mentality makes cutting corners much more likely. If a developer knows he or she is on board to grow and evolve the system over time, a sense of pride and long-term responsibility kicks in that can’t be beat for ensuring quality now.
- Give a developer some form of vested interest in the project’s success. This is similar to the last one, but not quite the same. It’s vested interest that most strongly evokes above-and-beyond effort and innovation.
I think the formalities are wholly valid, and they exist for a lot of good reasons. But like most things that are seldom questioned, it’s useful to question their necessity for a given project, simply because the reward is the opportunity to program at the speed of trust.
Notes:
1. Aka “guessing.”
2. Namely, you don’t do string building via massive concatenation, and it’s much more efficient to reuse cached database connections and run inline SQL than to connect/disconnect on every access and build parameter objects for stored procedure calls. Turns out a Master’s degree in CS, with its knowledge of algorithm design and computer architecture, IS useful in day-to-day programming. (A small sketch of the string-building point follows these notes.)
3. I haven’t actually read Covey’s book, so if it contains a chapter on software development I apologize for any apparent lifting of ideas.
4. Except iPhone apps, until Apple drops the practice of requiring several-week approval processes for updates to already approved apps.
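To make note 2 a bit more concrete, here is a minimal sketch of the string-building point. It is written in Java purely for illustration; the class, method names, and sizes are hypothetical and are not code from the MMP, whatever stack that system actually ran on. Repeated `+=` on a `String` copies everything built so far on every pass, while a `StringBuilder` appends into a growable buffer. (The connection-reuse point is the same spirit, but it depends too much on the specific data-access stack to show briefly.)

```java
import java.util.Arrays;

// Hypothetical illustration of note 2's string-building point,
// not code from the MonsterMarketplace.
public class ConcatDemo {

    // Naive approach: each += allocates a brand-new String and copies
    // everything built so far, so n appends cost roughly O(n^2) copying.
    static String buildNaive(String[] parts) {
        String html = "";
        for (String part : parts) {
            html += part;
        }
        return html;
    }

    // Preferred approach: StringBuilder appends into a growable buffer,
    // so the same work is roughly O(n) overall.
    static String buildWithBuilder(String[] parts) {
        StringBuilder html = new StringBuilder();
        for (String part : parts) {
            html.append(part);
        }
        return html.toString();
    }

    public static void main(String[] args) {
        // Simulate assembling a large listing page from many small fragments.
        String[] parts = new String[20_000];
        Arrays.fill(parts, "<li>product row</li>");

        long t0 = System.nanoTime();
        buildNaive(parts);
        long t1 = System.nanoTime();
        buildWithBuilder(parts);
        long t2 = System.nanoTime();

        System.out.printf("naive: %d ms, builder: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

Exact timings will vary by machine, but the point isn’t the numbers; it’s the growth rate. Quadratic work hides nicely in testing and then shows up under real traffic, which is exactly the kind of problem the MMP was having.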