Micro ISV Mistake #1

I’ve decided to kick off my regular posting here on CodeSnipers with a short series on common Micro ISV mistakes. In making my initial list, I’ve stuck to mistakes I’ve made myself in the last few years. I’m not ruling out the possibility of adding to the list as the series and discussion go on, but if that happens I may be relying on observations; I’m not promising to test drive every mistake for you.

Some of you may argue that not all of my “mistakes” are really mistakes, and you might have a good point, but I firmly believe that in my situation each was the wrong thing to do, so I don’t have any qualms about calling any of them mistakes. If you do have a different perspective, I hope you’ll stay and discuss it in the comments.

Skip the first solution(s)

Some people are really fast. They receive a problem set, spend a short time pondering, quickly come up with a solution and then sit down at their computer to begin coding. Sometime later, give or take some trial and error, the code is complete, maybe even tested and then submitted to source control and QA for black box testing. Managers and other bystanders happily point out: This is one fast developer!

Here's the problem: Fast development does not necessarily mean good development. Joel Spolsky hinted at that in his Hitting the High Notes: The quality of the work and the amount of time spent are simply uncorrelated.

Now, I would guess that the really good people (no matter how fast they are) understand that the first solution is not necessarily the best one, even if it does produce the correct result. This means that even after arriving at a solution, one does not stop, but rather looks at other approaches, which just might turn out to be much, much better.

For the sake of a simple example, let's think about Fibonacci numbers. That's right, you may have heard about them last in some algorithms and data structures class and probably never had to deal with them since.

In short, Fibonacci numbers have these values: F(0) = 0, F(1) = 1, and F(i) = F(i-2) + F(i-1) for every i > 1. The series of Fibonacci numbers then starts like this: 0, 1, 1, 2, 3, 5, 8, 13, ...

Hm, so a quick glance reveals that for every Fibonacci number with index greater than 1, we need to know the two Fibonacci numbers before it. That's clearly recursive!

The implementation might follow something like this then:

function fibRec(long input): long
    if input < 2
        return input
    return fibRec(input - 2) + fibRec(input - 1)
end function

Now again, this is the first, quick solution. The problem is that it is also extremely inefficient. Yes, it returns the correct result, but for increasingly big input values the running time quickly becomes absolutely prohibitive. In fact, the running time grows exponentially: depending on your computer's performance and your compiler implementation, you will probably start losing patience waiting for the result of, say, fibRec(60).
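To make that blowup concrete, here's a quick sketch (in Python rather than the pseudocode above) that counts how many times the naive recursive routine gets invoked:

```python
# Count how many calls the naive recursion makes. Each call for n >= 2
# spawns two more, so the call count itself grows like the Fibonacci
# numbers -- that is, exponentially in n.
calls = 0

def fib_rec(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_rec(n - 2) + fib_rec(n - 1)

result = fib_rec(20)   # F(20) = 6765, reached via 21,891 calls
```

Even a modest input like 20 already costs over twenty thousand calls; by 60 the count is astronomically larger, which is why you lose patience waiting for it.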

This can be vastly improved by using an iterative approach:

function fibIter(long input): long
    long prev = -1
    long result = 1
    long sum = 0
    for each number i from 0 to input
        sum = result + prev
        prev = result
        result = sum
    end for
    return result
end function

The running time of this algorithm is linear - much faster than the first approach. This is a very solid approach, even for pretty big input values.

It takes a little more code than the recursive solution (i.e. writing the recursive routine might be faster), but it is similarly straightforward. Spending some more time and thought can yield an even better solution to this problem, one with a logarithmic running time, outperforming the straightforward iterative approach. Feel free to google this; I am sure it can be found pretty easily.
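For the curious, one logarithmic method such a search will turn up is "fast doubling", which computes F(2k) and F(2k+1) directly from F(k) and F(k+1). A sketch in Python (the identities are the standard ones; the variable names are my own):

```python
def fib_fast(n):
    """O(log n) Fibonacci via the fast-doubling identities:
    F(2k)   = F(k) * (2*F(k+1) - F(k))
    F(2k+1) = F(k)**2 + F(k+1)**2
    """
    def doubling(k):
        # Returns the pair (F(k), F(k+1)).
        if k == 0:
            return (0, 1)
        a, b = doubling(k // 2)
        c = a * (2 * b - a)        # F(2m), where m = k // 2
        d = a * a + b * b          # F(2m + 1)
        if k % 2 == 0:
            return (c, d)
        return (d, c + d)
    return doubling(n)[0]
```

Each step halves k, so only about log2(n) levels of recursion are needed, compared with n loop iterations for the iterative version.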

The point of all this: I think we can treat it as a general principle. Part of being good at what you do is not stopping at the first solution that presents itself, but always looking for alternative ways. That may often be easier said than done, considering schedule pressures and so on, but it has definitely been my experience that the first solution is most often not the best one. And even when it is, there's no way of knowing unless we can compare it with others.

Unicows.dll Removes Unicode

UnicoWS.dll does not Unicode enable your program in Win9x. It does not add Unicode to Win9x (it works by converting Unicode to ANSI, i.e. removing Unicode). However, what unicows.dll does is an extremely useful thing: it allows your new Unicode (wide char) program to "work" on old Win9X/ME machines that don't support the Unicode APIs.

Because of the fog of trepidation surrounding Unicode/charset issues, I think Microsoft's announcement of MSLU, the "Microsoft Layer for Unicode on Windows" (the unicows lib and dll), made the right choice in not overemphasizing what it does not do. They let the discerning reader figure it out somewhere down in the article, where they say it is "not a panacea" and vaguely spell it out under the second point made there. Hey, you can't start out by shouting what your product does not do just because people might think so.

I have never used unicows, but every time I hear about it people seem to suggest that it gives you Unicode on Win9x, so I take the bait and look into it, only to be reminded again of what it actually is. What set me off to write this article was a response to a post on JoS, as follows:

Does it make sense to *not* compile for Unicode these days? The old excuse was "Unicode doesn't work on Win9x", but now it does as long as you link in the MSLU library. So why not compile for unicode right off, and just forget that ANSI/MBCS exists? Chris Tavares

If you slightly reworded it to 'The old excuse was "Unicode builds don't run on Win9x"' then I would agree, and to be fair that may be what the poster meant. But whenever I hear about unicows, it often seems to come across as adding Unicode to Win9x.

On old Win95, 98 and ME boxes, your unicows linked (wide char _UNICODE) program will generally operate in the ANSI locale code page with anything involving Windows messaging and APIs. That means all the filename and directory APIs, text on Windows controls etc. Characters not supported in the system ANSI code page will be lost when they are passed into or retrieved from APIs.
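You can simulate this loss outside Windows. The snippet below (Python, purely illustrative, not unicows itself) round-trips a filename through Windows-1252, the Western European ANSI code page, the way an ANSI-only API path would:

```python
# Japanese characters don't exist in the Windows-1252 (Western European)
# ANSI code page, so funneling the string through it loses them --
# analogous to what happens when a Unicode string passes through an
# ANSI-only API on Win9x.
name = "report_\u65e5\u672c.txt"             # "report_(Japan).txt" with kanji
ansi_bytes = name.encode("cp1252", errors="replace")
round_tripped = ansi_bytes.decode("cp1252")  # kanji degraded to '?' -- data lost
```

Once the characters have been squashed to '?', no amount of decoding gets them back, which is exactly the one-way loss described above.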

But this is not as bad as it sounds because users generally don't have multilingual expectations on those old machines (not to mention fonts were limited). So MSLU answers the main concern of having a single executable that works on all Win32 platforms.

I can't write an article about unicows and not mention one of my 3 favorite bloggers, Michael Kaplan, who as far as I can tell is the guy behind MSLU (though he gives credit to others) and is a great presence on newsgroups, always humble and helpful (and his site has a section on MSLU). He was also mentioned by Joel Spolsky recently as the guy who helped Joel figure this Unicode thing out (some time ago).

The reason I am sensitive to this unicows issue is that I have worked with Unicode on Win9x. There are a couple of Wide Char APIs that have been available since Windows 95, namely GetTextExtentPoint32W and TextOutW, and I built an edit control that works internally in UTF-8 using these APIs, but that is another story.

Are bad Requirements our fault?

Over the past week or so, I've been reading Marcus Buckingham's First, Break All the Rules. I caught him at a conference last year and finally got around to picking up both of his books, so I've been looking forward to this.

Anyway, I'm approximately halfway through it, and while I was reading this morning I was struck by something. Let me quote:

Conventional wisdom asserts that good is the opposite of bad, that if you want to understand excellence, you should investigate failure and then invert it.

He goes on to support this by pointing out that we tend to study things we want to prevent: student truancy, drug use, and so on. I found this interesting when applied to some of my recent projects. Everyone tends to focus on "what went wrong and how do we fix it" instead of "what went right and how do we do more of it?"

As is common in the software world, we complain quite a bit about not having a specification. We talk about it to anyone who will listen... and I was actually doing it this morning. Maybe we need to change our perspective a bit. As John pointed out in his recent post, most of our customers and even we take things for granted in how they should work, how we perceive the world, and what we expect from our people and our tools. When we are working with something as intangible and complex as software, we need to focus on making things a bit more tangible and descriptive for our users. We have to make sure that they can understand some of the key things... not *how* we're doing something, but what the tradeoffs are.

I believe that we need to educate our customers on not only what is possible, but what is impossible given their constraints. Not only could this help clarify requirements, but it might completely eliminate silly requirements like "Build an auction system on par with eBay".

IT Death Match

Today, I got a chance to put my money where my mouth is. I work in a small development division at a large consulting company that has a protectionist IT department. IT wants to protect us from everything, especially ourselves, which more often than not results in impeding billable workers from getting billable work done.

In the post linked above, I stated that IT divisions at consulting companies should be avoided whenever possible. Have a project that needs a website? Don't post it on the company's web server; register a domain at Go Daddy and host it at Webhost4Life.

IT at any company that is not in business to do IT is like a 360 pound swamp creature complete with slime and tentacles that pulls you into the mire anytime you get within 39 feet of where it wallows. Why deal with that mucky mess when you could use a professional IT company like Webhost4Life that will treat you like a customer (not a hostage, intruder, villain, miscreant) and charge you a mere $10/month? (note: I have no affiliation with the linked professional IT companies, just direct experience with them.)

So, today I get a call from my customer. She has sent a link to the project website that is still on our internal web server to the reviewers, but the site has stopped serving all pages. Last week, we had set up an account at Webhost4Life to host the website on, but the FTP access wasn't working and I needed them to set up MS Index Server.

Let the death match begin! IT Swamp Thing vs. IT Pros

I called up our internal help desk, which goes by the acronym TAC (Total Assistance Center - ha!) to get the ball rolling there. The dude I talked to had never heard of an .aspx page - Bong! One point to IT Pro. After I told him that the customer was calling and was unhappy, he actually told me he was going to assign the issue the lowest priority - Bong! Two points for IT Pro! But I persuaded him to increase it one level, which ensured I would get a call back the same day.

Meanwhile, I had opened an online help session and was typing questions to Marina over at Webhost4Life to get the FTP problem cleared up and the Index Server configured. She couldn't get the FTP site to work with FileZilla - Bong! One point for IT Swamp Thing - but she did set it up so that it would work through IE. Next, I sent her the setup configuration for the Index Server, but she had me browse to a web page to enter a help ticket instead and gave me an indefinite "Someone will get back to you" answer - Bong! One point for Swampy.

By the time I left work, both ITs were still tied two-to-two. Even with similar results from both help desks, going with IT Pro is still my best choice because they:

  1. Let me have a website that I can FTP into (IT Slime prohibits this)
  2. Monitor the websites 24/7 and have disaster recovery plans
  3. Let me build a .Net page that can upload any file type (IT Stinky prohibits anything Outlook wouldn't approve of)
  4. Treat me like a customer, not a renegade script monkey.

Strange case of two system locale ANSI charsets

Are you technically familiar with how system locale ANSI charsets work on Windows? I thought I knew enough to get by... until recently. You may know that there are two primary ANSI locale settings, one of which requires a reboot, but do you know that when it comes to the distinction between these two settings the MSDN docs get it wrong, that the Delphi 7 core ANSI functions get it wrong, and that you cannot set the system code page in C/C++ using setlocale?

In Windows XP Regional and Language Options, you can set "Standards and formats" on the first tab, and "Language for non-Unicode programs" on the third tab, the latter requires a reboot (it is similar on previous Windows OSes). The weird thing is that both of these mess with the ANSI locale code page.

Windows APIs are built around two types of text strings, ANSI and UNICODE. The UNICODE charset (Wide Char) is pretty straight-forward because it is not affected by what the locale settings and language are. The ANSI charset always supports the 128 ASCII values, but can have different ways of utilizing the high bit of the byte to support additional characters. In single-byte charsets, the upper 128 values are assigned to the additional characters like European accented characters, Cyrillic or Greek characters. In East Asian double-byte charsets, special lead byte values in the upper 128 are followed by a second byte to complete the character. "Double-byte" is actually a misleading term because characters in double-byte strings can use either one or two bytes. An ANSI charset is implemented as a "code page" which specifies the encoding system for that charset.

The ANSI charset used by the computer changes according to the locale the computer is configured to, but what is the computer's locale? Well, no one is terribly clear on that! The Windows API GetLocaleInfo allows you to get information about either the "default system locale" or the "default user locale." The MSDN article then goes on to refer to the "current system default locale," and the "default ANSI code page for the LCID," as opposed to the "system default–ANSI code page." I have yet to discover how the User/System differentiation works although presumably user logons retain certain aspects of the Regional and Language Options. Anyway, I would say it is anything but clear.

According to MSDN for Microsoft C++, a C program can use setlocale( LC_ALL, "" ) to set itself to use the "system-default ANSI code page obtained from the operating system" rather than plain ASCII, and then all multi-byte string functions will also operate with that code page. However, it turns out that this code page is actually the one from "Standards and formats" in the computer's Regional and Language Options. I call this the "setlocale" charset.

Meanwhile, all ANSI Windows APIs and messages operate according to the ANSI code page from the "Language for non-Unicode programs" setting. This setting governs the real ANSI system locale code page which you can find out with the GetACP Win32 function. This is the default code page used in MultiByteToWideChar when you specify CP_ACP. I call this the "GetACP" charset.

When these two code pages differ - say, a U.S. English setlocale charset and a Japanese GetACP charset - many programs used internationally exhibit bugs you won't see otherwise. For example, the Delphi core source code SysUtils.pas uses a SysLocale based on the setlocale charset in many of its Ansi string functions like AnsiPos and CharLength, while implicit ANSI/WideString conversions and other Ansi functions like AnsiUpperCase operate according to the GetACP charset.

Even WinZip prior to release 8.1 did not parse pathnames correctly when the setlocale and GetACP charsets differed: it couldn't zip Japanese filenames in which an ASCII backslash appeared as the second byte of a Shift-JIS character, because its string functions were treating the double-byte ANSI strings as single-byte ANSI (Windows-1252) strings.
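The classic trouble byte is easy to reproduce. In Shift-JIS, the katakana character "so" (U+30BD) encodes as the byte pair 0x83 0x5C, and 0x5C is the ASCII backslash, Windows' path separator. A small Python illustration of what a charset-unaware path splitter does to such a name:

```python
# U+30BD is the byte pair 0x83 0x5C in Shift-JIS; the second byte
# collides with '\', the Windows path separator. A byte-level split that
# ignores lead bytes cuts the character in half.
path = "C:\\docs\\\u30bd.txt"      # a path with three components
raw = path.encode("shift_jis")     # b'C:\\docs\\\x83\\.txt'
naive_parts = raw.split(b"\\")     # charset-unaware split on the backslash byte
# naive_parts has four pieces, not three: [b'C:', b'docs', b'\x83', b'.txt']
```

Any string routine that walks the bytes without honoring lead bytes makes exactly this mistake, which is the WinZip bug in miniature.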

There is a reason that Windows has these two system charsets. The locale info for a locale's "Standards and formats" is provided via the ANSI APIs in a particular charset (is this what MSDN vaguely referred to as the "default ANSI code page for the LCID"?). The OS cannot provide Japanese standards and formats, such as weekday names, in a Western European ANSI charset. So a programmer is supposed to interpret that locale info's text strings according to the locale info's charset, even if the machine locale charset is different. But I have not found this documented properly, and I don't think many people know about it. Delphi got it wrong, and the Microsoft C++ documentation is not clear on it. I think at this point Microsoft developers are inclined to forget about these issues and focus on Unicode.

Experiences and clarifications are welcome!

How to Make a Peanut Butter and Jelly Sandwich

Stupid Questions and Software Development

Back in college I was a math tutor. A preliminary for all tutors at this particular college was a joyous seminar on the “problems of learning and instruction” – a brief glimpse into just how maddening the enterprise of explaining things to people can be. Most of it seemed like tedium, and painfully obvious. It was no surprise to the assembled tutors-to-be that some people just don’t get math. Of course – why else would someone seek out tutoring?

I remember nursing a genuine resentment at being forced to waste two weekends listening to lecturers drone on and on about ‘cognitive gaps’ and ‘leaps of inference’ and the like when it was eminently clear that none of this would aid us, the tutors, or our charges. Really, if years of all this pinheaded theorizing hadn’t made, oh, say, the professors of math courses any more effective as teachers, what chance did four days of it have for us?

Then came a little ‘interactive’ session entitled ‘How to Make a Peanut Butter and Jelly Sandwich’. The premise was simple. The audience was to verbally instruct one of the seminar’s facilitators, step-by-step, on the process of crafting a fine, and presumably edible, PB&J. He had everything he needed arranged on a table: peanut butter, jelly, a loaf of wonder bread, and a knife. Oh, dear, thought I. Now we get the sad spectacle of a man pretending to be as stupid as a bag of hammers in order to frustrate our attempt to walk him through making lunch.

And sure enough, the facilitator played it to the hilt. What was surprising was that this little exercise was actually enjoyable; hilarious even, as it became clear that this guy was experienced. He was good at what he did, and it made for fine comedy.

“Just start with two pieces of bread!” was the first instruction that coalesced out of the din of tutors offering simultaneous advice. The facilitator put on his best what-me-worry face and confidently gripped the bag of bread, held it aloft. Then consternation crossed his visage, signaling that the next phase of his mission wasn’t clear. Hmm… two pieces. Well, I’ve got this collection of bread pieces… hmm… I need two of them, but they’re in the bag… hmm.

“Open the bag!” Well, you lob a slowball like that, he has to hit it out of the park. A look of enlightenment came upon him as he vigorously shook the bag, but this, too, passed into confusion as the bag failed to yield its load of bread-like styrofoam. Much hooting and hollering ensued. Again, gripped by inspiration, he settled on a tried-and-true bag-opening tactic: he tore it open, violently. With purpose. With conviction. Bread flew from the bag. Some landed in his vicinity and he seized it triumphantly, mangling it further in the process.

“Now put some peanut butter on the bread!” At this point everyone was fully cognizant that this guy was going to misinterpret our instructions in some absurd manner; some of us just wanted to see exactly how absurd. Much laughter as he placed the whole unopened jar of Skippy on top of the bread with a self-satisfied flourish.

It went on and on; full minutes later, after we’d successfully negotiated the opening of the peanut butter jar, we’d told him to get some peanut butter on the bread by way of first getting peanut butter on the knife and then smearing the knife on the bread. See, we thought we’d out-dumbed him, but such was not the case as he viciously plunged the knife into the jar, punching out the bottom and sending a mess of broken glass and hydrogenated oils to the floor.

Well, I’ve bored you enough with the details. Suffice it to say that it took a solid hour to finally get this guy to make a frickin’ sammich. So where am I going with this, and what does it have to do with requirements gathering? Everything.

Because, when you’re writing software, you’re “sitting in a chair telling a box what to do,” as someone on JOS put it, and that box is pretty dumb. Because, essentially (and particularly in a business domain) you’re trying to instruct something a lot dumber than a man pretending to be stupid, and you’re trying to get it to do something a lot more complicated than making a PB&J. Another way of putting it is that successfully developing software necessitates dealing with a level of specificity that makes most people ill, insane, or both. Largely, this is a war of wills, and the decisive battle is usually in the requirements gathering phase.

As developers, we’ve probably all had the experience of requirements coming down from on high only to find that they are woefully incomplete and/or vague. That’s an old complaint and an old story. But it’s also a safe bet that we’ve all had the additional experience of running into what can charitably be called resistance when seeking clarification on requirements, and smoothing that dirt road is what I’m here to talk about.

Many approaches over the years have been offered. Agile/XP-type processes, with their emphasis on “user stories”, short cycles, and lots of end-user feedback, seem to work very well when they work, but, just like technologies, methodologies don’t exist in a vacuum. You usually can’t simply combine a methodology with a business environment like you combine acids and bases in a laboratory. Agile methods are great when everyone plays along, not so great when they don’t. Same goes for any methodology, big-M or small. The chief problem in any software development environment is cooperation—a people problem.

As developers, we’re quite aware of our strange mental inclinations. Whatever our differences as human beings, our similarity lies in our shared capacity to decompose the real world into a series of maddeningly specific steps. To a client’s business analyst, it’s a job well done to specify:

“Requirement 357a: The system shall, upon encountering an incoming address, increase the ‘returned from post office counter’ in the data warehouse for existing addresses equal to the incoming address.”

Yup, the analyst thinks, that’s all there is to it. Problem is, of course, the developer can’t compile that sentence into workable code. Now the developer is faced with the task of asking The Stupid Questions. Incoming address? Incoming from where? Where do we find it? How are two addresses reckoned to be “equal”? So on and so forth. The business analyst starts to get exasperated; he’s got, like, four tons of this crap to go through, and he’s not a mind reader either. Now he’s got to go directly to the client and ask The Stupid Questions, because come to think of it he’s not exactly sure of how the client determines when two addresses are equal—OK, they’ve got the same street, number, city and zip, but this one address is missing a state/province code… seems straightforward, they’re still equal, right? ‘Cos if you have the zip you know the state… jeeze! But that box, that stupid, stupid box, doesn’t know that, and now the analyst has to ask what the client wants, which makes him look like a moron, because he’s paid to figure this junk out, and he hasn’t done it, or so the client seems to suggest every time he walks into her office with another list of questions… So he ignores the developer’s email and hopes they’ll just do something right for a change. And the developer sends more email and things get testy because now the schedule is slipping because there’s still these unimplemented features because the developer doesn’t want to code them until the requirements are clear since if she does and the client doesn’t like it then that will generate a bug report on her code, and too many of those look bad come review time. Now QA is getting testy, too (no pun intended) because how are they supposed to test unimplemented features?

Four weeks later, after the PM has called the VP to schedule a JAD session, it comes out that:

“Requirement 666a: The system shall consider two addresses equal when, and only when, at least the following fields in the incoming data source (defined in subpart J of definitions document Foo) are Unicode (see addendum 6) character-by-character matches on a one-to-one basis… [long and winding road inserted here]… Further as documented in the ‘null-field coalesceable’ specification, STATE/PROVINCE is not a required field for this process as the system shall normalize the city and state by the postal code, which is required…”

Welcome to the Dilbert zone.
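Just to show how much machinery hides behind that one negotiated sentence, here's a hypothetical sketch of the rule in Python. The zip-to-state table, field names, and helper functions are all made up for illustration; none of this comes from an actual spec:

```python
# Hypothetical implementation of the negotiated rule: addresses match
# field by field, but a missing STATE/PROVINCE is "coalesced" -- filled
# in from the postal code -- before comparing. The lookup is a stub.
ZIP_TO_STATE = {"10001": "NY", "94105": "CA"}   # illustrative stub only

def normalize(addr):
    addr = dict(addr)                  # don't mutate the caller's copy
    if not addr.get("state"):
        addr["state"] = ZIP_TO_STATE.get(addr.get("zip"))
    return addr

def addresses_equal(a, b):
    fields = ("number", "street", "city", "state", "zip")
    a, b = normalize(a), normalize(b)
    return all(a.get(f) == b.get(f) for f in fields)
```

A dozen lines for one clause of one requirement, and we haven't even touched Unicode comparison or the "long and winding road" elided from the quote.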

So sure, you say, we’re all familiar with this kind of frustration. What do we do about it? Well, I have an idea. Keep in mind that’s all it is. I’m not selling any snake oil. There’s no guarantee that this will work, no statistically significant findings from a controlled study to back it up. But I think it’s worth trying:

Arrange a meeting with the client, and get them to tell you how to make a peanut butter and jelly sandwich.

Since we as developers are paid to systematize the world on behalf of other people, we have to do a better job of educating our clients on both the value and the pitfalls of what we do. As long as the rain keeps falling, no one knows or cares about our chanting and dancing around with the chicken bones. Come drought time, the mystery of our profession is our undoing. We’ve always been unfathomable pinheads, but in times of systemic failure we’re the unfathomable pinheads who failed. I think it’s possible, and desirable, to give people a sense of what we do. And if you get to clown around and make a mess in the process, why not?

CodeSnipers Lives

Hey, if you've come across the site today, welcome! We're out of beta and not everything is working perfectly, but the bulk of it is. Regardless, we have some great content from some great people with a wealth of experience, skills, and knowledge.

You can read Alex's swift kick in the tail about software documentation entitled: We didn't need documentation back then!

Or, if you're looking for something a bit more technical this morning, you can read Joseph's review of Cross-Platform GUI Programming with wxWidgets.

Or, if you're not the least bit interested in the code and tools but are concerned about representing and helping the users, you can check out Duane's entry on listening to user feedback entitled: Pave the cowpath.

This is just a small sample of the content available here and with a bit of exploration, you'll find quite a bit more. We have some sharp people sharing quite a bit of wisdom.

And don't forget, if you are particularly interested in a contributor here, they have all written introductory entries and you can check out their personal blog.

Cross-Platform GUI Programming with wxWidgets (book review)

If you spend time writing applications targeting several platforms like Windows/Unix/Mac, or even embedded platforms like Pocket PC (WinCE), then no doubt you have come across an open source widget toolkit called wxWidgets.

Book : Cross-Platform GUI Programming with wxWidgets
By : Julian Smart, Kevin Hock, Stefan Csomor

Cross-Platform GUI Programming with wxWidgets was published recently, on July 25, 2005, and is a must for any cross-platform UI developer. The goal of wxWidgets is not to replace toolkits such as MFC, Motif, or GTK+, but to work above them. For a cross-platform developer, this is a saving grace: I can now build advanced cross-platform applications that have a native look and feel on the target platforms.

Perhaps, like many, if you are unaware of wxWidgets (also referred to simply as wx), you may think it's just another GUI toolkit, and the problem with that is making people relearn GUIs, which can lead to frustration. Even Joel Spolsky cautions about user frustration with UI design in his classic User Interface Design book. Let's face it, almost everyone has experienced Microsoft's Windows UI, so we have come to accept and expect certain Microsoft UI formalities, like resizing a window by dragging its edge. In most other UIs, like Motif, that would just move the window and cause user frustration. The last thing we need is to relearn another UI. Wx resolves this by using the native underlying UI on each platform - MFC for a Windows application, for example - so the end result is an application that really is using MFC but was programmed with wx.

A common problem with a new UI toolkit is that it is incomplete compared to something like MFC. Most toolkits don't include networking functions, memory management, database functions, or advanced graphics engines. Wx is very complete, with ODBC functions and OpenGL routines. KiCad is an open source CAD package built with wx - how amazing is that!

Overall, this book is a simple read. It is geared towards C/C++ programmers, and if you have any experience with other GUI toolkits such as MFC or OWL, you will quickly excel in applying it. You can also use wx from a variety of other languages, such as Python, Perl, Basic, Lua, Eiffel, Java, JavaScript, Ruby, Haskell and C#.

I don't really have too many cons against the book, other than that it's really a beginner-to-intermediate introduction to wx. If you're a wx expert, you can still benefit from it; I especially liked the multithreading section.

Also not covered in the book are wx's macros, which contain really nice features that allow dynamic classes to be defined at runtime and imported via dynamic libraries. They make for some cool plugin technology for those who don't wish to make their application open source and yet want users to extend its UI via plugins.

We didn't need documentation back then!

Ever been in that situation? A product has been in development, from the ground up for three to four years. There have been several sales and the product is overall regarded as rather successful. In fact, it grows much bigger than initially expected. An infrastructure forms around it, staffing demands grow, etc. etc. until at some point the company is big enough that an actual HR department is justified. Then, the core developer decides to move on and is off to greener pastures. Ouch.

It gets worse: he or she never bothered to create any real documentation of the architecture or any important design decisions. It's all in their head.


I overheard a similar description the other day. Interestingly, the speaker concluded his story with the following statement: "We did not need to create documentation for anything back then." Nodding, chuckling ... then the conversation moved on to other subjects.

I could not forget about this, though. On the one hand, I do empathize with the spirit: you're running a high-tech startup, so you work hard, really hard, to get out that product. You have only one or two people hacking away on the code. You're exploring new territory and have to hurry, because the competition could enter your market any moment. You really a) have no time for documenting, b) see no reason to, because the application is far from complex at that point, and finally c) only need a minimum of documentation, simply because there is no large team involved and thus pretty few things to be communicated. No time for breaks, because you and your team are busy inventing the future and selling the half-ready program to cash-wielding prospects who would prefer to have the final result tomorrow. It's a pretty exciting environment.

Not creating effective documentation in that early stage of a product is a horrible mistake.

Here are just some reasons for this:

  • As more and more requirements are revealed, software tends to gain, not lose, complexity.
  • The software (or rather, the code base) will end up being in use far longer than the original developers could possibly anticipate.
  • Teams will grow, introducing greater communication overhead. It can be a very painful process trying to rediscover the existing knowledge about architecture, design or features.
  • People will move on. They may move within the company or sign on somewhere else. Either way, the transition can be quite slow and inefficient. If the former main developer is suddenly busy running the company, he won't have much time to explain database table structures.
  • The early knowledge may well be core knowledge, if the early development focused on the most important features of the application.

There is of course no easy answer, because some of those earlier objections may well be quite valid - for some time, anyway. Then you'd better have a plan for transitioning smoothly to a good documentation process. Otherwise you might just find yourself in an unlucky scenario five years down the road.