Micro ISV Mistake #2

This is the second in a series of posts on common Micro ISV mistakes. Without repeating my entire introduction from the first article, these mistakes are based on lessons learned over the last few years as I started my company, developed our products, and then tried to sell them. Discussion is, as always, welcomed and encouraged in the comments.

Mistake #2 is trying to be all things to all people.

This is a very easy mistake to make. Even if you don’t consciously choose to build a general-purpose product from the outset, it creeps up on you. You start with a simple idea, identify a neat little market, plan your development, and know exactly where you’re going. Then you spot another market you could also serve with just a little extra development, or you decide that if you spent a little more time, you wouldn’t need to limit yourself to that niche; you could attack the entire market. Even if you don’t think of it yourself, someone - an advisor, friend, relative, customer, or potential sales partner - will suggest it to you. It can be seductive, and it can trick you into feeling like a genius: you’ve gone from a small market to a massive market and you’re just getting started. In reality, you’ve made a terrible mistake.

Challenge: Micro ISV Mistake #1

As the distinguished gentleman Gavin pointed out yesterday in Micro ISV Mistake #1 (you might want to read that first), he believes "thinking your idea needs to be kept secret" is a mistake. Initially I was fully supportive of this position - I'm following it myself - but then I got to thinking about it more as I followed some of the comments.

Ben, Duane, and Chas all pointed out simple scenarios where the opposite applied. Their points boiled down to variations of: prior to your alpha, keep your mouth shut. And I think they're correct.

Everyone knows that execution is everything and that an idea in itself doesn't have much value. This is why patents in the US require a design that *should* work. The concept of the Space Elevator has been around for a century, but only in recent years has it begun to look feasible. The concepts of human flight, tanks, etc. go all the way back to Leonardo da Vinci, but it's only in the last century that these concepts have taken form.

When an idea is embryonic - without an implementation, a design, or even a fleshed-out concept - I believe that it makes perfect sense to keep your mouth shut. Putting your idea out there will get you some early feedback, but it also begins to level the playing field.

In this sort of scenario, I would start discussing my ideas publicly if I had one or more of the following:

  • Personal knowledge, involvement, or connections which would be difficult for another to gain quickly (read: weeks or months).
  • Resources to devote fully and completely to the task very soon.
  • Support and/or endorsement from a large player in my target market or industry (read: if Steve Jobs, Bill Gates, the Pragmatic Programmers, Tim O'Reilly, etc endorsed me, my company, my product, or my idea, I'd make *sure* that you read about it).
  • A market where being an early mover is a huge factor in success; in that case, I'd start talking soon and loud.

On the other hand, some things that would make me stay a bit quieter:

  • If I don't have anything besides an idea. Without an execution plan, there's not much value, and the idea would be easy to duplicate.
  • If all of the pieces or concepts were available and my idea was a new way of putting them together. This might lend itself to fast duplication... which may make it a weak idea anyway.
  • If my idea, code, or business was wholly dependent on using information or APIs from another group such as Amazon, Google, etc. Asking forgiveness...

Just some food for thought.

Agile Methods need a different mindset

I came across a great article today from Agile Project Planning called "Do agile software teams make more mistakes?", and the author points to the obvious answer of "Yes!" But then he goes into a detailed explanation describing how success quite often comes about because of failure, not in spite of it. True points all, and I completely agree, but I'm a bit more pragmatic about it.

As an advocate of Agile Methods, I wholeheartedly agree with the idea of fast, tight loops for customer feedback, design reviews, code reviews, error correction, etc. But in most development, the customer already has a rough idea of their goal, and it's your job to figure it out. If you have an excellent requirements gathering/digging team, you probably already have a list of requirements, but it's nowhere near complete, regardless of what the percentage next to the task says. There will always be more to do because of incomplete requirements, unclear requirements, misunderstandings, new customers, etc.

It's an iterative process, so your development should be too.

For example, I live just outside Washington, DC and grew up outside of Chicago. When I drive back, I essentially get on I-70 westbound, turn northwest at Indianapolis, and turn west again at the appointed place. Little thought is involved because the path is clear, I've driven it quite a few times, and it's major Interstates almost the entire way.

On the other hand, if I go to visit the highly esteemed Duane (a fellow CodeSniper), it takes a bit more effort. I've only made the drive to the area a few times, there are more turns involved, there are state highways involved, and it's to an area that I'm less familiar with. Therefore a completely different mindset is involved:
* I plan the route by looking at the map in advance.
* I keep an eye out for landmarks to mark my progress.
* I watch the odometer and clock to compare actual driving time against my plan.
* And as I get closer, I give him a call to get any last minute changes, detours, etc that might affect my decisions.

When you know exactly where you're going, how to get there, approximately how long it should take, and you've done it numerous times, a waterfall method might work.

If you're unfamiliar with the destination, the route, the effort, and you've done it less than ten times, you need ways of evaluating your progress, establishing baselines and adjusting your route as you go.

This is why I use Agile Methods... because even when I "know" the route, there are a million things that can happen along the way.

Micro ISV Mistake #1

I’ve decided to kick off my regular posting here on CodeSnipers with a short series on common Micro ISV mistakes. In making my initial list, I’ve stuck to mistakes I’ve made myself in the last few years. I’m not ruling out the possibility of adding to the list as the series and discussion go on, but if that happens, I might be relying on observations; I’m not promising to test drive every mistake for you.

Some of you may argue that not all of my “mistakes” are really mistakes, and you might have a good point, but I firmly believe that in my situation each was the wrong thing to do, so I don’t have any qualms about calling any of them mistakes. If you do have a different perspective, I hope you’ll stay and discuss it in the comments.

Skip the first solution(s)

Some people are really fast. They receive a problem set, spend a short time pondering, quickly come up with a solution, and then sit down at their computer to begin coding. Some time later, give or take some trial and error, the code is complete, maybe even tested, and then submitted to source control and QA for black box testing. Managers and other bystanders happily point out: this is one fast developer!

Here's the problem: fast development does not necessarily mean good development. Joel Spolsky hinted at that in his Hitting the High Notes: the quality of the work and the amount of time spent are simply uncorrelated.

Now, I would guess that the really good people (no matter how fast they are) understand that the first solution is not necessarily the best one, even if it does produce the correct result. This means that even after arriving at a solution, one does not stop, but rather looks at other approaches, which just might turn out to be much, much better.

For the sake of a simple example, let's think about Fibonacci numbers. That's right - you may have heard about them last in some algorithms and data structures class and probably never had to deal with them since.

In short, Fibonacci numbers have these values: F(0) = 0, F(1) = 1, and F(i) = F(i-2) + F(i-1) for every i > 1. The series of Fibonacci numbers then starts like this: 0, 1, 1, 2, 3, 5, 8, 13, ...

Hm, so a quick glance reveals that for every Fibonacci number with an index greater than 1, we will need to know the values of the two Fibonacci numbers before it. That's clearly recursive!

In C++, the implementation might look something like this:

// Naive recursive version, straight from the definition.
long fibRec(long input) {
    if (input < 2)
        return input;
    return fibRec(input - 2) + fibRec(input - 1);
}

Now again, this is the first, quick solution. The problem is that it is also an extremely inefficient one. Yes, it returns the correct result, but for increasingly big input values, the running time quickly becomes absolutely prohibitive. In fact, the running time grows exponentially: depending on your computer's performance and your compiler implementation, you will probably start losing patience waiting for the result of, say, fibRec(60).

This can be vastly improved by using an iterative approach:

// Iterative version: builds the sequence from the bottom up in linear time.
long fibIter(long input) {
    long prev = -1;   // seeded so that the first pass yields F(0) = 0
    long result = 1;
    long sum = 0;
    for (long i = 0; i <= input; i++) {
        sum = result + prev;   // the next number in the sequence
        prev = result;
        result = sum;
    }
    return result;
}

The running time of this algorithm is linear - much faster than the first approach. This is a very solid approach, even for pretty big input values.

It takes a little more code than the recursive solution (i.e., writing the recursive routine might be faster), but it is similarly straightforward. Spending some more time and thought can turn up an even better solution for this problem - one with a logarithmic running time, outperforming the straightforward iterative approach. Feel free to google this; I am sure it can be found pretty easily.
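To give a taste of it, here is a sketch of one such logarithmic solution, the "fast doubling" method, built on the standard identities F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2. The function names are mine, and this is just one way to get there:

#include <utility>

// Returns the pair (F(n), F(n+1)); each call halves n, so the running
// time is logarithmic. Note long long overflows past n = 92.
std::pair<long long, long long> fibPair(long long n) {
    if (n == 0)
        return std::make_pair(0LL, 1LL);
    std::pair<long long, long long> half = fibPair(n / 2);
    long long a = half.first;        // F(k), with k = n / 2
    long long b = half.second;       // F(k+1)
    long long c = a * (2 * b - a);   // F(2k)
    long long d = a * a + b * b;     // F(2k+1)
    if (n % 2 == 0)
        return std::make_pair(c, d);
    return std::make_pair(d, c + d);
}

long long fibLog(long long n) {
    return fibPair(n).first;
}

Each call halves n, so only about log2(n) levels of recursion are needed - a nice illustration that the third solution can beat the second just as soundly as the second beat the first.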

The point of all this: I think this really is a general principle. Part of being good at what you do is not stopping at the first solution that presents itself, but always looking for alternative ways. This may often be easier said than done, considering schedule pressures, etc., but it has definitely been my experience that the first solution is most often not the best one. And even if it is, there's no way of knowing unless we can compare it with others.

Unicows.dll Removes Unicode

UnicoWS.dll does not Unicode-enable your program on Win9x. It does not add Unicode to Win9x (it works by converting Unicode to ANSI, i.e. removing Unicode). However, what unicows.dll does is extremely useful: it allows your new Unicode (wide char) program to "work" on old Win9x/ME machines that don't support the Unicode APIs.

Because of the fog of trepidation surrounding Unicode/charset issues, I think the Microsoft announcement of MSLU made the right choice in not overemphasizing what MSLU, the "Microsoft Layer for Unicode on Windows" (the unicows lib and dll), does not do. They let the discerning reader figure it out somewhere down in the article, where they say it is "not a panacea" and vaguely spell it out under the second point made there. Hey, you can't start out by shouting about what your product does not do just because people might assume it does.

I have never used unicows, but every time I hear about it, people seem to suggest that it gives you Unicode on Win9x, so I take the bait and research it, only to be reminded again of what it actually is. What set me off to write this article was a response to a post on JoS as follows:

Does it make sense to *not* compile for Unicode these days? The old excuse was "Unicode doesn't work on Win9x", but now it does as long as you link in the MSLU library. So why not compile for unicode right off, and just forget that ANSI/MBCS exists? - Chris Tavares

If you slightly reworded it to 'The old excuse was "Unicode builds don't run on Win9x"', then I would agree, and to be fair that may be what the poster meant. But whenever I hear about unicows, it often seems to come across as adding Unicode to Win9x.

On old Win95, 98, and ME boxes, your unicows-linked (wide char _UNICODE) program will generally operate in the ANSI locale code page for anything involving Windows messaging and APIs. That means all the filename and directory APIs, text on Windows controls, etc. Characters not supported in the system ANSI code page will be lost when they are passed into or retrieved from APIs.
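To see the kind of loss I mean, here is a minimal sketch (my own, not MSLU code; it just performs the same Unicode-to-ANSI conversion the layer has to do internally):

#include <windows.h>
#include <stdio.h>

int main() {
    // Two Japanese characters ("Nihon") - not representable in the
    // Windows-1252 ANSI code page of a typical Western machine.
    const wchar_t wide[] = L"\u65E5\u672C";
    char ansi[16];
    // Converting through the system ANSI code page (CP_ACP) replaces
    // them with the default character, so this prints "??".
    WideCharToMultiByte(CP_ACP, 0, wide, -1, ansi, sizeof(ansi), NULL, NULL);
    printf("%s\n", ansi);
    return 0;
}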

But this is not as bad as it sounds because users generally don't have multilingual expectations on those old machines (not to mention fonts were limited). So MSLU answers the main concern of having a single executable that works on all Win32 platforms.

I can't write an article about unicows without mentioning one of my 3 favorite bloggers, Michael Kaplan, who as far as I can tell is the guy behind MSLU (though he gives credit to others) and is a great presence on newsgroups, always humble and helpful (and his site has a section on MSLU). He was also mentioned by Joel Spolsky recently as the guy who helped Joel figure this Unicode thing out (some time ago).

The reason I am sensitive to this unicows issue is that I have worked with Unicode on Win9x. There are a couple of Wide Char APIs that have been available since Windows 95, namely GetTextExtentPoint32W and TextOutW, and I built an edit control that works internally in UTF-8 using these APIs, but that is another story.
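For reference, those two calls in use look roughly like this (a minimal sketch of my own, with all the window plumbing omitted):

#include <windows.h>

// Measure and draw a Unicode string; hdc would come from BeginPaint
// in a WM_PAINT handler. Both of these W functions are implemented
// even on Win9x, unlike most of the wide-char API.
void paintUnicodeLine(HDC hdc, const wchar_t* text, int len) {
    SIZE extent;
    GetTextExtentPoint32W(hdc, text, len, &extent);
    TextOutW(hdc, 10, 10, text, len);
    // extent.cx / extent.cy could drive caret placement or wrapping.
}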

Are bad Requirements our fault?

Over the past week or so, I've been reading Marcus Buckingham's First, Break All the Rules. I caught him at a conference last year and finally got around to picking up both of his books, so I've been looking forward to this.

Anyway, I'm approximately halfway through it, and while I was reading this morning, I was struck by something. Let me quote:

Conventional wisdom asserts that good is the opposite of bad, that if you want to understand excellence, you should investigate failure and then invert it.

He goes on to support this by pointing out that we tend to study things we want to prevent: student truancy, drug use, etc. And I found this interesting when applied to some of my recent projects. Everyone tends to focus on "what went wrong and how do we fix it?" instead of "what went right and how do we do more of it?"

As is common in the software world, we complain quite a bit about not having a specification. We talk about it to anyone who will listen... and I was actually doing it this morning. Maybe we need to change our perspective a bit. As John pointed out in his recent post, most of our customers - and even we - take things for granted: how things should work, how we perceive the world, and what we expect from our people and our tools. When we are working with something as intangible and complex as software, we need to focus on making things a bit more tangible and descriptive for our users. We have to make sure that they can understand some of the key things... not *how* we're doing something, but what the tradeoffs are.

I believe that we need to educate our customers on not only what is possible, but what is impossible given their constraints. Not only could this help clarify requirements, but it might completely eliminate silly requirements like "Build an auction system on par with eBay".

IT Death Match

Today, I got a chance to put my money where my mouth is. I work in a small development division at a large consulting company that has a protectionist IT department. IT wants to protect us from everything, especially ourselves, which more often than not results in impeding billable workers from getting billable work done.

In the post linked above, I stated that IT divisions at consulting companies should be avoided whenever possible. Have a project that needs a website? Don't post it on the company's web server; register a domain at Go Daddy and host it at Webhost4Life.

IT at any company that is not in the business of doing IT is like a 360-pound swamp creature, complete with slime and tentacles, that pulls you into the mire anytime you get within 39 feet of where it wallows. Why deal with that mucky mess when you could use a professional IT company like Webhost4Life that will treat you like a customer (not a hostage, intruder, villain, or miscreant) and charge you a mere $10/month? (Note: I have no affiliation with the linked professional IT companies, just direct experience with them.)

So today I get a call from my customer. She has sent the reviewers a link to the project website, which is still on our internal web server, but the site has stopped serving all ASP.NET pages. Last week, we had set up an account at Webhost4Life to host the website, but the FTP access wasn't working, and I needed them to set up MS Index Server.

Let the death match begin! IT Swamp Thing vs. IT Pros

I called up our internal help desk, which goes by the acronym TAC (Total Assistance Center - ha!), to get the ball rolling there. The dude I talked to had never heard of an .aspx page - Bong! One point to IT Pro. After I told him that the customer was calling and was unhappy, he actually told me he was going to assign the issue the lowest priority - Bong! Two points for IT Pro! But I persuaded him to increase it one level, which ensured I would get a call back the same day.

Meanwhile, I had opened up an online help session and was typing questions to Marina over at Webhost4Life to get the FTP problem cleared up and the Index Server configured. She couldn't get the FTP site to work with FileZilla - Bong! One point for IT Swamp Thing - but she did set it up so that it would work through IE. Next, I sent her the setup configuration for the Index Server, but she had me browse to a web page to enter a help ticket instead and gave me an indefinite "someone will get back to you" answer - Bong! One point for Swampy.

By the time I left work, the two ITs were tied two-to-two. Even with similar results from both help desks, going with IT Pro is still my best choice, because they:

  1. Let me have a website that I can FTP into (IT Slime prohibits this)
  2. Monitor the websites 24/7 and have disaster recovery plans
  3. Let me build a .Net page that can upload any file type (IT Stinky prohibits anything Outlook wouldn't approve of)
  4. Treat me like a customer, not a renegade script monkey.

Strange case of two system locale ANSI charsets

Are you technically familiar with how system locale ANSI charsets work on Windows? I thought I knew enough to get by... until recently. You may know that there are two primary ANSI locale settings, one of which requires a reboot, but did you know that, when it comes to the distinction between these two settings, the MSDN docs get it wrong, the Delphi 7 core ANSI functions get it wrong, and you cannot set the system code page in C/C++ using setlocale?

In Windows XP's Regional and Language Options, you can set "Standards and formats" on the first tab and "Language for non-Unicode programs" on the third tab; the latter requires a reboot (it is similar on previous Windows OSes). The weird thing is that both of these mess with the ANSI locale code page.

Windows APIs are built around two types of text strings, ANSI and UNICODE. The UNICODE charset (Wide Char) is pretty straightforward, because it is not affected by the locale settings and language. The ANSI charset always supports the 128 ASCII values, but can have different ways of utilizing the high bit of the byte to support additional characters. In single-byte charsets, the upper 128 values are assigned to additional characters like European accented characters, Cyrillic, or Greek characters. In East Asian double-byte charsets, special lead byte values in the upper 128 are followed by a second byte to complete the character. "Double-byte" is actually a misleading term, because characters in double-byte strings can use either one or two bytes. An ANSI charset is implemented as a "code page" which specifies the encoding system for that charset.

The ANSI charset used by the computer changes according to the locale the computer is configured to, but what is the computer's locale? Well, no one is terribly clear on that! The Windows API GetLocaleInfo allows you to get information about either the "default system locale" or the "default user locale." The MSDN article then goes on to refer to the "current system default locale" and the "default ANSI code page for the LCID," as opposed to the "system default–ANSI code page." I have yet to discover how the user/system differentiation works, although presumably user logons retain certain aspects of the Regional and Language Options. Anyway, I would say it is anything but clear.

According to MSDN for Microsoft C++, a C program can use setlocale( LC_ALL, "" ) to set itself to use the "system-default ANSI code page obtained from the operating system" rather than plain ASCII, and then all multi-byte string functions will also operate with that code page. However, it turns out that this code page is actually the one from "Standards and formats" in the computer's Regional and Language Options. I call this the "setlocale" charset.

Meanwhile, all ANSI Windows APIs and messages operate according to the ANSI code page from the "Language for non-Unicode programs" setting. This setting governs the real ANSI system locale code page which you can find out with the GetACP Win32 function. This is the default code page used in MultiByteToWideChar when you specify CP_ACP. I call this the "GetACP" charset.
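To make the distinction concrete, here is a tiny probe program (my own sketch). On a machine configured with U.S. English "Standards and formats" and Japanese as the "Language for non-Unicode programs", the two lines disagree:

#include <windows.h>
#include <locale.h>
#include <stdio.h>

int main() {
    // The "setlocale" charset, driven by "Standards and formats":
    // prints something like "English_United States.1252".
    printf("setlocale: %s\n", setlocale(LC_ALL, ""));
    // The "GetACP" charset, driven by "Language for non-Unicode
    // programs": prints 932 (Shift-JIS) in the scenario above.
    printf("GetACP: %u\n", GetACP());
    return 0;
}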

When these two code pages are different - say, a U.S. English setlocale charset and a Japanese GetACP charset - many programs used internationally exhibit bugs you won't see otherwise. For example, the Delphi core source SysUtils.pas uses a SysLocale based on the setlocale charset in many of its Ansi string functions like AnsiPos and CharLength, while implicit ANSI/WideString conversions and other Ansi functions like AnsiUpperCase happen according to the GetACP charset.

Even WinZip prior to release 8.1 did not parse pathnames correctly with different setlocale and GetACP charsets. WinZip couldn't zip Japanese filenames containing an ASCII backslash as the second byte of a Shift-JIS character, because the string functions were treating the double-byte ANSI strings as single-byte ANSI (Windows-1252) strings.
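The failure mode is easy to reproduce in a few lines (my own sketch, assuming code page 932 is installed; the katakana character "so" is the byte pair 0x83 0x5C in Shift-JIS, and 0x5C is the ASCII backslash):

#include <windows.h>
#include <string.h>
#include <stdio.h>

int main() {
    // "C:\<katakana so>\file.txt" encoded in Shift-JIS (code page 932).
    const char path[] = "C:\\\x83\x5C\\file.txt";
    // A byte-wise scan mistakes the trail byte 0x5C for a separator:
    const char* naive = strchr(path + 3, '\\');
    printf("naive scan stops at offset %d - a trail byte!\n",
           (int)(naive - path));
    // A code-page-aware walk steps over the whole two-byte character:
    for (const char* p = path; *p != '\0'; p = CharNextExA(932, p, 0)) {
        if (*p == '\\')
            printf("real separator at offset %d\n", (int)(p - path));
    }
    return 0;
}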

There is a reason that Windows has these two system charsets. The locale info for a locale's "Standards and formats" is provided via the ANSI API in a particular charset (is this what MSDN vaguely referred to as the "default ANSI code page for the LCID"?). The OS cannot provide Japanese standards and formats, such as weekday names, in a Western European ANSI charset. So a programmer is supposed to interpret that locale info's text strings according to the locale info's charset, even if the machine locale charset is different. But I have not found this documented properly, and I don't think many people know about it. Delphi got it wrong, and the Microsoft C++ documentation is not clear on it. I think at this point, Microsoft developers are inclined to forget about these issues and focus on Unicode.

Experiences and clarifications are welcome!

How to Make a Peanut Butter and Jelly Sandwich

Stupid Questions and Software Development

Back in college I was a math tutor. A prerequisite for all tutors at this particular college was a joyous seminar on the “problems of learning and instruction” – a brief glimpse into just how maddening the enterprise of explaining things to people can be. Most of it seemed like tedium, and painfully obvious. It was no surprise to the assembled tutors-to-be that some people just don’t get math. Of course – why else would someone seek out tutoring?

I remember nursing a genuine resentment at being forced to waste two weekends listening to lecturers drone on and on about ‘cognitive gaps’ and ‘leaps of inference’ and the like when it was eminently clear that none of this would aid us, the tutors, or our charges. Really, if years of all this pinheaded theorizing hadn’t made, oh, say, the professors of math courses any more effective as teachers, what chance did four days of it have for us?

Then came a little ‘interactive’ session entitled ‘How to Make a Peanut Butter and Jelly Sandwich’. The premise was simple. The audience was to verbally instruct one of the seminar’s facilitators, step-by-step, in the process of crafting a fine, and presumably edible, PB&J. He had everything he needed arranged on a table: peanut butter, jelly, a loaf of Wonder Bread, and a knife. Oh, dear, thought I. Now we get the sad spectacle of a man pretending to be as stupid as a bag of hammers in order to frustrate our attempt to walk him through making lunch.

And sure enough, the facilitator played it to the hilt. What was surprising was that this little exercise was actually enjoyable; hilarious even, as it became clear that this guy was experienced. He was good at what he did, and it made for fine comedy.

“Just start with two pieces of bread!” was the first instruction that coalesced out of the din of tutors offering simultaneous advice. The facilitator put on his best what-me-worry face and confidently gripped the bag of bread, held it aloft. Then consternation crossed his visage, signaling that the next phase of his mission wasn’t clear. Hmm… two pieces. Well, I’ve got this collection of bread pieces… hmm… I need two of them, but they’re in the bag… hmm.

“Open the bag!” Well, you lob a slowball like that, he has to hit it out of the park. A look of enlightenment came upon him as he vigorously shook the bag, but this, too, passed into confusion as the bag failed to yield its load of bread-like styrofoam. Much hooting and hollering ensued. Again, gripped by inspiration, he settled on a tried-and-true bag-opening tactic: he tore it open, violently. With purpose. With conviction. Bread flew from the bag. Some landed in his vicinity and he seized it triumphantly, mangling it further in the process.

“Now put some peanut butter on the bread!” At this point everyone was fully cognizant that this guy was going to misinterpret our instructions in some absurd manner; some of us just wanted to see exactly how absurd. Much laughter as he placed the whole unopened jar of Skippy on top of the bread with a self-satisfied flourish.

It went on and on; full minutes later, after we’d successfully negotiated the opening of the peanut butter jar, we’d told him to get some peanut butter on the bread by way of first getting peanut butter on the knife and then smearing the knife on the bread. See, we thought we’d out-dumbed him, but such was not the case as he viciously plunged the knife into the jar, punching out the bottom and sending a mess of broken glass and hydrogenated oils to the floor.

Well, I’ve bored you enough with the details. Suffice it to say that it took a solid hour to finally get this guy to make a frickin’ sammich. So where am I going with this, and what does it have to do with requirements gathering? Everything.

Because, when you’re writing software, you’re “sitting in a chair telling a box what to do,” as someone on JOS put it, and that box is pretty dumb. Because, essentially (and particularly in a business domain) you’re trying to instruct something a lot dumber than a man pretending to be stupid, and you’re trying to get it to do something a lot more complicated than making a PB&J. Another way of putting it is that successfully developing software necessitates dealing with a level of specificity that makes most people ill, insane, or both. Largely, this is a war of wills, and the decisive battle is usually in the requirements gathering phase.

As developers, we’ve probably all had the experience of requirements coming down from on high only to find that they are woefully incomplete and/or vague. That’s an old complaint and an old story. But it’s also a safe bet that we’ve all had the additional experience of running into what can charitably be called resistance when seeking clarification on requirements, and smoothing that dirt road is what I’m here to talk about.

Many approaches over the years have been offered. Agile/XP-type processes, with their emphasis on “user stories”, short cycles, and lots of end-user feedback, seem to work very well when they work, but, just like technologies, methodologies don’t exist in a vacuum. You usually can’t simply combine a methodology with a business environment like you combine acids and bases in a laboratory. Agile methods are great when everyone plays along, not so great when they don’t. Same goes for any methodology, big-M or small. The chief problem in any software development environment is cooperation—a people problem.

As developers, we’re quite aware of our strange mental inclinations. Whatever our differences as human beings, our similarity lies in our shared capacity to decompose the real world into a series of maddeningly specific steps. To a client’s business analyst, it’s a job well done to specify:

“Requirement 357a: The system shall, upon encountering an incoming address, increase the ‘returned from post office’ counter in the data warehouse for existing addresses equal to the incoming address.”

Yup, the analyst thinks, that’s all there is to it. Problem is, of course, the developer can’t compile that sentence into workable code. Now the developer is faced with the task of asking The Stupid Questions. Incoming address? Incoming from where? Where do we find it? How are two addresses reckoned to be “equal”? So on and so forth. The business analyst starts to get exasperated; he’s got, like, four tons of this crap to go through, and he’s not a mind reader either. Now he’s got to go directly to the client and ask The Stupid Questions, because come to think of it he’s not exactly sure of how the client determines when two addresses are equal—OK, they’ve got the same street, number, city and zip, but this one address is missing a state/province code… seems straightforward, they’re still equal, right? ‘Cos if you have the zip you know the state… jeeze! But that box, that stupid, stupid box, doesn’t know that, and now the analyst has to ask what the client wants, which makes him look like a moron, because he’s paid to figure this junk out, and he hasn’t done it, or so the client seems to suggest every time he walks into her office with another list of questions… So he ignores the developer’s email and hopes they’ll just do something right for a change. And the developer sends more email and things get testy because now the schedule is slipping because there’s still these unimplemented features because the developer doesn’t want to code them until the requirements are clear since if she does and the client doesn’t like it then that will generate a bug report on her code, and too many of those look bad come review time. Now QA is getting testy, too (no pun intended) because how are they supposed to test unimplemented features?

Four weeks later, after the PM has called the VP to schedule a JAD session, it comes out that:

“Requirement 666a: The system shall consider two addresses equal when, and only when, at least the following fields in the incoming data source (defined in subpart J of definitions document Foo) are Unicode (see addendum 6) character-by-character matches on a one-to-one basis… [long and winding road inserted here]… Further as documented in the ‘null-field coalesceable’ specification, STATE/PROVINCE is not a required field for this process as the system shall normalize the city and state by the postal code, which is required…”

Welcome to the Dilbert zone.

So sure, you say, we’re all familiar with this kind of frustration. What do we do about it? Well, I have an idea. Keep in mind that’s all it is. I’m not selling any snake oil. There’s no guarantee that this will work, no statistically significant findings from a controlled study to back it up. But I think it’s worth trying:

Arrange a meeting with the client, and get them to tell you how to make a peanut butter and jelly sandwich.

Since we as developers are paid to systematize the world on behalf of other people, we have to do a better job of educating our clients on both the value and the pitfalls of what we do. As long as the rain keeps falling, no one knows or cares about our chanting and dancing around with the chicken bones. Come drought time, the mystery of our profession is our undoing. We’ve always been unfathomable pinheads, but in times of systemic failure we’re the unfathomable pinheads who failed. I think it’s possible, and desirable, to give people a sense of what we do. And if you get to clown around and make a mess in the process, why not?