# Skip the first solution(s)

Some people are really fast. They receive a problem set, spend a short time pondering, quickly come up with a solution and then sit down at their computer to begin coding. Sometime later, give or take some trial and error, the code is complete, maybe even tested and then submitted to source control and QA for black box testing. Managers and other bystanders happily point out: This is one fast developer!

Here's the problem: Fast development does not necessarily mean good development. Joel Spolsky hinted at that in his essay Hitting the High Notes: the quality of the work and the amount of time spent are simply uncorrelated.

Now, I would guess that the really good people (no matter how fast they are) understand that the first solution is not necessarily the best one, even if it does produce the correct result. This means that even after arriving at a solution, one does not stop, but rather looks at other approaches, which just might turn out to be much, much better.

For the sake of a simple example, let's think about Fibonacci numbers. That's right: you may have last heard about them in some algorithms and data structures class and probably never had to deal with them since.

In short, Fibonacci numbers are defined by F(0) = 0, F(1) = 1, and F(i) = F(i-2) + F(i-1) for every i > 1. The series of Fibonacci numbers therefore starts like this: 0, 1, 1, 2, 3, 5, 8, 13, ...

Hm, a quick glance reveals that for every input greater than 1, we need to know the two Fibonacci numbers before it. That's clearly recursive!

The implementation might follow something like this then:
```
function fibRec(long input): long
    if input < 2
        return input
    else
        return fibRec(input - 2) + fibRec(input - 1)
end function
```

Now again, this is the first and quick solution. The problem is that it is also an extremely inefficient one. Yes, it returns the correct result, but for increasingly big input values, the running time quickly becomes absolutely prohibitive. In fact, the running time grows exponentially: depending on your computer's performance and your compiler implementation, you will probably lose patience waiting for the result of, say, fibRec(60).
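To make the blow-up tangible, here is a sketch of the recursive pseudocode in Python (the function and counter names are mine), instrumented to count how many calls the naive recursion makes:

```python
def fib_rec(n, counter):
    """Naive recursive Fibonacci; counter[0] accumulates the total call count."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_rec(n - 2, counter) + fib_rec(n - 1, counter)

# The call count roughly doubles every time n grows by one or two:
for n in (10, 20, 30):
    calls = [0]
    result = fib_rec(n, calls)
    print(f"fib({n}) = {result}, computed with {calls[0]} calls")
```

Already at n = 30 the function is called millions of times, because the same subproblems are recomputed over and over.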

This can be vastly improved by using an iterative approach:

```
function fibIter(long input): long
    long prev = -1
    long result = 1
    long sum = 0
    for each number i from 0 to input
        sum = result + prev
        prev = result
        result = sum
    end for
    return result
end function
```

The running time of this algorithm is linear - much faster than the first approach. This is a very solid approach, even for pretty big input values.
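The iterative pseudocode translates almost line for line into, say, Python (the function name is mine); each loop step advances the pair of most recent Fibonacci values once:

```python
def fib_iter(n):
    """Iterative Fibonacci in O(n) time, mirroring the pseudocode above."""
    prev, result = -1, 1
    for _ in range(n + 1):  # "from 0 to input" means n + 1 steps
        prev, result = result, result + prev
    return result
```

Because only the two most recent values are kept, this also uses constant space, unlike the naive recursion's deep call stack.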

It takes a little more code than the recursive solution (i.e. writing the recursive routine might be faster), but it is similarly straightforward. Spending some more time and thought can yield an even better solution for this problem, one with a logarithmic running time, outperforming the straightforward iterative approach. Feel free to google this; I am sure it can be found pretty easily.
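One such logarithmic approach (there are others, such as matrix exponentiation) is the fast-doubling method, which uses the identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)². A sketch in Python, with names of my own choosing:

```python
def fib_fast(n):
    """Fast-doubling Fibonacci: returns the pair (F(n), F(n+1)) in O(log n) steps."""
    if n == 0:
        return (0, 1)
    a, b = fib_fast(n // 2)   # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)       # F(2k)
    d = a * a + b * b         # F(2k + 1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

def fib(n):
    return fib_fast(n)[0]
```

Here fib(60), which the naive recursion chokes on, is answered instantly, since only about log2(60) doubling steps are needed.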

The point of all this: I believe we can treat it as a general principle. Part of being good at what you do is not stopping at the first solution that presents itself, but always looking for alternative ways. This may often be easier said than done, considering schedule pressures and so on, but it has definitely been my experience that the first solution is most often not the best one. And even if it is, there's no way of knowing unless we can compare it with others.

### I agree, sort of

Code should be optimized where it's really necessary. In the given example, I would argue that yes, it is very necessary, because the runtime can otherwise be rather abysmal. I don't think we're talking about seconds here either.

Readability is of course a very important issue - code tends to be read more often than anyone might expect at the time of writing. The more complicated an algorithm implementation becomes, the more (effective) comments are hopefully in place ... :D