“We know the network is slow and there are probably a thousand problems in the application.”
I heard this from a customer recently. The problem was, it was all assumption and hearsay – they had no tooling, no analysis, and had in no way quantified those statements. Even worse, parts of the application environment were considered “out of scope” for the analysis project, because “obviously those parts *should* work.” Let me tell you a story about assumptions.
In December of 2008, I did a project in a hardware vendor’s showcase lab with a major national rental car chain. The assembled team had been working for more than a month to make the rental pricing application perform, with no success. It was an ideal setting – access to multi-million dollar top-of-the-line hardware systems, a complete clean room environment. I was called in because the performance was miserable – the application could barely handle 5 users and transactions were running for minutes at a time.
In less than a week, we were done. There were two very obvious glaring issues where assumptions were made.
1) The customer’s application was running on IBM WebSphere Application Server, but no one had sanity-checked that the installation was even functional – they just kept hitting it with traffic and assumed that since it responded, it was “working.” A quick reinstall fixed what turned out to be a failed installation – because “obviously” an install wouldn’t fail, right? Checking that assumption made all the difference in the world.
2) Once the application was actually running, we observed a lot of exceptions being thrown related to a static value lookup. These had been observed previously, but no one thought they were important. Just by caching that static value and ending the stream of exceptions, we cut several seconds off every request – a major performance improvement.
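The pattern behind that second fix is worth sketching. The names and values below are hypothetical – the original source isn’t shown – but the shape is the same: a value that never changes was being looked up (and failing, with an exception thrown and swallowed) on every single request. Resolving it once and caching the result removes both the lookup cost and the far larger cost of constructing and handling exceptions per request:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StaticValueCache {
    // Cache for values that are static for the life of the process.
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    // Stand-in for the real lookup, which was expensive and threw an
    // exception on every call for certain keys (hypothetical example).
    static String expensiveLookup(String key) {
        if (!"region.code".equals(key)) {
            throw new IllegalStateException("lookup failed for " + key);
        }
        return "US-EAST";
    }

    // Resolve at most once per key; every later request is a cheap map hit.
    public static String get(String key) {
        return CACHE.computeIfAbsent(key, k -> {
            try {
                return expensiveLookup(k);
            } catch (RuntimeException e) {
                // Cache a default so the failure isn't re-raised per request.
                return "UNKNOWN";
            }
        });
    }

    public static void main(String[] args) {
        System.out.println(StaticValueCache.get("region.code"));
        // Second call is served from the cache; no second lookup occurs.
        System.out.println(StaticValueCache.get("region.code"));
    }
}
```

Exceptions are expensive to construct (stack-trace capture) even when they are caught and ignored, which is why silencing a per-request exception stream can shave whole seconds off a transaction.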
Assume nothing you can’t or haven’t measured.