Is footprint still important for desktop applications?

I just read Tomas's blog entry titled "Is footprint still important for embedded devices?" His answer is a clear "Yes." Well, what about desktop applications? Why the heck are they so slow and memory-hungry?

Like Tomas, I bought my first computer in the late Seventies. I did a vacation job for a month and bought the computer together with a friend (50/50), because it was so expensive. It was a Commodore PET 2001 with 8 KB of RAM. Unlike Tomas, I programmed it in BASIC (assembly did not attract me). One of the first programs I got running was Joseph Weizenbaum's ELIZA: I got hold of a listing, typed it in, and modified it a bit. From then on I was fascinated by the art of programming: Modula-2, C++, Python, Java, a trend toward higher-level, "more expensive" languages and systems.

Ok, back to the question. Let's look at IDEs as an example. For some reason, IDEs have always stressed desktop systems. Twenty years ago I used EMACS. At that time people joked that EMACS was an acronym for "Extended Memory And Constantly Swapping". With SNiFF we had the same problem. And today, Eclipse stresses our desktops.

Yes, careful design and better algorithms and data structures can improve performance and footprint, but don't assume desktop application developers aren't already doing this. In all of the systems mentioned above, considerable effort went into performance and footprint improvements. The real problem lies elsewhere: identifying and fixing real hotspots is easy, but it is incredibly hard to optimize a complex system that has no real hotspots. Run a memory analyzer or a profiler on such a system and you will hardly find a hotspot that contributes more than 5-10% of the overall footprint. During development there are usually higher-priority tasks than optimizing a 5% problem.
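
To see why this is so frustrating in practice, here is a minimal sketch (my own illustration, not one of the tools mentioned above) of the crude before/after heap measurement you might try in Java. Any single component looks affordable when measured this way; the footprint only hurts in the aggregate of hundreds of them:

```java
// A minimal sketch: estimating the heap cost of one component by sampling
// used memory around its creation. The numbers are rough because the GC may
// run at any time, which is exactly why a "distributed footprint" is so hard
// to pin down with simple measurements.
public class FootprintProbe {

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.gc();                       // ask (not force) the GC to settle
        long before = usedHeap();

        // Stand-in for "one component among hundreds": a modest cache.
        java.util.Map<Integer, String> cache = new java.util.HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            cache.put(i, "entry-" + i);
        }

        System.gc();
        long after = usedHeap();
        System.out.printf("~%d KB for one component of many%n",
                (after - before) / 1024);

        // Keep the cache reachable so the GC cannot collect it before we measure.
        System.out.println(cache.size());
    }
}
```

Real memory analyzers are far more precise, but they report the same shape: many small contributors, no dominant one.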

Let's look at Eclipse. Where does this "distributed footprint problem" come from? Is it Java? No, Java is much more efficient than most people think. Unlike C++, Java managed to provide an environment where class libraries, toolkits and frameworks from different sources work well together. Why reinvent the wheel when someone has already done the work for you? You can focus on the application you want to create instead of getting lost in infrastructure. The price you pay is that the components you use are usually not tailored to your use case; they are built to serve a wider range of applications. More general libraries add overhead in performance and footprint, but they can drastically reduce the development effort.
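
To make the trade-off concrete, here is a minimal sketch (my own example, plain JDK, nothing Eclipse-specific): a general-purpose boxed collection and a tailored primitive array holding the same data.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the generality-vs-footprint trade-off. The exact byte
// counts depend on the JVM; the point is the ratio.
public class GeneralVsTailored {
    public static void main(String[] args) {
        int n = 1_000_000;

        // General: works with any API that accepts a List<Integer>, but each
        // element is a boxed Integer object (header + padding + the int).
        List<Integer> general = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            general.add(i);                // autoboxing allocates objects
        }

        // Tailored: 4 bytes per element and nothing else, but it composes
        // with almost nothing; every caller must know about this exact array.
        int[] tailored = new int[n];
        for (int i = 0; i < n; i++) {
            tailored[i] = i;
        }

        // Keep both reachable so the comparison stays honest.
        System.out.println(general.size() + " vs " + tailored.length);
    }
}
```

On a typical JVM the boxed list takes several times the memory of the int array, yet it is the one that composes with every API accepting a List. Multiply this pattern across hundreds of components and you get a footprint that no profiler can pin to one place.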

Could we rewrite Eclipse (or Workbench) with a better footprint? We probably could. In some cases we could use a better overall design, better algorithms or data structures. Preserving the functionality Eclipse has today, we could maybe gain an overall factor of 2 or 3 if we are lucky (or good). But then it would probably be an island solution: we would end up with more applications running in parallel, and their combined footprint would add up again.

What can you do to speed up your desktop applications today? In most cases adding memory is the solution. Don't let your computer touch the hard disk: just go to the nearest computer shop, buy 2 GB of memory, and plug it in. At least that is what helped me.

3 Comments

  1. Michael

    No offense… but I hope no one ever hires you to write a highly optimized graphics engine.
    Just kidding. Balancing complexity and features/performance is always one of the trickiest jobs in software engineering.
    But one really should not write off the performance of an IDE. As a developer I spend about 90% of my time staring at a debugger. If the IDE takes 15 seconds to load (like some Redmond-style IDEs), you can waste a considerable amount of time looking at a splash screen.
    P.S. You should do a better job of hiding the email addresses on this site.

  2. Michael Scharf

    Michael,
    indeed, performance of IDEs is important.
    I like the comparison to "highly optimized graphics engines". Here the defining feature is the optimization: there are high-quality rendering engines where the quality of the result matters more than speed. Maybe someday someone will come up with a "highly optimized IDE" and be very successful. I'm sure that optimization of IDEs would then suddenly get more focus.
    The funny thing is, 15 years ago I actually wrote a highly optimized graphics engine: I implemented a subset of OpenGL (only lines, dots and text) in C using the X Window System, for a 3D molecular graphics program called WHATIF (http://swift.cmbi.kun.nl/whatif/). My engine was very flexible (I did OO in K&R C) and faster than the hardware GL on an IRIS workstation of that time. The compromise was rendering quality: no anti-aliasing, and a simple algorithm for depth cueing (there were only 256 colors, so I put the objects into color bins and painted from dark to bright)…
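    A minimal sketch of that binning idea (in Java rather than the original C, with made-up names; the real engine drew via X calls, of course, not printf):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Depth cueing via color bins: with only 256 colors, sort each primitive
    // into one of a few depth bins and paint from the darkest (farthest) bin
    // to the brightest (nearest). The painting order alone gives the depth
    // impression; near primitives overwrite far ones.
    public class DepthBins {
        static final int BINS = 8;          // e.g. 8 shades of one base color

        record Line(double z, int x1, int y1, int x2, int y2) {}

        public static void main(String[] args) {
            List<Line> scene = List.of(
                    new Line(0.9, 0, 0, 10, 10),   // far
                    new Line(0.1, 5, 5, 20, 20));  // near

            // Bucket primitives by normalized depth (z in [0,1], 1 = farthest).
            List<List<Line>> bins = new ArrayList<>();
            for (int i = 0; i < BINS; i++) bins.add(new ArrayList<>());
            for (Line l : scene) {
                int bin = Math.min(BINS - 1, (int) (l.z() * BINS));
                bins.get(bin).add(l);
            }

            // Paint dark (far) to bright (near).
            for (int bin = BINS - 1; bin >= 0; bin--) {
                int brightness = 255 - bin * (255 / BINS);
                for (Line l : bins.get(bin)) {
                    System.out.printf("draw %s at brightness %d%n", l, brightness);
                }
            }
        }
    }
    ```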
    Michael

  3. Vetukuri Raju

    We have been using the Wind River Workbench for over a year now. We are facing lots of problems; we could solve some with the help of support, but we still see memory leaks when we open and close the application. Our server with 3 GB of RAM runs out of memory when we open and close the Workbench more than 6 or 7 times. The blog above rightly says that it is better to buy RAM, and then restart your server every other day.

Comments are closed.