One of my first software engineering projects after earning my degree in Computer Science in 1983 was working on an interpreter for Wang BASIC-2. A company here in Seattle was heavily invested in computer systems manufactured by Wang Laboratories and needed to port their application software to other mini and microcomputer systems. It was a heady time.
I had studied all the usual topics related to language systems in college and really enjoyed them, so it was a great job to get. I also enjoyed the people I worked with, and flying to Boston from time to time was fun too.
I also enjoyed programming in LISP. One of the maxims thrown around from time to time, whenever someone complained about some construct being slow, was “Given a sufficiently smart compiler, it wouldn’t be so slow.”
Enter optimizing compilers. It’s a fun set of puzzles to work on. Given some intermediate representation (IR) of a program, how can you automatically generate a faster version without changing its meaning (its semantics)?
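To make that concrete, here’s a toy before-and-after in C (my own illustration, shown at the source level as a stand-in for what really happens on the IR): constant folding and strength reduction, two classic rewrites that change the cost but not the observable behavior.

```c
/* Before: as the programmer wrote it. */
unsigned scale(unsigned x) {
    unsigned eight = 4 + 4;   /* constant folding candidate */
    return x * eight;         /* strength reduction candidate */
}

/* After: what an optimizer might emit. The 4 + 4 folds to 8 at compile
   time, and the multiply by a power of two becomes a shift. For
   unsigned arithmetic the two functions agree on every input. */
unsigned scale_optimized(unsigned x) {
    return x << 3;
}
```

The optimizer’s only obligation is that the rewritten code agrees with the original on every input; how it gets there is its own business.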
Well, here’s an article that takes a linear-time O(n) algorithm and derives a constant-time O(1) version of it. Quite simply, this is amazing.
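I won’t spoil the article, but compilers like clang can already pull off this flavor of rewrite on simple cases: induction-variable analysis turns a summation loop into Gauss’s closed form. Here’s a minimal C sketch of the before and after (the function names are mine):

```c
#include <stdio.h>

/* O(n): sum the integers 0..n-1 the obvious way. */
unsigned sum_below(unsigned n) {
    unsigned total = 0;
    for (unsigned i = 0; i < n; i++) {
        total += i;
    }
    return total;
}

/* O(1): the closed form n*(n-1)/2. The product is widened to 64 bits so
   it is exact; the final cast truncates mod 2^32, matching the loop's
   unsigned wraparound on every input, including n == 0. */
unsigned sum_below_closed_form(unsigned n) {
    return (unsigned)((unsigned long long)n * (n - 1) / 2);
}

int main(void) {
    for (unsigned n = 0; n < 8; n++) {
        printf("n=%u  loop=%u  closed=%u\n",
               n, sum_below(n), sum_below_closed_form(n));
    }
    return 0;
}
```

Last I checked, clang at -O2 compiles the loop version into essentially the closed form: no loop at all.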
I’m left wondering, though, what’s going to happen in code reviews now? When someone reviews a function and suggests that it looks inefficient, how are they going to know that the compiler isn’t going to rewrite it in a non-obvious way that is highly efficient?
In some sense, we already have this situation. Optimizing compilers today are amazing, and arguing about the efficiency of code in source form can be a fool’s errand. It’s better to consider the overall algorithm in use and look for a better one once efficiency is known to matter. We know from the days of Extreme Programming that YAGNI (“You Aren’t Gonna Need It”) is usually a good principle to follow.
At any rate, I hope you enjoy the article. The commentary on Hacker News is great too.