Many people are talking about the Herb Sutter article “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software”. I have to confess I found it rather simplistic, mostly stating the obvious… but still a nice read.
January 19, 2005
the stuff
the stuff about moore’s law not holding true forever was pretty predictable. but the idea that programmers have been relying on it to fix their perf problems was surprising to me, probably because in open source the standards for coding are a bit higher, from the sheer visibility of the code. (not saying anything about my employer in particular, but i’ll refer you to the daily wtf)
it irks me when people don’t take pride in their code, to tell you the truth. i don’t think there was one big project where i didn’t rewrite major portions for architectural reasons… it just makes life so much easier down the road, and you learn so much about the project itself from coding it that you didn’t know when you started out.
Comment by nealsid — January 19, 2005 @ 5:53 PM
Re: the stuff
Pride means different things to different people. Many programmers choose to optimize their programs for readability and maintainability over speed. I believe it is still a viable choice — the options for speeding up single-threaded programs are still significant (from increasing L2 cache, through moving OS operations to a different CPU, all the way to running a JIT which will optimize the program at run time). None of this is science fiction, and this looks like it could give Moore’s-law-like speedups for ten more years.
Comment by moshez — January 20, 2005 @ 8:10 AM
Re: the stuff
You can write code that is readable, maintainable *and* fast. If you ask me which is more important, I’ll have to evade the question and say that it depends on what the code is supposed to do :)
Comment by mulix — January 20, 2005 @ 7:56 PM
Re: the stuff
I often find myself sacrificing speed for the other two… and I have yet to see code which is all three. I’ve seen (and written) code which was very fast… but it was neither readable nor maintainable. The thing is, bug fixes and changes touch your code at random, while time is mostly lost at a few bottlenecks. Hence, it’s better to optimize first for maintainability, and optimize for speed only when a performance problem is demonstrated, including having enough profiling data.
Comment by moshez — January 20, 2005 @ 9:38 PM
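The profile-first workflow moshez describes above can be sketched in a few lines with Python’s standard `cProfile` module. This is only an illustration — the function `slow_sum` and the numbers are made up for the example:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive: converts each number through a string,
    # a stand-in for an undiagnosed bottleneck.
    total = 0
    for i in range(n):
        total += int(str(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Print the five entries with the highest cumulative time,
# so optimization effort goes where the data says it should.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The point is the order of operations: write the maintainable version first, then let the profiler output, not intuition, pick which function to make fast.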
Interesting premise (clock speeds have stopped growing), but I’m not sure I agree with most of the conclusions.
First of all, what are “applications”? For instance, there’s office stuff, there are games, there are databases and OSs. The last two have been trying to get the most out of parallelism for almost forever. Games get specialized hardware. And office stuff? I don’t really buy the assumption that it needs ever more powerful CPUs to start with. Is Word 2003 really doing things that, say, Word 2.0 could not have done in its time and with its resources? It’s just bloated. It’s bloated because it can afford to be. And it can still afford to be bloated even if CPU cycles do not increase by a single Hertz in the coming 10 years, because it’s functioning well _now_. New features, if needed, can be added with more careful regard to performance, but that’s about it.
Secondly, I see no immediate need to turn to parallelism even in response to stagnating CPU performance _and_ growing application processing needs. Like I said, Word 2003 is doing more or less what Word 2.0 could have done with resources an order of magnitude weaker. So if you need Word 2003 to be faster now, just optimize it. That should be easier, safer, more plannable, and more easily divisible among developers than parallelization.
And lastly, the much-feared multithreading is not always necessary to achieve parallelism, because we also have multiprocessing, which is working well now. Look at how graphical functions are exported to OS daemons, or how database functions are exported to database servers. We might just see more and more such functions exported to specialized programs that run in their own processes and that might even have their own hardware support. In fact, pressure in the industry to produce faster code might push companies to export generic, yet performance-sensitive parts of their applications to other companies that specialize in writing such code.
Comment by adistav — January 20, 2005 @ 8:35 AM
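The multiprocessing alternative adistav mentions — separate processes rather than shared-memory threads — can be sketched with Python’s standard `multiprocessing.Pool`. The `checksum` function here is a made-up stand-in for any performance-sensitive routine you might hand off to another process:

```python
from multiprocessing import Pool

def checksum(chunk):
    # Stand-in for a performance-sensitive function that, as the
    # comment suggests, could live outside the main process —
    # here a trivial Adler-style modular sum.
    return sum(chunk) % 65521

if __name__ == "__main__":
    # Split the work into chunks; each chunk is processed in a
    # separate worker process with no shared mutable state.
    data = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
    with Pool(processes=4) as pool:
        results = pool.map(checksum, data)
    print(results)
```

Because the workers share nothing, there are no locks to reason about — which is exactly the appeal of the multiprocess route over multithreading.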