Saturday, February 9, 2013
I've decided that whenever possible I will no longer use words that end in -ize. Many homely words have come into popular use which were born as adjectives or nouns but developed aspirations to become something more, in this case, verbs. They employed the pedestrian approach of tacking -ize onto their backside - initialize, linearize, functionalize, rhapsodize. Nasty, nasty, nasty.
When I can, I will replace such neoplasms with less ungracious variants, like initiate. I much prefer the ring of 'initiating a variable' or 'initiating my computer'. Of course in doing this some eggs may be broken. Some may mistakenly infer that I am ushering my computing device into a clandestine social order rather than simply flipping the off switch to on.
But that's a risk I'm willing to take. The road to righteous beauty seldom runs downhill.
Tuesday, February 8, 2011
Politics are Just So Last Year
Lately I've been thinking a lot about the practical value of political parties, platforms, and philosophies. I've concluded that it's all rubbish. Basically everyone wants the same outcomes. We differ in our beliefs about how to get there and, to a lesser extent, about whether a given goal is practical or economical. But we agree on probably 90% of the goals (to wit: that government initiatives and public institutions should be effective and efficient).
The trouble with governing top-down by political philosophy rather than bottom-up by actual results is that we don't seem to learn from our mistakes. It seems as if we all live by the credo, "It's the thought that counts". You would think, in this scientific age, we'd be more inclined to say, "Gosh, I don't know how best to reach this goal. Maybe we should try an experiment. Let's bring together several competing groups, have each tackle this problem in a different way for a while, and then sit down and evaluate the results to see which was best, and learn what worked under what circumstances." Hmm. Wouldn't that be nice.
Outside of politics, a great many enterprises have had great success for many years making decisions using little more than the simple strategy of hypothesize-and-test. Many have used (machine) learning techniques, and variations on that theme, to evaluate candidate schemes for advertising, marketing, product pricing, contract negotiation, medical treatment, and myriad other settings where function matters more than form. Why have they succeeded where government has largely failed? Probably because they cared most about results, not about traditions or appearances or preserving the status quo.
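As a concrete sketch of that hypothesize-and-test loop, here is a minimal epsilon-greedy bandit in Python. It's only an illustration: the three competing "schemes" and their success rates are invented, and a real policy trial would obviously need far more careful design.

```python
import random

# Hypothetical success rates for three competing schemes; in a real
# experiment these would be unknown quantities we're trying to discover.
TRUE_RATES = [0.52, 0.61, 0.58]
EPSILON = 0.1  # fraction of trials spent exploring at random

counts = [0] * len(TRUE_RATES)     # trials allotted to each scheme
successes = [0] * len(TRUE_RATES)  # observed wins for each scheme

def observed_rate(i):
    return successes[i] / counts[i] if counts[i] else 0.0

for _ in range(10_000):
    if random.random() < EPSILON:
        pick = random.randrange(len(TRUE_RATES))               # explore
    else:
        pick = max(range(len(TRUE_RATES)), key=observed_rate)  # exploit
    counts[pick] += 1
    if random.random() < TRUE_RATES[pick]:  # run one trial, record outcome
        successes[pick] += 1

print("trials per scheme:", counts)
print("best scheme by observed results:",
      max(range(len(TRUE_RATES)), key=observed_rate))
```

Most of the trials end up going to whichever scheme actually performs best, which is exactly the "evaluate the results and learn what worked" step I wish government would adopt.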
So why is politics immune to the scientific method? Why isn't the bottom line the bottom line? I really don't know. Maybe in a democracy, results matter only if The People say they do. But I do think America could learn a lot from Google's style of management. You want results? Hire an engineer or scientist; ask them how they'd quantify a system and tune it up. I'll take their designs over a lawyer's any day.
Tuesday, September 8, 2009
Moore's Law is Dead, and I'm not feeling so good myself
Despite continued insistence to the contrary, the law attributed to Gordon Moore, popularly read as the doubling of computer chip speeds every 18 months, has in fact reached the end of its rope. CPUs are no faster today than they were four years ago. Yes, dual- and even quad-core CPUs have arisen to claim the title of state-of-the-art on the laptop/desktop. But for almost all typical applications and workloads, multicore machines in 2009 actually run slightly slower than a single-core 3.2 GHz Pentium 4 did in 2005.
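A quick back-of-the-envelope calculation shows how far reality has fallen behind the curve. Taking the 3.2 GHz Pentium 4 of 2005 as the baseline and the popular 18-month doubling period at face value:

```python
# Project where clock speeds "should" be had the doubling continued.
base_ghz = 3.2         # single-core Pentium 4, 2005
years_elapsed = 4      # 2005 -> 2009
doubling_period = 1.5  # years, the popular reading of Moore's Law

projected = base_ghz * 2 ** (years_elapsed / doubling_period)
print(f"projected 2009 clock speed: {projected:.1f} GHz")  # about 20 GHz
# Actual 2009 CPUs still top out around 3 GHz per core.
```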
Strangely, for a phenomenon that has been so central to the advancement and pervasion (invasion?) of computer-based tech in every part of our lives, the law's demise has received remarkably little recognition, and the prospect that computers might not get any faster has been met with general disinterest. I think that small fact is actually a Very Big Deal. It has the potential to stop computer-driven tech advancements dead in their tracks. I think it's entirely possible that those forms of science and technology that depend on ever faster computers may not develop much further until a new revolution in processor design arrives (e.g. optical or quantum computers). Until we can break the CMOS clock roadblock, and the exponential rise in power consumption that accompanies it, we ain't going nowhere.
In fact, I think the problem is worse than our just standing still. There is a long history of code bloat and slowdown accompanying every advancement in software. New apps, new operating systems, new development tools: all of these are invariably slower than their predecessors. It's been so long since CPUs were slow that an entire generation of programmers has never had to write fast or efficient code. The skill has been lost.
Now it's also true that skill in writing and optimizing code is no substitute for the exponential growth rate of Moore's Law. Writing fast code can achieve no more than a linear speedup (e.g. through efficient cache management) in most cases. The problem is that the tools (languages, libraries, OSes) continue to slow down, and there's no chance that this trend will change for some time to come. Until then, given the end of Moore's Law, we'd better get used to computers that are getting slower, not faster.
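To make "efficient cache management" concrete, here is a small Python/NumPy illustration with made-up sizes: the same sum computed along the matrix's memory layout and against it. The cache-friendly version is typically several times faster, a worthwhile one-time win but nothing like the exponential gains Moore's Law used to deliver for free.

```python
import time
import numpy as np

a = np.random.rand(5000, 5000)  # stored in C order: each row is contiguous

t0 = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(a.shape[0]))  # sequential reads
t1 = time.perf_counter()

col_total = sum(a[:, j].sum() for j in range(a.shape[1]))  # strided reads
t2 = time.perf_counter()

print(f"row-wise: {t1 - t0:.3f}s  column-wise: {t2 - t1:.3f}s")
```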
I'm not sure what impact this imminent slowdown will have on the warp and woof of computing, especially on mainstays like laptops and cell phones. Probably it means that phones and computers will require greater engineering effort than usual. Designers can't just assume that the CPU will continue to carry the full workload. More custom secondary chipsets will be needed to keep up with specialized tasks (voice processing, speech recognition, media streaming, etc.). The rise of the GPU (graphics processing unit) is one example of the inability of CPUs to keep on doing all the heavy lifting. I think new standards in audio processing chipsets may be next (for both speech recognition and generation).
However, I'm more concerned with the impact of Moore's ex-law on two tech areas: telecommunications and information-exploitive computing. If CPUs can't speed up then networks can't speed up. If networks remain static, then the cost of distributing broadband is not going to continue to drop. ISPs and telecommunications pipelines may 'fail to thrive' and stabilize well below the level of performance needed to realize the full potential of internet-based and/or time-shifted TV broadcast, or universally net-distributed education (e.g. on-demand education), or ubiquitous teleconferencing (badly needed to more efficiently deliver business collaboration that is independent of geography or time zone). Without big advances in telecom, the US is going to have a VERY hard time competing with low-wage nations that can charge 1/3 to 1/8 our labor rate.
My other area of concern, information-exploitation, covers techniques that maximize the value of data and information, like data mining, artificial intelligence, and robotics. Further advancements in these technologies will expand the limits of what computers can do. Without computer speedup, progress in autonomous robots and telerobots, software bots, intelligent assistants, and even the indexing and organization of data into information into value is going to slow substantially. And that has the potential to be surprisingly bad for mature economies like the US.
For more than a century, America's fortunes have depended if not on outright technological leadership, then at least on our ability to stay ahead of the competition by being first to deliver products that embody the state of the art, something that has not been possible in parts of the world whose labor costs otherwise handily out-compete us. But if advancement of the state of the art slows with the end of Moore's Law, the rising tides in the BRIC countries (Brazil, Russia, India, China) threaten to overtake not only our already-lost heavy industries, but also those tech-driven markets in which we currently enjoy an advantage, like minimal time-to-market for novel uses of IT. Corporations like Intel and Apple that deliver 'hot' or 'cool' goods and services first in the US, and enable US business to trailblaze and invent new markets, likely will not continue to deliver their new toys here first if advancement in info tech slows and China or India gain a chance to invent those novel markets first. (See Clayton Christensen's 'The Innovator's Dilemma' and 'The Innovator's Solution' for more on this.)
I think Intel's Andy Grove was right in saying 'Only the paranoid survive.' If the end of Moore's Law means what I think it might, then it's high time for the law's inventors to get as paranoid as possible.