Not only do oops happen, they seem to abound (WSJ, 2/22/08, "Technology 1, Common Sense 0"). Some are truly unexpected; many are related to risk.
Let's see, isn't it a basic tenet of capitalism (at least on some views, which are not, by the way, views to which we ought to have undying loyalty) that the spoils go to those who take risk?
Now, to see where such a comment might be going, what if that same attitude were applied to real-life events, such as flying? First, the manufacturer knows that risk elements are to be minimized, not just left to their own devices. Too, those who provide services related to flying know that they cannot push risks too far, for reasons of both safety and cost.
Yet, in the economy, an aura best described as more amenable to Las Vegas, or Reno, than to normal affairs has taken hold.
Of course, what risk is and how we ought to manage it are topics that require continual study and discussion. Whole disciplines deal with the matter.
Again, let's use an example from engineering. A recent IEEE Control Systems Society publication surveys the state, and continuing issues, of inertially stabilized platform (ISP) technology. By the way, how this relates to the topic is that a plane can be an example of an ISP application in action. One comment alluded to the decades of work that have gone into improving the state of affairs with ISPs.
Yet, there are plenty of problems to solve. Two key issues are measurement and control. That is, just as we ourselves (another example of an ISP, of sorts) have to sense and respond, so do things that move intelligently. Nowadays, there is no component used for the ISP task that is not computationally influenced, whether it is more a matter of embedded circuitry than of software.
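To make the sense-and-respond idea concrete, here is a minimal sketch of the kind of feedback loop that sits at the heart of stabilization. The gains, time step, and toy dynamics are all assumptions chosen for illustration; this is a generic proportional-derivative (PD) loop, not a model of any real ISP.

```python
# A minimal sketch of a sense-and-respond stabilization loop.
# The dynamics and gains are hypothetical illustrations of
# proportional-derivative (PD) feedback, not a real ISP model.

def stabilize(angle, rate, kp=1.0, kd=1.2, dt=0.1, steps=200):
    """Drive a platform's tilt angle toward zero with PD feedback."""
    for _ in range(steps):
        # Sense: measure the deviation from level.
        error = -angle
        # Respond: command a correction from the error and the measured rate.
        command = kp * error - kd * rate
        # Toy dynamics: the command accelerates the platform.
        rate += command * dt
        angle += rate * dt
    return angle

# Starting tilted by 1 radian, the loop damps the platform back to level.
print(abs(stabilize(angle=1.0, rate=0.0)) < 0.05)  # → True
```

The point of the sketch is the loop itself: measure, compute a correction, act, repeat. Everything in a modern ISP that performs those steps is computationally influenced, which is exactly where the trust questions arise.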
In regard to oops, we get to where we trust technology too much. The WSJ story (see above) talked about a truck going awry and getting into a messy situation because the driver followed GPS instructions (evidently not paying attention to warnings) rather than responding to visual cues that things were getting problematic.
Ah, truth engineering deals with this sort of thing, in part.
In a complex project, one can somewhat sympathize with managers who let things go awry through too much reliance on the computational, pointing fingers here at both PLM and CAE. Yet, one wonders how many engineers were thrown out of the office, or downright disciplined, under some guise of not being a team player? What is that old story about the team of lemmings running over the cliff?
So, loops lead to oops. They are inevitable. Yet, unreasonable risk taking (or even reasonable risk taking when the stakes are high) is not to be rewarded; actually, we need to take back some of the bonuses paid to financial folks these past few years.
How about putting a lag of longer duration, like years, on some financial rewards? What other dampers might we think of?
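The lag idea can be sketched in a few lines. The vesting schedule and clawback rule below are hypothetical assumptions for illustration, not any real compensation plan: a bonus is split into annual tranches, and tranches not yet paid out are forfeited if the risks come home.

```python
# Hypothetical sketch of the "lag" damper: defer a bonus over years,
# with unvested tranches forfeited if losses surface later.
# Schedule and clawback rule are illustrative assumptions only.

def deferred_payouts(bonus, years=4, clawback_year=None):
    """Split a bonus into equal annual tranches; zero out tranches
    from the clawback year onward."""
    tranche = bonus / years
    payouts = []
    for year in range(1, years + 1):
        if clawback_year is not None and year >= clawback_year:
            payouts.append(0.0)  # unvested money is forfeited
        else:
            payouts.append(tranche)
    return payouts

# A $1M bonus, clawed back from year 3 on when the risks come home:
print(deferred_payouts(1_000_000, years=4, clawback_year=3))
# → [250000.0, 250000.0, 0.0, 0.0]
```

The damping effect is plain: half the reward never pays out when the bet sours, which changes the incentive to take the bet in the first place.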
After all, most of us don't retire after childhood. When one looks at the situation with the boomers, who are significant due to their massive generational imprint, one sees problems arising as the boomer retirement period advances; studying that ought to help us learn a few lessons.
01/09/2009 -- The year end was very interesting; now, we need to show that oops help us to learn.
11/20/2008 -- Boon and bust, the way of fairy dust.