Sunday, July 5, 2009

Scale up versus extrapolation

Aviation Week (AW) covered the recent discovery of problems and reported an element of surprise.

No, folks, this was predictable from the beginning, as earlier entries in this blog showed (see #2, #4, and more); one main issue deals with quasi-empiricism.

First of all, in the AW article, an engineering professor is quoted as saying that Boeing did it backward. Usually, one starts small and then scales up (remember this important concept, as we'll cover it more thoroughly). That is, one enlarges from a small basis. You know, there is an adage: don't bite off more than you can chew.

The concept is applied a lot in modeling. In fact, software vendors worry constantly about scale-up, in many ways: logic, throughput, and more.
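
To make that concrete, here is a minimal sketch (generic Python, not any vendor's actual test suite) of the mundane software version of the worry: time a routine on a small input and watch how the cost grows as the input doubles, before trusting it at full scale.

```python
import time

def work(n):
    # Stand-in workload; quadratic on purpose (hypothetical example).
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

prev = None
for n in (200, 400, 800, 1600):
    t0 = time.perf_counter()
    work(n)
    elapsed = time.perf_counter() - t0
    growth = "" if prev is None else f" ({elapsed / prev:.1f}x the previous run)"
    print(f"n={n:5d}: {elapsed:.4f}s{growth}")
    prev = elapsed
```

A growth factor near 4x per doubling betrays quadratic cost; a routine that behaves nicely on small inputs can still be unusable at scale. That is why the vendors test by scaling up, not by extrapolating.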

The professor expressed some surprise that Boeing had extrapolated so much from a limited basis of knowledge. You see, not only were there relaxations along many axes in the product and process spaces, but the program also jumped to a maximum without prototyping. Why? Well, the computer was the magic key (computerism).
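
To see what extrapolating from a limited basis does, consider a toy sketch (invented numbers, no connection to any actual Boeing analysis): fit a polynomial to data sampled on a narrow interval, then evaluate it inside and far outside that interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    # The "real" behavior, unknown to the analyst.
    return np.sin(x)

# Limited basis of knowledge: noisy samples on [0, 1] only.
x_test = np.linspace(0.0, 1.0, 20)
y_test = truth(x_test) + rng.normal(0.0, 0.01, x_test.size)

# A cubic fits the narrow range beautifully.
model = np.poly1d(np.polyfit(x_test, y_test, deg=3))

for x, label in ((0.5, "inside the tested range"), (4.0, "far outside it")):
    err = abs(model(x) - truth(x))
    print(f"x = {x}: error {err:.4f} ({label})")
```

Inside the sampled range, the fit is nearly exact; a few units outside, the error is larger than the quantity itself. Extrapolation amplifies whatever the limited basis did not capture.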

Now, the article says that some claim that the software involved (meaning CAE) is not suspect. No, the article says, it was the data and the model that were, according to those who know, including a quoted expert.

Well, the blogger has no inside information on the particular project but speaks from experience related to this type of modeling. The software, and its underlying philosophical framework, are always to be suspected. Why? There is no grand theory supporting these things to the level that we have been led to believe.

Ah, folks, it is a very un-insightful position for an engineering company not to know the limits of mathematics and software. But it is probably the view to be expected of a gaming generation, that is, until they bump up against the world.

We'll go into this further, as the subject is not amenable to easy resolution. I hope that the FAA is aware of issues like these.

One big issue? Allowing output from one program to feed into another program as if it were equivalent to naturally obtained data. Say what? Well, why do you think they say the software is okay but not the data/model? Apparently, the belief in the software is much stronger than one would expect. Gosh, if this is so, it needs to be deconstructed; perhaps recent events provide the perfect opportunity for engineering to learn.
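
A toy illustration of the hazard (all numbers invented): an upstream program's estimate carries model error, and a downstream program that treats that estimate as measured data never sees it.

```python
import random

random.seed(1)

def program_a():
    # Upstream CAE-style estimate of a load, with ~5% model scatter
    # (hypothetical figure).
    return 100.0 * (1.0 + random.gauss(0.0, 0.05))

def program_b(load, capacity=112.0):
    # Downstream margin check that treats its input as exact.
    return capacity - load

# Treating one run of A as "data": a single number, no error bar.
print("single chained run, margin:", round(program_b(program_a()), 2))

# Honoring the uncertainty: rerun the chain and watch the spread.
margins = [program_b(program_a()) for _ in range(10_000)]
bad = sum(m < 0 for m in margins) / len(margins)
print("fraction of runs with negative margin:", bad)
```

One chained run yields a single comfortable margin; honoring the upstream uncertainty shows a small but real fraction of cases going negative. The downstream program cannot know that unless someone tells it.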

Formerly, there was a hard rule: never think that a program's output stands for nature. This is especially so when dealing with the results of applying differential equations (and the like); no matter how brilliantly close these results may appear when viewed with the newer mechanisms for visualization (yes, we'll go into that more, too), they are approximations of the real.
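
This is standard numerical-methods fare, but worth showing: forward Euler on dy/dt = y, whose exact solution at t = 1 is e, is only ever an approximation, however fine the step.

```python
import math

def euler(f, y0, t_end, steps):
    # Forward Euler: the crudest marching scheme.
    h = t_end / steps
    y = y0
    for _ in range(steps):
        y += h * f(y)
    return y

exact = math.e  # true y(1) for dy/dt = y, y(0) = 1
for steps in (10, 100, 1000):
    approx = euler(lambda y: y, 1.0, 1.0, steps)
    print(f"{steps:5d} steps: y(1) = {approx:.6f}, error {abs(approx - exact):.6f}")
```

The error shrinks with the step size but never vanishes; and for this equation we happen to know the truth to compare against, a luxury real engineering problems rarely grant.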

Too, these techniques require fiddling (think of oodles of knobs) to control the evolution of a solution (in this case, is that data or model?).
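
Here is one such knob, in a generic setting (a textbook fixed-point iteration, not any particular CAE package): a relaxation factor in an iterative solve of x = cos(x). Turn it carelessly and the iteration never settles at all.

```python
import math

def relaxed_solve(omega, x0=0.0, iters=50):
    # Damped fixed-point iteration for x = cos(x); omega is the knob.
    x = x0
    for _ in range(iters):
        x = (1.0 - omega) * x + omega * math.cos(x)
    return x

for omega in (0.5, 1.0, 1.9):
    x = relaxed_solve(omega)
    print(f"omega={omega}: x = {x: .6f}, residual = {abs(x - math.cos(x)):.2e}")
```

The residual tells the story: two knob settings converge to the same answer, while the third wanders forever without meaning anything. The analyst's hand on the knob is part of the "solution."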

Gosh, have we begun to believe that our little virtual selves (and their artifacts) on the tubes (in cyberspace or the cloud) are more real than we (and our artifacts) are in the world in which we live? This will be very important to discuss.

The management view does not grasp the subtleties of these problems. Technical fellows grasp them a little more, but they too can go astray.

There is (can be) no hubris in engineering!!!

Remarks:

01/19/2011 -- Update1 and Update2. The focus now will be mostly on the idiots of economics/finance.

09/02/2009 -- Let's face it, folks, undecidability needs to be discussed and adopted in any complex situational setting, especially if computers are involved. Only hubris pushes us to make loud exclamations about what we're going to do in the future.

08/20/2009 -- Nor in financial mathematics, either.

Modified: 08/24/2011
