Olia Lialina on Thu, 14 Mar 2019 20:39:51 +0100 (CET)



Re: <nettime> rage against the machine


I was rereading today this five-year-old article about a decade-old accident:

https://www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash/amp

Following are parts of "IV. Flying Robots" and the article's final statement:

It takes an airplane to bring out the worst in a pilot.
[...]
Wiener pointed out that the effect of automation is to reduce the cockpit workload when the workload is low and to increase it when the workload is high. Nadine Sarter, an industrial engineer at the University of Michigan, and one of the pre-eminent researchers in the field, made the same point to me in a different way: “Look, as automation level goes up, the help provided goes up, workload is lowered, and all the expected benefits are achieved. But then if the automation in some way fails, there is a significant price to pay. We need to think about whether there is a level where you get considerable benefits from the automation but if something goes wrong the pilot can still handle it.”

Sarter has been questioning this for years and recently participated in a major F.A.A. study of automation usage, released in the fall of 2013, that came to similar conclusions. The problem is that beneath the surface simplicity of glass cockpits, and the ease of fly-by-wire control, the designs are in fact bewilderingly baroque—all the more so because most functions lie beyond view. Pilots can get confused to an extent they never would have in more basic airplanes. When I mentioned the inherent complexity to Delmar Fadden, a former chief of cockpit technology at Boeing, he emphatically denied that it posed a problem, as did the engineers I spoke to at Airbus. Airplane manufacturers cannot admit to serious issues with their machines, because of the liability involved, but I did not doubt their sincerity. Fadden did say that once capabilities are added to an aircraft system, particularly to the flight-management computer, because of certification requirements they become impossibly expensive to remove. And yes, if neither removed nor used, they lurk in the depths unseen. But that was as far as he would go.

Sarter has written extensively about “automation surprises,” often related to control modes that the pilot does not fully understand or that the airplane may have switched into autonomously, perhaps with an annunciation but without the pilot’s awareness. Such surprises certainly added to the confusion aboard Air France 447. One of the more common questions asked in cockpits today is “What’s it doing now?” Robert’s “We don’t understand anything!” was an extreme version of the same. Sarter said, “We now have this systemic problem with complexity, and it does not involve just one manufacturer. I could easily list 10 or more incidents from either manufacturer where the problem was related to automation and confusion. Complexity means you have a large number of subcomponents and they interact in sometimes unexpected ways. Pilots don’t know, because they haven’t experienced the fringe conditions that are built into the system.”

[...]
 At a time when accidents are extremely rare, each one becomes a one-off event, unlikely to be repeated in detail. Next time it will be some other airline, some other culture, and some other failure—but it will almost certainly involve automation and will perplex us when it occurs. Over time the automation will expand to handle in-flight failures and emergencies, and as the safety record improves, pilots will gradually be squeezed from the cockpit altogether. The dynamic has become inevitable. There will still be accidents, but at some point we will have only the machines to blame.



---- Morlock Elloi wrote ----

The handling of the recent B737 MAX 8 disaster is somewhat revealing.

What seems to have happened (for the second time) is that the computing
machine fought the pilots, and the machine won.

It looks like some cretin at Boeing who drank too much of the AI Kool-Aid
(probably a middle manager) decided to install a trained logic circuit
that was supposed to make the new aircraft behave (to pilots) like the
older one. As its operation was far too complicated (i.e., even Boeing
didn't quite understand it), they decided not to inform pilots about it,
as it could disturb the poor things with too much information.

One part of this undisclosed operation appears to be the insistence of
the ML black box on crashing the airplane during ascent. As it had full
control of the trim surfaces, there was nothing the pilots could do (I
guess using a fire axe to kill the circuit would work, if the pilots
knew where the damn thing was).
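
To make the described failure mode concrete, here is a minimal,
hypothetical sketch in Python (the sensor value, thresholds, and
single-sensor design are invented for illustration; this is not
Boeing's actual control law) of an automated trim loop that trusts one
stuck angle-of-attack sensor and re-activates every cycle, out-trimming
the pilot's nose-up input:

AOA_LIMIT = 15.0   # degrees; invented threshold
TRIM_STEP = 2.5    # nose-down trim units per activation; invented

def read_aoa_sensor():
    """Stand-in for a single, possibly faulty, angle-of-attack vane."""
    return 25.0    # stuck sensor reporting an impossibly high AoA

def automated_trim_step(trim, pilot_nose_up):
    """One cycle of the hypothetical control law: no cross-check
    against a second sensor, no cutout when the pilot trims against it."""
    if read_aoa_sensor() > AOA_LIMIT:
        trim -= TRIM_STEP          # command nose-down trim...
    return trim + pilot_nose_up    # ...which outweighs the pilot's input

trim = 0.0
for cycle in range(5):
    trim = automated_trim_step(trim, pilot_nose_up=1.0)
    print(f"cycle {cycle}: trim = {trim:+.1f}")

Each cycle the net trim moves 1.5 units further nose-down no matter how
hard the pilot counters: the "machine fought the pilot, and the machine
won" dynamic in about twenty lines.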

That's the best available information right now on the cause.

What is interesting is how this was handled, particularly in the US:

- There were documented complaints about this circuit for a long time;
- The FAA ignored them;
- After the second disaster, most of the world grounded this type of
aircraft;
- The FAA said there was nothing wrong with it;
- It seems that intervention from the White House made the FAA see the
light and ground the planes.

Why? What was so special about this bug? The FAA previously had no
problem grounding planes on less evidence and fewer complaints.

It may have to do with this being the first critical application of the
new deity in commercial jets. The deity is called "AI", and its main
function is to deflect the rage against rulers towards machines (it's
the second generation of the concept; the first one was simply
"computer says ...").

The FAA's hesitation may make sense. After several hundred people have
been killed, someone will dig into the deity, and eventually the idiot
manager and their minions will be declared (not publicly, of course) the
guilty party. This could be a fatal blow to the deity's main purpose.


(BTW, 'rage' is also a verb)




#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: