Saturday 6 April 2019

Should human pilots be able to override autopilot?

All over the news this week is the Boeing 737 MAX crash in Ethiopia, which killed all 157 people on board. The cause of the crash, it seems, is the same as the cause of the Boeing 737 MAX crash in Indonesia last year: a malfunction in an automated flight-control system which is supposed to stop the plane from stalling. If a plane's nose pitches up too steeply relative to the oncoming air (too high an 'angle of attack'), it is likely to stall, and so the AI system (known as MCAS) detects if this is about to happen, and automatically pushes the nose of the aeroplane down, thus preventing it from stalling. That's what is supposed to happen, anyway. Apparently, in both the Indonesia and Ethiopia crashes, the problem was that the MCAS system was fed a faulty angle-of-attack reading (making it think the nose was pitched up far more steeply than it actually was) and compensated by pushing the nose of the plane downwards. This caused the plane to nosedive and crash.
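To make that failure mode concrete, here is a toy sketch of the kind of logic being described. It is not Boeing's actual implementation; the threshold, names and numbers are all invented for illustration. The point is simply that a system which trusts a single angle-of-attack reading will happily trim the nose down on a perfectly healthy aeroplane if that reading is wrong.

```python
# A deliberately over-simplified, hypothetical sketch of MCAS-style logic.
# The threshold, names and numbers are invented for illustration; this is
# not how the real system is implemented.

STALL_AOA_THRESHOLD = 10.0  # degrees of angle of attack treated as "about to stall"

def mcas_step(sensed_angle_of_attack: float) -> float:
    """Return a nose-down trim command (degrees) based on one sensor reading."""
    if sensed_angle_of_attack > STALL_AOA_THRESHOLD:
        # Push the nose down in proportion to how far past the threshold we are.
        return -(sensed_angle_of_attack - STALL_AOA_THRESHOLD)
    return 0.0  # otherwise, leave the trim alone

# Normal case: the sensor is accurate and the nose really is too high.
print(mcas_step(sensed_angle_of_attack=14.0))  # -4.0: a sensible nose-down correction

# Failure case: the aircraft is flying normally, but a faulty sensor reports
# 20 degrees. The logic trusts the reading and trims nose-down anyway.
print(mcas_step(sensed_angle_of_attack=20.0))  # -10.0: an unwarranted dive
```

The real system is far more sophisticated than this, but the shape of the failure is the same: bad data in, dangerous command out.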

According to news reports, in both fatal crashes the human pilots fought against the AI system, trying to regain control of the aircraft, but failed to do so because the system was built to override human input. So we have two cases where the AI was in error and the humans were right, yet the AI system 'won', with fatal results. In those cases, if the humans had been allowed to override the AI, lives could almost certainly have been saved. But should humans always have the power to override AI?

Why do we create AI systems or machinery in the first place? For two main reasons, it seems:

  1. To free up human time
  2. To be better than humans / other methods

We invented spinning and weaving machines which could produce cloth more quickly, cheaply and accurately than humans could. We invented the motor car to go faster, further and for longer than a horse can pull a carriage. And presumably we invented the MCAS system for aeroplanes because it can judge the aircraft's angle of attack and compensate for it more consistently and safely than a human pilot can.

Sometimes people make mistakes, and sometimes those mistakes are fatal. Making mistakes is part of human nature, even when one is trained to an exceptionally high standard. Yet we seem to accept human error more readily than AI error. We seem to have a tendency to believe that AI should be flawless, and that anything less than 100% perfection is unacceptable. I don't know the stalling statistics for planes prior to the MCAS system; if human error on this front was higher than AI error, then shouldn't we just accept that, although some people have died as a result of the MCAS system, many more have been saved? If human error on take-off used to cause 300 deaths a year, and endanger many more, then an AI system which 'only' causes 160 deaths a year is an improvement, isn't it?

But we are loath to accept that, because we cling to a sentimental idea that AI should be flawless, and it's an unrealistic one. Sometimes people make mistakes, and sometimes machines malfunction or misjudge situations. What's particularly sad about the Boeing crashes is that the human pilots were right, but were locked out of overriding the AI system. So why not alter the MCAS system so that pilots can override it? The answer is that an override function reintroduces the possibility of human error. There could be a take-off where the human pilot wrongly believes the MCAS system is malfunctioning, overrides it and points the nose of the plane higher, and the plane stalls, crashes, and everyone on board dies. Then we would call for humans not to be able to override the AI. But we can't have it both ways: either we live with human error, or we live with machine malfunction. AI seems to make fewer mistakes, but we are less accepting of its flaws than we are of human error. Where we'll go with the human-versus-AI pilot question is anybody's guess. But no system is flawless, and whether we rely on humans or machines to fly our planes, there will sadly be some fatal crashes.
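That trade-off can be spelled out in the same toy terms as before. In this hypothetical sketch (again, invented names and logic, not anything Boeing actually does), adding a pilot override removes the failure where a faulty sensor is wrongly trusted, but reintroduces the failure where a mistaken pilot is wrongly trusted. Whichever branch wins, somebody who can be wrong is in charge.

```python
# Extending the toy sketch above with a pilot override. Hypothetical again:
# the point is only that the override swaps one failure mode for another.

STALL_AOA_THRESHOLD = 10.0  # invented illustrative threshold, in degrees

def trim_command(sensed_aoa: float, pilot_override: bool) -> float:
    """Nose trim in degrees: negative means nose-down."""
    if pilot_override:
        # The human wins. Good if the sensor is faulty; fatal if the
        # sensor is right and the pilot is the one who is mistaken.
        return 0.0
    if sensed_aoa > STALL_AOA_THRESHOLD:
        # The machine wins. Good if the sensor is right; fatal if a
        # faulty sensor is reporting a climb that is not happening.
        return -(sensed_aoa - STALL_AOA_THRESHOLD)
    return 0.0

# Faulty sensor, pilot correctly overrides: the unwarranted dive is avoided.
print(trim_command(sensed_aoa=20.0, pilot_override=True))   # 0.0

# Accurate sensor, pilot wrongly overrides: the stall protection is defeated.
print(trim_command(sensed_aoa=20.0, pilot_override=False))  # -10.0 (protection active)
print(trim_command(sensed_aoa=20.0, pilot_override=True))   # 0.0  (protection lost)
```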