The Black Box
We are now in a position where we must trust the oracle but are forbidden from looking behind the curtain. Historically, Enlightenment thinkers believed that explanation preceded trust. We believed the sun would rise because Newton explained gravity. We believed a surgeon was competent because we saw their diploma.

Consider the case of a hospital in Tennessee. Doctors deployed a cutting-edge black-box AI to predict which pneumonia patients were at high risk. The AI was remarkably accurate, except for one glitch: it consistently sent asthmatics home, labeling them "low risk."

Doctors were baffled. Asthma is a major risk factor for pneumonia complications. Why would the AI do this?

Because the engineers couldn't ask the AI directly, they had to reverse-engineer the data. They discovered that the hospital had a protocol: all asthmatics with pneumonia are immediately sent to the ICU, so these patients received aggressive, life-saving care from the moment they arrived. The AI, seeing only the outcome (asthmatics rarely died), concluded that asthma was protective.
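To make the mechanism concrete, here is a minimal simulation of that confound. Every number in it is invented for illustration (the rates, the protocol, the choice of model); it shows only that a learner trained on outcomes alone will assign asthma a negative, "protective" weight.

```python
# Toy simulation of the confound described above. All numbers are
# invented for illustration; none come from the actual hospital data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

asthma = rng.random(n) < 0.15           # 15% of patients are asthmatic
icu = asthma                            # protocol: every asthmatic goes straight to the ICU
risk = np.where(asthma, 0.30, 0.10)     # asthma truly *raises* the risk of death
risk = np.where(icu, risk * 0.2, risk)  # aggressive ICU care slashes that risk
died = rng.random(n) < risk

# A model that sees only (asthma, outcome) learns the confounded association.
model = LogisticRegression().fit(asthma.reshape(-1, 1).astype(float), died)
print(f"learned asthma coefficient: {model.coef_[0, 0]:+.2f}")  # negative: "protective"
```

The model is not wrong about the data; it is wrong about the world, because the protocol that saved those patients is invisible to it.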
To survive this, we need a new discipline: Explainable AI (XAI). Instead of opening the black box (which is effectively impossible for deep networks), we build second models that act as interpreters. We ask the black box to highlight the pixels it was looking at. We force it to provide a "reason" after the fact, even if that reason is just a simulation.
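As a concrete sketch of that interpreter idea, the snippet below fits a small, readable surrogate model to mimic an opaque one, a common post-hoc technique sometimes called a global surrogate. The models and data are illustrative scikit-learn stand-ins, not anything from the case above.

```python
# A minimal sketch of a post-hoc "interpreter": train a small, readable
# surrogate to mimic an opaque model. Data, models, and names are all
# illustrative stand-ins, not the systems discussed in the text.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2_000, n_features=8, random_state=0)

# Stand-in for the black box we are forbidden from opening.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate never sees the true labels, only what the black box *says*.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the "reason" agrees with the oracle it explains.
print(f"surrogate fidelity: {surrogate.score(X, black_box.predict(X)):.1%}")
print(export_text(surrogate))  # the after-the-fact "reason", as plain rules
```

Note that fidelity measures agreement with the black box, not accuracy against reality: a faithful surrogate will happily rationalize a wrong model, which is exactly the "simulation" caveat above.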