
Can A.I. Be Taught to Explain Itself?

Source: The New York Times Magazine

Accountability at the algorithmic level is a fascinating topic, especially in a world where AI algorithms will build new algorithms without human intervention.

We human beings seem to be obsessed with black boxes: The highest compliment we give to technology is that it feels like magic. When the workings of a new technology are too obvious, too easy to explain, it can feel banal and uninteresting. But when I asked David Jensen — a professor at the University of Massachusetts at Amherst and one of the researchers being funded by Gunning — why X.A.I. had suddenly become a compelling topic for research, he sounded almost soulful: “We want people to make informed decisions about whether to trust autonomous systems,” he said. “If you don’t, you’re depriving people of the ability to be fully independent human beings.”

A clear example is Facebook’s feed and the way the company tweaks what a user will see. The same goes for Google’s featured “answer” to a user’s question. It seems to me a no-brainer that we need to hold companies accountable for their algorithmic misbehavior so we can fix the fundamental issues it creates, particularly on products that reach billions of people.

Three years ago, David Gunning, one of the most consequential people in the emerging discipline of X.A.I., attended a brainstorming session at a state university in North Carolina. The event had the title “Human-Centered Big Data,” and it was sponsored by a government-funded think tank called the Laboratory for Analytic Sciences. The idea was to connect leading A.I. researchers with experts in data visualization and human-computer interaction to see what new tools they might invent to find patterns in huge sets of data. There to judge the ideas, and act as hypothetical users, were analysts for the C.I.A., the N.S.A. and sundry other American intelligence agencies.

The researchers in Gunning’s group stepped confidently up to the white board, showing off new, more powerful ways to draw predictions from a machine and then visualize them. But the intelligence analyst evaluating their pitches, a woman who couldn’t tell anyone in the room what she did or what tools she was using, waved it all away. Gunning remembers her as plainly dressed, middle-aged, typical of the countless government agents he had known who toiled thanklessly in critical jobs. “None of this solves my problem,” she said. “I don’t need to be able to visualize another recommendation. If I’m going to sign off on a decision, I need to be able to justify it.” She was issuing what amounted to a broadside. It wasn’t just that a clever graph indicating the best choice wasn’t the same as explaining why that choice was correct. The analyst was pointing to a legal and ethical motivation for explainability: Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.

Via The New York Times Magazine.
