Much has been made of what happens when machine learning and AI go wrong. However, have you ever wondered about the dangers of what happens when machine learning works? Jonathan Zittrain examines the dangers of allowing machine learning systems to make predictions whose rationale we don't understand.
Zittrain identifies three main issues in relying on AI and machine learning:
- Misfires. Machine learning works by identifying patterns in data. It can provide an answer based on the data, but it cannot identify the why or the cause. As judgment (human or otherwise) is removed, there is no easy way to predict how machine learning might fail when presented with specially crafted or corrupted data.
- Interaction of Results. As the use of AI and machine learning becomes more pervasive, they will generate more data of their own (which will subsequently be used by other machine learning and AI systems). The interaction of thousands of systems could produce irrational results.
- Theory-Free Learning. This is probably the most interesting issue. If machine learning can provide the result without the theory, does this accelerate a drive towards a "results" research agenda which focuses on short-term applications, as opposed to the science and knowledge underpinning the result?
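The "misfires" point can be made concrete with a toy sketch. The hypothetical filter below (not from Zittrain's essay) classifies spam purely by counting words that correlate with spam or legitimate mail. It has no notion of *why* a message is spam, so an attacker who knows the pattern can defeat it with a specially crafted input:

```python
# A toy keyword-count spam filter: it learns surface correlations
# (certain words appear in spam), not causes. All word lists and
# messages here are illustrative assumptions, not a real system.

SPAM_WORDS = {"winner", "prize", "free"}
HAM_WORDS = {"meeting", "report", "thanks"}

def spam_score(message: str) -> int:
    # Score = spam-word occurrences minus ham-word occurrences.
    words = message.lower().split()
    return sum(w in SPAM_WORDS for w in words) - sum(w in HAM_WORDS for w in words)

def is_spam(message: str) -> bool:
    return spam_score(message) > 0

# The pattern works on a typical message:
print(is_spam("You are a winner claim your free prize"))   # True

# ...but padding the same message with "ham" words flips the verdict,
# because the model matches patterns rather than meaning:
crafted = "You are a winner claim your free prize thanks meeting report thanks"
print(is_spam(crafted))                                    # False
```

The failure mode is exactly the one described above: the system's answer is right for reasons no one has articulated, so no one can say in advance which inputs will break it.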
As Zittrain writes: "Much of the timely criticism of artificial intelligence has rightly focussed on the ways in which it can go wrong: it can create or replicate bias; it can make mistakes; it can be put to evil ends. We should also worry, though, about what will happen when A.I. gets it right."