How a Machine Learns and Fails: A Grammar of Error for Artificial Intelligence

November 22nd, 2019

Matteo Pasquinelli, “How a Machine Learns and Fails: A Grammar of Error for Artificial Intelligence”, Spheres, no. 5, 2019.


“Once the characteristic numbers are established for most concepts, mankind will then possess a new instrument which will enhance the capabilities of the mind to a far greater extent than optical instruments strengthen the eyes, and will supersede the microscope and telescope to the same extent that reason is superior to eyesight.”
— Gottfried Wilhelm Leibniz.

“The Enlightenment was […] not about consensus, it was not about systematic unity, and it was not about the deployment of instrumental reason: what was developed in the Enlightenment was a modern idea of truth defined by error, a modern idea of knowledge defined by failure, conflict, and risk, but also hope.”
— David Bates.

“There is no intelligence in Artificial Intelligence, nor does it really learn, even though its technical name is machine learning, it is simply mathematical minimisation.”
— Dan McQuillan.

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”
— Joe Davidson.

What does it mean for intelligence and, in particular, for Artificial Intelligence to fail, to make a mistake, to break a rule? Reflecting upon a previous epoch of modern rationality, the epistemologist David Bates has argued that the novelty of the Enlightenment, as a quest for knowledge, was a new methodology of error rather than dogmatic instrumental reason. In contrast, the project of AI (that is to say, almost always, corporate AI), regardless of, and perhaps because of, its dreams of superhuman cognition, falls short in recognising and discussing the limits, approximations, biases, errors, fallacies, and vulnerabilities that are native to its paradigm. A paradigm of rationality that fails to provide a methodology of error is bound, presumably, to end up as a caricature for puppetry fairs, as is the case with the flaunted idea of AGI (Artificial General Intelligence).

Given the degree of myth-making and social bias around its mathematical constructs, Artificial Intelligence has inaugurated the age of statistical science fiction.
