
A physical, tangible representation of an image classification algorithm, translating its operations into a concrete, visible form. The human eye cannot fully grasp the meaning of the algorithm's actions, but their effects lead to the identification of the image's metadata, to understanding what is in the picture, and to categorising it, after specific Machine Learning training. In a sensitive content detector, the output is the censorship of certain categories of pictures.







Our concept took inspiration from reflecting on the theme of the exhibition, Exposed. One of our first approaches and interests was to understand what is unexposed, hidden, in the modern conception of photography.
Photography, by its nature, is tangible and unequivocal proof that a moment has happened (Barthes, 1980). Considering its evolution in the digital age, images are now produced far more frequently and in far greater quantities, thanks to the ease with which they can be captured. They have become ephemeral and fast, the result of freedom of expression, without limitations and often without a real purpose. The handling of such a large quantity of visual content is no longer dictated and regulated by human beings, but by the machine: every digital image carries metadata, i.e. information that identifies and differentiates it from others, and algorithms that, far from our eyes, read and analyse this information and classify the images.
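To illustrate the kind of information an algorithm can read before it ever "looks" at the picture, here is a minimal sketch in Python using the Pillow library; the file name photo.jpg is only an example, not part of the installation.

    from PIL import Image, ExifTags

    # Open an example image file (the file name is hypothetical).
    image = Image.open("photo.jpg")

    # Read the EXIF metadata embedded in the file and translate the
    # numeric tag identifiers into human-readable names
    # (e.g. DateTime, Make, Model, GPSInfo).
    for tag_id, value in image.getexif().items():
        tag_name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{tag_name}: {value}")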
Our installation aims to make the algorithm and the steps of computer vision tangible, exposing the actions that read and categorise images through the repetitive work of robots. Under each robot, a tablet shows a different picture and what happens when the algorithm analyses it: metadata extraction, object recognition and classification (through a pre-trained Machine Learning model), and the final decision on whether the content needs to be censored or not.
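As a purely illustrative sketch of the pipeline shown on the tablets, the following Python code classifies an image with a pre-trained network and then applies a simple rule to decide whether the content should be censored. The model choice (a ResNet-18 from torchvision, trained on ImageNet), the list of sensitive labels, and the file name are assumptions for the example, not the installation's actual implementation.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Hypothetical set of labels that would be treated as sensitive.
    SENSITIVE_LABELS = {"revolver", "rifle", "syringe"}

    # Load a pre-trained classifier and its matching preprocessing.
    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights)
    model.eval()
    preprocess = weights.transforms()

    def analyse(path: str) -> None:
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)          # prepare the image
        with torch.no_grad():
            probs = model(batch).softmax(dim=1)[0]      # object recognition
        top_prob, top_idx = probs.max(dim=0)
        label = weights.meta["categories"][top_idx]     # classification
        censored = label in SENSITIVE_LABELS            # final decision
        print(f"{path}: {label} ({top_prob:.2f}) -> censored: {censored}")

    analyse("photo.jpg")  # the file name is only an example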
The aim is to make the visitor aware of these automatic mechanisms, as well as to raise questions about them: who manages these algorithms? How do they work, and how were they trained? Who defines and decides how to categorise the content submitted to them? Behind the apparent automation there is, in fact, a model trained by humans on a selected and categorised dataset, which produces biases and censorship that is not always justified.