On the Media Wall and live streamed
How can we make sense of a vast quantity of photographs? Where do the categories and subcategories start to break down? What is gained, what is lost?
Over a two-month period, this display will feature a dataset of 14,197,122 photographs from ImageNet. These are organised into 21,841 synsets (categories), taken from a lexical database called WordNet.
Launched in 2009 by Stanford University Professor Fei-Fei Li, ImageNet has become one of the most important visual datasets for machine learning and image categorisation. Its creation precipitated an explosion of work in the field of computer vision, powering a wide variety of applications such as autonomous cars, surveillance systems, medical imaging and image filters.
Developing ImageNet was a hugely expensive and ambitious undertaking, presenting an encyclopaedic image of the world through photographs painstakingly collected from the web and annotated. This work was accomplished by more than 25,000 workers over a two-year period using Amazon Mechanical Turk, a crowdsourcing platform.
For this display, Nicolas Malevé wrote a computer script that cycles through ImageNet at a rate of one image every 90 milliseconds, traversing the entire dataset over a period of two months. The script pauses at random points to enable the viewer to ‘see’ some of the images. Malevé’s project raises questions about the relation of scale between the overwhelming quantities of images needed to train algorithms and the human attention and labour required to curate, annotate and verify the photographs.
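The cycling behaviour described above can be sketched in a few lines of Python. This is not Malevé’s actual script; it is a minimal illustration assuming a local directory of JPEG files stands in for the dataset, and the pause probability and pause length are hypothetical parameters chosen for the sketch.

```python
import random
import time
from pathlib import Path

DWELL = 0.09             # 90 milliseconds per image, as in the installation
PAUSE_PROBABILITY = 0.0005  # hypothetical: chance of lingering on any given image
PAUSE_SECONDS = 3.0         # hypothetical: how long a random pause lasts

def schedule(image_paths, rng):
    """Yield (path, dwell_seconds) pairs: 90 ms per image,
    with occasional random pauses so the viewer can 'see' an image."""
    for path in image_paths:
        if rng.random() < PAUSE_PROBABILITY:
            yield path, PAUSE_SECONDS
        else:
            yield path, DWELL

def play(image_dir):
    """Cycle through every image in image_dir on the schedule above."""
    rng = random.Random()
    paths = sorted(Path(image_dir).rglob("*.jpg"))
    for path, dwell in schedule(paths, rng):
        # In a real installation, the image at `path` would be rendered here.
        time.sleep(dwell)
```

Separating the timing logic (`schedule`) from playback (`play`) keeps the random-pause behaviour easy to test and adjust independently of how the images are actually displayed.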
Exhibiting ImageNet is the first project in Data / Set / Match, a series of projects at The Photographers’ Gallery exploring new ways to present, visualise and interrogate influential, but often unknown or hidden, contemporary image datasets. Over 2019/20, the programme will look at how new categorisations and technologies increasingly influence the way humans and machines see and understand the world today.