
Carried over the edge

Water flows in a long exposure over a rocky waterfall. Itsukushima, Japan. AI was used to help in the editing of this photo, but nothing has been added or removed.

Much has been written in recent weeks about the lightning-quick advances in AI, especially with respect to ChatGPT and similar programmes which generate text. There have been debates about AI chatbots and customer care systems; about AI in online dating; about the potential dangers posed by unregulated AI according to two of the three “Godfathers of AI”.

While all of this is worthy of discussion and debate and, perhaps, even anxiety, it does not pique my interest as much as the rise of AI in photography.

To be clear, manipulation and photography have gone hand in hand since the birth of the medium, and consequently the arguments around “truth” in photography have always existed. Every decision a photographer makes has a bearing on the image the viewer sees. The location, the light, the background and foreground, the relationship between the subject and other elements, the choice of film (or now the colour settings on the camera), the lens, the exposure… even the exact moment of releasing the shutter, and the decisions about which elements to leave outside the field of view all have a direct impact on how the resulting image is regarded.

Is it the “truth”? That, surely, is a question of philosophy which has no relevance to the veracity of the photograph as a record of what was there.

It has always been the prerogative of the photographer to edit their image so that it accords with their own intellectual, emotional, or visceral response to a scene. For many purists such editing was and is confined to relative changes in contrast, exposure, saturation, and cropping. That is to say that nothing is added to or removed from the field of view remaining in the final photograph. It is a truthful reflection of what was in front of the photographer’s camera and lens.

Indeed, within the world of newsgathering the importance of not altering the contents of a photograph is sacrosanct, and rightly so. Photographers who have added or removed things from a picture to make it more appealing have been sacked and told they will never work in news photography again.

Which brings us to AI. Artificial intelligence started to appear in photographic editing software a few years ago, and has improved incrementally, but until recently its use was in making basic editing either quicker or better.

For example, one may want to edit the colour balance and contrast of a subject without affecting the surrounding scene. Historically this would have taken some time to do, especially if the subject had a complex outline. Now, using AI, a person or object can be selected at the touch of a button, making the editing process more efficient. Similarly, AI can be used to reduce noise in an image, or to improve the sharpening process (all digital images are necessarily “soft”, and sharpening is a necessary part of the editing process, but that is a whole other conversation – get in touch if you would like to know more).

However, recently we have seen AI-generated images on social media (like the highly entertaining set imagining the Royal Family in a different light https://www.instagram.com/p/Cr-3KjsM1jq/ ), and Adobe has launched a beta version of Photoshop which offers “generative AI”. This is not just a game changer; it is a paradigm shift.

There are photographers – or lens-based artists – who have always manipulated their photos, adding or subtracting elements to create their work, and what they create is often exquisite and moving. But there is never any suggestion that it is anything other than a work of art – a realisation of the author’s imagination. The danger of rapidly accelerating advances in generative AI is that when anything we can imagine can be so readily created and disseminated, the huge rivers of photographs we already experience will be flooded with images that have no relationship to reality at all but cannot be distinguished as such. Without any form of regulation, our ability to discern “truth” from “fiction” will be eroded, and with it our trust in what we see.

Arguments already exist that trust in photographs was eroded years ago, and to an extent that is undeniable. But when generative AI reaches a point that the images it creates are impossible to pick out from “real” photographs, will we ever trust anything we see again?

Certainly, the mainstream media will continue to demand that the images and films it publishes in the news are not doctored in any way, but with ever greater numbers of people getting their news from social media channels where there is no regulation, the potential for damage to a community and its understanding of events is worthy of discussion.
