A Paralympian’s fight to be seen by AI

An athlete’s simple photo request turned into a powerful reminder that even the smartest technology still struggles to truly see everyone.

When Australian Paralympian Jess Smith used an AI image generator to create a new headshot, she discovered that technology still struggled to represent her accurately. Despite clearly prompting the system to show her missing left arm below the elbow, every generated image depicted her with two arms or a prosthetic.

The AI explained that it lacked the data to create such an image, a moment that made Smith realise how digital systems often mirror real-world inequalities and biases.

Months later, after media attention and reported model updates, the same AI finally produced an image that reflected her reality. For Smith, this was more than a technical fix.

‘Representation in technology means being seen not as an afterthought, but as part of the world that’s being built,’ she said, calling it ‘progress in humanity.’

OpenAI confirmed that it had recently improved its image-generation models, though it acknowledged that fair representation remains a work in progress.

Others, like Naomi Bowman, who has sight in only one eye, are still facing similar struggles. When she asked the AI to edit a photo without altering her facial features, it ‘evened out’ her eyes instead. Bowman called the experience ‘sad,’ saying it exposed the biases built into AI systems and urged developers to train models on broader, more inclusive data sets.

Experts warn that such biases persist because of who builds and labels the data.

‘It’s about who’s in the room when the data is being built,’ said Abran Maldonado of Create Labs, emphasising that inclusion must begin at the design stage.

For Smith, the lesson is clear. Technology isn’t inherently exclusionary but becomes so when designers fail to imagine everyone in the world it seeks to represent.