FEATURED ARTWORKS OF THE WEEK
ONe Rad Latina is a self-taught, neurodiverse, multidisciplinary visual artist born and raised between New York City’s Inwood neighborhood and the Dominican Republic.
Her passion lies in public art, and she can often be found creating murals and art installations out of found or repurposed materials, bringing beauty back to the communities of New York.
ONe Rad Latina works with different mediums and prides herself on a prolific body of work and on not being tied down to any one visual artist’s craft.
For her digital works, she combines traditional techniques with new technologies to bring better representation to Women and People of Color in web3.
Today, we’re hearing from ONe Rad Latina on the effect of AI engine bias and her work to improve AI engine diversity and cultural inclusion.
The Cultural Algorithm by ONe Rad Latina
We live in a world that is run by AI (Artificial Intelligence). From the song that plays next on your playlist to the ad you’ll see when streaming your favorite show, AI has its virtual hand in all of it.
Much like the algorithm in your favorite music app, AI engines using text-to-image algorithms are learning from your preferences.
Because these text-to-image algorithms learn based on human behavior, many have shown a tendency toward the sexualization of women and/or the exclusion of people of color in image renders.
I was very aware of this when I began exploring the concept of adapting AI as a tool in my artistic practice, so I first did a bit of research into the different AI models and where their outputs leaned toward certain biases.
This was especially important for me as an Indigenous Latina Woman working to reflect my cultural identity and who I am in my work.
When I later began to create artwork using the AI models, I saw that bias for myself.
I saw that if I did not state the racial and ethnic descriptions, the AI would return images of mostly white subjects regardless of cultural cues. If no gender was implied, then the subjects would tend to be mostly male.
It was disappointing but not surprising.
After working to find descriptions of subjects that left no room for doubt about what I needed, the AI slowly started to produce accurate imagery with less prompting for race or gender, based only on the cultural cues it had overlooked before. It did this by learning from my many attempts, triumphs and failures alike, throughout the render and edit process.
The AI was able to differentiate between actual culture and costume in future works, removing a built-in bias; I was amazed.
The development of these AI algorithm tools is in the very early stages; trial and error are part of the process.
Creatives like me, from communities whose cultures are often as exploited as their people, finally have an opportunity to drive the way these machines learn our cultures. We can guide them into a place where they can effectively work with artists in a truly meaningful and equitable way.
To me, as a creative person, it’s important to maintain a working relationship with these tools, so that they may correctly reflect my culture and not what others who are not a part of it have perceived it to be.
Last week, we spoke with JenJoy Roybal on her curation process for Machine Dialogues, our latest exhibition. Check out her interview in last week’s blog post here.