Jarrod Barnes

Posted on Nov 28, 2022

Bias in Generative AI - Risks and Ripple Effects of New Tech

I’ve recently been experimenting with generative AI (OpenAI, Midjourney, Lexica, and my personal favorite, Kive AI Canvas) and am absolutely fascinated by the tech. Hacking my way to better prompts for GPT-3 and text-to-image models has become a side hobby, a way to stretch my creative muscles and explore new use cases. I’m bullish on what generative AI can do for the creator economy: driving content creation costs toward zero and opening an even more accessible and inclusive pathway for creators, not to mention multiplayer and social experiences (see Circle Labs).

But as fun as it’s been exploring and ideating, I’ve asked myself: who trained these models? How might this be misused? (Thinking back to the invention of the endless scroll.) Turns out, I wasn’t the only one asking these questions. OpenAI released the second version of its DALL·E image generator in April to rave reviews, but efforts to address societal biases in its output have illustrated deeper systemic problems with AI systems.

https://www.nbcnews.com/tech/tech-news/no-quick-fix-openais-dalle-2-illustrated-challenges-bias-ai-rcna39918

“This is not just a technical problem. This is a problem that involves the social sciences…there will be a future in which systems better guard against certain biased notions, but as long as society has biases, AI will reflect that.” - Kai-Wei Chang, Associate Professor, UCLA Samueli School of Engineering

Now, to their credit, OpenAI has designed and implemented a new technique to aid in Reducing Bias and Improving Safety in DALL·E 2, claiming that users were 12× more likely to say that DALL·E images included people of diverse backgrounds after the technique was applied. That said, what does accountability at scale look like for the future of AI? As we know, governance often follows the tech (the White House published the Blueprint for an AI Bill of Rights ~5 years after IBM and Microsoft launched task forces to mitigate bias).

Recently, Hugging Face, an open-source AI community focused on democratizing good machine learning, launched the Stable Diffusion Bias Explorer, a tool that shows how words like "assertive" and "gentle" map onto sexist stereotypes. The project is one of the first interactive tools of its kind, letting users combine different descriptive terms and see firsthand how the AI model maps them to racial and gender stereotypes.

https://www.vice.com/en/article/bvm35w/this-tool-lets-anyone-see-the-bias-in-ai-image-generators
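For the curious, here is a minimal sketch of the kind of comparison the Bias Explorer makes, written with Hugging Face's diffusers library. To be clear, the checkpoint, adjectives, and professions below are my own illustrative picks, not the tool's actual configuration:

```python
# Sketch: generate images for adjective x profession prompts and compare
# who the model depicts. Assumes the open-source diffusers library and a
# publicly available Stable Diffusion checkpoint (illustrative choice).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

adjectives = ["assertive", "gentle"]
professions = ["CEO", "nurse"]

for adj in adjectives:
    for job in professions:
        prompt = f"Photo portrait of a {adj} {job}"
        # One image per prompt; comparing the grid side by side is where
        # the stereotype patterns start to show.
        image = pipe(prompt, num_inference_steps=30).images[0]
        image.save(f"{adj}_{job}.png")
```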

The increasing complexity of these black-box AI systems has made it challenging for scientists to understand how they work beyond looking at inputs and outputs. While it’s impossible to fully eliminate human bias from human-made tools, tools that create greater awareness can help both researchers and users understand how bias shows up in AI systems.

What does the future hold for generative AI? Certainly more creation, innovation, and expanded use cases. My hope is that we consider the ripple effects as we continue to build for the future. Have any perspectives on this topic? I’d love to chat and learn from you - feel free to reach out!
