Artificial intelligence (AI) is technology that enables computers and digital devices to learn, read, write, create, and analyze. But can we call it AI if its results are intentionally biased? Take, for example, Google's artificial intelligence chatbot Gemini, which is taking some heat after refusing to show images of white people and producing image results that are clearly biased. Some pundits claim these skewed results are simply the product of algorithmic bias or limited contextual understanding. But that claim is debunked by the very results Gemini produces.
I tested Gemini myself and found what I believe are some technical explanations for what is really going on. I asked Gemini for pictures of "redheaded families." Gemini has a transformer-based architecture with attention mechanisms. During image retrieval, the user query ("redheaded family") is processed through the encoder, producing a vector representation. That vector is then compared against existing image embeddings in the database using a similarity metric, and the images with the highest cosine similarity scores are presented to the user. So when Gemini receives a request for a "redheaded family," it is tasked with retrieving images that align with that description. With no filter and adequate data, a family with red hair should be retrieved just as readily as a family with dark hair. However, rather than showing what I asked for, Gemini chose to showcase a mosaic of diverse family images, emphasizing inclusivity. I support inclusivity, but this is different: Gemini was programmed to filter out predominantly white faces. This is a clear example of an LLM struggling to understand the intent behind a query with cultural nuances because of biases created in its programming.
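To make that retrieval step concrete, here is a minimal sketch in Python. Gemini's internal pipeline is not public, so the toy embedding database and the pretend query vector below are assumptions for illustration only; the point is how cosine-similarity ranking behaves when no filter is applied:

```python
import numpy as np

# Illustrative sketch of embedding-based image retrieval with cosine
# similarity. This is NOT Gemini's actual (unpublished) implementation;
# the 3-dimensional "embeddings" below are hypothetical stand-ins for
# the high-dimensional vectors a real encoder would produce.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_top_k(query_vec: np.ndarray,
                   image_db: dict[str, np.ndarray],
                   k: int = 3) -> list[tuple[str, float]]:
    """Rank stored image embeddings by similarity to the query vector."""
    scored = [(img_id, cosine_similarity(query_vec, emb))
              for img_id, emb in image_db.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Toy database standing in for real image embeddings.
image_db = {
    "family_red_hair.jpg":  np.array([0.9, 0.1, 0.0]),
    "family_dark_hair.jpg": np.array([0.2, 0.9, 0.1]),
    "landscape.jpg":        np.array([0.0, 0.1, 0.9]),
}

# Pretend encoding of the query "redheaded family".
query_vec = np.array([0.85, 0.15, 0.05])

# With no filter, the closest match is simply whatever scores highest.
print(retrieve_top_k(query_vec, image_db))
```

Nothing in this ranking step knows or cares about the demographics of the query; it returns whatever scores highest. That is why, absent an explicit filter, a "redheaded family" query should behave like any other family query.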
This intentional filtering was a decision made by the developers at Google and is clearly part of Gemini's programming, which goes beyond the normal challenges AI faces from data limitations and the biases that exist on the internet. Intentional filtering of white faces in response to a request for a redheaded family image is only technically possible with explicit instructions embedded in the AI's programming. Therefore, this was a deliberate choice to exclude a particular group from the information Gemini would otherwise have access to. This is clearly a systematic and intentional effort on the part of Google: deliberate choices made during AI development that are not an attempt at inclusivity, but rather an attempt to produce results that exclude a particular group.
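To see why emergent data bias alone cannot produce category-specific exclusion, consider a purely hypothetical sketch of where such a filter would have to sit. This is not Google's code; the function and the blocked_terms parameter are invented solely to illustrate the structural point that exclusion requires an explicit, deliberately added rule:

```python
# Hypothetical post-processing filter, for illustration only. A plain
# cosine-similarity ranking (as sketched above) has no step like this:
# every candidate that scores highly is returned. Excluding a category
# only happens when a rule such as blocked_terms is explicitly added.

def apply_result_filter(ranked_results: list[str],
                        blocked_terms: set[str]) -> list[str]:
    """Drop any retrieved item whose identifier matches a blocked term."""
    return [item for item in ranked_results
            if not any(term in item for term in blocked_terms)]

ranked = ["family_red_hair.jpg", "family_dark_hair.jpg"]

# No rule: the ranking is returned unchanged.
print(apply_result_filter(ranked, blocked_terms=set()))

# Only an explicit, developer-supplied rule changes the output.
print(apply_result_filter(ranked, blocked_terms={"red_hair"}))
```

The similarity ranking itself is indifferent to the query's subject; it takes a deliberately written rule like the one above, placed between retrieval and output, to make certain results disappear.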