'Woke' Google's AI Is Hilariously Glitchy #TYT

The Young Turks
27 Feb 2024 · 08:18

TLDR: The video discusses the controversy surrounding Google's AI program, Gemini, which has been criticized for generating diverse images in historically inaccurate ways. Examples include portraying US senators from the 1800s as diverse, despite the era's lack of diversity, and the AI's refusal to generate certain images, such as German soldiers from the Nazi period. The conversation highlights how programmers' cultural biases shape AI outcomes and emphasizes the importance of accurate historical representation. Google has acknowledged the overcorrection and is working to fix these issues.


  • 🚫 Google's AI program, Gemini, has faced criticism for generating diverse images inaccurately and offensively.
  • 🔍 The AI's overcorrection in representing diversity led to questionable results, such as depicting 1800s US senators as diverse despite historical inaccuracy.
  • 🤔 The AI's response to a request for an image of the Founding Fathers was inaccurate, reflecting a wish for diversity rather than historical truth.
  • 🌍 The importance of the programmers' culture and perspective in shaping AI output was highlighted, as it can inadvertently seep into the AI's responses.
  • 🖌️ The AI's refusal to generate certain images, like German soldiers from the Nazi period, shows its ability to reject requests based on pre-programmed guidelines.
  • 📸 Gemini's image generation is based on large datasets, which can lead to the amplification of stereotypes.
  • 🔄 Google has acknowledged the overcorrection issue and is working on fixing it, showing a responsive attitude toward improving its AI technology.
  • 📈 A Washington Post investigation revealed biases in AI image generation, with certain prompts leading to predominantly white, male figures, while others led to images associated with people of color.
  • 📚 The potential misuse of AI-generated images in academic work is a concern, as inaccuracies can lead to misinformation.
  • 💡 The development and improvement of AI technology, like Google's Gemini, is an ongoing process that requires addressing biases and refining algorithms.

Q & A

  • What is the main issue with Google's AI program, Gemini?

    -The main issue with Google's AI program, Gemini, is its insistence on generating diverse images, sometimes leading to inaccurate and offensive results due to overcorrection.

  • How did Gemini respond to a request for an image of a US senator from the 1800s?

    -Gemini returned diverse results for an image of a US senator from the 1800s, which was historically inaccurate, as the Senate of that era was overwhelmingly white and male.

  • What was the problem with the diverse results provided for the US senator from the 1800s?

    -The problem was that the results did not accurately represent the historical reality of the time: there were no Asian senators during that period due to immigration restrictions, a reflection of the country's racist past.

  • How did the culture of programmers influence the AI's responses?

    -The programming field, once made up mostly of white men, has changed, and there is now an effort to correct past biases; that effort sometimes leads to overcorrection and introduces new absurdities into the AI's responses.

  • What was the AI's response to a request for a photo of happy white people?

    -The AI responded by gently pushing back on the request and encouraging a broader perspective, highlighting that focusing solely on the happiness of specific racial groups can reinforce harmful stereotypes.

  • How did Gemini handle a request for a photo of happy black people?

    -Gemini provided a photo of happy black people without pushing back or offering a lecture on stereotypes, which raised questions about the consistency and fairness of the AI's responses.

  • What was the outcome when a reporter from The Verge asked Gemini to generate an image of German soldiers from the Nazi period?

    -Gemini resolutely refused to provide images of German soldiers or officials from Germany's Nazi period, showing that the AI can protest certain requests.

  • How did Google respond to the critiques of Gemini?

    -Google acknowledged that they overcorrected and stated that they are working on fixing the issues raised by the critiques.

  • What does the Washington Post investigation reveal about image generators?

    -The Washington Post investigation found that image generators, trained on large datasets, can amplify stereotypes, as seen with prompts like 'a productive person' resulting in pictures of white males, and 'a person at social services' producing images of people of color.

  • What is the main takeaway from the issues surrounding Gemini's image generation?

    -The main takeaway is that while efforts to correct historical biases are important, overcorrection can lead to new problems, and it's crucial to ensure that AI responses are accurate, fair, and unbiased.



🤖 Google's AI Program Controversy

The first paragraph discusses the criticism surrounding Google's AI program, Gemini, for its tendency to generate diverse images in a sometimes inaccurate and offensive manner. The speaker acknowledges the importance of representation but points out that Gemini has overcorrected, leading to questionable results such as generating images of diverse US senators from the 1800s, a time not known for diversity. The speaker emphasizes the need for AI results to be historically accurate rather than simply diverse. The paragraph also touches on the influence of the programmers' culture on AI outcomes and the potential biases that can be inadvertently introduced through coding.
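The overcorrection described here can be pictured with a toy sketch: a blanket rewrite rule that bolts a diversity modifier onto every image prompt will misfire on prompts that specify a historical setting. Nothing below reflects Gemini's actual pipeline; the `naive_rewrite` and `context_aware_rewrite` functions and the marker list are hypothetical, invented purely for illustration.

```python
# Toy illustration of how a blanket "diversity rewrite" rule can overcorrect.
# All names and rules here are hypothetical, not Gemini's real implementation.

HISTORICAL_MARKERS = {"1800s", "founding fathers", "medieval", "viking"}

def naive_rewrite(prompt: str) -> str:
    """Append a diversity modifier to every image prompt, unconditionally."""
    return prompt + ", depicting people of diverse ethnicities and genders"

def context_aware_rewrite(prompt: str) -> str:
    """Skip the modifier when the prompt asks for a specific historical setting."""
    if any(marker in prompt.lower() for marker in HISTORICAL_MARKERS):
        return prompt  # historical accuracy takes priority over the modifier
    return naive_rewrite(prompt)

prompt = "a US senator from the 1800s"
print(naive_rewrite(prompt))          # modifier added: historically inaccurate output
print(context_aware_rewrite(prompt))  # prompt left unchanged
```

The point of the sketch is that the failure is not in wanting diverse outputs but in applying the rule without regard to context, which is exactly the overcorrection the video describes.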


๐Ÿ–ผ๏ธ AI Image Generation Bias and Corrections

The second paragraph continues the discussion of AI image-generation bias, highlighting inconsistencies in how Gemini responds to different prompts. It points out that while the AI pushed back on a request for an image of happy white people, it provided an image of happy black people without any comparable lecture. The speaker criticizes this as an example of the AI's inability to respond consistently and accurately. The paragraph also notes that Google has acknowledged the overcorrection issue and is working on fixing it. Additionally, it mentions a Washington Post investigation that found AI-generated images reinforcing stereotypes based on the prompts used. The speaker expresses hope that Google will address these issues and curiosity about the future development of AI in this context.




💡Gemini

Gemini is an AI program developed by Google and the focus of the video. It is criticized for generating diverse images in a sometimes inaccurate and offensive manner. The term is used to discuss the program's shortcomings in representing historical accuracy and the implications of such AI behavior.


💡Diversity

Diversity refers to the variety of differences among people, including but not limited to race, ethnicity, gender, and cultural background. In the context of the video, it is used to describe the AI's attempt to represent different groups, which, however, led to questionable results due to overcorrection.


💡Overcorrection

Overcorrection is the act of correcting an issue or problem to such an extent that it leads to new, unintended problems. In the video, this term is used to describe Google's AI program's attempt to be inclusive, which instead resulted in historically inaccurate representations.

💡Historical Accuracy

Historical accuracy refers to the truthful representation of past events, people, or conditions in accordance with the evidence and historical records. The video discusses the importance of historical accuracy in AI-generated images and criticizes the AI for not maintaining this accuracy.


💡Stereotypes

Stereotypes are widely held but fixed and oversimplified ideas or beliefs about a particular group or class of people. The video addresses the issue of AI potentially amplifying stereotypes through its image generation based on certain prompts.
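The amplification point, echoed in the Washington Post findings cited above, can be illustrated with a toy sketch: when a generator collapses to the most frequent pairing in its training data, a 70% majority in the data becomes a 100% majority in the output. The counts and the `mode_sampler` function below are invented for illustration, not drawn from any real system or dataset.

```python
# Toy sketch of how a skewed training set can amplify a stereotype.
# The caption/demographic counts are invented, purely for illustration.
from collections import Counter

# Hypothetical caption/demographic pairs scraped into a training set.
training_set = (
    [("productive person", "white man")] * 70
    + [("productive person", "white woman")] * 15
    + [("productive person", "person of color")] * 15
)

def mode_sampler(prompt: str) -> str:
    """Return the most frequent demographic paired with this caption."""
    counts = Counter(demo for caption, demo in training_set if caption == prompt)
    return counts.most_common(1)[0][0]

# A 70% majority in the data becomes the only output the sampler ever gives:
print(mode_sampler("productive person"))
```

Real generators sample rather than take the mode, but the same pressure applies: skew in, amplified skew out.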

💡Cultural Bias

Cultural bias refers to the inclination of a group or individual towards certain cultural values, beliefs, or practices, often leading to prejudice or discrimination against other cultures. In the video, cultural bias is discussed in the context of AI programming, where the perspectives and biases of the coders can influence the AI's output.

💡Racial Representation

Racial representation refers to the depiction or portrayal of different racial groups in media, art, or other forms of expression. The video addresses the complexities of racial representation in AI, where efforts to be inclusive can sometimes lead to misrepresentation or oversimplification.

💡AI Ethics

AI Ethics refers to the moral principles and values that guide the development and use of artificial intelligence. The video touches on the ethical considerations of AI, particularly in how it handles historical accuracy, diversity, and potential biases.

💡Media Bias

Media bias occurs when media outlets present news or information in a way that favors a particular perspective or viewpoint. The video implies that mainstream media's concept of 'objectivity' may actually be a form of media bias that upholds the status quo.

💡Programmer Perspective

Programmer perspective refers to the viewpoints, beliefs, and values of the individuals who write and develop code for AI systems. The video emphasizes that these perspectives can significantly influence the behavior and output of AI, leading to potential biases in the AI's responses.


Google's AI program, Gemini, has been criticized for generating diverse images inaccurately and offensively.

Gemini's attempt to represent people of all backgrounds and races led to questionable results.

A request for an image of a US senator from the 1800s returned diverse results, which was historically inaccurate.

The 1800s in the US was not a time of celebrating diversity, and Gemini's results did not reflect this reality.

Gemini's response to a request for an image of the Founding Fathers was inaccurate and did not align with historical facts.

The influence of the culture of programmers on AI output, including their biases and perspectives.

Gemini's refusal to generate an image of Vikings, German soldiers from the Nazi period, or an American president from the 1800s.

AI's potential to protest requests and provide responses that may be considered offensive or inappropriate.

Google's acknowledgment of overcorrection and commitment to fixing the issues with Gemini.

The importance of understanding that AI's perceived objectivity is often influenced by the perspectives of its creators.

The challenge of AI image generators to avoid amplifying stereotypes when trained on large datasets.

A Washington Post investigation revealing biases in AI image generation based on certain prompts.

The potential consequences for students using inaccurate AI-generated results in their academic work.

The expectation that AI development will continue to evolve, addressing these issues in the future.

The presence of glitches in newly released AI technologies and the process of refining them over time.

The need for AI to provide accurate historical representations and not just cater to the idea of diversity.