Living in an AI World: Will We Survive and in What Reality?

A female pope? A black Viking?

Yes, according to Google’s generative artificial intelligence (AI) program Gemini, which had produced just such images about two weeks before Musk’s X post. Gemini became the object of ridicule on social media after depicting historically or biologically impossible images, which seemed to result from a perverse commitment to promoting “diversity, equity, and inclusion” (DEI).

Earlier versions of generative AI had displayed biases, such as consistently showing images of white doctors or black felons, so programmers tried to automatically insert diverse responses into queries. The result was a sometimes-comical over-correction that distorted reality. When people asked to see examples of British monarchy, Gemini came back with images depicting English monarchs in a variety of historically inaccurate races and ethnicities; other prompts yielded female popes, black U.S. founding fathers, and black Vikings. When asked to show examples of "gingers," the program showed black or Asian people with red hair, a combination not found in nature.

Many just laughed it off as obviously absurd, but there were some discussions about AI's inherent biases, along with its incredible and growing power to influence perceptions and shape reality. The images were comical if you knew they were impossible, but what if someone didn't know that? What if a child didn't know any better—would that child believe there had been female popes or that Vikings were black?

If you search "Is AI a threat to humanity?", it shouldn't be lost on you that you are using AI to answer a question about itself. So, if AI had an agenda, any answer would be suspect.

Despite the recent DEI kerfuffle, such a search could bring up a recent article from the RAND think tank called "Is AI an Existential Risk? Q&A with RAND Experts," in which one of their experts, Jonathan Welburn, seems to be in imposed correction mode because he is most concerned about AI's ability to exacerbate inequity. "AI bias might undermine social and economic mobility," he said. "Racial and gender biases might be baked into models." Another expert was most concerned about AI's effect on climate change. Given RAND's close relationship to the military-industrial complex, it was surprising that the article didn't mention risks such as AI independently launching nuclear weapons. The experts did, however, express concern about humans using AI to access weapons, whether nuclear, cyber, chemical, or biological. RAND professes a commitment to objectivity, but the organization has received a considerable amount of money from Open Philanthropy, a supporter of OpenAI.

Geoffrey Hinton, the computer scientist some call the "Godfather of AI," gave a lecture in which he said he thinks "there is a one in ten chance everyone will be dead from AI in 5-20 years," because AI's "hive mind" lets it connect with other AI programs that learn from each other, creating a huge advantage over humans, especially if AI develops the intention to take control. "OpenAI's latest model GPT-4 can learn language and exhibit empathy, reasoning, and sarcasm," he said. "If I were advising governments, I would say that there's a ten percent chance these things will wipe out humanity in the next 20 years."

Several weeks earlier, Turing Award winner Yoshua Bengio said there's a one-in-five chance we all die.

Something like that seems more serious than misgendering Caitlyn Jenner. Yet when an X user asked Google's Gemini for a concise answer on whether it would be acceptable to misgender Caitlyn Jenner if doing so would save the world from nuclear war, the answer was very concise: "No."

Google executives apologized, with Google's chief executive, Sundar Pichai, calling it "completely unacceptable." Musk, who owns Gemini competitor Grok, warned that Google would do a better job in the future of concealing its bias, but the bias would still exist. Google's stock temporarily took a hit, and Gemini suffered a public relations setback in the competitive world of generative AI. A week or two later, however, everyone seemed to have forgotten about it. Maybe we shouldn't have.


Photo by Jared Gould — Adobe — Text to Image 


2 thoughts on "Living in an AI World: Will We Survive and in What Reality?"

  1. Yes, it’s getting very hard to tell what’s true and, because of AI, it’s going to get much harder, especially when so many have no interest in pursuing truth but only want to promote certain agendas.

  2. AI is like the mainstream media—something that should never be trusted or taken at face value. Both do nothing but express opinions, masquerading as truth when they just reflect the biases, prejudices, and narratives of the writer (MSM) or the programmer (AI).

    And Google is completely dishonest. They knew what Gemini was producing and released it anyway. To believe otherwise means you have to believe Google released a product they had never tested. At all. Nobody does that.
