Zuck gives his AI a Right Wing bias

Meta is reportedly trying to make its AI less biased by pushing it towards the Right. Mark Zuckerberg has said as much in a recent Llama announcement post.

“It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics. This is due to the types of training data available on the internet. Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue. As part of this work, we’re continuing to make Llama more responsive so that it answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn’t favor some views over others.”

Trouble is, this is basically darkness being couched in language that makes it look like light. Because what is bias when it comes to a question such as evolution?

Is the AI supposed to say “there are two sides to the argument” and that “both sides are equally valid”? Is it supposed to put cockamamie ideas like young-earth creationism on the same pedestal as rigorous, scientifically validated theories like evolution by natural selection?

LLMs are going to give you what you put into them, with some amount of extrapolation and deviation. The reason generative AI seems to have an anti-Right bias is that the data it has been trained on was not created by Right Wing conspiracy theorists, religious nutjobs, and politically motivated grifters. It is the product of a nominally mainstream version of culture that has so far had a more or less positive approach towards things like kindness, science, and welfare. A twisted culture full of silly, toxic, or dangerous views will absolutely create AI models trained on such views, and we seem to be moving towards that culture with each capitulation by billionaire tech bros.

Some weeks ago, there was a lot of amusement on Indian Twitter about Grok, the LLM created by Elon Musk’s people, saying terrible things about Musk, including calling him a spreader of misinformation. I think there is a mistaken idea that generative AI tools have views. They don’t. They say what their training data says. Grok isn’t an actual intelligence. It is a product created by feeding popular information into a machine. It called Musk names because the real world calls Musk names. If it had been trained on datasets full of fawning praise for Musk, that is what it would have regurgitated.
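The point can be made concrete with a toy example. A bigram Markov chain is nowhere near how production LLMs are actually built, but it shows the same dynamic in miniature: the generator can only ever echo the text it was trained on. (The corpora and function names below are hypothetical, invented purely for illustration.)

```python
import random

def train_bigrams(text):
    # Build a bigram table: each word maps to the list of words
    # that have been observed to follow it in the training text.
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length=8, seed=0):
    # Walk the table from `start`. The output can only contain
    # words that appeared in the training data; the model has no
    # opinions of its own to add.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Two made-up training corpora about a hypothetical CEO.
critical = "the ceo spreads misinformation and the ceo misleads people"
fawning = "the ceo is a genius and the ceo is a visionary"

print(generate(train_bigrams(critical), "the"))  # echoes criticism
print(generate(train_bigrams(fawning), "the"))   # echoes praise
```

Swap the training text and the “views” swap with it. That is the whole trick: there is no belief in the machine, only a statistical echo of its inputs.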

Moreover, talk of moving AI towards a Right Wing bias is effectively talk of making it useless. In order to have a model that does not say what the real world says, you will need to train it on lies and propaganda. When that happens, you do away with the very advantage your AI tool offered: being useful to people looking for factually correct answers.

The only problem I foresee is the popular perception of AI. If people think AI tools employ some manner of independent thinking when providing answers, they are bound to think of biased answers as true too. This is why it is important to hammer home the fact that we shouldn’t treat chatbot answers as having the same weight as a human view. These are predictive text parrots that will repeat anything you feed them.
