A thought about AI and biases in the training process


I want to highlight an aspect of AI and machine learning that often goes unnoticed but is significant.

When discussing AI, we often mean LLMs, or large language models. A model's reliability is intricately tied to the data it's trained on. The largest models are trained on vast amounts of internet data, which, as we all know, is a mixed bag. As Abraham Lincoln wisely said, "You can't believe everything you read on the internet."

Machine learning and AI can amplify any biases present in the training data. Consequently, the major companies in this field employ large teams of people who review datasets and assess their reliability by tagging the data. This manual review process, while not perfect, plays a crucial role in mitigating bias: the more data that is reviewed, the lower the likelihood of mistakes. At this stage in the development of AI language models, the effectiveness of these manual review processes may have a greater impact on the accuracy of AI outputs than the number of parameters or the algorithms used. Some, like Elon Musk, have dismissed this practice as "woke". But skipping these reviews will likely lead to AI failures: historical data is often not the best predictor of what we actually want, and relying on it uncritically is akin to abandoning critical thinking.

For instance, Apple and Goldman Sachs collaborated on a credit card that uses a model to set credit limits, trained on data from previous applicants. As a result, women and applicants from minority groups were reportedly less likely to be approved, or received lower credit limits than white male applicants, even when those men had lower incomes. The bias arose because the historical data showed that women and minority applicants had rarely been given high credit limits.
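
To make the mechanism concrete, here is a minimal sketch using synthetic data and hypothetical features (an income figure and a gender flag). It is not the actual Apple/Goldman Sachs model, just an illustration of how a classifier trained on biased historical approvals reproduces that bias:

```python
# Minimal sketch: synthetic data, hypothetical features. Illustrates how a model
# trained on historically biased approval decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: income (in thousands) drawn from the same distribution
# for both groups; is_female is a made-up 0/1 flag for illustration only.
income_k = rng.normal(60, 15, n)
is_female = rng.integers(0, 2, n)

# Historical approvals: income matters, but past decisions also penalised women.
logit = 0.05 * (income_k - 60) - 1.0 * is_female
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the historical decisions, with gender included as a feature.
X = np.column_stack([income_k, is_female])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical income, different gender.
same_income = np.array([[60, 0], [60, 1]])
print(model.predict_proba(same_income)[:, 1])
# The female applicant gets a lower predicted approval probability,
# purely because the historical labels encoded that bias.
```

Note that simply dropping the gender column doesn't solve the problem if other features act as proxies for it, which is exactly why careful dataset review matters.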

Another example is law enforcement agencies using AI to decide where to allocate resources based on predictions. Used without care, these tools concentrate police presence in historically poor or minority areas, perpetuating historical biases, as the sketch below shows.
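
A minimal sketch with entirely made-up numbers shows the feedback loop: if patrols are allocated from recorded incidents, the area that starts with more records keeps getting more patrols, and therefore keeps generating more records, even when the underlying crime rates are identical:

```python
# Minimal sketch (synthetic numbers) of the feedback loop in predictive policing:
# patrols are allocated from recorded incidents, and heavier patrolling records
# more incidents, which then justifies keeping the patrols there.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([10.0, 10.0])   # both areas have the same underlying crime rate
recorded = np.array([30.0, 10.0])    # but area 0 starts with more historical records

for year in range(10):
    # Allocate 100 patrol units proportionally to recorded incidents so far.
    patrols = 100 * recorded / recorded.sum()
    # Detection scales with patrol presence: more patrols, more incidents recorded.
    detection = np.clip(patrols / 100, 0.05, 0.95)
    new_records = rng.poisson(true_rate * 365 * detection / 10)
    recorded += new_records
    print(f"year {year}: patrols={patrols.round(1)}, recorded={recorded.round(0)}")

# Despite identical true crime rates, area 0 keeps receiving roughly three times
# the patrols and accumulating three times the records, year after year.
```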

The key differentiator among the significant players is how effectively they handle bias for their respective markets. OpenAI's GPT, Google's Gemini, Microsoft's Copilot, Meta's Llama, xAI's Grok, and others target different audiences. The open questions for AI right now are how effectively each can achieve its objectives, whether those markets will converge or diverge, and, therefore, which companies will come out on top.

My 2c - I'll add more later
