When it comes to seeking guidance or making important decisions, we often turn to experts for their advice. But have you ever stopped to think about why we trust the experience of experts? Is it simply because they have undergone rigorous training and education in their field, or is there something more at play? If we trust experts to give us accurate and reliable recommendations, why don’t we trust AI models to do the same?
One possible reason we trust experts is that they are affiliated with reputable organizations or institutions. These organizations often have strict standards for their experts and hold them to a high level of accountability. This means that the experts associated with these organizations have likely undergone thorough vetting and have a reputation to uphold. In other words, experts are not only accountable to themselves, but also to the organizations they represent.
When we ask experts for help or guidance, we usually don’t ask them to explain exactly how they arrived at their conclusions. We trust that their expertise and experience have led them to the correct answer.
However, there are also cases where experts may not be able to fully explain how they arrived at a particular conclusion. This is known as “expert intuition” or a “gut feeling,” and it is based on the expert’s deep understanding of their field and their experience with the specific situation at hand.
On the other hand, AI models do not have the same level of transparency and accountability. They are often developed and trained by a small team of developers rather than a larger institution with a reputation to protect, or the quality of the model’s output is not tied to the organization’s reputation. As a result, it can be more difficult for users to understand how and why an AI model is making certain decisions or recommendations. It can also be challenging to understand the context (or set of inputs) in which an AI model is making a prediction or recommendation, as it may not have the same level of understanding and knowledge of the situation as a human expert.
Another potential reason we trust experts over AI models is that experts are people, and people can usually understand what motivates other people.
This can make it easier for us to trust experts, as we can relate to their perspective and understand their motivations. AI models, on the other hand, do not have personal motivations or perspectives in the same way. This lack of personal connection can make it more difficult for us to trust AI models and their recommendations.
It’s also worth considering the issue of bias in AI models. These models can be biased if the data they are trained on is biased or flawed in some way. This can lead to unfair or inaccurate results, which can further undermine trust in the model. Experts, on the other hand, are less likely to be biased in this way, as they have a more holistic understanding of the issues at hand.
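To make that point concrete, here is a minimal, hypothetical sketch (not from the article, and with made-up data): a toy “model” that simply learns approval rates per group from skewed historical decisions will reproduce that skew in its recommendations.

```python
# Toy illustration: a model trained on skewed historical decisions
# reproduces the skew. All data below is invented for illustration.

from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": learn the approval rate per group from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def recommend(group: str) -> bool:
    """Recommend approval if the historical approval rate exceeds 50%."""
    approvals, total = counts[group]
    return approvals / total > 0.5

# The model inherits whatever bias the historical decisions contained:
print(recommend("group_a"))  # True  (75% historical approval)
print(recommend("group_b"))  # False (25% historical approval)
```

Nothing in the toy model is malicious; it simply learns the data it is given, which is exactly why flawed training data leads to unfair or inaccurate results.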
While AI models can certainly be highly effective and efficient in certain contexts, it’s important to consider why we may be more inclined to trust experts over AI models. Is it simply because of the training and education they have received, or are there other factors at play? This is a topic worth exploring and discussing, as the use of AI models in various fields is only likely to increase in the coming years.
P.S.: There is also a major, highly debatable issue without a clear answer: liability for wrong recommendations. An expert is liable for the recommendations they give, but when an AI model gives bad recommendations, it isn’t clear where the liability lies: with the developers, with those who put the model to use, with whoever supplied the data to the model, etc.
P.P.S.: Full disclosure: this article was written with the help of an AI model developed by OpenAI (ChatGPT-3). The message and opinion this article conveys are the convictions of the author. But it also shows the potential of AI to assist humans: instead of taking many hours to write, this article was finished in 1.5 hours with the help of this AI tool.
By: João Alves