Stanford measured how transparent companies are about their Large Language Models (LLMs) and other foundation models.
“The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown”― H.P. Lovecraft
In May this year, Geoffrey Hinton, dubbed by many the father of artificial intelligence, said, “I have suddenly switched my views on whether these things are going to be more intelligent than us.” Hinton said he was surprised by the capabilities of GPT-4 and wants to raise public awareness of the serious risks of AI.
Although artificial intelligence will probably not lead humanity to extinction, many experts agree that it poses real risks. Often, these risks are not even the ones imagined during the development of a technology. For example, social media and opaque content moderation contributed to the Rohingya genocide in Myanmar.
In addition, the Cambridge Analytica scandal and other controversies around moderation and data management have shown that technological opacity leads to harm. That is why calls for transparency in AI models have multiplied:
“make transparency, fairness and accountability the core of AI governance … [and] Consider the adoption of a declaration on data rights that enshrines transparency.” — António Guterres, UN Secretary-General, source
At a time when large language models (LLMs) are increasingly used for sensitive applications (medicine, law, and so on), we need to be able to test them systematically for errors, biases, and potential risks. Several studies show that LLMs can leak personal data, so we need to know their limitations and the data they are trained on.