Meta open sourced its large language model LLaMA earlier this year, in contrast to competitors Google and OpenAI, which have not released their latest large models. LLaMA has accelerated the development of large models, but critics including Google and OpenAI argue that an unconstrained open-source approach is dangerous. Zoubin Ghahramani, Google's vice president of research, believes it could lead to abuse. Yann LeCun, chief scientist of Meta AI, countered that the growing secrecy at Google and OpenAI is a huge mistake, and that consumers and governments will refuse to embrace AI unless it is outside the control of companies like Google and Meta.

Google, Microsoft, and OpenAI are the most visible stars in the field of AI, but Meta has also been deeply involved in it for nearly a decade.

Stanford researcher Moussa Doumbouya used LLaMA to generate questionable text, including instructions for disposing of a dead body without getting caught and an article supporting Hitler's views. In private chats, he wrote that distributing the technology to the public is like letting "everyone buy grenades at the grocery store." LeCun believes that the creation and dissemination of false information and hate speech have long existed and cannot be stopped at the source, but that platforms can prevent their spread. He argues that the most dynamic ecosystem is an open one to which everyone can contribute.