US senators have expressed concerns to Meta CEO Mark Zuckerberg regarding the company's open-source AI model, LLaMA.
They believe that LLaMA, which became freely available after being leaked, could be misused for harmful and criminal activities.
In a letter dated June 6, the senators criticized Zuckerberg for releasing LLaMA without adequate safeguards. In their view, Meta's approach of allowing unrestricted access to the model failed to account for the risks involved and, ultimately, to protect the public.
Originally intended for researchers only, LLaMA was leaked on BitTorrent shortly after its announcement. With no oversight or control over how the model is used, the senators worry it could be exploited by spammers, cybercriminals, and those sharing objectionable content.
To illustrate their concerns, the senators compared LLaMA to closed-source models such as OpenAI's GPT-4 (the model behind ChatGPT) and Google's Bard. They pointed out that LLaMA will generate abusive and harmful content, including responses related to self-harm, crime, and antisemitism. In contrast, GPT-4 is bound by usage guidelines and refuses such requests.
Although GPT-4 is designed to reject certain prompts, some users have found "jailbreak" techniques that bypass those restrictions and coax the model into producing unintended and potentially harmful responses.
In their letter to Zuckerberg, the senators asked about the risk assessments Meta conducted before LLaMA's release, the measures the company has taken to prevent or mitigate potential harm, and how it uses personal data in AI research. Their aim was to better understand Meta's approach and ensure public safety.
Meanwhile, OpenAI is reportedly working on an open-source AI model of its own in response to the advances made by other open-source models. The senators' concerns highlight the importance of responsible development and release of AI models to prevent potential harm.
Overall, the senators have raised the alarm about Meta's open-source AI model, LLaMA, fearing its misuse for harmful purposes. They emphasize the need for stronger safeguards and responsible practices in developing and releasing AI models in order to protect the public.