Galactica to Llama: Meta’s AI Model Journey Demonstrates a Lack of AI Governance


Over the past year, Meta’s venture into large language models (LLMs) has seen both successes and challenges. Two weeks before OpenAI launched ChatGPT, Meta unveiled Galactica, an open-source LLM tailored for scientific use. Despite initial optimism, Galactica was taken down after just three days over concerns that its output contained plausible yet inaccurate and, at times, offensive information, commonly referred to as hallucinations.

Yann LeCun, Meta’s chief scientist, defended Galactica, emphasizing its research nature and dismissing criticism as stemming from casual misuse. However, the model fell short of expectations, leading to its removal. Two weeks later, ChatGPT entered the scene with its own challenges, including acknowledged hallucination issues. Had best practices of AI governance been in place, an Ethics Committee might have concluded that framing a hallucinating model as a “research project” for use by researchers and scientists was insufficient justification for its release.

LeCun’s commitment to open research continued with the release of Llama in February 2023. Although its license did not fully adhere to traditional open-source terms, Llama was positioned as an effort to open-source AI. LeCun highlighted Meta’s dedication to open research while taking a cautious approach that required researchers to fill out a form to request Llama access.

Lessons from Galactica informed Meta’s approach to responsible AI development. Joelle Pineau, Meta’s VP of AI research, emphasized that Galactica was a research project, not a product, and that Meta had misjudged public expectations. These lessons influenced subsequent releases, including Llama, Code Llama, and Llama 2, where Meta sought to better manage expectations and guide users responsibly.

Despite these challenges, Meta’s models have made significant strides in the AI research community. Though Galactica itself was short-lived, its legacy forms the basis of Meta’s ongoing commitment to enhancing and responsibly deploying large language models. The journey from Galactica to Llama reflects the industry’s continuous learning and adaptation to both technical and ethical considerations.

In the context of Meta’s AI model journey, it is paramount to address the evolving regulatory landscape, particularly concerning the use of children’s data. With the Federal Trade Commission (FTC) imposing restrictions on Meta’s data monetization, the company faces a significant compliance challenge, especially in light of its expansive user base, which includes minors. The ethical implications of large language models, such as Galactica and Llama, amplify when considering the potential impact on children. As Meta strives to enhance its governance measures, a meticulous examination of compliance with regulations like the Children’s Online Privacy Protection Act (COPPA) becomes imperative. The call for responsible AI development should encompass not only transparency and ethical considerations but also adherence to legal frameworks that safeguard the privacy and well-being of the youngest users in the digital landscape. This intersection of ethical and regulatory challenges underscores the complexity of navigating the AI landscape responsibly.
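To ground the compliance point, here is a minimal sketch, under assumed data structures, of one control a governance program might mandate: filtering records tied to users below COPPA’s age threshold out of any training corpus before model development begins. The record schema and pipeline are hypothetical, not a description of Meta’s systems.

```python
from dataclasses import dataclass

# Hypothetical pre-training compliance filter: records tied to users under
# COPPA's age threshold never enter the training corpus. Schema is illustrative.

COPPA_MIN_AGE = 13

@dataclass
class UserRecord:
    user_id: str
    age: int
    text: str

def coppa_filter(records: list[UserRecord]) -> list[UserRecord]:
    """Drop records from users under the COPPA age threshold."""
    return [r for r in records if r.age >= COPPA_MIN_AGE]

corpus = [
    UserRecord("u1", 34, "post about cooking"),
    UserRecord("u2", 12, "post from a minor"),  # excluded by the filter
]
print(len(coppa_filter(corpus)))  # -> 1
```

A real pipeline would also need to handle unknown ages and verified parental consent; the point of the sketch is simply that legal safeguards can be enforced as explicit, auditable steps in the data pipeline.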

Addressing AI Governance Concerns in Meta’s Offerings:

Addressing AI governance concerns in Meta’s offerings could have involved implementing the following measures:

  1. Transparency and Responsible Use Guidelines:
    • Establish clear, transparent guidelines for the responsible use of AI models, set out in a Code of Ethics and a Code of Data Ethics.
    • Provide explicit information about the intended use, limitations, and potential biases of Galactica and Llama to manage user expectations and ensure responsible interaction.
  2. Pre-launch Audits and Testing:
    • Conduct comprehensive, independent third-party audits and testing of models before public release, including rigorous evaluation for accuracy, bias, and ethical concerns.
    • A thorough testing process, including real-world simulations, could have identified and addressed issues like hallucinations in Galactica before going live (a minimal sketch of such a release gate follows the note below).

Note: Meta’s reliance on open-source review as a form of “audit” highlights the inadequacy of such thinking, necessitating an AI governance framework and an Ethical Risk Assessment.
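To make items 1 and 2 concrete, below is a minimal sketch of what a pre-release gate might look like, pairing a published model card (intended use, known limitations) with an automated factual-accuracy audit against an independently curated reference set. Everything here is hypothetical: the model card fields, the `generate` stub, the reference data, and the 5% threshold are illustrative assumptions, not Meta’s actual process.

```python
# Hypothetical pre-release gate: a model ships only with a complete model card
# and only if it passes a factual-accuracy audit on a curated reference set.

MODEL_CARD = {
    "name": "example-scientific-llm",  # hypothetical model
    "intended_use": "research assistance for trained scientists",
    "not_intended_for": ["medical advice", "unsupervised citation generation"],
    "known_limitations": ["may produce plausible but false statements"],
}

# Curated (prompt, ground-truth fact) pairs assembled by independent reviewers.
REFERENCE_SET = [
    ("What is the chemical symbol for gold?", "Au"),
    ("How many chromosomes do humans normally have?", "46"),
]

def generate(prompt: str) -> str:
    """Stand-in for the model under audit; a real gate would call the LLM."""
    canned = {
        "What is the chemical symbol for gold?": "The chemical symbol for gold is Au.",
        "How many chromosomes do humans normally have?": "Humans normally have 46 chromosomes.",
    }
    return canned.get(prompt, "")

def audit(max_error_rate: float = 0.05) -> bool:
    """Pass only if the rate of missing ground-truth facts stays under the threshold."""
    errors = sum(
        1 for prompt, fact in REFERENCE_SET
        if fact.lower() not in generate(prompt).lower()  # naive substring check
    )
    error_rate = errors / len(REFERENCE_SET)
    print(f"audit error rate: {error_rate:.1%}")
    return error_rate <= max_error_rate

def release_gate() -> bool:
    """A release requires both a complete model card and a passing audit."""
    card_complete = all(MODEL_CARD.get(k) for k in ("intended_use", "known_limitations"))
    return card_complete and audit()

if __name__ == "__main__":
    print("cleared for release:", release_gate())
```

In practice the naive substring check would be replaced by calibrated human or model-based grading, but the gating logic is the point: no model ships without a complete card and a passing audit.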

  3. User Education and Training:
    • Implement educational resources and training for users interacting with AI models.
    • Provide guidelines on interpreting model outputs, understanding limitations, and discerning between accurate information and potential hallucinations (one way to surface such guidance is sketched after the note below).

Note: While Meta has an “Acceptable Use Policy,” a formal Code of Ethics on its website would strengthen accountability and clarify the consequences for misuse.
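Education can also be built into the product itself. As a minimal sketch, assuming a hypothetical `ModelResponse` wrapper, the guidance from the model card could travel with every output so users encounter the limitations at the point of interaction rather than only in documentation:

```python
from dataclasses import dataclass

# Illustrative only: attach usage guidance to every model output, so users
# see the limitations where they read the answer, not just in the docs.

LIMITATIONS_NOTICE = (
    "Machine-generated text: may contain plausible but inaccurate statements. "
    "Verify facts and citations independently."
)

@dataclass
class ModelResponse:
    text: str
    notice: str = LIMITATIONS_NOTICE

    def render(self) -> str:
        """Present the answer together with its guidance, never the raw text alone."""
        return f"{self.text}\n---\n{self.notice}"

print(ModelResponse("The chemical symbol for gold is Au.").render())
```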

  4. Community Feedback and Iterative Improvement:
    • Foster a feedback loop with the user community to continually improve models.
    • Implement mechanisms for users to report issues, provide feedback, and suggest improvements (a minimal reporting sketch follows the note below).

Note: Despite having a feedback system, potential bias within the open-source community raises concerns about the adequacy of ethical oversight.
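A minimal sketch of such a reporting mechanism appears below: a structured issue report plus a triage rule that routes suspected hallucinations and offensive output to human reviewers rather than leaving oversight to community consensus alone. The categories and routing policy are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative report schema and triage rule; the categories and routing
# policy are assumptions, not Meta's actual feedback pipeline.

class IssueType(Enum):
    HALLUCINATION = "hallucination"
    BIAS = "bias"
    OFFENSIVE = "offensive"
    OTHER = "other"

@dataclass
class FeedbackReport:
    prompt: str
    output: str
    issue: IssueType
    description: str

def triage(report: FeedbackReport) -> str:
    """Route high-risk reports to human reviewers instead of community consensus."""
    if report.issue in (IssueType.HALLUCINATION, IssueType.OFFENSIVE):
        return "human-review"
    return "community-triage-queue"

report = FeedbackReport(
    prompt="Summarize this paper",
    output="(response citing a paper that does not exist)",
    issue=IssueType.HALLUCINATION,
    description="The cited reference could not be found.",
)
print(triage(report))  # -> human-review
```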

  5. Ethical Review Board:
    • Establish an internal or external ethical review board to evaluate the societal impact of AI models.
    • Assess the ethical implications of model outputs, ensuring alignment with societal norms and values.

Note: This reiterates the importance of an ethical review board in preventing the unintended dissemination of offensive or harmful content.

Implementing these governance measures would contribute to the ethical development and deployment of AI models, aligning with transparency, accountability, and continuous improvement principles in AI governance.

Balancing Open-Source Collaboration and Centralized Governance:

Relying on open-source methodologies alone to address AI governance concerns compounds the risks of deploying advanced language models like Galactica. While open-source frameworks contribute to transparency and collaborative development, they do not inherently guarantee comprehensive governance practices. In Galactica’s case, trust in open-source principles without a parallel emphasis on explicit governance structures led to unanticipated challenges and the model’s removal. Open-source release alone may lack the nuanced oversight necessary for addressing ethical considerations, mitigating bias, and identifying potential hallucinations. A balanced approach that integrates open-source collaboration with centralized governance mechanisms is crucial: it preserves the benefits of collaboration while adding the responsible oversight, auditing, and ethical review that are vital for AI models affecting scientific research and public discourse. Avoiding singular reliance on open-source release contributes to the development of more reliable and ethically sound AI systems.