



Node.js example:

// Call the model through the OpenAI-compatible AICC endpoint (Node.js)
const { OpenAI } = require('openai');

const api = new OpenAI({
  baseURL: 'https://api.ai.cc/v1',
  apiKey: '',
});

const main = async () => {
  const result = await api.chat.completions.create({
    model: 'meta-llama/Meta-Llama-Guard-3-8B',
    messages: [
      {
        role: 'system',
        content: 'You are an AI assistant who knows everything.',
      },
      {
        role: 'user',
        content: 'Tell me, why is the sky blue?',
      },
    ],
  });

  const message = result.choices[0].message.content;
  console.log(`Assistant: ${message}`);
};

main();
Python example:

# Call the model through the OpenAI-compatible AICC endpoint (Python)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ai.cc/v1",
    api_key="",
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-Guard-3-8B",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant who knows everything.",
        },
        {
            "role": "user",
            "content": "Tell me, why is the sky blue?",
        },
    ],
)

message = response.choices[0].message.content
print(f"Assistant: {message}")
AI Playground

Test all API models in the sandbox environment before you integrate.
We provide more than 300 models to integrate into your app.


Product Detail
Introducing Llama Guard 2 (8B): Advanced Content Safety for LLMs
Llama Guard 2 (8B), developed by Meta AI and released in April 2024, is an 8-billion parameter text classification model designed to enhance content safety in Large Language Models (LLMs). Built upon the Meta Llama 3 architecture, it provides robust safety predictions across 11 hazard categories defined by the MLCommons taxonomy. Its primary function is to classify and filter potentially harmful or inappropriate text generated by LLMs, ensuring responsible AI deployment.
Key Model Details:
- ⭐ Model Name: LlamaGuard
- 💡 Developer/Creator: Meta
- 🗓️ Release Date: April 2024
- 🏷️ Version: LlamaGuard-2-8B
- 🧠 Model Type: Text Classification
Core Capabilities & Features
- ✅ Superior Performance: LlamaGuard-2-8B consistently outperforms other leading content moderation APIs, including Azure, OpenAI Moderation, and Perspective.
- 📊 High Accuracy & Low False Positives: Achieves an impressive F1 score of 0.915 and a remarkably low false positive rate of 0.040 on internal test sets, ensuring efficient and reliable content filtering.
- 💬 Dual Classification Support: Designed for comprehensive protection, it supports both prompt and response safety classification for LLMs (a short sketch of this pattern follows below).
- 🛠️ Customizable & Fine-tunable: Developers can easily fine-tune the model to create custom safety taxonomies tailored to specific application requirements and unique content moderation needs.
"LlamaGuard-2-8B is designed for seamless integration into LLM-powered applications, playing a crucial role in ensuring the safety and responsibility of generated content by filtering out potentially harmful or inappropriate text before display to users."
Technical Details
Architecture:
LlamaGuard-2-8B is built on the Meta Llama 3 architecture, a decoder-only Transformer design that underpins Meta's recent large language models.
Training Data & Language Support:
The model was fine-tuned from the Llama 3 base model on additional data curated specifically for safety classification. The training corpus includes a diverse set of online text covering all 11 defined safety categories; its exact sources and size are not publicly disclosed. The model is currently trained on English text, but it can be fine-tuned for other languages.
Knowledge Cutoff: The precise knowledge cutoff is not explicitly stated, but it is estimated that the model was trained on data up to 2023.
Diversity & Bias Considerations:
Although the training data is meticulously designed for diversity and representation, developers are advised to carefully evaluate the model's performance and outputs for any inherent biases or lack of diversity that may still exist. Continuous monitoring is key for responsible AI deployment.
Ethical Guidelines & Licensing
Ethical Guidelines:
Meta AI emphasizes its strong commitment to responsible AI, having published clear ethical guidelines for the development and use of LlamaGuard-2-8B. These guidelines highlight the critical importance of mitigating potential harms and fostering responsible AI practices across all applications.
License Type:
LlamaGuard-2-8B is distributed under the Meta Llama 3 Community License, which permits both commercial and non-commercial use, subject to Meta's license terms and acceptable use policy.
Frequently Asked Questions (FAQ)
Q1: What is Llama Guard 2 (8B)?
A1: Llama Guard 2 (8B) is an 8-billion parameter text classification model developed by Meta AI to enhance content safety in Large Language Models (LLMs) by classifying content across 11 hazard categories.
Q2: How does Llama Guard 2 (8B) perform compared to other moderation APIs?
A2: It outperforms popular content moderation APIs like Azure, OpenAI Moderation, and Perspective, boasting a high F1 score of 0.915 and a low false positive rate of 0.040.
Q3: Can Llama Guard 2 (8B) be tailored for specific content safety needs?
A3: Yes, the model can be fine-tuned, enabling developers to create custom safety taxonomies that match the unique requirements of their applications (an illustrative sketch follows this FAQ).
Q4: What languages does Llama Guard 2 (8B) currently support?
A4: The model is currently trained and optimized for English text. However, it has the potential to be fine-tuned to support other languages as needed.
Q5: What are the licensing terms for Llama Guard 2 (8B)?
A5: Llama Guard 2 (8B) is distributed under the Meta Llama 3 Community License, which allows both commercial and non-commercial use, subject to Meta's license terms and acceptable use policy.
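To give a rough sense of what a custom taxonomy (see Q3 above) might look like, the sketch below defines a few application-specific categories and renders them as a policy block in the style Llama Guard uses to enumerate unsafe-content categories. The category codes and descriptions are invented for illustration; the authoritative prompt and fine-tuning formats are described in Meta's Llama Guard documentation.

# Hypothetical application-specific taxonomy; codes and descriptions are invented.
CUSTOM_TAXONOMY = {
    "C1": "Financial advice given without a disclaimer.",
    "C2": "Instructions for bypassing the product's usage limits.",
    "C3": "Disclosure of another user's personal data.",
}

def render_policy(taxonomy):
    # Render the taxonomy as a category block of the kind used in
    # Llama Guard style classification prompts and fine-tuning data.
    lines = ["<BEGIN UNSAFE CONTENT CATEGORIES>"]
    lines += [f"{code}: {description}" for code, description in taxonomy.items()]
    lines.append("<END UNSAFE CONTENT CATEGORIES>")
    return "\n".join(lines)

print(render_policy(CUSTOM_TAXONOMY))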
Learn how you can transform your company with AICC APIs


