



const { OpenAI } = require('openai');

const api = new OpenAI({
  baseURL: 'https://api.ai.cc/v1',
  apiKey: '',
});

const main = async () => {
  const result = await api.chat.completions.create({
    model: 'togethercomputer/Koala-7B',
    messages: [
      {
        role: 'system',
        content: 'You are an AI assistant who knows everything.',
      },
      {
        role: 'user',
        content: 'Tell me, why is the sky blue?',
      },
    ],
  });

  const message = result.choices[0].message.content;
  console.log(`Assistant: ${message}`);
};

main();
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ai.cc/v1",
    api_key="",
)

response = client.chat.completions.create(
    model="togethercomputer/Koala-7B",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant who knows everything.",
        },
        {
            "role": "user",
            "content": "Tell me, why is the sky blue?",
        },
    ],
)

message = response.choices[0].message.content
print(f"Assistant: {message}")
AI Playground

Test all API models in the sandbox environment before you integrate.
We provide more than 300 models for you to integrate into your app.


Product Detail
✨ Introducing Koala (7B): A Powerful Open-Source Chatbot LLM
Koala (7B) is a cutting-edge, open-source large language model (LLM) developed by the renowned Berkeley Artificial Intelligence Research (BAIR) Lab. Launched in April 2023, this version 1.0 model is specifically engineered to deliver high-quality chatbot performance, positioning itself as a strong contender against established proprietary models like ChatGPT. It is primarily built for researchers and developers aiming to push the boundaries of advanced conversational AI applications.
Key Features of Koala (7B):
- High-Quality Performance: Demonstrated capabilities comparable to leading models like ChatGPT.
- Open-Source Availability: Freely accessible for extensive research and development initiatives.
- Efficient Architecture: Features a robust 7 billion parameter architecture.
- Curated Fine-tuning: Benefits from training on carefully selected and high-quality datasets.
- Intended Use: Primarily for research purposes and as a foundation for advanced conversational AI.
- Language Support: Predominantly English, with potential for future multilingual expansion.
⚙️ Technical Details & Training Methodology
Koala (7B) is based on the LLaMA architecture, specifically its 7-billion-parameter version, which serves as the foundational model. This transformer-based architecture has become the industry standard for achieving state-of-the-art performance in large language models.
Training Data & Fine-tuning Process:
As outlined in the original technical documentation, Koala was fine-tuned on a carefully curated dataset totaling approximately 128,000 samples; this relatively compact dataset size underscores the efficiency of its fine-tuning process. The dataset comprises the following sources (a rough sketch of assembling a similar mixture follows the list):
- Anthropic's Helpful and Harmless (HH) dataset: Consisting of 67,000 human-AI conversation samples, with a focus on helpful and safe interactions.
- Open-Assistant conversations: A collection of 9,000 samples sourced from the Open-Assistant project, dedicated to creating open-source AI assistants.
- Stanford Alpaca data: Comprising 52,000 instruction-following demonstrations, generated using innovative self-instruct techniques.
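For readers who want to experiment with a comparable data mixture, the sketch below shows one way to assemble it with the Hugging Face datasets library. It is illustrative only: the Hub identifiers are assumptions, and the public versions of these datasets do not match the exact subsets or sample counts Koala was trained on.

from datasets import load_dataset, concatenate_datasets

# Assumed public Hub identifiers for the three sources described above;
# these are not the exact subsets used to fine-tune Koala.
hh = load_dataset("Anthropic/hh-rlhf", split="train")
oasst = load_dataset("OpenAssistant/oasst1", split="train")
alpaca = load_dataset("tatsu-lab/alpaca", split="train")

# Normalize every source to a single "text" column so they can be concatenated.
hh = hh.map(lambda ex: {"text": ex["chosen"]}, remove_columns=hh.column_names)
oasst = oasst.map(lambda ex: {"text": ex["text"]}, remove_columns=oasst.column_names)
alpaca = alpaca.map(
    lambda ex: {"text": ex["instruction"] + "\n" + ex["input"] + "\n" + ex["output"]},
    remove_columns=alpaca.column_names,
)

mixture = concatenate_datasets([hh, oasst, alpaca]).shuffle(seed=42)
print(f"{len(mixture)} fine-tuning samples")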
While a precise knowledge cutoff date for Koala (7B) is not explicitly stated, given its release in April 2023, it is reasonable to assume that the model's knowledge base extends up to early 2023.
Important Note on Diversity and Bias: It is crucial for researchers and developers to recognize that Koala inherits potential biases present in its foundational LLaMA model and the datasets utilized for fine-tuning. Thorough evaluation and mitigation strategies are highly recommended before deploying Koala (7B) in sensitive or critical applications.
📊 Performance Metrics & Robustness
Koala (7B) has consistently demonstrated impressive performance across various standard benchmarks, showcasing its capabilities as a high-quality conversational AI model.
Accuracy Benchmarks:
- Human Evaluation: In blind tests, human evaluators preferred Koala's responses over ChatGPT's in 50% of cases, indicating comparable performance.
- TruthfulQA: Koala achieved a score of 47% on this benchmark, surpassing GPT-3.5 and closely approaching the performance of GPT-4.
- MMLU (Massive Multitask Language Understanding): The model scored 43.3%, demonstrating broad knowledge and solid reasoning across a wide range of tasks.
Specific inference-speed figures for Koala (7B) are not published, but as a 7-billion-parameter model it is generally expected to be faster and more efficient at inference than larger models offering similar functionality. Its consistent results across benchmarks such as TruthfulQA and MMLU also point to strong generalization across topics and query types.
💡 Usage, License, and Ethical Guidelines
Responsible Deployment & Licensing:
Code samples and detailed usage instructions for integrating Koala (7B) are typically provided in its official documentation and GitHub repository, making it straightforward to incorporate into AI projects. For AICC users, the chat examples at the top of this page cover the basic call; a short sketch with common sampling parameters follows below.
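As a reference, here is a minimal sketch of tuning generation behaviour through the AICC endpoint shown earlier. The parameter values are illustrative, and support for each sampling parameter depends on the provider.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ai.cc/v1",
    api_key="",  # your AICC API key
)

response = client.chat.completions.create(
    model="togethercomputer/Koala-7B",
    messages=[{"role": "user", "content": "Explain in two sentences why the sky is blue."}],
    temperature=0.7,  # lower values make answers more deterministic
    top_p=0.9,        # nucleus-sampling cutoff
    max_tokens=256,   # cap on the length of the generated reply
)
print(response.choices[0].message.content)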
Although explicit ethical guidelines tailored specifically for Koala (7B) may not be extensively documented, users are strongly advised to adhere to universally recognized AI ethics principles. These include:
- Responsible Use: Ensuring the ethical and beneficial deployment of the model.
- Awareness of Biases: Actively acknowledging and working to mitigate potential biases inherited by the model.
- Privacy & Data Protection: Prioritizing user privacy and ensuring robust data protection measures.
- Transparency: Clearly indicating when content has been generated or assisted by AI.
The Koala (7B) model is released under an open-source license, which actively promotes broad access for research, development, and innovation within the AI community. This commitment aligns with the BAIR Lab's vision for advancing open AI research.
❓ Frequently Asked Questions (FAQ) about Koala (7B)
Q1: What is Koala (7B) and who developed it?
Koala (7B) is an open-source large language model (LLM) designed as a high-quality chatbot. It was developed by the Berkeley Artificial Intelligence Research (BAIR) Lab and released in April 2023.
Q2: Is Koala (7B) available for free?
Yes, Koala (7B) is released under an open-source license, making it freely available for various research and development purposes.
Q3: How does Koala (7B) perform compared to ChatGPT?
In blind human evaluations, Koala's responses were preferred over ChatGPT's in 50% of cases, demonstrating comparable high-quality performance and capabilities.
Q4: What kind of data was used to fine-tune Koala (7B)?
It was fine-tuned on approximately 128,000 samples, combining datasets like Anthropic's Helpful and Harmless (HH) dataset, Open-Assistant conversations, and Stanford Alpaca data.
Q5: What ethical guidelines should be followed when using Koala (7B)?
Users should adhere to general AI ethics principles, including responsible use, awareness and mitigation of potential biases, consideration of privacy and data protection, and transparency regarding AI-generated content.
Learn how you can transform your company with AICC APIs


