



const { OpenAI } = require('openai');

// Point the OpenAI client at the AI/ML API endpoint
const api = new OpenAI({ apiKey: '', baseURL: 'https://api.ai.cc/v1' });

const main = async () => {
  // Few-shot style prompt for the model to continue
  const prompt = `
All of the states in the USA:
- Alabama, Montgomery;
- Arkansas, Little Rock;
`;

  const response = await api.completions.create({
    prompt,
    model: 'Qwen/Qwen1.5-72B',
  });

  const text = response.choices[0].text;
  console.log('Completion:', text);
};

main();
from openai import OpenAI

# Point the OpenAI client at the AI/ML API endpoint
client = OpenAI(
    api_key="",
    base_url="https://api.ai.cc/v1",
)


def main():
    # Few-shot style prompt for the model to continue
    response = client.completions.create(
        model="Qwen/Qwen1.5-72B",
        prompt="""
All of the states in the USA:
- Alabama, Montgomery;
- Arkansas, Little Rock;
""",
    )

    completion = response.choices[0].text
    print(f"Completion: {completion}")


main()
AI Playground

Test all API models in the sandbox environment before you integrate.
We provide more than 300 models to integrate into your app.


Product Detail
✨ Discover Qwen 1.5 (72B): An Advanced AI Language Model
The base language model Qwen1.5-72B is the beta iteration of Qwen2, an advanced transformer-based language model. Pre-trained on a vast corpus of data, it offers significant improvements over its predecessor, Qwen.
Key enhancements include multilingual support for both base and chat models, stable performance with a 32K context length, and the removal of the need for trust_remote_code, streamlining its deployment and use.
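Because trust_remote_code is no longer required, the base model can be loaded locally with the standard Hugging Face transformers APIs. Here is a minimal sketch; the prompt, device settings, and generation length are illustrative rather than prescribed:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Qwen1.5 ships with native transformers support, so no trust_remote_code flag is needed
model_id = "Qwen/Qwen1.5-72B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Continue a simple text-completion style prompt
inputs = tokenizer("All of the states in the USA:\n- Alabama, Montgomery;\n", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))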
🧠 Understanding the Qwen 1.5 (72B) Model Architecture
Qwen1.5-72B is a flagship member of the Qwen1.5 series, which encompasses decoder language models across six model sizes, ranging from 0.5B to 72B. As the largest base model in this series, it is built on a robust Transformer architecture.
Notable features include SwiGLU activation, attention QKV bias, and an improved tokenizer adaptable to multiple natural languages and codes. It also incorporates group query attention and a mixture of sliding window attention and full attention for enhanced performance.
💡 Note: For this beta version, Group Query Attention (GQA) and the mixture of Sliding Window Attention (SWA) and full attention are currently omitted.
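If you want to confirm these architectural details for yourself, the published configuration can be inspected without downloading the 72B weights. A small sketch using Hugging Face transformers (the printed attributes are standard config fields, not values quoted from the model card):

from transformers import AutoConfig

# Fetches only the JSON config, not the model weights
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-72B")

print(config.model_type)               # architecture family, e.g. "qwen2"
print(config.hidden_size)              # transformer hidden dimension
print(config.num_hidden_layers)        # number of decoder layers
print(config.num_attention_heads)      # attention heads per layer
print(config.max_position_embeddings)  # supported context length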
⭐ Qwen 1.5 (72B) Performance & Competitive Edge
Qwen1.5-72B consistently demonstrates strong performance across diverse evaluation benchmarks. It showcases exceptional capabilities in language understanding, reasoning, and complex mathematical tasks.
Significantly, it outperforms Llama2-70B across all benchmarks, solidifying its position as a top-tier language model in its class. Its ability to reliably handle a 32K context length sets it apart, ensuring stable performance in diverse scenarios without compromising efficiency.
Moreover, Qwen1.5-72B proves highly competitive with other leading models in the community, such as Mixtral 8x7B. Benchmark results affirm its prowess in tackling complex linguistic tasks with precision and efficiency, establishing it as a significant player in the landscape of transformer-based language models.
💡 Practical Usage Tips for Qwen 1.5 (72B)
While it's generally advised to use chat versions for text generation, the Qwen1.5-72B base model is invaluable for various experiments and evaluations. This is primarily due to its minimal bias when performing text completion tasks.
You can easily access this powerful model through our AI/ML API by signing up on this website.
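If you prefer chat-style generation over raw completion, the same client can call the chat completions endpoint with the instruction-tuned variant. A hedged sketch follows; the model id Qwen/Qwen1.5-72B-Chat and the sampling parameters are assumptions, so check the model list in your dashboard:

from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.ai.cc/v1")

# Chat-style request against the instruction-tuned variant (model id assumed)
response = client.chat.completions.create(
    model="Qwen/Qwen1.5-72B-Chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three US state capitals."},
    ],
    max_tokens=128,
    temperature=0.7,
)

print(response.choices[0].message.content)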
For those deploying the model locally, you can apply advanced post-training techniques to further enhance performance. Consider supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), or continued pretraining to tailor outputs to specific requirements and optimize model performance, as in the sketch below.
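As a starting point for local SFT, a parameter-efficient run might look like the following. This is a minimal sketch, assuming the transformers, peft, and datasets libraries and a hypothetical instruction file my_sft_data.json with a "text" field per example; hyperparameters are illustrative only:

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "Qwen/Qwen1.5-72B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrap the base model with LoRA adapters so only a small set of weights is trained
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# my_sft_data.json is a hypothetical dataset with one training string per "text" field
dataset = load_dataset("json", data_files="my_sft_data.json", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qwen-sft", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()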
📜 Qwen 1.5 (72B) License Agreement
The Qwen1.5-72B model is governed by the Tongyi Qianwen license agreement. Full details of the license can be accessed in the model's repository on GitHub or Hugging Face.
A commercial use request is not required unless your product or service reaches more than 100 million monthly active users.
🚀 Conclusion: Advancing LLMs with Qwen 1.5 (72B)
In conclusion, Qwen1.5-72B represents a significant advancement in open-source foundational language models. It offers enhanced capabilities in text completion, robust multilingual support, and superior context handling, making it a pivotal tool for researchers and developers aiming to push the boundaries of AI.
❓ Frequently Asked Questions (FAQ)
Q: What is Qwen 1.5 (72B)?
A: Qwen 1.5 (72B) is the beta iteration of Qwen2, an advanced 72-billion parameter transformer-based language model, featuring multilingual support and a stable 32K context length.
Q: How does Qwen 1.5 (72B) perform against competitors?
A: It consistently outperforms Llama2-70B across various benchmarks and is highly competitive with models like Mixtral 8x7B, particularly noted for its reliable 32K context handling.
Q: Is Qwen 1.5 (72B) suitable for commercial use?
A: Yes, it's governed by the Tongyi Qianwen license. A special commercial use request is only required if your product or service exceeds 100 million monthly active users.
Q: What are the primary applications for the base Qwen 1.5 (72B) model?
A: While chat versions are recommended for direct text generation, the base model is ideal for experiments, evaluations, and can be enhanced with post-training techniques like SFT or RLHF to customize outputs.
Q: Where can I find the license details and model repository?
A: The Tongyi Qianwen license agreement and model details are available in the official repositories on GitHub and Hugging Face.
Learn how you can transform your company with AICC APIs


