



const { OpenAI } = require('openai');

const api = new OpenAI({
  baseURL: 'https://api.ai.cc/v1',
  apiKey: '',
});

const main = async () => {
  const result = await api.chat.completions.create({
    model: 'Salesforce/codegen2-7B',
    messages: [
      {
        role: 'system',
        content: 'You are an SQL code assistant.',
      },
      {
        role: 'user',
        content: 'Could you please provide me with an example of a database structure that I could use for a project in MySQL?',
      },
    ],
  });

  const message = result.choices[0].message.content;
  console.log(`Assistant: ${message}`);
};

main();
from openai import OpenAI

def main():
    client = OpenAI(
        api_key="",
        base_url="https://api.ai.cc/v1",
    )
    response = client.chat.completions.create(
        model="Salesforce/codegen2-7B",
        messages=[
            {
                "role": "system",
                "content": "You are an SQL code assistant.",
            },
            {
                "role": "user",
                "content": "Could you please provide me with an example of a database structure that I could use for a project in MySQL?",
            },
        ],
    )
    message = response.choices[0].message.content
    print(f"Assistant: {message}")

if __name__ == "__main__":
    main()
AI Playground

Test any API model in the sandbox environment before you integrate. We provide more than 300 models you can integrate into your app.


Product Detail
✨CodeGen2 (7B) - Key Specifications
- Model Name: CodeGen2 (7B)
- Developer/Creator: Salesforce AI Research
- Release Date: 2023
- Version: 2.0
- Model Type: Autoregressive language model
💡Overview of CodeGen2 (7B)
CodeGen2 (7B) represents a significant advancement in the realm of program synthesis. This 7-billion parameter autoregressive language model, meticulously developed by Salesforce AI Research, is engineered to generate executable code from natural language descriptions and accurately complete partially-formed code snippets, streamlining the development workflow for a diverse range of users.
🚀Key Features & Capabilities
- • Advanced Code Infilling: CodeGen2 (7B) excels at intelligently filling in the missing parts of your partially completed code, making your development process more efficient and intuitive.
- • Extensive Training Dataset: The model has been trained on a remarkably diverse dataset, encompassing 12 different programming languages and numerous popular frameworks, ensuring broad adaptability across various coding environments.
- • Dynamic Multi-Turn Code Interaction: Users can engage in a continuous dialogue with CodeGen2 (7B) for code generation and completion, allowing for iterative refinement until the output perfectly aligns with specific requirements.
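The multi-turn flow above works by resending the accumulated conversation with each request, so the model can refine its previous answer. The sketch below illustrates this with a hypothetical `extend_history` helper; the actual network call uses the api.ai.cc chat-completions endpoint shown in the examples above and is left as a comment.

```python
# Sketch of multi-turn interaction: each new request resends the whole
# conversation history so the model can refine its previous answer.

def extend_history(history, assistant_reply, user_followup):
    """Return a new message list with the model's reply and the user's follow-up appended."""
    return history + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": user_followup},
    ]

history = [
    {"role": "system", "content": "You are an SQL code assistant."},
    {"role": "user", "content": "Write a MySQL table for storing users."},
]

# First turn (requires a valid API key; see the Python example above):
# response = client.chat.completions.create(model="Salesforce/codegen2-7B", messages=history)
# draft = response.choices[0].message.content
draft = "CREATE TABLE users (id INT PRIMARY KEY);"  # placeholder for the model's reply

# Second turn: ask for a refinement while keeping the full context.
history = extend_history(history, draft, "Add an email column with a unique index.")
print(len(history))  # 4 messages: system, user, assistant, user
```

Because `extend_history` returns a new list rather than mutating its input, earlier turns can be replayed or branched without side effects.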
🎯Intended Use Cases
CodeGen2 (7B) is positioned as an invaluable tool for program synthesis. It caters to a wide audience, from experienced developers aiming to optimize their workflow to aspiring coders seeking intelligent assistance. Its functionalities include generating code from natural language prompts, completing unfinished code snippets, and supporting advanced tasks like code refactoring and optimization.
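A refactoring task like those mentioned above can be expressed as an ordinary chat prompt. The `build_refactor_messages` helper below is a hypothetical sketch that only constructs the message list; sending it uses the same chat-completions call shown in the examples at the top of this page.

```python
# Hypothetical helper that packages a refactoring task as chat messages
# for the Salesforce/codegen2-7B chat-completions endpoint shown above.

def build_refactor_messages(code_snippet, goal):
    """Build a system/user message pair asking the model to refactor a snippet."""
    return [
        {"role": "system", "content": "You are a code refactoring assistant."},
        {
            "role": "user",
            "content": f"Refactor the following code to {goal}:\n\n{code_snippet}",
        },
    ]

snippet = "def add(a,b):\n    return a+b"
messages = build_refactor_messages(snippet, "follow PEP 8 naming and spacing")
print(messages[1]["content"].startswith("Refactor"))  # True
```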
🌐Supported Programming Languages
CodeGen2 (7B) boasts comprehensive support for a wide array of programming languages and associated frameworks. This includes, but is not limited to:
C, C++, C#, Dart, Go, Java, JavaScript, Kotlin, Lua, PHP, Python, Ruby, Rust, Scala, Shell, SQL, Swift, TypeScript, and Vue.
🧠Technical Architecture & Training
Architecture
At its core, CodeGen2 (7B) is built upon a robust transformer-based architecture, a foundational design widely recognized and utilized in models like GPT-3. However, it incorporates specialized modifications optimized for intricate program synthesis tasks. This refined architecture ensures high precision in capturing long-range dependencies within input sequences, leading to generated code that is both well-structured and semantically accurate.
Training Data
The model's extensive knowledge is derived from being trained on a strictly permissive subset of the deduplicated version of the Stack dataset (v1.1). This exposure to a broad spectrum of programming practices and techniques, from complex algorithms to simple scripts, underpins its versatile understanding of coding patterns.
Data Source and Size
CodeGen2 (7B) was trained using a substantial dataset of approximately 1.5 billion tokens. This code data has undergone rigorous curation to guarantee high quality and direct relevance to its target programming languages.
Knowledge Cutoff
Like all trained models, CodeGen2 (7B) has a specific knowledge cutoff. Its training data was collected up to June 2022. Consequently, its understanding of new programming paradigms, tools, or real-world events is limited to information available before this date.
Diversity and Bias
The training methodology focused on exposing the model to a vast range of coding practices and techniques, encompassing both niche programming domains and popular use cases, thereby enhancing its general versatility and robustness.
📈Performance Benchmarks
CodeGen2 (7B) has demonstrated impressive performance across key coding benchmarks:
- • On the renowned HumanEval benchmark, the model achieved a notable score of 30.7, successfully outperforming GPT-3 in this specific evaluation.
- • For the MBPP (Mostly Basic Programming Problems) benchmark, CodeGen2 (7B) scored an impressive 43.1, further solidifying its code generation capabilities.
🛠️Usage Information
API Usage Example
Example API Call Placeholder:

# This section demonstrates a conceptual API call for CodeGen2 (7B).
# Replace the placeholders with the actual endpoint and token as provided by Salesforce.
import requests

API_ENDPOINT = "https://api.salesforce.com/codegen2-7B/generate"  # Hypothetical endpoint
AUTH_TOKEN = "YOUR_SALESFORCE_API_TOKEN"  # Your actual API token

headers = {
    "Authorization": f"Bearer {AUTH_TOKEN}",
    "Content-Type": "application/json",
}

def generate_code(prompt_text, max_tokens=100, temperature=0.7):
    payload = {
        "model": "Salesforce/codegen2-7B",
        "prompt": prompt_text,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    try:
        response = requests.post(API_ENDPOINT, headers=headers, json=payload)
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"API request failed: {e}")
        return None

# Example usage: generate a simple JavaScript function
code_prompt = "Write a JavaScript function to reverse a string."
result = generate_code(code_prompt)
if result and "generated_code" in result:
    print("Generated Code:\n", result["generated_code"])
else:
    print("Failed to generate code.")
License Type
CodeGen2 (7B) is made available under a commercial license. Organizations and developers interested in leveraging this model for commercial applications are required to contact Salesforce directly to obtain specific licensing information and understand the full terms of use.
❓Frequently Asked Questions (FAQ)
1. What is CodeGen2 (7B) and who developed it?
CodeGen2 (7B) is a 7-billion parameter autoregressive language model specialized in program synthesis, developed by Salesforce AI Research. It focuses on generating and completing code from natural language descriptions.
2. What are the main capabilities of CodeGen2 (7B)?
Its primary capabilities include code infilling, multi-turn code generation and completion, and supporting a wide range of programming languages and frameworks for tasks like code refactoring and optimization.
3. How well does CodeGen2 (7B) perform in benchmarks?
CodeGen2 (7B) shows strong performance, achieving a score of 30.7 on the HumanEval benchmark (outperforming GPT-3) and 43.1 on the MBPP benchmark, highlighting its robust code generation abilities.
4. What is the knowledge cutoff date for CodeGen2 (7B)?
The model's knowledge cutoff is based on its training data, which was collected up to June 2022. It does not have information beyond this timestamp.
5. Is CodeGen2 (7B) available for commercial applications?
Yes, CodeGen2 (7B) is available under a commercial license. Interested parties should directly contact Salesforce for detailed licensing information and terms of use.
Learn how you can transform your company with AICC APIs