



const main = async () => {
  const response = await fetch('https://api.ai.cc/v2/generate/video/runway/generation', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'runway/gen4_aleph',
      prompt: 'Replace the pink background color with the color of the image',
      video_url: 'https://cdn.pixabay.com/video/2019/09/04/26566-358041233_large.mp4',
      references: [
        {
          type: 'image',
          url: 'https://thumbs.dreamstime.com/b/yellow-square-background-shape-blank-high-quality-photo-214458702.jpg',
        },
      ],
    }),
  }).then((res) => res.json());

  console.log('Generation:', response);
};

main();
import requests

def main():
    url = "https://api.ai.cc/v2/generate/video/runway/generation"
    payload = {
        "model": "runway/gen4_aleph",
        "prompt": "Replace the pink background color with the color of the image",
        "video_url": "https://cdn.pixabay.com/video/2019/09/04/26566-358041233_large.mp4",
        "references": [{
            "type": "image",
            "url": "https://thumbs.dreamstime.com/b/yellow-square-background-shape-blank-high-quality-photo-214458702.jpg"
        }]
    }
    headers = {"Authorization": "Bearer ", "Content-Type": "application/json"}
    response = requests.post(url, json=payload, headers=headers)
    print("Generation:", response.json())

if __name__ == "__main__":
    main()
Product Detail
✨ Aleph by RunwayML is a cutting-edge in-context video generation and editing API, engineered to execute a broad spectrum of advanced manipulations on existing video footage. It lets users seamlessly add, remove, and transform objects, generate novel camera angles, and fine-tune style and lighting using intuitive text prompts or image and video references. Aleph significantly streamlines video creation and post-production workflows for filmmakers, content creators, and designers by harnessing AI for flexible, high-quality video editing and generation within a unified, powerful platform.
⚙️ Key Specifications
- • Multi-Task Video Model: Capable of complex edits such as object insertion/removal, scene expansion, and intricate style/lighting adjustments.
- • Diverse Perspectives: Generates dynamic scenes from a multitude of perspectives and camera angles, enriching storytelling and creative exploration.
- • Seamless Scene Combination: Supports rapid iteration through the effortless combination and modification of scenes.
- • Efficient AI: Utilizes highly efficient AI models, enabling flexible video manipulation without necessitating large teams or budgets.
🚀 Core Capabilities
- ✅ Real-Time Editing: Offers powerful generative tools for instant, on-the-fly modifications to existing videos.
- ✅ Scene Perspective & Object Transformation: Easily alter scene perspectives and apply sophisticated object transformations.
- ✅ Advanced Style Variations: Comprehensive adjustments including lighting, mood, and overall visual aesthetics.
- ✅ Precise Object Removal & Replacement: Flawlessly remove or replace characters, props, and other scene elements.
- ✅ Diverse Creative Use Cases: Enables imaginative applications such as changing seasons, weather conditions, wardrobe, and complex visual effects.
- ✅ Intuitive Control Mechanisms: Supports both text-based instructions and visual references for highly precise and controlled output.
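The text-plus-reference control scheme maps directly onto the request body shown in the code sample. As a minimal sketch, a payload builder might look like this (the field names mirror the sample above; the helper function itself is illustrative, not part of the API):

```python
def build_generation_payload(prompt, video_url, reference_urls=()):
    """Assemble a gen4_aleph request body: a text prompt, the source video,
    and optional image references for visual guidance."""
    payload = {
        "model": "runway/gen4_aleph",
        "prompt": prompt,
        "video_url": video_url,
    }
    # Image references are optional; include them only when supplied.
    if reference_urls:
        payload["references"] = [{"type": "image", "url": u} for u in reference_urls]
    return payload

payload = build_generation_payload(
    "Replace the pink background color with the color of the image",
    "https://cdn.pixabay.com/video/2019/09/04/26566-358041233_large.mp4",
    ["https://thumbs.dreamstime.com/b/yellow-square-background-shape-blank-high-quality-photo-214458702.jpg"],
)
```

Separating payload construction from the HTTP call makes it easy to reuse the same request shape across prompts and reference sets.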
💲 API Pricing
$0.1575 / second
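At this rate, the cost of a job is simply the clip duration multiplied by the per-second price. A quick sketch (the $0.1575/second figure comes from the pricing above; the helper name is ours):

```python
PRICE_PER_SECOND = 0.1575  # Aleph API price quoted above, in USD

def estimate_cost(duration_seconds):
    """Estimate the generation cost in USD for a clip of the given length."""
    return round(duration_seconds * PRICE_PER_SECOND, 4)

# A 5-second clip (the model's current sweet spot):
print(estimate_cost(5))  # 0.7875
```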
🎯 Optimal Use Cases
- 💡 Creative Content Creation: Perfect for advertising, marketing campaigns, and dynamic storytelling.
- 💡 Rapid Prototyping & Iteration: Quickly develop and refine video scenes and effects for design exploration.
- 💡 Streamlined Post-Production: Expedite workflows requiring object removal, relighting, or significant style shifts.
- 💡 Advanced Visual Effects Generation: Create compelling VFX such as fire, explosions, weather changes, and virtual staging.
- 💡 Footage Enhancement: Elevate existing video content by applying new perspectives and diverse stylistic variations.
💻 Code Sample
<snippet data-name="runway.create-video-to-video-generation" data-model="runway/gen4_aleph"></snippet>
🆚 Comparison with Other Models
Aleph vs. Runway Act Two:
Act Two primarily targets rapid, budget-conscious prototyping for very short clips (minimum 3 seconds), specifically tailored for animating characters with credit-based pricing. Aleph, conversely, excels in complex, in-context video edits and versatile manipulations on existing footage, rather than specific character animation.
Aleph vs. OpenAI Sora:
While Sora specializes in generating extended, cinematic-quality scenes and premium visual effects, Aleph is designed for fast, cost-effective, and highly flexible video inpainting and editing within shorter clips. It is the go-to for developers who prioritize speed and ease of use for instant object removal/replacement and scene modifications over heavyweight cinematic production.
Aleph vs. Pika Labs:
Pika Labs offers advanced customization in complex workflows but often at a higher computational cost and with less focus on precise video inpainting features. Aleph distinguishes itself for swift object replacements and VFX generation, making it a more accessible solution for targeted video transformation tasks.
Aleph vs. Kaiber AI:
Kaiber is renowned for its broad video style transfer and artistic re-stylization capabilities, focusing on creative transformations. Aleph, however, offers more granular technical control over specific scene elements such as object manipulation, camera angles, and lighting adjustments, appealing to users who require intricate multi-task video generation and editing within existing footage, beyond purely stylistic changes.
⚠️ Limitations
- • Currently optimized primarily for video segments up to approximately 5 seconds, which may not be suitable for very long narratives or ultra-short single-frame applications.
- • Challenges still exist with lip-syncing accuracy and the precise insertion of new, complex objects.
- • Editing effects may occasionally lead to unintended alterations of other scene elements, potentially requiring manual refinement.
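Because Aleph is tuned for segments of roughly 5 seconds, longer footage is typically processed in chunks and the results stitched back together. A hedged sketch of computing segment boundaries before submitting each piece (the 5-second cap is taken from the limitation above; the function is illustrative, not part of the API):

```python
MAX_SEGMENT_SECONDS = 5.0  # approximate per-request limit noted above

def segment_bounds(total_seconds, max_len=MAX_SEGMENT_SECONDS):
    """Split a clip duration into (start, end) windows no longer than max_len."""
    bounds = []
    start = 0.0
    while start < total_seconds:
        end = min(start + max_len, total_seconds)
        bounds.append((start, end))
        start = end
    return bounds

# A 12-second clip becomes three requests:
print(segment_bounds(12.0))  # [(0.0, 5.0), (5.0, 10.0), (10.0, 12.0)]
```

Each (start, end) window would then be trimmed from the source video and submitted as its own generation request.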
❓ Frequently Asked Questions (FAQ)
Q: What is Aleph by RunwayML primarily designed for?
A: Aleph is an advanced API for in-context video generation and editing, specifically created to perform a wide range of complex manipulations on existing video footage using AI.
Q: How does Aleph streamline video production workflows?
A: It streamlines workflows by offering flexible, high-quality video editing and generation within a single platform. Users can achieve tasks like object removal, style adjustments, and scene expansion rapidly using text prompts or visual references, reducing manual effort significantly.
Q: What are Aleph's key strengths compared to other video AI models?
A: Aleph's primary strengths lie in its speed, cost-effectiveness, and flexible in-context video editing for shorter clips. It excels in precise object manipulation, scene modifications, and VFX generation, making it ideal for targeted, intricate edits.
Q: What is the API pricing for Aleph?
A: The Aleph API is currently priced at $0.1575 per second of video content processed.
Q: Are there any known limitations when using Aleph?
A: Yes, Aleph is currently optimized for video segments up to about 5 seconds. Other limitations include potential challenges with lip-syncing accuracy, precise insertion of new objects, and occasional unintended alterations to scene elements that might require manual adjustments.