Luma API:
Generate AI Videos
Access Luma AI's powerful Dream Machine and Ray3 models through a simple REST API. Generate ultra-realistic videos from text prompts and images. No complexity, just stunning AI-generated videos at scale.
Trusted by growing companies worldwide
Does Luma AI Have an Official API?
Yes, Luma AI offers an official API. However, integrating directly can be complex. Apiframe provides a simplified, production-ready wrapper around Luma AI with additional features like webhook support, multiple SDKs, and unified billing across all AI models.
Note: Apiframe provides a production-ready API for generating videos with Luma AI. We handle all the complexity, giving you simple REST endpoints to create stunning AI videos from text and images.
Disclaimer: Apiframe is not affiliated with, endorsed by, or sponsored by Luma AI. Luma is a trademark of Luma Labs, Inc.
Everything You Need in a Luma AI API
Production-ready features for developers building with Luma AI.
Text to Video
Generate cinematic videos from text descriptions. Dream Machine and Ray3 create physically accurate scenes with natural motion.
Image to Video
Transform static images into dynamic videos. Bring any image to life with AI-powered animation and motion.
Keyframe Control
Define start and end frames to control the narrative flow. Create precise video transitions and storytelling sequences.
Extend & Loop
Extend videos to longer durations and create seamless loops (a rough code sketch follows this feature list). Perfect for social media content and ambient visuals.
Camera Control
Use generative camera features to achieve cinematic effects. Pan, zoom, and dolly movements from simple text instructions.
Variable Aspect Ratios
Generate videos in any aspect ratio—16:9, 9:16, 1:1, and more. Perfect for all platforms without complex editing.
Webhook Support
Get real-time notifications when your videos are ready. No polling required for async generation workflows.
Multiple SDKs
Generate Luma AI videos with Node.js, Python, PHP, and more. Official SDKs for rapid integration.
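As a rough illustration of the looping option mentioned above, the sketch below shows how it might be passed through the Python SDK used in the Quick Start. The loop parameter name is an assumption for illustration only; check the Apiframe documentation for the exact option names supported on the Luma endpoints.

```python
# Hypothetical sketch: requesting a seamless looping clip via the Apiframe Python SDK.
# The 'loop' flag is an assumed parameter name, not confirmed on this page.
from apiframe import Apiframe

client = Apiframe(api_key='your_api_key_here')

# Create the generation task with an assumed looping option
task = client.luma.generate({
    'prompt': 'slow pan across a rain-soaked neon street at night, ambient loop',
    'aspect_ratio': '9:16',
    'loop': True,  # assumed flag for seamless looping
})

# Wait for the task and print the resulting video URL
result = client.tasks.wait_for(task['id'])
print(result['video_url'])

client.close()
```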
Quick Start – Get Running in 5 Minutes
Generate your first Luma AI video with just a few lines of code.
Node.js

const { Apiframe } = require('@apiframe-ai/sdk');

// Initialize the client
const client = new Apiframe({
  apiKey: 'your_api_key_here'
});

async function generateVideo() {
  // Create a video generation task
  const task = await client.luma.generate({
    prompt: 'cinematic shot of a sunset over mountains, golden hour lighting, drone footage',
    aspect_ratio: '16:9'
  });

  console.log('Task created:', task.id);

  // Wait for completion with progress updates
  const result = await client.tasks.waitFor(task.id, {
    onProgress: (p) => console.log('Progress:', p)
  });

  console.log('Video ready:', result.video_url);
}

generateVideo();

Python

from apiframe import Apiframe
# Initialize the client
client = Apiframe(api_key='your_api_key_here')

# Create a video generation task
task = client.luma.generate({
    'prompt': 'cinematic shot of a sunset over mountains, golden hour lighting, drone footage',
    'aspect_ratio': '16:9'
})

print(f"Task created: {task['id']}")

# Wait for completion with progress updates
result = client.tasks.wait_for(
    task['id'],
    on_progress=lambda p: print(f'Progress: {p}%')
)

print(f"Video ready: {result['video_url']}")

# Close the client
client.close()

PHP

<?php
require 'vendor/autoload.php';

use Apiframe\Apiframe;

$client = new Apiframe([
    'apiKey' => 'your_api_key_here'
]);

// Create a video generation task
$task = $client->luma->generate([
    'prompt' => 'cinematic shot of a sunset over mountains, golden hour lighting, drone footage',
    'aspect_ratio' => '16:9'
]);

echo "Task created: {$task['id']}\n";

// Wait for completion with progress updates
$result = $client->tasks->waitFor($task['id'], [
    'onProgress' => function($progress) {
        echo "Progress: {$progress}%\n";
    }
]);

echo "Video ready: {$result['video_url']}\n";

Go

package main
import (
    "fmt"
    "log"
    "os"

    "github.com/apiframe-ai/apiframe-go-sdk"
)

func main() {
    // Create a new Apiframe client
    client, err := apiframe.NewClient(apiframe.Config{
        APIKey: os.Getenv("APIFRAME_API_KEY"),
    })
    if err != nil {
        log.Fatal(err)
    }

    // Create a video generation task
    task, err := client.Luma.Generate(apiframe.LumaGenerateParams{
        Prompt:      "cinematic shot of a sunset over mountains, golden hour lighting, drone footage",
        AspectRatio: "16:9",
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Task created: %s\n", task.ID)

    // Wait for completion with progress updates
    result, err := client.Tasks.WaitFor(task.ID, &apiframe.WaitForOptions{
        OnProgress: func(progress int) {
            fmt.Printf("Progress: %d%%\n", progress)
        },
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Video ready: %s\n", result.VideoURL)
}

cURL

curl -X POST https://api.apiframe.ai/luma-imagine \
-H "Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"prompt": "cinematic shot of a sunset over mountains, golden hour lighting, drone footage",
"aspect_ratio": "16:9"
}'What Can You Build with the Luma AI API?
Real-world applications powered by Luma AI through Apiframe.
Marketing & Advertising
Create stunning video ads and campaign content on demand. Generate multiple creative variations to optimize performance and A/B test at scale.
Social Media Content
Produce engaging videos for TikTok, Instagram Reels, and YouTube Shorts. Generate platform-optimized content with perfect aspect ratios.
Film & Video Production
Create concept videos, storyboards, and pre-visualization content. Explore creative directions before committing to full production.
Game Development
Generate cinematics, trailers, and promotional content for games. Create concept animations and cutscene previsualization.
SaaS & App Integration
Add AI video generation as a feature to your product. Let users create videos without leaving your application.
E-Commerce & Product Videos
Generate product demonstration videos and lifestyle content. Showcase products in motion without expensive video shoots.
Education & Training
Create educational videos, visual explanations, and training materials. Generate engaging content that enhances learning experiences.
Real Estate & Architecture
Visualize property walkthroughs and architectural concepts. Create immersive video tours and design visualizations.
Get Started in 3 Simple Steps
From signup to your first generated video in minutes.
Sign Up & Get API Key
Create your account using your email or GitHub. Get your Luma API key in minutes from the dashboard.
Make First API Call
Use our sample code to generate your first AI video. Specify your prompt, aspect ratio, and optional settings.
Receive Your Video
When generation completes, download your video in high-quality MP4 format. Videos remain stored on our CDN indefinitely.
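For example, once the task completes, the video_url returned by the API can be saved straight to disk. Here is a minimal sketch using the Python SDK from the Quick Start above; the download itself is just a plain HTTP GET against the CDN URL.

```python
# Minimal sketch: generate a Luma video, then download the finished MP4 locally.
# Mirrors the SDK calls from the Quick Start; 'requests' is only used for the download.
import requests
from apiframe import Apiframe

client = Apiframe(api_key='your_api_key_here')

task = client.luma.generate({
    'prompt': 'timelapse of clouds rolling over a city skyline at dusk',
    'aspect_ratio': '16:9',
})
result = client.tasks.wait_for(task['id'])

# Save the MP4 returned in result['video_url']
response = requests.get(result['video_url'], timeout=120)
response.raise_for_status()
with open('luma_video.mp4', 'wb') as f:
    f.write(response.content)

print('Saved luma_video.mp4')
client.close()
```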
Why Apiframe vs Other Luma AI APIs
See how Apiframe compares to alternative Luma AI API providers.
| Feature | Apiframe (from $19/mo) | Luma Direct (from $9.99/mo) | Replicate (pay-per-use) | AWS Bedrock (enterprise pricing) |
|---|---|---|---|---|
| Full Luma Access | Dream Machine + Ray3 | Full access | Limited models | Ray2 only |
| Multiple AI Models | 15+ models in one API | Luma only | Multiple models | Multiple models |
| Text to Video | Full support | Full support | Supported | Supported |
| Image to Video | Full support | Full support | Supported | Supported |
| Keyframe Control | Start & end frames | Supported | Limited | Supported |
| Video Extension | Up to 9 seconds | Supported | Limited | Supported |
| Webhook Support | Full support | Not available | Supported | Supported |
| Official SDKs | Node, Python, PHP, Go | None | Python only | AWS SDKs |
| Unified Billing | All models, one bill | Luma only | Unified | AWS billing |
| Free Trial | Free credits | 30 free generations | Limited | No free tier |
| Support | Live Chat + Discord | Discord community | Email + Docs | AWS support tiers |
How Does Luma AI Compare to Other AI Models?
Choose the right model for your use case. All models available through Apiframe.
| Features | Luma AI (via Apiframe) | Runway Gen-3 Alpha | Kling 1.5 | Pika 1.5 | OpenAI Sora |
|---|---|---|---|---|---|
| Best For | Cinematic videos, natural motion, coherent physics | Professional video editing, creative control | High-fidelity synthesis, speed-focused | Quick iterations, stylized content | Long-form, photorealistic videos |
| Latest Model | Ray3 / Dream Machine | Gen-3 Alpha | Kling 1.5 | Pika 1.5 | Sora |
| Official API | Yes (via Apiframe) | Yes | Yes | Yes | ChatGPT only |
| Video Quality | Best (16-bit HDR) | Excellent | Very good | Good (480p-1080p) | Best |
| Motion Coherence | Best | Excellent | Good | Good | Best |
| Video Length | 5-9 seconds | 5-10 seconds | 5-10 seconds | 3-5 seconds | Up to 20 seconds |
| Generation Speed | 30-120 seconds | 60-180 seconds | 30-60 seconds (fastest) | 60-120 seconds | 2-5 minutes |
| Camera Control | Full control | Best (Motion Brush) | Supported | Limited | Supported |
| Image to Video | Excellent | Supported | Supported | Supported | Supported |
| Free Tier | 300 credits | 125 credits (one-time) | 66 credits/day | 80 credits/month | No (ChatGPT Plus required) |
| Pricing | $19/mo (Apiframe) | From $12/mo | Credits-based | From $8/mo | $20-200/mo (ChatGPT) |
| Best Use Cases | Ads, social media, product videos | Film, professional content | High-volume, quick turnaround | Social content, quick edits | Cinematic, long-form content |
All models accessible through a single Apiframe API key. View full pricing details →
Which Model is for You?
Choose Luma AI (via Apiframe) if you need:
- Ultra-realistic videos with natural motion.
- Coherent physics and cinematic quality.
- 16-bit HDR color output for professional use.
- Powerful image-to-video transformation.
- Access to multiple AI models through one API.

Choose Runway if you need:
- Advanced camera and motion control.
- Professional video editing features (Aleph).
- Motion Brush for precise animation.
- An established platform with a large community.
- Direct integration with editing workflows.

Choose Kling if you need:
- The fastest generation speeds.
- Cost-effective high-volume production.
- Quick turnaround for iterations.
- Good quality at scale.
- Budget-conscious video generation.

Choose Pika if you need:
- Quick, stylized content creation.
- A simple interface for beginners.
- Fast iteration on ideas.
- Social media-optimized outputs.
- Budget-friendly experimentation.

Choose Sora if you need:
- Long-form video generation (up to 20 seconds).
- Photorealistic output quality.
- Complex scene understanding.
- State-of-the-art capabilities.
- Integration with the OpenAI ecosystem.
Trusted by Developers Worldwide
See what developers are saying about our Luma AI API.
The API works very well
The API works very well; it is fast and returns exactly what our applications need. It is easy to integrate, and support responds quickly.
Easy to setup API
Very easy to set up. Everything works fast.
Support is fast and helpful
Support is fast and the API is easy to implement.
Luma AI API Pricing and Plans
Simple, transparent pricing. No hidden fees. Pay only for what you use.
Basic Plan
1,000 credits/month
- All Luma AI features
- API access
- Webhook support
- Integrations
Starter Plan
5,500 credits/month
- All Luma AI features
- API access
- Webhook support
- Integrations
Growth Plan
12,000 credits/month
- All Luma AI features
- API access
- Webhook support
- Integrations
Scale 1
High-volume pricing for enterprise customers with dedicated support and a custom SLA.
120,000 credits/month
Need more credits? You can buy additional credits at any time.
Frequently Asked Questions
Common questions about the Luma AI API.
What is the Luma API?
The Luma API is a REST interface that allows developers to programmatically generate AI videos using Luma AI's Dream Machine and Ray3 models. With simple HTTP requests, you can create ultra-realistic videos from text prompts and images with natural motion and coherent physics.
Does Luma AI have an official API?
Yes, Luma AI offers an official API. Apiframe builds on it with a simplified wrapper that adds webhook support, multiple SDKs, no-code integrations, and unified billing across 15+ AI models, making integration faster and more developer-friendly.
How much does Luma API cost?
Through Apiframe, Luma API access starts at $19/month for 3,000 credits (~30 videos). A free tier with 300 credits is available to get started. Enterprise plans with higher volumes are also available for larger projects.
What is Luma Dream Machine?
Dream Machine is Luma AI's flagship text-to-video model launched in June 2024. Built on a multimodal transformer architecture trained directly on videos, it produces physically accurate, consistent scenes with natural motion.
What is Luma Ray3?
Ray3 is Luma AI's latest video generation model, introduced in September 2025. It features enhanced reasoning capabilities, 16-bit HDR color output, visual annotation support, and a Draft Mode for faster, more cost-effective generation.
What video formats does Luma API support?
Luma API generates high-quality MP4 videos. You can specify various aspect ratios including 16:9, 9:16, 1:1, and more to optimize for different platforms like YouTube, TikTok, Instagram, and others.
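As a simple illustration, the same prompt can be submitted once per target platform by varying only the aspect_ratio value. The sketch below reuses the SDK calls from the Quick Start and introduces nothing beyond the parameters shown there.

```python
# Sketch: generate the same scene in platform-specific aspect ratios.
# Uses only the 'prompt' and 'aspect_ratio' parameters shown in the Quick Start.
from apiframe import Apiframe

client = Apiframe(api_key='your_api_key_here')

platforms = {'youtube': '16:9', 'tiktok': '9:16', 'instagram_feed': '1:1'}

# Queue one generation task per platform
tasks = {
    name: client.luma.generate({
        'prompt': 'product spinning on a marble pedestal, studio lighting',
        'aspect_ratio': ratio,
    })
    for name, ratio in platforms.items()
}

# Collect the finished video URLs
for name, task in tasks.items():
    result = client.tasks.wait_for(task['id'])
    print(f"{name}: {result['video_url']}")

client.close()
```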
How long can Luma-generated videos be?
Luma AI's Ray2 and Ray3 models generate clips of 5 to 9 seconds. You can use the extend feature to create longer sequences by chaining multiple generations together.
Can I turn images into videos with Luma?
Yes, Luma AI excels at image-to-video generation. You can provide a starting image and the AI will animate it with natural motion, maintaining consistency with the original image while adding dynamic movement.
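A minimal sketch of what an image-to-video request might look like with the Python SDK. The image_url field name below is an assumption for illustration; consult the Apiframe reference for the exact parameter.

```python
# Hypothetical sketch: animating a still image with Luma via the Apiframe SDK.
# 'image_url' is an assumed parameter name, not confirmed on this page.
from apiframe import Apiframe

client = Apiframe(api_key='your_api_key_here')

task = client.luma.generate({
    'prompt': 'the camera slowly pushes in as leaves drift across the scene',
    'image_url': 'https://example.com/autumn-forest.jpg',  # assumed field name
    'aspect_ratio': '16:9',
})

result = client.tasks.wait_for(task['id'])
print(result['video_url'])

client.close()
```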
What is keyframe control in Luma API?
Keyframe control allows you to define start and end frames for your video generation. This gives you precise control over the narrative flow, letting you specify exactly how the video should begin and end.
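As a rough sketch, keyframes could be supplied as start and end images alongside the prompt. The start_image_url and end_image_url names below are placeholders for illustration, not confirmed parameters.

```python
# Hypothetical sketch: keyframe-controlled generation with assumed parameter names.
# 'start_image_url' / 'end_image_url' are placeholders; check the Apiframe docs.
from apiframe import Apiframe

client = Apiframe(api_key='your_api_key_here')

task = client.luma.generate({
    'prompt': 'smooth transition from dawn to dusk over the same valley',
    'start_image_url': 'https://example.com/valley-dawn.jpg',  # assumed field
    'end_image_url': 'https://example.com/valley-dusk.jpg',    # assumed field
    'aspect_ratio': '16:9',
})

result = client.tasks.wait_for(task['id'])
print(result['video_url'])

client.close()
```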
Does the Luma API support webhooks?
Yes, Apiframe provides full webhook support for Luma API. You can receive real-time notifications when your video generation is complete, eliminating the need for polling and enabling async workflows.
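On the receiving side, a small HTTP endpoint is enough to handle these notifications. The sketch below uses Flask and assumes the payload carries a task id, a status, and the video URL; those field names, and how the callback URL is registered (for example via a webhook_url field on the generate call), are assumptions to verify against the Apiframe webhook docs.

```python
# Hypothetical sketch: a Flask endpoint that receives Apiframe webhook notifications.
# The payload field names ('task_id', 'status', 'video_url') are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhooks/luma', methods=['POST'])
def luma_webhook():
    payload = request.get_json(force=True)
    if payload.get('status') == 'completed':   # assumed status value
        print('Task finished:', payload.get('task_id'))
        print('Video URL:', payload.get('video_url'))
    else:
        print('Task update:', payload)
    # Acknowledge quickly so the sender does not retry
    return jsonify({'received': True}), 200

if __name__ == '__main__':
    app.run(port=8000)
```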
What SDKs are available for Luma API?
Apiframe provides official SDKs for Node.js, Python, PHP, and Go. You can also use direct HTTP requests with any programming language or integrate via no-code platforms like Zapier and Make.
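For languages without an official SDK, the REST endpoint shown in the Quick Start's cURL example can be called directly. The sketch below uses Python's requests library; the exact shape of the JSON response (such as a task id field) is an assumption here.

```python
# Sketch: calling the Luma endpoint from the Quick Start's cURL example directly.
# The response field layout (e.g. a 'task_id') is an assumption for illustration.
import requests

API_KEY = 'your_api_key_here'

response = requests.post(
    'https://api.apiframe.ai/luma-imagine',
    headers={
        'Authorization': f'Bearer {API_KEY}',
        'Content-Type': 'application/json',
    },
    json={
        'prompt': 'cinematic shot of a sunset over mountains, golden hour lighting',
        'aspect_ratio': '16:9',
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected to include a task identifier (assumed 'task_id')
```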
Can I use Luma-generated videos commercially?
Yes, videos generated through Apiframe's Luma API can be used commercially according to Luma AI's terms of service. Always review the latest terms for specific commercial use guidelines.
How long does video generation take with Luma?
Typical generation times range from 30-120 seconds depending on video length and complexity. Ray3's Draft Mode enables faster generation for quick iterations, while full-quality renders take slightly longer.
Is there a free trial for Luma API?
Yes, Apiframe offers a free tier with 300 credits per month, enough to generate approximately 3 Luma videos. No credit card required to get started—create an account and start generating immediately.
How does Luma AI compare to other video generators?
Luma AI is known for exceptional motion coherence, physics accuracy, and cinematic quality. Ray3 is the first model to support 16-bit HDR output. Compared to Runway and Kling, Luma offers superior natural motion and realistic physics simulation.
What makes Luma AI's physics simulation special?
Luma's models are trained directly on video data, enabling them to understand and reproduce real-world physics naturally. This results in videos where objects move, interact, and behave in physically plausible ways without artifacts.
Still have questions?
Start Generating AI Videos Today
Get instant access to Luma AI, plus Midjourney, Runway, Kling, and 10+ other AI models through one API.
Questions? Join our Discord or contact sales.