Runway API v1
August 8, 2024 (updated April 24, 2026)
This is an experimental API for Runway AI.
Our API supports all the Gen-4.5, Gen-4, and Gen-4 Turbo functionality available on the original Runway AI website, including Text/Image/Video-to-Video, Act Two, Aleph, Lip Sync, Images, Videos, Frames, and more. We're fully committed to bringing all the Runway AI website magic to our API customers.
Videos is a unified video generation endpoint supporting 14 AI models: Seedance 2.0, Kling 3.0 Pro/Standard, Kling 3.0 Motion Control (character animation via motion transfer), Kling 2.6 Pro/I2V, Wan 2.6 Flash, Wan 2.2 Animate, Veo 3.1, Sora 2/Pro, and Gen-4.5, Gen-4, Gen-4 Turbo, with text-to-video, image-to-video, keyframe, and multi-reference (up to 11 mixed images + videos) workflows. Accounts on a Runway Unlimited plan can use exploreMode to generate videos without spending credits; all models except Veo 3.1 support unlimited mode.
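As a rough sketch of what a Videos request might look like (the field names, model identifier, and mode values below are illustrative assumptions, not the documented schema; check the Postman collection for the real one):

```python
import json

# Hypothetical request body for POST videos/create. Only the model list,
# the workflow types, and exploreMode come from the description above;
# every key name here is an assumption for illustration.
payload = {
    "model": "gen4_5",              # one of the 14 supported video models
    "mode": "image_to_video",       # or text_to_video, keyframe, multi_reference
    "prompt": "A slow dolly shot across a rain-soaked street at night",
    "imageUrl": "https://example.com/first-frame.png",
    "exploreMode": True,            # Unlimited-plan accounts only; no credits used
}
print(json.dumps(payload, indent=2))
```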
Images is a unified image generation endpoint supporting 11 AI models: FLUX.2 Max, FLUX.2 Klein, Nano Banana, Nano Banana 2, Nano Banana Pro, GPT Image 1.5, GPT Image 1 Mini, GPT Image 2, Seedream 5, Gen-4, and Gen-4 Turbo, with text-to-image and reference-image workflows. Nano Banana, Nano Banana 2, and Nano Banana Pro support image references of famous people and minors, unlike Google Flow. Accounts on a Runway Unlimited plan can use exploreMode to generate images without spending credits; all models support unlimited mode.
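An Images request could follow the same shape; again, the endpoint path and key names below are assumptions for illustration, not the documented schema:

```python
import json

# Hypothetical request body for POST images/create. The model name and
# reference-image workflow come from the description above; the exact
# field names are assumptions.
payload = {
    "model": "nano_banana_pro",
    "prompt": "Product photo of a ceramic mug on a rain-wet cafe table",
    "referenceImages": ["https://example.com/mug-reference.png"],
    "exploreMode": True,            # all image models support unlimited mode
}
print(json.dumps(payload, indent=2))
```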
Lip Sync lets you use an image or video to create generative videos in which the selected face speaks lines from your audio clips or from AI-generated voices (28+ languages, model eleven_multilingual_v2).
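A Lip Sync request would need a source face plus either an audio clip or a generated voice. In this sketch, only the eleven_multilingual_v2 model name comes from the description above; every field name is a hypothetical stand-in:

```python
import json

# Hypothetical request body for POST lipsync/create. Field names are
# assumptions for illustration, not the documented schema.
payload = {
    "videoUrl": "https://example.com/talking-head.mp4",  # or an image source
    # Option A: sync the face to your own audio clip.
    "audioUrl": "https://example.com/line-read.mp3",
    # Option B (instead of audioUrl): generate the voice from text.
    "voiceModel": "eleven_multilingual_v2",
    "text": "Welcome to the experimental Runway API.",
}
print(json.dumps(payload, indent=2))
```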
For free Runway accounts, the following features are unlocked with this experimental API:
- Unlimited number of Image Upscaler generations
- Unlimited number of Transcribe generations
- Unlimited number of Super-Slow Motion generations
- Unlimited number of Frames Describe generations
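Generation endpoints like these appear to be asynchronous: the task endpoints in the table of contents (GET tasks, GET tasks/taskId) suggest a submit-then-poll workflow. A minimal polling-decision sketch, assuming status names the real API may spell differently:

```python
import time

# Terminal task states; these names are assumptions, not the documented
# values returned by GET tasks/{taskId}.
TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}

def is_done(status: str) -> bool:
    """Return True once a polled task status looks terminal."""
    return status.upper() in TERMINAL_STATES

def poll(fetch_status, interval_seconds=5.0, max_attempts=60):
    """Call fetch_status() (e.g. a wrapper around GET tasks/{taskId})
    until the task reaches a terminal state or we give up."""
    for _ in range(max_attempts):
        status = fetch_status()
        if is_done(status):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("task did not finish in time")
```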
Postman collection (April 15, 2026)
LLM-friendly API spec: feed this to your LLM to build integrations.
Q&A: How is your experimental Runway API different from the official Runway API?
Articles:
Examples:
- Seedance 2.0 β real-face tutorial (Runway & Dreamina)
- 16+ AI Image Models: The Showdown
- Veo, Kling, Sora, Wan videos and Nano Banana, Seedream images
- Nano Banana Pro images
- Gen-4.5 image-to-video
- Gen-4.5 text-to-video
- Act Two
- Act Two voice swap
- Aleph
Developer Community:
Table of contents
- GET accounts
- GET accounts/email
- POST accounts/email
- DEL accounts/email
- GET features
- POST images/create
- POST videos/create
- POST gen4_5/create
- POST gen4turbo/create
- POST gen4/create
- POST gen4/upscale
- POST gen4/video
- POST gen4/act-two
- POST gen4/act-two-voice
- POST gen3turbo/create
- POST gen3turbo/video
- POST gen3turbo/extend
- POST gen3turbo/expand
- POST gen3turbo/actone
- POST gen3/create
- POST gen3/video
- POST gen3/extend
- POST gen3/actone
- POST gen3alpha/upscale
- POST super_slow_motion
- POST lipsync/create
- POST frames/create
- GET frames/describe
- GET lipsync/voices
- GET image_upscaler
- GET transcribe
- GET assets
- GET assets/assetId
- POST assets
- DEL assets/assetId
- GET scheduler
- DEL scheduler/taskId
- GET tasks
- GET tasks/taskId
- DEL tasks/taskId