Skill Detail

RunwayML Gen-3 Alpha Video Composer

Composes AI-generated video clips using the RunwayML Gen-3 Alpha API with text-to-video and image-to-video modes. Manages generation tasks, polling, and output stitching via FFmpeg.

Image & Creative Automation Claude Code Security Reviewed
Tool match: ffmpeg ⭐ 58.5k GitHub stars
INSTALL WITH ANY AGENT
npx skills add agentskillexchange/skills --skill runwayml-gen-3-alpha-video-composer
Works best when you want a reusable capability, not another fragile one-off prompt.
At a glance
Author
FFmpeg
Last updated
Mar 24, 2026
Quick brief

The RunwayML Gen-3 Alpha Video Composer automates AI video generation by orchestrating the RunwayML API for Gen-3 Alpha model access. It submits generation tasks via the RunwayML REST API with configurable parameters including prompt text, seed image, motion amount, and duration. The skill manages the asynchronous generation workflow by polling task status endpoints until completion, handling rate limits and queue positions gracefully.

Generated video clips are downloaded and can be sequenced into longer compositions using FFmpeg for concatenation, crossfade transitions, and audio track overlay. The skill supports an image-to-video mode in which a reference frame is uploaded via the RunwayML Assets API and used as the first frame for temporally consistent generation. Batch generation handles multiple scenes described in a shot-list format, generating each scene independently and assembling them in order.

Output videos are encoded with configurable codec settings, including H.264 for web delivery and ProRes for editing workflows. The skill also tracks generation credits and estimated costs per task.
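
Below is a minimal sketch of the submit, poll, download, and stitch loop described above, in Python. The RunwayML endpoint paths, payload field names, and status strings are assumptions made for illustration (check the current RunwayML API reference before relying on them); the FFmpeg step uses the standard concat demuxer for ordered assembly.

```python
import subprocess
import time

import requests

API_BASE = "https://api.dev.runwayml.com/v1"  # assumed base URL; verify against the API docs
HEADERS = {"Authorization": "Bearer YOUR_RUNWAYML_API_KEY"}


def submit_image_to_video(prompt_text: str, image_url: str, duration: int = 5) -> str:
    """Submit an image-to-video task and return its task id (endpoint and fields are assumed names)."""
    payload = {
        "promptText": prompt_text,
        "promptImage": image_url,  # reference frame used as the first frame
        "duration": duration,
    }
    resp = requests.post(f"{API_BASE}/image_to_video", json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]


def poll_until_done(task_id: str, interval: float = 10.0) -> dict:
    """Poll the task status endpoint until generation finishes, backing off on rate limits."""
    while True:
        resp = requests.get(f"{API_BASE}/tasks/{task_id}", headers=HEADERS, timeout=30)
        if resp.status_code == 429:      # rate limited: wait longer and retry
            time.sleep(interval * 2)
            continue
        resp.raise_for_status()
        task = resp.json()
        if task.get("status") in ("SUCCEEDED", "FAILED"):
            return task
        time.sleep(interval)             # still queued or running


def download_clip(url: str, path: str) -> None:
    """Stream a finished clip to disk."""
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)


def concat_clips(clip_paths: list[str], output_path: str) -> None:
    """Concatenate clips in shot-list order with FFmpeg's concat demuxer, encoding H.264 for web delivery."""
    with open("clips.txt", "w") as f:
        f.writelines(f"file '{p}'\n" for p in clip_paths)
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "clips.txt",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", output_path],
        check=True,
    )
```

For crossfade transitions, FFmpeg's xfade filter would replace the plain concat step, and an editing-friendly master could be produced by swapping the encoder to ProRes (for example, -c:v prores_ks); both are omitted here to keep the sketch short.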