OpenAI Whisper API Transcription

API-based transcription with a lighter setup than local Whisper.

Media & Transcription OpenClaw Security Reviewed Security: Low
INSTALL WITH ANY AGENT
npx skills add agentskillexchange/skills --skill openai-whisper-api-transcription
Tools required
OpenAI Audio API, curl or OpenAI SDK
Install & setup
pip install openai
Author
OpenAI
Publisher
Individual

This skill wraps OpenAI’s hosted transcription endpoint for fast, lightweight audio-to-text. Send audio files and get back plain-text or JSON transcripts with minimal setup.
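As a sketch of what a call might look like with the Python SDK (assuming `pip install openai` and `OPENAI_API_KEY` set; the helper function and file name are illustrative, not part of the skill itself):

```python
from pathlib import Path

# Extensions the hosted endpoint accepts, per OpenAI's audio docs.
SUPPORTED = {".flac", ".m4a", ".mp3", ".mp4", ".mpeg",
             ".mpga", ".oga", ".ogg", ".wav", ".webm"}

def transcribe(path: str, response_format: str = "text"):
    """Send an audio file to the hosted Whisper endpoint and return the transcript."""
    if Path(path).suffix.lower() not in SUPPORTED:
        raise ValueError(f"unsupported audio format: {path}")
    # SDK import is deferred so the format check runs even without the package installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(path, "rb") as audio:
        return client.audio.transcriptions.create(
            model="whisper-1",
            file=audio,
            response_format=response_format,  # "text", "json", "srt", "vtt", ...
        )
```

With `response_format="text"` the SDK returns the transcript as a plain string; with `"json"` it returns an object whose `.text` attribute holds the transcript.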

How it differs from the local Whisper skill

The already-live OpenAI Whisper (local) listing runs the Whisper model directly on your machine using Python; it requires downloading model weights and a capable CPU/GPU. This listing uses the hosted API instead: no model downloads, no local compute requirements, no Python ML dependencies. The tradeoff is API cost and network dependency.

Best for

  • Meeting recordings and podcast notes
  • Voice memos and interview transcripts
  • Any audio-to-text workflow where convenience and speed matter more than running your own model

Install notes

Set your OPENAI_API_KEY environment variable or configure it in OpenClaw skill config. Requires curl (pre-installed on most systems). No other dependencies.
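A minimal preflight check along these lines (a hypothetical helper, not shipped with the skill) can confirm both requirements before the first call:

```python
import os
import shutil

def preflight() -> list:
    """Return a list of setup problems; an empty list means ready to transcribe."""
    problems = []
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    if shutil.which("curl") is None:  # curl is pre-installed on most systems
        problems.append("curl not found on PATH")
    return problems
```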

Source: OpenClaw skills/openai-whisper-api