This experiment demonstrates how to deploy and use a text-to-audio model (Stable Audio Open 1.0) using FlexAI's inference serving capabilities.
Before starting, make sure you have:
- A FlexAI account with the flexai CLI installed and authenticated
- A Hugging Face account and an access token with access to the stabilityai/stable-audio-open-1.0 model
- curl and jq available on your machine
First, create a FlexAI secret that contains your Hugging Face token to access the inference model:
# Enter your HF token value when prompted
flexai secret create MY_HF_TOKEN
Note: Make sure your Hugging Face token has access to the stabilityai/stable-audio-open-1.0 model. You may need to accept the model's license terms on Hugging Face first.
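If you want to confirm access before deploying, you can try fetching a small file from the repository through the Hugging Face Hub. This is a generic Hub check, not part of FlexAI; it assumes your token is in the HF_TOKEN environment variable and that the repository exposes a model_index.json at its root (typical for diffusers-format models):
# Prints 200 if the token can download gated files; 401/403 usually means the
# license has not been accepted yet or the token lacks the required permissions.
curl -sL -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $HF_TOKEN" \
  https://huggingface.co/stabilityai/stable-audio-open-1.0/resolve/main/model_index.json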
Start the FlexAI endpoint for the Stable Audio Open 1.0 model:
INFERENCE_NAME=stable-audio-open
flexai inference serve $INFERENCE_NAME --runtime flexserve --hf-token-secret MY_HF_TOKEN -- --task text-to-audio --model stabilityai/stable-audio-open-1.0
This command will:
- Create an inference endpoint named stable-audio-open using the flexserve runtime
- Pass your Hugging Face token from the MY_HF_TOKEN secret to the runtime so it can pull the model
- Configure the runtime for the text-to-audio task with the stabilityai/stable-audio-open-1.0 model
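Deployment can take a few minutes while the model weights are fetched. You can check on the endpoint at any time with the same inspect command used later in this guide (the exact fields shown in its output may vary):
# Shows the endpoint's current configuration and state
flexai inference inspect $INFERENCE_NAME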
Once the endpoint is deployed, you'll see the API key displayed in the output. Store it in an environment variable:
export INFERENCE_API_KEY=<API_KEY_FROM_ENDPOINT_CREATION_OUTPUT>
Then retrieve the endpoint URL:
export INFERENCE_URL=$(flexai inference inspect $INFERENCE_NAME -j | jq .config.endpointUrl -r)
This command uses the jq tool to extract the endpoint URL from the JSON output of the inspect command.
If you don't have it already, you can get jq from its official website: https://jqlang.org/
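Before making requests, it's worth confirming that both variables are actually set. Here is a minimal shell check using plain bash parameter expansion, nothing FlexAI-specific:
# Abort with an error message if either variable is unset or empty
: "${INFERENCE_API_KEY:?INFERENCE_API_KEY is not set}"
: "${INFERENCE_URL:?INFERENCE_URL is not set}"
echo "Ready to send requests to $INFERENCE_URL"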
Now you can generate audio by making HTTP POST requests to your endpoint. Here are some examples:
Example 1: Relaxing Piano Music
curl -X POST \
-H "Authorization: Bearer $INFERENCE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"inputs": "Relaxing piano music with soft ambient sounds, calm and peaceful",
"parameters": {
"audio_end_in_s": 10.0,
"num_inference_steps": 200,
"guidance_scale": 7.0,
"seed": 42
}
}' \
-o relaxing_music.wav \
"$INFERENCE_URL/v1/audios/generations"Example 2: Nature Soundscape
curl -X POST \
-H "Authorization: Bearer $INFERENCE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"inputs": "Forest ambience with birds chirping and a gentle stream flowing",
"parameters": {
"audio_end_in_s": 15.0,
"num_inference_steps": 200,
"guidance_scale": 7.0,
"negative_prompt": "distorted, low quality, muffled",
"seed": 123
}
}' \
-o nature_sounds.wav \
"$INFERENCE_URL/v1/audios/generations"Example 3: Electronic Beat
curl -X POST \
-H "Authorization: Bearer $INFERENCE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"inputs": "Upbeat electronic music with synthesizers and energetic drums, 128 bpm",
"parameters": {
"audio_end_in_s": 20.0,
"num_inference_steps": 200,
"guidance_scale": 7.0,
"seed": 456
}
}' \
-o electronic_beat.wav \
"$INFERENCE_URL/v1/audios/generations"These will save the generated audio files in your current directory.
The API accepts the following parameters:
- inputs (string, required): the text prompt describing the audio to generate
- audio_end_in_s (float): length of the generated audio in seconds
- num_inference_steps (integer): number of diffusion denoising steps; more steps generally improve quality at the cost of generation time
- guidance_scale (float): how strongly the output should follow the prompt
- negative_prompt (string): qualities to avoid in the output, e.g. "distorted, low quality"
- seed (integer): random seed for reproducible results
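Assuming that only inputs is required and that the runtime falls back to its own defaults for any omitted parameters (an assumption, since the defaults aren't documented here), a minimal request looks like this:
# Minimal request: only the prompt is supplied; all other parameters are left to server defaults (assumed)
curl -X POST \
  -H "Authorization: Bearer $INFERENCE_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "A short jazzy drum loop"}' \
  -o minimal_example.wav \
  "$INFERENCE_URL/v1/audios/generations"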

To celebrate this launch we’re offering €100 starter credits for first-time users!
Get Started Now