AI Music Generator
Transform your ideas into professional-grade, radio-ready tracks in seconds. This powerhouse Apify Actor connects to cutting-edge AI music models to give you a complete, automated music production studio right at your fingertips. This tool is designed for speed, creativity, and seamless automation.
Pricing: from $60.00 / 1,000 generated tracks
Rating: 0.0 (0 reviews)
Developer: Visita Intelligence
Actor stats: 1 bookmarked · 2 total users · 1 monthly active user
Last modified: 22 days ago
AI Music Studio & Generator
Generate AI songs, remixes, instrumentals, lyric videos, cover art, and personas from one Apify Actor.
This Actor is designed for creators, agencies, and product teams that need a reliable, automation-friendly music workflow. You can run it from the Apify Console UI or API, schedule jobs, monitor runs, and connect outputs to your own systems using webhooks.
What does this AI Music Generator Actor do?
This Actor turns a prompt or reference track into production-ready music assets. It supports 8 studio modes, an AI Co-Writer, a Premium Audio Vault, and post-production workflows for media, persona generation, and free details lookup.
Fastest way to try it:
- Set `studioMode` to `original-creation`.
- Enable `generateLyricsLLM`.
- Choose `libraryGenre` and `songMood`, then run.
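The quick-start steps above map to a minimal input payload along these lines (field values are taken from this page's own examples):

```json
{
  "studioMode": "original-creation",
  "generateLyricsLLM": true,
  "libraryGenre": "amapiano",
  "songMood": "hype-euphoric"
}
```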
What can this Actor do?
- 🎚️ End-to-end music workflow: generate audio first, then visuals/persona with `taskId`.
- 🤖 AI lyric writing with genre-aware structure and mood control.
- 📚 Remix support from custom audio links or Premium Audio Vault tracks.
- 🎬 Generate media assets from existing songs.
- 🧑‍🎤 Generate and reuse AI personas with persistent persona profile storage.
- 🧠 Optional persona-profile conditioning for LLM lyrics via `submitPersonaProfileToLLM`.
- 🧩 Split stems and extend tracks for post-production.
- 🔎 Free generation details retrieval by `taskId` or `audioId` with `get-details` mode.
- ☁️ Built-in Apify advantages: scheduling, API access, webhooks, logs, dataset export, and run monitoring.
Studio Modes Explained
1. original-creation
Create a new song from scratch from your prompt or AI-generated lyrics.
2. cover-remix
Generate a new vocal/lyrical interpretation over existing audio from:
- `custom-link`
- `premium-library`
3. instrumental-magic
Add or reshape instrumental backing around source content.
4. stem-splitter
Split tracks into isolated elements (for example, vocals and instrumentals).
5. extend-track
Append new sections to an existing track for longer edits/versions.
6. generate-media
Generate visual assets from an existing music taskId.
7. generate-persona
Generate persona output from an existing music taskId.
8. get-details
Fetch generation details by taskId or audioId without consuming generation credits.
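Because `get-details` consumes no generation credits, it is a cheap way to re-check a finished run. A minimal payload could look like this (assuming the lookup field is named `taskId`, as in this page's output example; the ID value is a placeholder):

```json
{
  "studioMode": "get-details",
  "taskId": "1234567890abcdef"
}
```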
How to generate AI music with this Apify Actor
1. Choose `studioMode`.
2. If needed, select `audioSource` and provide `customAudioLink` or vault settings.
3. Enable `generateLyricsLLM` for automatic songwriting, or provide a manual `prompt`.
4. Configure `libraryGenre`, `style`, and optional mood/theme settings.
5. Run the Actor and save the returned `taskId`.
6. For visuals/persona, run again in `generate-media` or `generate-persona` mode using that `taskId`.
7. To reuse a persona in future songs, enable `usePersona`, set `generationPersonaName`, and optionally enable `submitPersonaProfileToLLM`.
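As a sketch of the follow-up step, a small helper can assemble the second run's payload from a saved `taskId`. This is a hypothetical helper: the field names mirror this page's examples and may differ from the Actor's actual input schema.

```python
def build_followup_input(mode: str, task_id: str) -> dict:
    """Assemble input for a follow-up run that reuses a music taskId.

    Hypothetical helper: field names mirror this page's examples and
    are not a definitive schema.
    """
    allowed = {"generate-media", "generate-persona"}
    if mode not in allowed:
        raise ValueError(f"mode must be one of {sorted(allowed)}, got {mode!r}")
    if not task_id:
        raise ValueError("a taskId from a successful music run is required")
    return {"studioMode": mode, "taskId": task_id}


print(build_followup_input("generate-media", "1234567890abcdef"))
```

Pass the returned dict as the run input for the follow-up call, exactly as you would any other payload in the Input tab or via the API.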
How much does AI music generation cost on Apify?
Pricing depends on your selected pricing model and consumed resources per run.
- Cost can vary by mode, run duration, selected model, and polling behavior.
- Audio-first + follow-up media/persona runs usually give clearer cost control than one complex run.
- You can monitor usage in run logs and billing pages, then optimize by reducing retries and tightening input scope.
Tip: for large pipelines, use `callBackUrl` + `asyncPassthrough` to avoid long blocking runs.
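For example, an async run might combine those two fields like this (the webhook URL is a placeholder for your own endpoint):

```json
{
  "studioMode": "original-creation",
  "generateLyricsLLM": true,
  "libraryGenre": "amapiano",
  "asyncPassthrough": true,
  "callBackUrl": "https://example.com/apify-music-webhook"
}
```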
Persona Workflow (Recommended)
1. Generate music (`original-creation` or `cover-remix`) and keep the returned `taskId`.
2. Run `generate-persona` with `personaTaskId`, `personaName`, and `personaDescription`.
3. The Actor stores the persona profile in its internal DB (an Apify key-value store) for later reuse.
4. For future generation runs, enable `usePersona` and set `generationPersonaName`.
5. Enable `submitPersonaProfileToLLM` only when you want the persona description to guide lyric generation.
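A later generation run that reuses a stored persona could then look like this (the persona name is borrowed from this page's output example):

```json
{
  "studioMode": "original-creation",
  "generateLyricsLLM": true,
  "libraryGenre": "amapiano",
  "usePersona": true,
  "generationPersonaName": "Know Peace",
  "submitPersonaProfileToLLM": true
}
```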
Notes:
- Persona lookup is case-insensitive.
- If `submitPersonaProfileToLLM` is enabled and the persona is not found, the run fails with `PERSONA_NOT_FOUND`.
- If `usePersona` is enabled but no persona name is provided, the run fails with `PERSONA_NAME_REQUIRED`.
Input Example
Use the Input tab in Apify Console for full field descriptions. Example payload:
```json
{
  "studioMode": "original-creation",
  "generateLyricsLLM": true,
  "llmModel": "openai/gpt-4o-mini",
  "libraryGenre": "amapiano",
  "songMood": "hype-euphoric",
  "songTheme": "party-hype",
  "prompt": "",
  "style": "female vocals, warm bass, club-ready",
  "model": "V4_5ALL",
  "pollingIntervalSeconds": 20,
  "asyncPassthrough": false
}
```
Output Example
You can export dataset items as JSON, CSV, XML, RSS, Excel, and more.
```json
{
  "status": "SUCCESS",
  "taskId": "1234567890abcdef",
  "audioId": "53487c0d-1643-47c7-ad76-4859038ad645",
  "title": "Midnight Code",
  "style": "amapiano, female vocals, club",
  "audio_url": "https://.../track.mp3",
  "video_url": "https://.../lyric-video.mp4",
  "image_url": "https://.../cover.jpg",
  "persona_id": "2774abf480d5b38fc761820dfc7c9c4d",
  "persona_name": "Know Peace",
  "persona_description": "Short persona summary used for continuity",
  "persona_status": "PENDING",
  "persona_error": null
}
```
What data does this Actor return?
| Field | Description |
|---|---|
| `status` | Run outcome (`SUCCESS`, `FAILED`, etc.) |
| `taskId` | Task identifier used for polling and follow-up modes |
| `audioId` | Audio identifier used for media/persona follow-up modes |
| `audio_url` | Generated audio track URL |
| `video_url` | Generated video URL (when available) |
| `image_url` | Generated cover/image URL (when available) |
| `title` | Song title |
| `style` | Final style tags used in generation |
| `persona_id` | Generated persona identifier |
| `persona_name` | Persona name used/generated |
| `persona_description` | Persona description/profile summary |
| `persona_status` | Persona workflow status |
| `persona_error` | Persona error details (if any) |
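As an illustration of consuming these fields downstream, here is a minimal sketch that pulls the key values out of one dataset item. `summarize_item` is a hypothetical helper (not part of the Actor), and the sample item is an abbreviated version of the output example above with a placeholder URL.

```python
def summarize_item(item: dict) -> dict:
    """Extract the fields most pipelines care about from one dataset item."""
    return {
        "taskId": item["taskId"],
        "status": item["status"],
        # URL fields may be absent until follow-up media runs complete,
        # so use .get() rather than direct indexing.
        "audio_url": item.get("audio_url"),
        "video_url": item.get("video_url"),
        "image_url": item.get("image_url"),
    }


sample = {
    "status": "SUCCESS",
    "taskId": "1234567890abcdef",
    "title": "Midnight Code",
    "audio_url": "https://example.com/track.mp3",
}
print(summarize_item(sample))
```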
Why use this Actor on Apify instead of a raw script?
- Reliable scheduling and retries.
- Built-in run logs and observability.
- API-first orchestration for apps and no-code tools.
- Easy dataset export and downstream integrations.
- One hosted workflow instead of multiple custom services.
Other Actors by this creator
If you use multiple automation workflows, consider building a connected stack of Actors for:
- content generation
- media processing
- publishing automation
FAQ, Troubleshooting, and Support
Why did I not get video or image in the same run?
Use `generate-media` with an existing successful music `taskId`. Media generation is a follow-up workflow.
Why did I get PERSONA_NOT_FOUND?
The provided `generationPersonaName` does not exist in this Actor's persona profile store. First create it in `generate-persona` mode, then reuse it (name matching is case-insensitive).
Why did I get PERSONA_NAME_REQUIRED?
You enabled `usePersona` but did not provide `generationPersonaName`.
What does submitPersonaProfileToLLM do?
When enabled, the short saved persona description is sent to the lyric LLM for stylistic conditioning. When disabled, the persona is still submitted to the Suno generation payload (for voice/style continuity), but the profile text is not included in LLM prompts.
Why are my lyrics too short or too generic?
- Enable `generateLyricsLLM`.
- Provide clearer `songTheme` or `llmPrompt` context.
- Add stronger `style` tags and choose an appropriate `libraryGenre`.
What if a selected genre has no vault tracks?
The Actor applies fallback behavior to keep generation working. Genre-aware lyric prompting still follows your selected genre instructions.
Is this Actor beginner-friendly?
Yes. You can run it point-and-click from the Input UI. Advanced users can automate with the API, webhooks, and async orchestration.
Is generated content legal to use?
You are responsible for complying with applicable copyright, platform, and local laws for prompts, references, and outputs. Avoid infringing source material and consult legal counsel for commercial edge cases.
Where can I get help?
- Check run logs and dataset output first.
- Review Input tab field descriptions.
- Open an issue or contact the maintainer through the Actor page for support and feature requests.