AI Music Studio & Generator
Pricing
from $500.00 / 1,000 generated tracks
Transform your ideas into professional-grade, radio-ready tracks in seconds. This Apify Actor connects to cutting-edge AI music models to give you a complete, automated music production studio, built for speed, creativity, and seamless automation.
Developer
Visita Intelligence
Generate AI songs, remixes, instrumentals, lyric videos, cover art, and personas from one Apify Actor.
This Actor is designed for creators, agencies, and product teams that need a reliable, automation-friendly music workflow. You can run it from the Apify Console UI or API, schedule jobs, monitor runs, and connect outputs to your own systems using webhooks.
What does this AI Music Generator Actor do?
This Actor turns a prompt or reference track into production-ready music assets. It supports 10 studio modes, an AI Co-Writer, a Premium Audio Vault, and post-production workflows for media, persona generation, and free generation-details lookup.
Fastest way to try it:
- Set `studioMode` to `original-creation`.
- Enable `generateLyricsLLM`.
- Choose `libraryGenre` and `songMood`, then run.
What can this Actor do?
- End-to-end music workflow: generate audio first, then visuals/persona with `taskId`.
- AI lyric writing with genre-aware structure and mood control.
- Remix support from custom audio links or Premium Audio Vault tracks.
- Generate media assets from existing songs.
- Generate and reuse AI personas with persistent persona profile storage.
- Optional persona-profile conditioning for LLM lyrics via `submitPersonaProfileToLLM`.
- Album Mode: design and produce a full album (up to 20 tracks) in a multi-run, stateful workflow.
- Meme Music & Viral Hits: create internet-breaking songs with subgenres like Phonk, Sillycore, and Meme Rap.
- Brainrot Search: toggle real-time search (Brave API) to incorporate the latest viral memes and trends into your lyrics.
- Global Multilingual Support: dynamic support for multiple languages, including Xhosa, Zulu, and seamless multilingual blending.
- The Humanizer: built-in anti-AI-cliché logic that bans generic terms like "neon lights" or "echoes" for more authentic, industry-standard lyrics.
- Split stems and extend tracks for post-production.
- Free generation-details retrieval by `taskId` or `audioId` with `get-details` mode.
- Built-in Apify advantages: scheduling, API access, webhooks, logs, dataset export, and run monitoring.
Studio Modes Explained
1. original-creation
Create a new song from scratch from your prompt or AI-generated lyrics.
2. cover-remix
Generate a new vocal/lyrical interpretation over existing audio from:
- `custom-link`
- `premium-library`
3. instrumental-magic
Add or reshape instrumental backing around source content.
4. stem-splitter
Split tracks into isolated elements (for example, vocals and instrumentals).
5. extend-track
Append new sections to an existing track for longer edits/versions.
6. generate-media
Generate visual assets from an existing music `taskId`.
7. generate-persona
Generate persona output from an existing music `taskId`.
8. get-details
Fetch generation details by `taskId` or `audioId` without consuming generation credits.
9. album-mode
Produce a full album (up to 20 tracks) with AI-assisted tracklist design, batched lyric generation, and batched audio generation. Uses a 3-phase workflow (Plan → Lyrics → Generate) with persistence across multiple Actor runs.
10. meme-music-generator
Create highly viral, absurd, or parody internet songs. Includes specific "Brainrot Search" capabilities to pull in real-time trends from the web.
How to generate AI music with this Apify Actor
1. Choose `studioMode`.
2. If needed, select `audioSource` and provide `customAudioLink` or vault settings.
3. Enable `generateLyricsLLM` for automatic songwriting, or provide a manual `prompt`.
4. Configure `libraryGenre`, `style`, and optional mood/theme settings.
5. Run the Actor and save the returned `taskId`.
6. For visuals/persona, run again in `generate-media` or `generate-persona` mode using that `taskId`.
7. To reuse a persona in future songs, enable `usePersona`, set `generationPersonaName`, and optionally enable `submitPersonaProfileToLLM`.
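The steps above can be driven programmatically with the official `apify-client` package. A minimal sketch — the Actor slug below is a placeholder (copy the real ID from this Actor's Store page), and the `taskId` input field name for the follow-up run is assumed from the mode description:

```python
import os

# Placeholder Actor slug -- replace with the Actor's real ID from its Store page.
ACTOR_ID = "visita-intelligence/ai-music-studio"


def build_music_input() -> dict:
    """Steps 1-4: original creation with LLM-written lyrics."""
    return {
        "studioMode": "original-creation",
        "generateLyricsLLM": True,
        "libraryGenre": "amapiano",
        "songMood": "hype-euphoric",
        "style": "female vocals, warm bass, club-ready",
    }


def build_media_input(task_id: str) -> dict:
    """Step 6: follow-up run that renders visuals for an existing song.
    The `taskId` field name is an assumption based on the mode docs."""
    return {"studioMode": "generate-media", "taskId": task_id}


if __name__ == "__main__" and os.getenv("APIFY_TOKEN"):
    from apify_client import ApifyClient  # pip install apify-client

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    # Run 1: generate the song and read its taskId from the dataset.
    run = client.actor(ACTOR_ID).call(run_input=build_music_input())
    items = client.dataset(run["defaultDatasetId"]).list_items().items
    # Run 2: generate visuals for that song.
    client.actor(ACTOR_ID).call(run_input=build_media_input(items[0]["taskId"]))
```

The live calls only execute when `APIFY_TOKEN` is set, so the input builders can be reused and tested on their own.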
How much does AI music generation cost on Apify?
Pricing depends on your selected pricing model and consumed resources per run.
- Cost can vary by mode, run duration, selected model, and polling behavior.
- Audio-first + follow-up media/persona runs usually give clearer cost control than one complex run.
- You can monitor usage in run logs and billing pages, then optimize by reducing retries and tightening input scope.
Tip: for large pipelines, use `callBackUrl` + `asyncPassthrough` to avoid long blocking runs.
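When `asyncPassthrough` hands the run off and your `callBackUrl` endpoint receives the result, a small handler can pull out the identifiers downstream systems need. A sketch assuming the callback body mirrors the dataset output fields (`status`, `taskId`, `audio_url`) from the Output Example — verify against a real callback before relying on it:

```python
import json


def parse_callback(body: str) -> dict:
    """Extract the fields a downstream pipeline typically needs from the
    async callback payload. Field names follow this Actor's output example;
    treat the payload shape as an assumption until verified."""
    payload = json.loads(body)
    return {
        "ok": payload.get("status") == "SUCCESS",
        "taskId": payload.get("taskId"),
        "audio_url": payload.get("audio_url"),
    }
```

Plug this into whatever web framework receives the POST; only the parsing logic is sketched here.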
Persona Workflow (Recommended)
1. Generate music (`original-creation` or `cover-remix`) and keep the returned `taskId`.
2. Pro tip for high quality: if the song has heavy instruments, run the Actor in `stem-splitter` mode first and use the resulting vocal stem ID to generate your persona. This ensures the voice is captured cleanly, without instrumental bleed.
3. Run `generate-persona` with `personaTaskId`, `personaName`, and `personaDescription`.
4. The Actor stores the persona profile in its internal DB (Supabase) for later reuse.
5. For future generation runs, enable `usePersona` and set `generationPersonaName`.
6. Enable `submitPersonaProfileToLLM` only when you want persona description guidance included in lyric generation.

Notes:
- Persona lookup is case-insensitive.
- If `submitPersonaProfileToLLM` is enabled and the persona is not found, the run fails with `PERSONA_NOT_FOUND`.
- If `usePersona` is enabled but no persona name is provided, the run fails with `PERSONA_NAME_REQUIRED`.
Input Example
Use the Input tab in Apify Console for full field descriptions. Example payload:
```json
{
  "studioMode": "original-creation",
  "generateLyricsLLM": true,
  "llmModel": "openai/gpt-4o-mini",
  "libraryGenre": "amapiano",
  "songMood": "hype-euphoric",
  "songTheme": "party-hype",
  "prompt": "",
  "style": "female vocals, warm bass, club-ready",
  "model": "V4_5ALL",
  "pollingIntervalSeconds": 20,
  "asyncPassthrough": false
}
```
Output Example
You can export dataset items as JSON, CSV, XML, RSS, Excel, and more.
```json
{
  "status": "SUCCESS",
  "taskId": "1234567890abcdef",
  "audioId": "53487c0d-1643-47c7-ad76-4859038ad645",
  "title": "Midnight Code",
  "style": "amapiano, female vocals, club",
  "audio_url": "https://.../track.mp3",
  "video_url": "https://.../lyric-video.mp4",
  "image_url": "https://.../cover.jpg",
  "persona_id": "2774abf480d5b38fc761820dfc7c9c4d",
  "persona_name": "Know Peace",
  "persona_description": "Short persona summary used for continuity",
  "persona_status": "PENDING",
  "persona_error": null
}
```
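Once exported, dataset items shaped like the example above can be filtered for downstream use. A short illustrative sketch:

```python
def collect_audio_urls(items: list[dict]) -> list[str]:
    """Keep only successful generations that actually produced an audio file,
    using the `status` and `audio_url` fields from this Actor's output schema."""
    return [
        item["audio_url"]
        for item in items
        if item.get("status") == "SUCCESS" and item.get("audio_url")
    ]
```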
What data does this Actor return?
| Field | Description |
|---|---|
| `status` | Run outcome (`SUCCESS`, `FAILED`, etc.) |
| `taskId` | Task identifier used for polling and follow-up modes |
| `audioId` | Audio identifier used for media/persona follow-up modes |
| `audio_url` | Generated audio track URL |
| `video_url` | Generated video URL (when available) |
| `image_url` | Generated cover/image URL (when available) |
| `title` | Song title |
| `style` | Final style tags used in generation |
| `persona_id` | Generated persona identifier |
| `persona_name` | Persona name used/generated |
| `persona_description` | Persona description/profile summary |
| `persona_status` | Persona workflow status |
| `persona_error` | Persona error details (if any) |
| `album_project_id` | ID of the album project (use this to resume/continue) |
| `album_name` | Title of the album |
| `album_track_number` | Position of the track in the album |
| `album_progress` | Progress string (e.g., "5/10 tracks generated") |
| `album_status` | Status of the album project |
| `album_sub_mode` | Current phase of the album workflow |
Why use this Actor on Apify instead of a raw script?
- Reliable scheduling and retries.
- Built-in run logs and observability.
- API-first orchestration for apps and no-code tools.
- Easy dataset export and downstream integrations.
- One hosted workflow instead of multiple custom services.
Other Actors by this creator
If you use multiple automation workflows, consider building a connected stack of Actors for:
- content generation
- media processing
- publishing automation
FAQ, Troubleshooting, and Support
Why did I not get video or image in the same run?
Use `generate-media` with an existing successful music `taskId`. Media generation is a follow-up workflow.
Why did I get PERSONA_NOT_FOUND?
The provided `generationPersonaName` does not exist in this Actor's persona profile DB. Create the persona first in `generate-persona` mode, then reuse it (name matching is case-insensitive).
Why did I get PERSONA_NAME_REQUIRED?
You enabled `usePersona` but did not provide `generationPersonaName`.
What does submitPersonaProfileToLLM do?
When enabled, a short saved persona description is sent to the lyric LLM for stylistic conditioning. When disabled, the persona is still included in the Suno generation payload (for voice/style continuity), but without profile-text conditioning in the LLM prompts.
Why are my lyrics too short or too generic?
- Enable `generateLyricsLLM`.
- The built-in Humanizer automatically filters out generic AI clichés (e.g., "neon lights", "journey", "tapestry").
- Choose a specific `songMood` and `songTheme`.
- In Meme Mode, enable `useNewsSearch` to pull in real-time viral context.
Does it support my language?
Yes! Select multiple languages from the `songLanguages` multi-select (including native support for Xhosa and Zulu). The LLM will interweave them naturally for a global sound.
What if a selected genre has no vault tracks?
The Actor applies fallback behavior to keep generation working. Genre-aware lyric prompting still follows your selected genre instructions.
Is this Actor beginner-friendly?
Yes. You can run it point-and-click from the Input UI; advanced users can automate with the API, webhooks, and async orchestration.
Is generated content legal to use?
You are responsible for complying with applicable copyright, platform, and local laws for prompts, references, and outputs. Avoid infringing source material and consult legal counsel for commercial edge cases.
Where can I get help?
- Check run logs and dataset output first.
- Review Input tab field descriptions.
- Open an issue or contact the maintainer through the Actor page for support and feature requests.