
AI Music Studio & Generator

Under maintenance

Pricing

from $500.00 / 1,000 🎵 generated tracks


Transform your ideas into professional-grade, radio-ready tracks in seconds. This powerhouse Apify Actor connects to cutting-edge AI music models to give you a complete, automated music production studio right at your fingertips. This tool is designed for speed, creativity, and seamless automation.



Developer

Visita Intelligence


Maintained by Community

Actor stats

  • Bookmarked: 1
  • Total users: 3
  • Monthly active users: 1
  • Last modified: 3 days ago


Generate AI songs, remixes, instrumentals, lyric videos, cover art, and personas from one Apify Actor.

This Actor is designed for creators, agencies, and product teams that need a reliable, automation-friendly music workflow. You can run it from the Apify Console UI or API, schedule jobs, monitor runs, and connect outputs to your own systems using webhooks.

What does this AI Music Generator Actor do?

This Actor turns a prompt or reference track into production-ready music assets. It supports 10 studio modes, an AI Co-Writer, a Premium Audio Vault, and post-production workflows for media generation, persona generation, and free generation-details lookup.

Fastest way to try it:

  1. Set studioMode to original-creation.
  2. Enable generateLyricsLLM.
  3. Choose libraryGenre, songMood, and run.
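If you drive the Actor programmatically, the three quick-start steps map onto a small run input. A minimal sketch (the helper is illustrative, not part of the Actor):

```python
def build_quickstart_input(genre, mood):
    """Minimal run input covering the three quick-start steps."""
    return {
        "studioMode": "original-creation",  # step 1
        "generateLyricsLLM": True,          # step 2
        "libraryGenre": genre,              # step 3
        "songMood": mood,
    }

# With the Apify Python client this would be passed as run_input, e.g.:
#   client.actor("<ACTOR_ID>").call(
#       run_input=build_quickstart_input("amapiano", "hype-euphoric"))
```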

What can this Actor do?

  • 🎚️ End-to-end music workflow: generate audio first, then visuals/persona with taskId.
  • 🤖 AI lyric writing with genre-aware structure and mood control.
  • 📚 Remix support from custom audio links or Premium Audio Vault tracks.
  • 🎬 Generate media assets from existing songs.
  • 🧑‍🎤 Generate and reuse AI personas with persistent persona profile storage.
  • 🧠 Optional persona-profile conditioning for LLM lyrics via submitPersonaProfileToLLM.
  • 💿 Album Mode: Design and produce a full album (up to 20 tracks) in a multi-run, stateful workflow.
  • 🤡 Meme Music & Viral Hits: Create internet-breaking songs with subgenres like Phonk, Sillycore, and Meme Rap.
  • 📰 Brainrot Search: Toggle real-time search (Brave API) to incorporate the latest viral memes and trends into your lyrics.
  • 🌍 Global Multilingual Support: Dynamic support for multiple languages, including Xhosa, Zulu, and seamless multilingual blending.
  • 🧠 The Humanizer: Built-in anti-AI-cliché logic that bans generic terms like "neon lights" or "echoes" for more authentic, industry-standard lyrics.
  • 🧩 Split stems and extend tracks for post-production.
  • 🔎 Free generation-details retrieval by taskId or audioId with get-details mode.
  • ☁️ Built-in Apify advantages: scheduling, API access, webhooks, logs, dataset export, and run monitoring.

Studio Modes Explained

1. original-creation

Create a new song from scratch from your prompt or AI-generated lyrics.

2. cover-remix

Generate a new vocal/lyrical interpretation over existing audio from:

  • custom-link
  • premium-library

3. instrumental-magic

Add or reshape instrumental backing around source content.

4. stem-splitter

Split tracks into isolated elements (for example, vocals and instrumentals).

5. extend-track

Append new sections to an existing track for longer edits/versions.

6. generate-media

Generate visual assets from an existing music taskId.

7. generate-persona

Generate persona output from an existing music taskId.

8. get-details

Fetch generation details by taskId or audioId without consuming generation credits.

9. album-mode

Produce a full album (up to 20 tracks) with AI-assisted tracklist design, batched lyric generation, and batched audio generation. Uses a 3-phase workflow (Plan → Lyrics → Generate) with persistence across multiple Actor runs.

10. meme-music-generator

Create highly viral, absurd, or parody internet songs. Includes specific "Brainrot Search" capabilities to pull in real-time trends from the web.
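Of the ten modes, get-details is the cheapest to experiment with, since it consumes no generation credits. A sketch of building its input (the exact input field names are an assumption based on the output fields; confirm them in the Input tab):

```python
def build_details_lookup(task_id=None, audio_id=None):
    """Input for the free get-details mode; provide exactly one ID.

    The input field names "taskId" / "audioId" are assumed to mirror the
    output fields -- check the Input tab in Apify Console to confirm.
    """
    if (task_id is None) == (audio_id is None):
        raise ValueError("Provide exactly one of task_id or audio_id")
    payload = {"studioMode": "get-details"}
    if task_id is not None:
        payload["taskId"] = task_id
    else:
        payload["audioId"] = audio_id
    return payload
```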

How to generate AI music with this Apify Actor

  1. Choose a studioMode.
  2. If needed, select an audioSource and provide customAudioLink or vault settings.
  3. Enable generateLyricsLLM for automatic songwriting, or provide a manual prompt.
  4. Configure libraryGenre, style, and optional mood/theme settings.
  5. Run the Actor and save the returned taskId.
  6. For visuals or a persona, run again in generate-media or generate-persona mode using that taskId.
  7. To reuse a persona in future songs, enable usePersona, set generationPersonaName, and optionally enable submitPersonaProfileToLLM.
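The audio-first pattern in the steps above can be sketched as a pair of follow-up run inputs derived from the saved taskId. The helper and the generate-media ID field name are illustrative assumptions, not the Actor's actual schema:

```python
def plan_follow_up_runs(task_id, persona_name, persona_description):
    """Build follow-up inputs (step 6) from a successful music run's taskId.

    Persona field names follow the persona workflow described in this README;
    the taskId field for generate-media is an assumption.
    """
    return [
        {"studioMode": "generate-media", "taskId": task_id},
        {
            "studioMode": "generate-persona",
            "personaTaskId": task_id,
            "personaName": persona_name,
            "personaDescription": persona_description,
        },
    ]

# Each dict would be passed as run_input in a separate Actor run.
```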

How much does AI music generation cost on Apify?

Pricing depends on your selected pricing model and consumed resources per run.

  • Cost can vary by mode, run duration, selected model, and polling behavior.
  • Audio-first + follow-up media/persona runs usually give clearer cost control than one complex run.
  • You can monitor usage in run logs and billing pages, then optimize by reducing retries and tightening input scope.
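At the listed rate of $500.00 per 1,000 generated tracks, a rough per-track estimate is straightforward (this covers only the listed event price; platform resources consumed per run are billed separately, as noted above):

```python
PRICE_PER_1000_TRACKS = 500.00  # listed price in USD

def estimate_track_cost(n_tracks):
    """Back-of-envelope event cost for n generated tracks, in USD."""
    return n_tracks * PRICE_PER_1000_TRACKS / 1000

# estimate_track_cost(1)  -> 0.5   (USD per track)
# estimate_track_cost(20) -> 10.0  (a full 20-track album)
```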

Tip: for large pipelines, use callBackUrl + asyncPassthrough to avoid long blocking runs.
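A sketch of the fire-and-forget variant the tip describes: the run returns immediately and results arrive at your webhook instead of the Actor polling to completion. The helper is illustrative; callBackUrl and asyncPassthrough are the input fields named above:

```python
def build_async_input(base_input, callback_url):
    """Convert a run input into the non-blocking callback variant."""
    return {
        **base_input,
        "asyncPassthrough": True,
        "callBackUrl": callback_url,  # your HTTPS endpoint receiving results
    }
```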

How do I create and reuse an AI persona?

  1. Generate music (original-creation or cover-remix) and keep the returned taskId.
  2. Pro Tip for High Quality: If the song has heavy instruments, run the Actor in stem-splitter mode first. Use the resulting VOCAL stem ID to generate your persona. This ensures the voice is captured cleanly without instrumental bleed.
  3. Run generate-persona with personaTaskId, personaName, and personaDescription.
  4. The Actor stores the persona profile in its internal DB (Supabase) for later reuse.
  5. For future generation runs, enable usePersona and set generationPersonaName.
  6. Enable submitPersonaProfileToLLM only when you want persona description guidance included in lyric generation.

Notes:

  • Persona lookup is case-insensitive.
  • If submitPersonaProfileToLLM is enabled and persona is not found, the run fails with PERSONA_NOT_FOUND.
  • If usePersona is enabled but no persona name is provided, the run fails with PERSONA_NAME_REQUIRED.
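The validation rules in the notes above can be expressed as a short check. This is an illustration of the documented behavior, not the Actor's actual source; the error codes match the documented failures:

```python
def validate_persona_input(run_input, known_personas):
    """Mirror the documented persona validation rules.

    known_personas: set of existing persona names, lowercased
    (lookup is case-insensitive per the notes above).
    """
    if run_input.get("usePersona"):
        name = run_input.get("generationPersonaName")
        if not name:
            raise ValueError("PERSONA_NAME_REQUIRED")
        if run_input.get("submitPersonaProfileToLLM") and name.lower() not in known_personas:
            raise ValueError("PERSONA_NOT_FOUND")
```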

Input Example

Use the Input tab in Apify Console for full field descriptions. Example payload:

```json
{
  "studioMode": "original-creation",
  "generateLyricsLLM": true,
  "llmModel": "openai/gpt-4o-mini",
  "libraryGenre": "amapiano",
  "songMood": "hype-euphoric",
  "songTheme": "party-hype",
  "prompt": "",
  "style": "female vocals, warm bass, club-ready",
  "model": "V4_5ALL",
  "pollingIntervalSeconds": 20,
  "asyncPassthrough": false
}
```

Output Example

You can export dataset items as JSON, CSV, XML, RSS, Excel, and more.

```json
{
  "status": "SUCCESS",
  "taskId": "1234567890abcdef",
  "audioId": "53487c0d-1643-47c7-ad76-4859038ad645",
  "title": "Midnight Code",
  "style": "amapiano, female vocals, club",
  "audio_url": "https://.../track.mp3",
  "video_url": "https://.../lyric-video.mp4",
  "image_url": "https://.../cover.jpg",
  "persona_id": "2774abf480d5b38fc761820dfc7c9c4d",
  "persona_name": "Know Peace",
  "persona_description": "Short persona summary used for continuity",
  "persona_status": "PENDING",
  "persona_error": null
}
```
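Downstream code typically just needs the asset URLs. A sketch (field names taken from the output example above; the helper itself is hypothetical) for collecting them from exported dataset items:

```python
def collect_assets(items):
    """Gather downloadable asset URLs from successful dataset items."""
    assets = {}
    for item in items:
        if item.get("status") != "SUCCESS":
            continue  # skip failed or pending runs
        for field in ("audio_url", "video_url", "image_url"):
            url = item.get(field)
            if url:
                assets.setdefault(field, []).append(url)
    return assets
```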

What data does this Actor return?

| Field | Description |
| --- | --- |
| status | Run outcome (SUCCESS, FAILED, etc.) |
| taskId | Task identifier used for polling and follow-up modes |
| audioId | Audio identifier used for media/persona follow-up modes |
| audio_url | Generated audio track URL |
| video_url | Generated video URL (when available) |
| image_url | Generated cover/image URL (when available) |
| title | Song title |
| style | Final style tags used in generation |
| persona_id | Generated persona identifier |
| persona_name | Persona name used/generated |
| persona_description | Persona description/profile summary |
| persona_status | Persona workflow status |
| persona_error | Persona error details (if any) |
| album_project_id | ID of the album project (use this to resume/continue) |
| album_name | Title of the album |
| album_track_number | Position of the track in the album |
| album_progress | Progress string (e.g., "5/10 tracks generated") |
| album_status | Status of the album project |
| album_sub_mode | Current phase of the album workflow |
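When orchestrating album-mode runs, the album_progress string can be parsed to decide whether another run is needed. A small sketch, assuming the "5/10 tracks generated" format shown above:

```python
import re

def parse_album_progress(progress):
    """Parse an album_progress string like "5/10 tracks generated"
    into a (done, total) tuple. The format is assumed from the
    example in the field table above."""
    m = re.match(r"(\d+)\s*/\s*(\d+)", progress)
    if not m:
        raise ValueError(f"Unrecognized progress string: {progress!r}")
    return int(m.group(1)), int(m.group(2))

# parse_album_progress("5/10 tracks generated") -> (5, 10)
```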

Why use this Actor on Apify instead of a raw script?

  • Reliable scheduling and retries.
  • Built-in run logs and observability.
  • API-first orchestration for apps and no-code tools.
  • Easy dataset export and downstream integrations.
  • One hosted workflow instead of multiple custom services.

Other Actors by this creator

If you use multiple automation workflows, consider building a connected stack of Actors for:

  • content generation
  • media processing
  • publishing automation


FAQ, Troubleshooting, and Support

Why did I not get video or image in the same run?

Use generate-media with an existing successful music taskId. Media generation is a follow-up workflow.

Why did I get PERSONA_NOT_FOUND?

The provided generationPersonaName does not exist in the Actor's persona profile database. First create it in generate-persona mode, then reuse it (name matching is case-insensitive).

Why did I get PERSONA_NAME_REQUIRED?

You enabled usePersona but did not provide generationPersonaName.

What does submitPersonaProfileToLLM do?

When enabled, a short saved persona description is sent to the lyric LLM for stylistic conditioning. When disabled, the persona is still included in the Suno generation payload (for voice/style continuity), but its profile text is not used to condition the LLM prompts.

Why are my lyrics too short or too generic?

  • Enable generateLyricsLLM.
  • Our built-in Humanizer automatically filters out generic AI clichés (e.g. "neon lights", "journey", "tapestry").
  • Choose a specific songMood and songTheme.
  • In Meme Mode, enable useNewsSearch to pull in real-time viral context.

Does it support my language?

Yes! Select multiple languages from the songLanguages multi-select (including native support for Xhosa and Zulu). The LLM will interweave them naturally for a global sound.

What if a selected genre has no vault tracks?

The Actor applies fallback behavior to keep generation working. Genre-aware lyric prompting still follows your selected genre instructions.

Is this Actor beginner-friendly?

Yes. You can run it point-and-click from the Input UI. Advanced users can automate with the API, webhooks, and async orchestration.

You are responsible for complying with applicable copyright, platform, and local laws for prompts, references, and outputs. Avoid infringing source material and consult legal counsel for commercial edge cases.

Where can I get help?

  • Check run logs and dataset output first.
  • Review Input tab field descriptions.
  • Open an issue or contact the maintainer through the Actor page for support and feature requests.