
Flow AI Agents
Flow AI is a platform for building, deploying, and monetizing AI agents tailored to the Web3 ecosystem and its communities. It enables users to gather insights from on-chain and off-chain data and run complex transactions.
✨ Features
- AI Agent Marketplace: Build and trade AI agents that can interact with various data sources and perform tasks.
- Custom Dashboards: Create dashboards with rich insights to streamline team alignment and processes.
- Data Integration: Integrate with a wide range of on-chain and off-chain Web3 datasets and tools.
- Contract Analysis Reports: Discover information about contract funders, their balances, and more.
- Community Engagement: Enhance onboarding and education for communities through automated reporting and shared insights. 
💰 Pricing
To charge users, define events in JSON format and save them on the Apify platform. Here is an example schema:
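A minimal sketch of such a schema, assuming Apify's pay-per-event format with eventTitle, eventDescription, and eventPriceUsd fields (the event names and prices below are placeholders; the complete schema used by this Actor lives in .actor/pay_per_event.json):

    {
        "tool-call": {
            "eventTitle": "MCP tool call",
            "eventDescription": "Charged each time a client calls a tool through the proxy.",
            "eventPriceUsd": 0.05
        },
        "prompt-get": {
            "eventTitle": "MCP prompt retrieval",
            "eventDescription": "Charged when a client retrieves a prompt.",
            "eventPriceUsd": 0.01
        }
    }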
To set up the PPE model:
- Go to your Actor's Publication settings.
- Set the Pricing model to Pay per event.
- Add the pricing schema (see .actor/pay_per_event.json for a complete example).
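Once an event is defined, the Actor charges for it at run time. A minimal sketch in Python, assuming the Apify SDK's Actor.charge helper and the illustrative "tool-call" event from the schema above:

    from apify import Actor

    async def main() -> None:
        async with Actor:
            # ... handle an incoming MCP request here ...
            # Charge the user for one occurrence of the "tool-call" event
            # defined in the pay-per-event pricing schema.
            await Actor.charge(event_name='tool-call')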
🔧 How It Works
This template implements a proxy server that can connect to either a stdio-based or SSE-based MCP server and expose it via SSE transport. Here's how it works:
Server types
- Stdio Server (StdioServerParameters):
  - Spawns a local process that implements the MCP protocol over stdio.
  - Configure using the command parameter to specify the executable and the args parameter for additional arguments.
  - Optionally, use the env parameter to pass environment variables to the process.

  Example:

    MCP_SERVER_PARAMS = StdioServerParameters(
        command='uv',
        args=['run', 'arxiv-mcp-server'],
        env={'YOUR_ENV_VAR': os.getenv('YOUR_ENV_VAR')},  # Optional environment variables
    )
- SSE Server (SseServerParameters):
  - Connects to a remote MCP server via SSE transport.
  - Configure using the url parameter to specify the server's endpoint.
  - Optionally, use the headers parameter to include custom headers (e.g., for authentication) and the auth parameter for additional authentication mechanisms.

  Example:

    MCP_SERVER_PARAMS = SseServerParameters(
        url='https://mcp.apify.com/sse',
        headers={'Authorization': os.getenv('YOUR-AUTH-TOKEN')},  # Replace with your authentication token
    )
- Tips:
- Ensure the remote server supports SSE transport and is accessible from the Actor's environment.
- Use environment variables to securely store sensitive information like tokens or API keys.
Environment variables:
Environment variables can be securely stored and managed at the Actor level on the Apify platform. These variables are automatically injected into the Actor's runtime environment, allowing you to:
- Keep sensitive information like API keys secure.
- Simplify configuration by avoiding hardcoded values in your code.
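For example, a secret stored as an Actor environment variable can be read at run time before being passed to the server configuration shown earlier; the variable name MY_API_TOKEN below is only a placeholder:

    import os

    # Secrets configured in the Actor's environment variables settings are
    # injected into the runtime and can be read like any other variable.
    api_token = os.getenv('MY_API_TOKEN')
    if api_token is None:
        raise ValueError('Missing the MY_API_TOKEN environment variable.')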
Proxy implementation
The proxy server (ProxyServer class) handles:
- Creating a Starlette web server with SSE endpoints (/sse and /messages/)
- Managing connections to the underlying MCP server
- Forwarding requests and responses between clients and the MCP server
- Handling charging through the actor_charge_function
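Any MCP client that supports SSE transport can talk to the proxy. A minimal sketch using the MCP Python SDK, assuming the Actor is already running and reachable at a placeholder URL:

    from mcp import ClientSession
    from mcp.client.sse import sse_client

    async def list_proxied_tools() -> None:
        # Connect to the proxy's /sse endpoint exposed by the running Actor.
        async with sse_client('https://<your-actor-url>/sse') as (read_stream, write_stream):
            async with ClientSession(read_stream, write_stream) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])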
Key components:
- ProxyServer: Main class that manages the proxy functionality
- create_proxy_server: Creates an MCP server instance that proxies requests
- charge_mcp_operation: Handles charging for different MCP operations
MCP operations
The proxy supports all standard MCP operations:
- list_tools(): List available tools
- call_tool(): Execute a tool with arguments
- list_prompts(): List available prompts
- get_prompt(): Get a specific prompt
- list_resources(): List available resources
- read_resource(): Read a specific resource
Each operation can be configured for charging in the PPE model.
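A charging helper in the spirit of charge_mcp_operation can map each operation to one of the events in the pricing schema. The sketch below is illustrative only: the mapping and event names are assumptions, and it relies on the Apify SDK's Actor.charge helper rather than this template's actual implementation:

    from apify import Actor

    # Illustrative mapping from MCP operations to pay-per-event names;
    # align these with the events defined in your pricing schema.
    OPERATION_EVENTS = {
        'call_tool': 'tool-call',
        'get_prompt': 'prompt-get',
    }

    async def charge_mcp_operation(operation: str) -> None:
        # Charge only for operations that have a configured event.
        event_name = OPERATION_EVENTS.get(operation)
        if event_name:
            await Actor.charge(event_name=event_name)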
📚 Resources
- What is Anthropic's Model Context Protocol?
- How to use MCP with Apify Actors
- Apify MCP server
- Apify MCP server documentation
- Apify MCP client
- Model Context Protocol documentation
- TypeScript tutorials in Academy
- Apify SDK documentation
Getting started
For complete information, see this article. In short, you will:
- Build the Actor
- Run the Actor
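If you prefer the terminal, the same steps can be done with the Apify CLI (after authenticating with apify login):

    $ apify push            # upload the source code and build the Actor on the platform
    $ apify call <ActorId>  # start a run of the built Actor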
Pull the Actor for local development
If you would like to develop locally, you can pull the existing Actor from Apify console using Apify CLI:
- Install apify-cli:

  Using Homebrew:

    $ brew install apify-cli

  Using NPM:

    $ npm -g install apify-cli

- Pull the Actor by its unique <ActorId>, which is one of the following:
  - unique name of the Actor to pull (e.g. "apify/hello-world")
  - or ID of the Actor to pull (e.g. "E2jjCZBezvAZnX8Rb")

  You can find both by clicking on the Actor title at the top of the page, which will open a modal containing both the Actor unique name and the Actor ID.

  This command will copy the Actor into the current directory on your local machine:

    $ apify pull <ActorId>
Documentation reference
To learn more about Apify and Actors, take a look at the following resources: