# AI Agent Interaction Analyzer (`lugubrious_enclosure/ai-agent-interaction-analyzer`) Actor

Evaluate AI agent conversations for quality, bias, and optimization. Uses DeepEval metrics for rigorous LLM-powered analysis or free heuristic scoring.

- **URL**: https://apify.com/lugubrious_enclosure/ai-agent-interaction-analyzer.md
- **Developed by:** [Rams](https://apify.com/lugubrious_enclosure) (community)
- **Categories:** AI, Automation
- **Stats:** 2 total users, 1 monthly user, 42.9% of runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per usage

This Actor is paid per platform usage: the Actor itself is free, and you pay only for the Apify platform resources it consumes, which get cheaper on higher subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-usage

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is always written with a capital "A".

## How to integrate an Actor?

To integrate an Actor into your own project, pick the official client that matches your stack; all of the options below are well-documented and production-ready.
The recommended approaches are as follows.

In JavaScript/TypeScript projects, use the official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use the official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use the [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## AI Agent Interaction Analyzer

Evaluate your AI agent conversations for quality, bias, hallucination, and toxicity. Get structured scores and actionable insights to improve your LLM-based applications.

### What It Does

Feed in AI conversations (prompts + responses) and get back detailed evaluation scores across multiple dimensions. Ideal for AI developers, researchers, and teams building LLM-powered products who need to monitor and improve their AI outputs.

### Use Cases

- **Quality monitoring** — Score AI responses for relevance, coherence, helpfulness, and completeness
- **Bias detection** — Identify confirmation bias, gender bias, racial bias, and other fairness issues
- **Hallucination checking** — Detect when AI fabricates facts not grounded in provided context
- **Toxicity screening** — Flag harmful or inappropriate language in AI outputs
- **Model comparison** — Compare response quality across different models or prompt versions
- **Regression testing** — Track quality over time as you update prompts or switch models
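
For the regression-testing use case, a simple approach is to diff `overall` scores between a baseline run and a current run. The sketch below assumes you have both result sets as lists of dicts with top-level `conversation_id` and `overall` fields (matching the DeepEval-mode output format shown later); the `find_regressions` helper and the 0.05 threshold are illustrative choices, not part of the Actor.

```python
# Flag conversations whose overall score dropped between two evaluation runs.
# Assumes each result dict has "conversation_id" and a top-level "overall",
# as in DeepEval-mode output; threshold is an arbitrary illustrative value.

def find_regressions(baseline, current, threshold=0.05):
    """Return conversation IDs whose score dropped by more than `threshold`."""
    base = {r["conversation_id"]: r["overall"] for r in baseline}
    regressions = []
    for r in current:
        before = base.get(r["conversation_id"])
        if before is not None and before - r["overall"] > threshold:
            regressions.append(r["conversation_id"])
    return regressions

baseline = [{"conversation_id": "conv_001", "overall": 0.81}]
current = [{"conversation_id": "conv_001", "overall": 0.62}]
print(find_regressions(baseline, current))  # ['conv_001']
```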

### Evaluation Modes

| Mode | Cost | What You Get |
|------|------|-------------|
| **heuristic** | Free (no API key needed) | Fast scoring using text analysis — relevance, coherence, helpfulness, completeness, keyword-based bias detection |
| **deepeval** | Uses your OpenAI API key | Rigorous LLM-as-judge metrics — answer relevancy, faithfulness, coherence, helpfulness, hallucination, bias, toxicity |
| **full** | Uses your OpenAI API key | Both heuristic and DeepEval results combined for a complete picture |

### Input

Provide your conversations as a JSON array. Each conversation needs an `id` and a `messages` array:

```json
{
  "conversations": [
    {
      "id": "conv_001",
      "messages": [
        {"role": "user", "content": "How do I implement caching in Redis?"},
        {"role": "assistant", "content": "Here's how to implement caching with Redis..."}
      ],
      "context": "Optional: ground truth or source documents for faithfulness/hallucination checks"
    }
  ],
  "mode": "heuristic",
  "openaiApiKey": "sk-... (required for deepeval/full mode only)",
  "modelName": "gpt-4o"
}
````

#### Input Fields

| Field | Required | Description |
|-------|----------|-------------|
| `conversations` | Yes (or use URL) | Array of conversation objects to evaluate |
| `conversationUrl` | Alternative | URL to fetch conversation JSON from |
| `mode` | No (default: heuristic) | Evaluation mode: `heuristic`, `deepeval`, or `full` |
| `openaiApiKey` | For deepeval/full | Your OpenAI API key |
| `modelName` | No (default: gpt-4o) | Which OpenAI model to use for evaluation |

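Before submitting a run, it can help to sanity-check your input locally against the documented fields. This is a local convenience sketch mirroring the table above, not part of the Actor; the `validate_input` helper is hypothetical.

```python
# Check an input payload against the documented fields before calling the Actor.
VALID_MODES = {"heuristic", "deepeval", "full"}

def validate_input(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the input looks OK."""
    problems = []
    # Either inline conversations or a URL must be provided.
    if not payload.get("conversations") and not payload.get("conversationUrl"):
        problems.append("provide either 'conversations' or 'conversationUrl'")
    mode = payload.get("mode", "heuristic")
    if mode not in VALID_MODES:
        problems.append(f"unknown mode: {mode!r}")
    # LLM-backed modes need an OpenAI API key.
    if mode in ("deepeval", "full") and not payload.get("openaiApiKey"):
        problems.append("'openaiApiKey' is required for deepeval/full mode")
    for conv in payload.get("conversations") or []:
        if "id" not in conv or "messages" not in conv:
            problems.append("each conversation needs 'id' and 'messages'")
    return problems

print(validate_input({"mode": "deepeval"}))
```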
### Output

Each conversation gets a structured evaluation result pushed to the dataset:

#### Heuristic Mode Output

```json
{
  "conversation_id": "conv_001",
  "quality": {
    "overall": 0.812,
    "relevance": 1.0,
    "coherence": 0.85,
    "helpfulness": 0.5,
    "completeness": 0.9
  },
  "bias": {
    "toxicity": 0.0,
    "bias_detected": false,
    "categories": []
  }
}
```
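
The `overall` value in this example is consistent with an unweighted mean of the four component scores ((1.0 + 0.85 + 0.5 + 0.9) / 4 = 0.8125 ≈ 0.812). The Actor's exact weighting is not documented, so treat the following as an approximation for local aggregation only:

```python
# Approximate the heuristic "overall" as the unweighted mean of the four
# component scores. This matches the example output above, but the Actor's
# actual weighting is undocumented — an assumption, not a guarantee.

def approx_overall(quality: dict) -> float:
    components = ["relevance", "coherence", "helpfulness", "completeness"]
    return round(sum(quality[c] for c in components) / len(components), 3)

quality = {"relevance": 1.0, "coherence": 0.85, "helpfulness": 0.5, "completeness": 0.9}
print(approx_overall(quality))  # 0.812
```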

#### DeepEval Mode Output

```json
{
  "conversation_id": "conv_001",
  "relevancy": {"score": 1.0, "reason": "...", "passed": true},
  "faithfulness": {"score": 0.8, "reason": "...", "passed": true},
  "coherence": {"score": 0.9, "reason": "...", "passed": true},
  "helpfulness": {"score": 0.85, "reason": "...", "passed": true},
  "hallucination": {"score": 0.0, "reason": "...", "passed": true},
  "bias": {"score": 0.0, "reason": "...", "passed": true},
  "toxicity": {"score": 0.0, "reason": "...", "passed": true},
  "overall": 0.636
}
```
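
When triaging DeepEval results, the per-metric `passed` flags make it easy to pull out exactly which checks a conversation failed. A minimal sketch, assuming result dicts shaped like the example above (the `failed_metrics` helper is illustrative):

```python
# Collect which metrics failed for a conversation in DeepEval-mode output,
# using the per-metric "passed" flags shown in the example above.
METRICS = ["relevancy", "faithfulness", "coherence", "helpfulness",
           "hallucination", "bias", "toxicity"]

def failed_metrics(result: dict) -> list[str]:
    """Return the names of metrics present in the result that did not pass."""
    return [m for m in METRICS if m in result and not result[m]["passed"]]

result = {
    "conversation_id": "conv_001",
    "relevancy": {"score": 1.0, "passed": True},
    "faithfulness": {"score": 0.4, "passed": False},
    "toxicity": {"score": 0.0, "passed": True},
}
print(failed_metrics(result))  # ['faithfulness']
```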

### Metrics Explained

#### Heuristic Metrics (Free)

- **Relevance** — Does the response use terms from the user's question?
- **Coherence** — Is the response well-structured with clear formatting?
- **Helpfulness** — Does it contain actionable content (examples, code, steps)?
- **Completeness** — Is the response proportionally thorough relative to the question?
- **Bias categories** — Detects confirmation, gender, racial, and age bias patterns

#### DeepEval Metrics (LLM-Powered)

- **Answer Relevancy** — Does the response actually answer what was asked?
- **Faithfulness** — Is the response grounded in the provided context? (requires `context` field)
- **Coherence** — Is it logically structured and easy to follow?
- **Helpfulness** — Does it provide actionable, useful information?
- **Hallucination** — Does it fabricate facts not in the context? (requires `context` field)
- **Bias** — Does it contain biased opinions or unfair statements?
- **Toxicity** — Does it contain toxic or harmful language?

### Tips

- Start with `heuristic` mode to quickly screen large batches at zero cost
- Use `deepeval` mode for detailed analysis of important conversations
- Add a `context` field to your conversations to enable faithfulness and hallucination checks
- Use `gpt-4o-mini` as the model for cheaper deepeval runs with slightly lower accuracy
- Export results as CSV from the Dataset tab for spreadsheet analysis
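
If you prefer to flatten results yourself rather than using the Dataset tab's CSV export, nested heuristic scores can be spread into flat columns. A sketch assuming heuristic-mode result dicts as shown above (the `to_csv` helper and column naming are illustrative):

```python
# Flatten nested heuristic-mode results into CSV columns like
# "quality_overall" and "bias_toxicity" for spreadsheet analysis.
import csv
import io

def to_csv(results: list[dict]) -> str:
    rows = []
    for r in results:
        row = {"conversation_id": r["conversation_id"]}
        row.update({f"quality_{k}": v for k, v in r.get("quality", {}).items()})
        # Skip the list-valued "categories" field; CSV cells hold scalars.
        row.update({f"bias_{k}": v for k, v in r.get("bias", {}).items()
                    if k != "categories"})
        rows.append(row)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

results = [{"conversation_id": "conv_001",
            "quality": {"overall": 0.812, "relevance": 1.0},
            "bias": {"toxicity": 0.0, "bias_detected": False}}]
print(to_csv(results))
```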

### Pricing

- **Heuristic mode**: Only Apify platform compute costs (minimal)
- **DeepEval/Full mode**: Apify compute + your OpenAI API usage (~$0.01-0.10 per conversation depending on model)

# Actor input schema

## `conversations` (type: `array`):

Array of conversation objects. Each needs 'id' and 'messages' (array of {role, content}). Optional 'context' for faithfulness checks.

## `conversationUrl` (type: `string`):

URL to fetch conversation JSON from (alternative to inline data).

## `mode` (type: `string`):

heuristic = free/fast, deepeval = LLM-powered metrics, full = both combined

## `modelName` (type: `string`):

Model for DeepEval metrics (e.g. gpt-4o, gpt-4o-mini)

## `openaiApiKey` (type: `string`):

Required for deepeval/full mode.

## Actor input object example

```json
{
  "mode": "heuristic",
  "modelName": "gpt-4o"
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input (a minimal heuristic-mode evaluation;
// 'conversations' or 'conversationUrl' is required)
const input = {
    conversations: [
        {
            id: "conv_001",
            messages: [
                { role: "user", content: "How do I implement caching in Redis?" },
                { role: "assistant", content: "Here's how to implement caching with Redis..." },
            ],
        },
    ],
    mode: "heuristic",
};

// Run the Actor and wait for it to finish
const run = await client.actor("lugubrious_enclosure/ai-agent-interaction-analyzer").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input (a minimal heuristic-mode evaluation;
# 'conversations' or 'conversationUrl' is required)
run_input = {
    "conversations": [
        {
            "id": "conv_001",
            "messages": [
                {"role": "user", "content": "How do I implement caching in Redis?"},
                {"role": "assistant", "content": "Here's how to implement caching with Redis..."},
            ],
        },
    ],
    "mode": "heuristic",
}

# Run the Actor and wait for it to finish
run = client.actor("lugubrious_enclosure/ai-agent-interaction-analyzer").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{"conversations": [{"id": "conv_001", "messages": [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello! How can I help?"}]}], "mode": "heuristic"}' |
apify call lugubrious_enclosure/ai-agent-interaction-analyzer --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=lugubrious_enclosure/ai-agent-interaction-analyzer",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "AI Agent Interaction Analyzer",
        "description": "Evaluate AI agent conversations for quality, bias, and optimization. Uses DeepEval metrics for rigorous LLM-powered analysis or free heuristic scoring.",
        "version": "0.1",
        "x-build-id": "eywom5coxmk1JMdxU"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/lugubrious_enclosure~ai-agent-interaction-analyzer/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-lugubrious_enclosure-ai-agent-interaction-analyzer",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/lugubrious_enclosure~ai-agent-interaction-analyzer/runs": {
            "post": {
                "operationId": "runs-sync-lugubrious_enclosure-ai-agent-interaction-analyzer",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/lugubrious_enclosure~ai-agent-interaction-analyzer/run-sync": {
            "post": {
                "operationId": "run-sync-lugubrious_enclosure-ai-agent-interaction-analyzer",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "conversations": {
                        "title": "Conversations",
                        "type": "array",
                        "description": "Array of conversation objects. Each needs 'id' and 'messages' (array of {role, content}). Optional 'context' for faithfulness checks."
                    },
                    "conversationUrl": {
                        "title": "Conversation Data URL",
                        "type": "string",
                        "description": "URL to fetch conversation JSON from (alternative to inline data)."
                    },
                    "mode": {
                        "title": "Evaluation Mode",
                        "enum": [
                            "heuristic",
                            "deepeval",
                            "full"
                        ],
                        "type": "string",
                        "description": "heuristic = free/fast, deepeval = LLM-powered metrics, full = both combined",
                        "default": "heuristic"
                    },
                    "modelName": {
                        "title": "LLM Model (for deepeval/full mode)",
                        "type": "string",
                        "description": "Model for DeepEval metrics (e.g. gpt-4o, gpt-4o-mini)",
                        "default": "gpt-4o"
                    },
                    "openaiApiKey": {
                        "title": "OpenAI API Key",
                        "type": "string",
                        "description": "Required for deepeval/full mode."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
