AI Visibility & Sentiment
Track how AI models see your brand, analyze sentiment, and monitor citations across ChatGPT, Gemini, Claude, Perplexity, and more.
AI Visibility Report
Get AI visibility scores across all tracked prompts. The visibility score measures how prominently your brand appears in AI model responses, from 0 (not visible) to 100 (prominently featured).
POST /v2/reports/visibility
Request Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| start_date | string | No | Start date (YYYY-MM-DD) |
| end_date | string | No | End date (YYYY-MM-DD) |
| ai_models | array | No | Filter by AI models (e.g., ["chatgpt", "gemini"]) |
| prompt_ids | array | No | Filter by specific prompt IDs |
| tags | array | No | Filter by prompt tags (e.g., ["brand_monitoring"]) |
| sub_tags | array | No | Filter by prompt sub-tags (e.g., ["competitor_comparison"]) |
| group_by | string | No | Group results: date, ai_model, prompt, or tag |
| date_interval | string | No | For date grouping: day, week, or month |
| limit | integer | No | Max results (1-1000, default: 100) |
| offset | integer | No | Pagination offset (default: 0) |
Example: Visibility by AI Model
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"group_by": "ai_model"
}' \
"https://opttab.com/api/v2/reports/visibility"
Response
{
"success": true,
"data": [
{
"ai_model": "chatgpt",
"avg_visibility_score": 74.2,
"avg_confidence_score": 0.85,
"response_count": 520
},
{
"ai_model": "gemini",
"avg_visibility_score": 71.1,
"avg_confidence_score": 0.82,
"response_count": 380
},
{
"ai_model": "claude",
"avg_visibility_score": 70.8,
"avg_confidence_score": 0.79,
"response_count": 350
}
],
"meta": {
"total_rows": 3,
"group_by": "ai_model",
"query": {
"start_date": "2026-01-01",
"end_date": "2026-01-31"
}
}
}
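A grouped payload like the one above reduces naturally to a model-to-score map for charting. A minimal Python sketch (the sample data simply mirrors the example response):

```python
# Turn the grouped visibility payload into a {model: score} map
# and pick the best-performing model. Data mirrors the example above.
report = {
    "success": True,
    "data": [
        {"ai_model": "chatgpt", "avg_visibility_score": 74.2, "response_count": 520},
        {"ai_model": "gemini", "avg_visibility_score": 71.1, "response_count": 380},
        {"ai_model": "claude", "avg_visibility_score": 70.8, "response_count": 350},
    ],
}

scores = {row["ai_model"]: row["avg_visibility_score"] for row in report["data"]}
top_model = max(scores, key=scores.get)
print(top_model, scores[top_model])  # chatgpt 74.2
```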
Example: Visibility Over Time
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"group_by": "date",
"date_interval": "week"
}' \
"https://opttab.com/api/v2/reports/visibility"
Example: Visibility Filtered by Tag
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"tags": ["brand_monitoring"],
"sub_tags": ["competitor_comparison"],
"group_by": "ai_model"
}' \
"https://opttab.com/api/v2/reports/visibility"
Example: Visibility Grouped by Tag
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"group_by": "tag"
}' \
"https://opttab.com/api/v2/reports/visibility"
Response (grouped by tag)
{
"success": true,
"data": [
{
"tag": "brand_monitoring",
"avg_visibility_score": 76.8,
"avg_confidence_score": 0.87,
"response_count": 420
},
{
"tag": "product_research",
"avg_visibility_score": 71.2,
"avg_confidence_score": 0.82,
"response_count": 310
},
{
"tag": "untagged",
"avg_visibility_score": 65.4,
"avg_confidence_score": 0.75,
"response_count": 520
}
],
"meta": {
"total_rows": 3,
"group_by": "tag"
}
}
Use case (ProductsUp): Pull visibility scores per AI model to show a breakdown chart on your customer's dashboard. Use group_by: "date" with date_interval: "week" to display a trend line over time.
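Because limit caps each call at 1000 rows, larger result sets are paged with offset. A small sketch of the offset arithmetic, using the total_rows field from the meta block (no network calls, just the math):

```python
# Compute the sequence of offsets needed to page through a result set,
# given total_rows from meta and the limit used in the request.
def page_offsets(total_rows: int, limit: int) -> list[int]:
    return list(range(0, total_rows, limit))

# e.g. 520 rows fetched 100 at a time -> 6 requests
print(page_offsets(520, 100))  # [0, 100, 200, 300, 400, 500]
```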
Sentiment Analysis Report
Analyze how positively or negatively AI models talk about your brand. The sentiment score ranges from 0 (very negative) to 100 (very positive), with 50 being neutral.
POST /v2/reports/sentiment
Request Body Parameters (all optional)
| Parameter | Type | Description |
|---|---|---|
| start_date | string | Start date (YYYY-MM-DD) |
| end_date | string | End date (YYYY-MM-DD) |
| ai_models | array | Filter by AI models |
| tags | array | Filter by prompt tags |
| sub_tags | array | Filter by prompt sub-tags |
| group_by | string | date or ai_model |
Example Request
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"group_by": "ai_model"
}' \
"https://opttab.com/api/v2/reports/sentiment"
Response (grouped by AI model)
{
"success": true,
"data": [
{
"ai_model": "chatgpt",
"sentiment_score": 72.5,
"mentions_count": 180,
"position": 65.4,
"response_count": 520
},
{
"ai_model": "gemini",
"sentiment_score": 68.1,
"mentions_count": 120,
"position": 59.2,
"response_count": 380
}
],
"meta": {
"total_rows": 2
}
}
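With grouped sentiment data you can rank models and measure the spread between the best and worst. A hedged sketch, reusing the rows from the example response above:

```python
# Rank models by sentiment_score and compute the best-to-worst spread,
# using rows shaped like the grouped response above.
rows = [
    {"ai_model": "chatgpt", "sentiment_score": 72.5, "mentions_count": 180},
    {"ai_model": "gemini", "sentiment_score": 68.1, "mentions_count": 120},
]

ranked = sorted(rows, key=lambda r: r["sentiment_score"], reverse=True)
spread = ranked[0]["sentiment_score"] - ranked[-1]["sentiment_score"]
print([r["ai_model"] for r in ranked], round(spread, 1))  # ['chatgpt', 'gemini'] 4.4
```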
Response (summary — no group_by)
{
"success": true,
"data": {
"sentiment_score": 68.3,
"raw_sentiment_score": 0.37,
"mentions_count": 340,
"position": 62.1,
"total_responses": 1250,
"company_name": "Acme Corp",
"company_website": "https://acme.com"
}
}
Competitors by AI Model
Get competitors with visibility scores grouped by AI model. See which competitors are mentioned or visible in each AI model's responses (e.g., ChatGPT vs Gemini vs Claude). This complements the visibility report by breaking down competitor presence per model.
POST /v2/reports/competitors-by-model
Request Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| start_date | string | No | Start date (YYYY-MM-DD) |
| end_date | string | No | End date (YYYY-MM-DD) |
| ai_models | array | No | Filter by AI models (e.g., ["chatgpt", "gemini"]) |
| prompt_ids | array | No | Filter by specific prompt IDs |
| tags | array | No | Filter by prompt tags (e.g., ["brand_monitoring"]) |
| sub_tags | array | No | Filter by prompt sub-tags (e.g., ["competitor_comparison"]) |
| limit | integer | No | Max AI model groups to return (1–500, default: 50) |
| offset | integer | No | Pagination offset (default: 0) |
Example Request
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"ai_models": ["chatgpt", "gemini"],
"tags": ["brand_monitoring"],
"sub_tags": ["competitor_comparison"],
"limit": 50,
"offset": 0
}' \
"https://opttab.com/api/v2/reports/competitors-by-model"
Response
{
"success": true,
"data": [
{
"ai_model": "ChatGPT",
"competitors": [
{
"name": "Competitor A",
"visibility_score": 75,
"mentions_count": 12,
"total_responses": 20,
"favicon_url": "https://example.com/favicon.ico"
},
{
"name": "Competitor B",
"visibility_score": 50,
"mentions_count": 8,
"total_responses": 20,
"favicon_url": null
}
]
},
{
"ai_model": "Gemini",
"competitors": [
{
"name": "Competitor A",
"visibility_score": 68,
"mentions_count": 10,
"total_responses": 18,
"favicon_url": "https://example.com/favicon.ico"
}
]
}
],
"meta": {
"total_rows": 2,
"limit": 50,
"offset": 0
}
}
Use case: Compare how different AI models rank your competitors. For example, ChatGPT may mention Competitor A more often than Gemini. Use this endpoint to power a "Competitors by AI Model" chart or table in your dashboard.
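To build such a table, the nested payload can be flattened into (model, competitor, score) rows. A short Python sketch over data shaped like the example response:

```python
# Flatten the nested competitors-by-model payload into (model, name, score)
# rows, ready for a comparison table. Data mirrors the example response.
data = [
    {"ai_model": "ChatGPT", "competitors": [
        {"name": "Competitor A", "visibility_score": 75},
        {"name": "Competitor B", "visibility_score": 50},
    ]},
    {"ai_model": "Gemini", "competitors": [
        {"name": "Competitor A", "visibility_score": 68},
    ]},
]

rows = [
    (group["ai_model"], c["name"], c["visibility_score"])
    for group in data
    for c in group["competitors"]
]
print(rows)
```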
AI Responses
Access raw AI model responses with full text, visibility scores, brand rankings, and products mentioned. Useful for detailed analysis of how each AI model responds to your tracked prompts.
POST /v2/reports/responses
Request Body Parameters (all optional)
| Parameter | Type | Description |
|---|---|---|
| start_date / end_date | string | Date range filter (YYYY-MM-DD) |
| ai_models | array | Filter by AI models |
| prompt_ids | array | Filter by prompt IDs |
| tags | array | Filter by prompt tags |
| sub_tags | array | Filter by prompt sub-tags |
| limit / offset | integer | Pagination (max 500 per request) |
Example Request
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"ai_models": ["chatgpt"],
"limit": 10
}' \
"https://opttab.com/api/v2/reports/responses"
Response
{
"success": true,
"data": [
{
"id": 12345,
"prompt_text": "What are the best project management tools?",
"ai_model": "chatgpt",
"response_text": "Here are some of the best project management tools...",
"visibility_score": 78.5,
"confidence_score": 0.89,
"position": 2,
"brand_rankings": ["Asana", "Trello", "Acme PM"],
"products": ["Acme Project Manager Pro"],
"generated_at": "2026-01-15T14:30:00Z"
}
],
"meta": {
"total_rows": 520,
"limit": 10,
"offset": 0
}
}
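Raw responses lend themselves to custom analysis, for example checking which responses rank your brand and averaging visibility over the page. A minimal sketch; "Acme PM" is just the illustrative brand name from the example:

```python
# Scan raw responses for whether a given brand appears in brand_rankings,
# and average visibility over the page. Data mirrors the example above.
responses = [
    {"id": 12345, "visibility_score": 78.5,
     "brand_rankings": ["Asana", "Trello", "Acme PM"]},
]

brand = "Acme PM"  # illustrative brand from the example response
mentioned = [r["id"] for r in responses if brand in r["brand_rankings"]]
avg_visibility = sum(r["visibility_score"] for r in responses) / len(responses)
print(mentioned, avg_visibility)  # [12345] 78.5
```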
Citations Report
Track which URLs AI models cite in their responses. Monitor how often your website, blog posts, or product pages are referenced as sources by ChatGPT, Gemini, Perplexity, and others.
POST /v2/reports/citations
Request Body Parameters (all optional)
| Parameter | Type | Description |
|---|---|---|
| start_date / end_date | string | Date range filter |
| ai_models | array | Filter by AI models |
| tags | array | Filter by prompt tags |
| sub_tags | array | Filter by prompt sub-tags |
| domain_filter | string | Filter citations containing this domain |
| limit / offset | integer | Pagination (max 1000) |
Example: Get My Domain's Citations
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"start_date": "2026-01-01",
"end_date": "2026-01-31",
"domain_filter": "acme.com",
"limit": 50
}' \
"https://opttab.com/api/v2/reports/citations"
Response
{
"success": true,
"data": [
{
"response_id": 12345,
"prompt_text": "What are the best project management tools?",
"ai_model": "chatgpt",
"source_url": "https://acme.com/blog/best-pm-tools",
"source_title": "The 10 Best Project Management Tools in 2026",
"source_position": 1,
"generated_at": "2026-01-15T14:30:00Z"
},
{
"response_id": 12346,
"prompt_text": "Compare PM tools for enterprise",
"ai_model": "perplexity",
"source_url": "https://acme.com/enterprise-solutions",
"source_title": "Enterprise Project Management - Acme",
"source_position": 3,
"generated_at": "2026-01-16T09:15:00Z"
}
],
"meta": {
"total_rows": 89,
"limit": 50,
"offset": 0
}
}
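Citation rows aggregate cleanly with a counter, for instance to find your most-cited page. A short sketch over rows shaped like the response above:

```python
from collections import Counter

# Count how often each URL is cited, using rows shaped like the
# citations response above.
citations = [
    {"source_url": "https://acme.com/blog/best-pm-tools", "ai_model": "chatgpt"},
    {"source_url": "https://acme.com/enterprise-solutions", "ai_model": "perplexity"},
    {"source_url": "https://acme.com/blog/best-pm-tools", "ai_model": "gemini"},
]

by_url = Counter(c["source_url"] for c in citations)
print(by_url.most_common(1))  # [('https://acme.com/blog/best-pm-tools', 2)]
```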
Prompt Management
Manage the prompts you're tracking. Prompts are queries that Opttab sends to AI models on your behalf to monitor how they respond. Each prompt can be categorized with a tag and sub_tag for organizing and filtering.
GET /v2/org/prompts
List all tracked prompts in your workspace. Each prompt includes its overall visibility_score (average across AI models) and a visibility_by_model breakdown (AI model key → score). Supports filtering by tag and sub_tag query parameters.
Query Parameters
| Parameter | Type | Description |
|---|---|---|
| per_page | integer | Results per page (max 200, default: 50) |
| tag | string | Filter by a single tag |
| tags[] | array | Filter by multiple tags |
| sub_tag | string | Filter by a single sub-tag |
| sub_tags[] | array | Filter by multiple sub-tags |
Example: List all prompts
curl -H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
"https://opttab.com/api/v2/org/prompts?per_page=50"
Example: Filter prompts by tag
curl -H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
"https://opttab.com/api/v2/org/prompts?tag=brand_monitoring"
Response
Each item in data includes visibility_score (overall average, or null if no responses yet) and visibility_by_model (object of AI model key → score; empty object if none).
{
"success": true,
"data": [
{
"id": 1,
"prompt_text": "What are the best CRM tools for small businesses?",
"ai_models": ["chatgpt", "gemini", "claude"],
"locations": [],
"tag": "brand_monitoring",
"sub_tag": "competitor_comparison",
"is_active": true,
"run_interval": "weekly",
"last_run_at": "2026-02-10T08:00:00Z",
"created_at": "2026-01-15T10:30:00Z",
"visibility_score": 82.33,
"visibility_by_model": {
"chatgpt": 85.5,
"gemini": 72.0,
"claude": 90.0
}
}
],
"meta": {
"current_page": 1,
"per_page": 50,
"total": 1,
"last_page": 1
}
}
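The visibility_by_model breakdown makes it easy to flag models where a prompt underperforms its own overall average. A minimal sketch, using the prompt from the example response:

```python
# Flag models whose per-prompt visibility falls below the prompt's overall
# average, using the visibility_by_model breakdown shown above.
prompt = {
    "id": 1,
    "visibility_score": 82.33,
    "visibility_by_model": {"chatgpt": 85.5, "gemini": 72.0, "claude": 90.0},
}

lagging = [
    model
    for model, score in prompt["visibility_by_model"].items()
    if score < prompt["visibility_score"]
]
print(lagging)  # ['gemini']
```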
POST /v2/org/prompts
Create a new tracked prompt with optional tag and sub_tag for categorization.
Request Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt_text | string | Yes | The prompt to track (max 2000 chars) |
| ai_models | array | No | AI models to track |
| tag | string | No | Category tag (max 255 chars) |
| sub_tag | string | No | Sub-category tag (max 255 chars) |
| run_interval | string | No | daily, weekly, or monthly |
| is_active | boolean | No | Whether the prompt is active (default: true) |
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"prompt_text": "What are the best CRM tools for small businesses?",
"ai_models": ["chatgpt", "gemini", "claude"],
"tag": "brand_monitoring",
"sub_tag": "competitor_comparison",
"run_interval": "weekly",
"is_active": true
}' \
"https://opttab.com/api/v2/org/prompts"
Response
{
"success": true,
"data": {
"id": 42,
"prompt_text": "What are the best CRM tools for small businesses?",
"tag": "brand_monitoring",
"sub_tag": "competitor_comparison",
"created_at": "2026-02-14T10:30:00Z"
}
}
Category Visibility Prompt Generation
Generate AI visibility prompts from categories, brands, and intents using the Category Visibility engine. The API cross-matches brands with their relevant categories (e.g., pairing "Levi's" with "Jeans" rather than every brand with every category), then generates natural-language questions per intent. You can then save those prompts and start receiving AI model responses in the background.
Typical flow: (1) POST /api/v2/prompts/generate to get suggested prompts → (2) POST /api/v2/prompts/store to save them and trigger AI response generation → (3) GET /api/v2/prompts/{id}/responses to poll for results (allow 30–120 seconds per model).
POST /v2/prompts/generate
Generate prompts from categories, brands, and intents. Does not save prompts; use the returned array with POST /api/v2/prompts/store to save and start AI response generation.
Request Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| categories | string | Yes | Comma-separated categories (e.g., Jeans, Sneakers, Jackets) |
| brands | string | Yes | Comma-separated brands (e.g., Levi's, Nike, Adidas) |
| intents | array | Yes | One or more: customer_focus, competition_focus, brand_focus, transaction_focus |
Intents: customer_focus — customer needs, reviews, recommendations; competition_focus — brand vs competitor; brand_focus — reputation, quality; transaction_focus — buying, pricing, where to buy.
Example
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"categories": "Jeans, Sneakers",
"brands": "Levi'\''s, Nike",
"intents": ["customer_focus", "competition_focus"]
}' \
"https://opttab.com/api/v2/prompts/generate"
Response
{
"success": true,
"data": {
"prompts": [
{
"text": "What are the most comfortable Levi's jeans for everyday wear?",
"tag": "Jeans",
"sub_tag": "Levi's",
"intent": "customer_focus",
"intent_label": "Customer Focus"
}
],
"total": 40,
"hint": "Use POST /api/v2/prompts/store with these prompts to save them and start generating AI responses."
},
"meta": {
"categories": "Jeans, Sneakers",
"brands": "Levi's, Nike",
"intents": ["customer_focus", "competition_focus"]
}
}
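Before storing, you may want to keep only a subset of the generated prompts, for example one intent. A hedged sketch; the second prompt text (mentioning Wrangler) is a made-up illustration, not API output:

```python
# Keep only customer_focus prompts from a /v2/prompts/generate payload
# before passing them to /v2/prompts/store. The second prompt text is
# an invented example for illustration.
generated = [
    {"text": "What are the most comfortable Levi's jeans for everyday wear?",
     "tag": "Jeans", "sub_tag": "Levi's", "intent": "customer_focus"},
    {"text": "How do Levi's jeans compare to Wrangler?",
     "tag": "Jeans", "sub_tag": "Levi's", "intent": "competition_focus"},
]

to_store = [p for p in generated if p["intent"] == "customer_focus"]
print(len(to_store))  # 1
```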
POST /v2/prompts/store
Save generated prompts and immediately dispatch AI response generation jobs. Each prompt is tracked in your workspace and AI models start generating responses in the background. Use the output from POST /api/v2/prompts/generate or provide your own prompt objects.
Request Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompts | array | Yes | Array of { text, tag?, sub_tag?, intent? } (max 200) |
| ai_models | array | Yes | Models to run: chatgpt, gemini, claude, grok, perplexity, deepseek, mistral, ai-overviews |
| run_interval | string | No | day, week, or month (default: week) |
| locations | array | No | Location codes (e.g., ["us"], default: ["us"]) |
Example
curl -X POST \
-H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{
"prompts": [
{
"text": "What are the most comfortable Levi'\''s jeans?",
"tag": "Jeans",
"sub_tag": "Levi'\''s",
"intent": "customer_focus"
}
],
"ai_models": ["chatgpt", "gemini", "claude"],
"run_interval": "week",
"locations": ["us"]
}' \
"https://opttab.com/api/v2/prompts/store"
Response
{
"success": true,
"data": {
"prompts": [
{
"id": 101,
"prompt_text": "What are the most comfortable Levi's jeans?",
"tag": "Jeans",
"sub_tag": "Levi's",
"intent": "customer_focus",
"ai_models": ["ChatGPT", "Gemini", "Claude"],
"run_interval": "week",
"created_at": "2026-02-14T12:00:00Z"
}
],
"total_created": 1,
"jobs_dispatched": 1,
"ai_models": ["ChatGPT", "Gemini", "Claude"],
"hint": "AI responses are being generated in the background. Use GET /api/v2/prompts/{id}/responses to retrieve them (allow 30-120 seconds per model)."
}
}
GET /v2/prompts/{id}/responses
Get AI-generated responses for a specific prompt. Responses are created asynchronously after storing prompts; each model produces its own response with visibility score, confidence, sources, and brand rankings.
Path & Query Parameters
| Parameter | Type | Description |
|---|---|---|
| id | integer | Prompt ID (path) |
| ai_models[] | array | Optional filter by AI model names |
| limit | integer | Max results (1–200, default: 50) |
| offset | integer | Pagination offset (default: 0) |
Example
curl -H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
"https://opttab.com/api/v2/prompts/101/responses"
Response
{
"success": true,
"data": [
{
"id": 5001,
"ai_model": "ChatGPT",
"response_text": "Levi's offers several comfortable options...",
"visibility_score": 100,
"confidence_score": 0.92,
"position": 1,
"sources": [],
"brand_rankings": [],
"products": [],
"generated_at": "2026-02-14T12:02:15Z"
}
],
"meta": {
"prompt": {
"id": 101,
"prompt_text": "What are the most comfortable Levi's jeans?",
"tag": "Jeans",
"sub_tag": "Levi's",
"intent": "customer_focus"
},
"total_responses": 3,
"models_completed": ["ChatGPT", "Gemini", "Claude"],
"models_pending": [],
"is_complete": true,
"limit": 50,
"offset": 0
}
}
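Since responses arrive asynchronously, clients typically poll this endpoint until meta.is_complete is true. A sketch of that loop; the fetch callable is injected so the example stays self-contained, whereas a real implementation would issue the authenticated GET request shown above:

```python
import time

# Poll until meta.is_complete is true. `fetch` is injected so the sketch is
# self-contained; in practice it would call GET /api/v2/prompts/{id}/responses.
def wait_for_responses(fetch, interval_s: float = 15.0, max_attempts: int = 10):
    for _ in range(max_attempts):
        payload = fetch()
        if payload["meta"]["is_complete"]:
            return payload["data"]
        time.sleep(interval_s)
    raise TimeoutError("AI responses not complete after polling")

# Fake fetcher: pending on the first call, complete on the second.
pages = iter([
    {"data": [], "meta": {"is_complete": False}},
    {"data": [{"id": 5001, "ai_model": "ChatGPT"}], "meta": {"is_complete": True}},
])
result = wait_for_responses(lambda: next(pages), interval_s=0.0)
print(result)  # [{'id': 5001, 'ai_model': 'ChatGPT'}]
```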
Tags & Sub-Tags
Tags and sub-tags provide a flexible taxonomy for organizing your prompts. Use them to categorize prompts by topic, intent, product line, or any custom hierarchy. All report endpoints support tags and sub_tags filters to scope data to specific categories.
GET /v2/org/tags
List all unique tags in your workspace with prompt counts and associated sub-tags.
Example
curl -H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
"https://opttab.com/api/v2/org/tags"
Response
{
"success": true,
"data": [
{
"tag": "brand_monitoring",
"prompt_count": 12,
"sub_tags": ["competitor_comparison", "market_position", "sentiment_tracking"]
},
{
"tag": "product_research",
"prompt_count": 8,
"sub_tags": ["feature_comparison", "pricing", "reviews"]
},
{
"tag": "industry_trends",
"prompt_count": 5,
"sub_tags": ["emerging_tech", "market_analysis"]
}
],
"meta": { "total": 3 }
}
GET /v2/org/tags/{tag}/sub-tags
List all sub-tags for a specific tag with their prompt counts.
Example
curl -H "X-API-Key: opttab_xxxxxxxxxxxxxxxxxxxx" \
"https://opttab.com/api/v2/org/tags/brand_monitoring/sub-tags"
Response
{
"success": true,
"data": [
{ "sub_tag": "competitor_comparison", "prompt_count": 5 },
{ "sub_tag": "market_position", "prompt_count": 4 },
{ "sub_tag": "sentiment_tracking", "prompt_count": 3 }
],
"meta": {
"tag": "brand_monitoring",
"total": 3
}
}
Tip: Tags and sub-tags are free-form strings. You can use any naming convention. Common patterns include topic-based tags (brand_monitoring, product_research) with sub-tags for specifics (competitor_comparison, pricing).