
Batch Processing

Process up to 10 profiles concurrently in a single request with pay-per-success billing. Credits are only charged for successfully analyzed profiles.

Overview

Batch processing allows you to analyze multiple professional profiles in a single request, significantly reducing total processing time for bulk operations.

Key Features

  • Concurrent Processing: Up to 10 profiles analyzed simultaneously
  • Pay-per-Success: Only charged for successfully analyzed profiles
  • Detailed Summary: Comprehensive batch statistics and results
  • Individual Results: Separate success/failure status for each profile
  • Time Efficient: ~65% faster than sequential single requests

Performance Benchmarks

Profiles     | Single Requests | Batch Request | Time Saved
5 profiles   | ~20-25 sec      | ~8-10 sec     | 60% faster
10 profiles  | ~40-50 sec      | ~15-20 sec    | 65% faster
50 profiles  | ~200-250 sec    | ~75-100 sec   | 65% faster
100 profiles | ~400-500 sec    | ~150-200 sec  | 65% faster

Batch Endpoints

Name + Company Batch Processing

POST /crm_basico_batch - Basic Tier Batch

Cost: 1 credit per successful profile
Max Profiles: 10 per request
Tier Required: Basic or Complete

Request Body:

{
  "profiles": [
    {
      "primeiro_nome": "Satya",
      "ultimo_nome": "Nadella",
      "empresa": "Microsoft",
      "contexto_adicional": "Optional context"
    },
    {
      "primeiro_nome": "Sundar",
      "ultimo_nome": "Pichai",
      "empresa": "Google"
    }
  ]
}

Example cURL:

curl -X POST "https://api.fluenceinsights.com/crm_basico_batch" \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-api-key-here" \
  -d '{
    "profiles": [
      {"primeiro_nome": "Satya", "ultimo_nome": "Nadella", "empresa": "Microsoft"},
      {"primeiro_nome": "Sundar", "ultimo_nome": "Pichai", "empresa": "Google"}
    ]
  }'

Response:

{
  "success": true,
  "batch_summary": {
    "total_profiles": 2,
    "successful": 2,
    "failed": 0,
    "processing_time_ms": 4698.18
  },
  "results": [
    {
      "success": true,
      "profile_index": 0,
      "input": {
        "primeiro_nome": "Satya",
        "ultimo_nome": "Nadella",
        "empresa": "Microsoft"
      },
      "analise": {
        "cargo": "CEO",
        "tipo_stakeholder": "C-Level Executive",
        "manual_comprador": {...},
        "linguagem_impacto": [...]
      }
    },
    {
      "success": true,
      "profile_index": 1,
      "input": {
        "primeiro_nome": "Sundar",
        "ultimo_nome": "Pichai",
        "empresa": "Google"
      },
      "analise": {...}
    }
  ],
  "billing_info": {
    "creditos_por_perfil": 1,
    "perfis_processados": 2,
    "creditos_utilizados": 2,
    "creditos_restantes": 198,
    "tipo_plano": "complete"
  }
}

POST /crm_batch - Complete Tier Batch

Cost: 2 credits per successful profile
Max Profiles: 10 per request
Tier Required: Complete

Full personality analysis for multiple profiles including OCEAN scores, MBTI, and buyer playbooks.

Request Body: Same structure as /crm_basico_batch
Response: Same structure, with Complete Tier analysis fields (OCEAN, MBTI, etc.)
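
A minimal Python sketch of a Complete Tier batch call, assuming the same request/response structure as /crm_basico_batch (only the endpoint and per-profile cost change; the exact Complete Tier field names inside analise are not listed on this page):

import requests

API_KEY = "your-api-key-here"

response = requests.post(
    "https://api.fluenceinsights.com/crm_batch",  # Complete Tier: 2 credits per successful profile
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    json={"profiles": [
        {"primeiro_nome": "Satya", "ultimo_nome": "Nadella", "empresa": "Microsoft"}
    ]}
)

if response.status_code == 200:
    for result in response.json()["results"]:
        if result["success"]:
            # "analise" carries the Complete Tier fields (OCEAN, MBTI, buyer playbook);
            # exact key names are not documented here, so inspect the object first.
            print(result["profile_index"], list(result["analise"].keys()))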


Code Examples

Python - Batch Processing

import requests

API_URL = "https://api.fluenceinsights.com/crm_basico_batch"
API_KEY = "your-api-key-here"

profiles = [
    {"primeiro_nome": "Satya", "ultimo_nome": "Nadella", "empresa": "Microsoft"},
    {"primeiro_nome": "Sundar", "ultimo_nome": "Pichai", "empresa": "Google"},
    {"primeiro_nome": "Tim", "ultimo_nome": "Cook", "empresa": "Apple"}
]

response = requests.post(
    API_URL,
    headers={
        "Content-Type": "application/json",
        "x-api-key": API_KEY
    },
    json={"profiles": profiles}
)

if response.status_code == 200:
    data = response.json()
    summary = data['batch_summary']
    print(f"✅ Processed {summary['successful']}/{summary['total_profiles']} profiles")
    print(f"⚡ Time: {summary['processing_time_ms']/1000:.1f}s")
    print(f"💰 Credits: {data['billing_info']['creditos_utilizados']}")

    for result in data['results']:
        if result['success']:
            analise = result['analise']
            input_data = result['input']
            print(f"\n✅ {input_data['primeiro_nome']}: {analise['cargo']}")
        else:
            print(f"\n❌ {result['input']['primeiro_nome']}: {result['error']}")
else:
    print(f"❌ Error: {response.json()}")

Python - Processing Large Datasets

import time

BASE_URL = "https://api.fluenceinsights.com"
headers = {"Content-Type": "application/json", "x-api-key": API_KEY}

def process_large_dataset(profiles, batch_size=10):
    """Process large datasets in sequential batches of up to 10 profiles"""
    results = []

    for i in range(0, len(profiles), batch_size):
        batch = profiles[i:i + batch_size]

        response = requests.post(
            f"{BASE_URL}/crm_batch",
            headers=headers,
            json={"profiles": batch}
        )

        if response.status_code == 200:
            batch_results = response.json()['results']
            results.extend(batch_results)

        # Track progress
        processed = min(i + batch_size, len(profiles))
        print(f"Progress: {processed}/{len(profiles)} profiles")

        # Rate limiting: small delay between batches
        time.sleep(0.5)

    return results

# Process 100 profiles in batches of 10 (all_profiles is your full list of profile dicts)
all_results = process_large_dataset(all_profiles, batch_size=10)

JavaScript/Node.js - Batch Processing

const axios = require('axios');

const API_URL = 'https://api.fluenceinsights.com/crm_basico_batch';
const API_KEY = 'your-api-key-here';

async function analyzeBatch() {
  const profiles = [
    { primeiro_nome: 'Satya', ultimo_nome: 'Nadella', empresa: 'Microsoft' },
    { primeiro_nome: 'Sundar', ultimo_nome: 'Pichai', empresa: 'Google' },
    { primeiro_nome: 'Tim', ultimo_nome: 'Cook', empresa: 'Apple' }
  ];

  try {
    const response = await axios.post(
      API_URL,
      { profiles },
      {
        headers: {
          'Content-Type': 'application/json',
          'x-api-key': API_KEY
        }
      }
    );

    const { batch_summary, results, billing_info } = response.data;
    console.log(`${batch_summary.successful}/${batch_summary.total_profiles} profiles`);
    console.log(`⚡ Time: ${(batch_summary.processing_time_ms/1000).toFixed(1)}s`);
    console.log(`💰 Credits: ${billing_info.creditos_utilizados}`);

    results.forEach(result => {
      if (result.success) {
        const input = result.input;
        console.log(`\n✅ ${input.primeiro_nome} ${input.ultimo_nome}: ${result.analise.cargo}`);
      } else {
        console.log(`\n❌ ${result.input.primeiro_nome}: ${result.error}`);
      }
    });
  } catch (error) {
    console.error('❌ Error:', error.response?.data || error.message);
  }
}

analyzeBatch();

Best Practices

1. Batch Size Optimization

  • Recommended: 10 profiles per batch (maximum)
  • For large datasets: Process in sequential batches of 10
  • Add delays: 0.5-1s between batches for large-scale processing

2. Handling Partial Failures

def process_batch_results(batch_response):
    """Separate successful and failed results"""
    successful = []
    failed = []

    for result in batch_response['results']:
        if result['success']:
            successful.append({
                'input': result['input'],
                'analysis': result['analise']
            })
        else:
            failed.append({
                'input': result['input'],
                'error': result['error'],
                'error_code': result['error_code']
            })

    # Handle failed profiles
    for failure in failed:
        if failure['error_code'] == 'INSUFFICIENT_DATA':
            # Log or queue for manual review
            print(f"⚠️ Insufficient data for {failure['input']}")

    return successful, failed
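
Failed profiles can also be re-queued and retried in a follow-up batch. A minimal usage sketch building on process_batch_results, assuming batch_response is a parsed batch response (response.json()); treating the two error codes below as permanent is an assumption, so adjust them to what you actually observe:

successful, failed = process_batch_results(batch_response)

# Profiles that failed for reasons other than missing/unfindable data may succeed on retry.
permanent = {'INSUFFICIENT_DATA', 'PROFILE_NOT_FOUND'}
retry_profiles = [f['input'] for f in failed if f['error_code'] not in permanent]

if retry_profiles:
    retry_response = requests.post(API_URL, headers=headers, json={"profiles": retry_profiles})
    if retry_response.status_code == 200:
        retry_successful, still_failed = process_batch_results(retry_response.json())
        successful.extend(retry_successful)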

3. Credit Management

# Before batch processing, verify sufficient credits
credits_needed = len(profiles) * 2  # 2 credits per profile for /crm_batch (1 for /crm_basico_batch)
current_credits = check_credits(API_KEY)  # check_credits is a placeholder for your own credit lookup

if current_credits < credits_needed:
    print(f"⚠️ Insufficient credits: {current_credits} available, {credits_needed} needed")
    # Reduce batch size or stop processing
else:
    # Proceed with batch processing
    process_batch(profiles)
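
This page does not document a dedicated credit-lookup endpoint, so one pragmatic way to implement the check_credits placeholder is to cache creditos_restantes from the billing_info of your most recent batch response. A minimal sketch; the helper names and in-memory cache are illustrative, not part of the API:

# Illustrative only: remember the remaining-credit count reported by the last batch response.
_last_known_credits = None

def record_credits(batch_response):
    """Cache creditos_restantes from a parsed batch response."""
    global _last_known_credits
    _last_known_credits = batch_response['billing_info']['creditos_restantes']

def check_credits(api_key):
    """Return the last cached credit balance (None until at least one batch has run)."""
    return _last_known_credits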

4. Error Handling

def analyze_batch_with_retry(profiles, max_retries=3):
    """Batch analysis with automatic retry logic"""
    for attempt in range(max_retries):
        try:
            response = requests.post(API_URL, json={"profiles": profiles}, headers=headers)

            if response.status_code == 200:
                return response.json()
            elif response.status_code == 402:
                # Insufficient credits - stop immediately
                raise Exception("Insufficient credits")
            elif response.status_code >= 500:
                # Server error - retry with exponential backoff
                if attempt < max_retries - 1:
                    time.sleep(2 ** attempt)
                    continue
        except requests.exceptions.RequestException as e:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            raise

    return None

Response Fields

Batch Summary

Field              | Type    | Description
total_profiles     | integer | Total number of profiles in the batch
successful         | integer | Number of successfully analyzed profiles
failed             | integer | Number of failed analyses
processing_time_ms | float   | Total processing time in milliseconds

Individual Results

Field         | Type    | Description
success       | boolean | Whether the analysis was successful
profile_index | integer | Index of the profile in the original batch
input         | object  | Original input data for this profile
analise       | object  | Analysis results (if successful)
error         | string  | Error message (if failed)
error_code    | string  | Error code (if failed)
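
Because every result carries profile_index, you can join results back to the rows you submitted. A minimal sketch, assuming profiles is the list you sent and data is the parsed batch response from the earlier Python example:

# Map each result back to the submitted profile using profile_index.
by_index = {r['profile_index']: r for r in data['results']}

for i, profile in enumerate(profiles):
    result = by_index.get(i)
    if result and result['success']:
        print(profile['empresa'], '->', result['analise']['cargo'])
    else:
        print(profile['empresa'], '-> failed or missing')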

Billing Info

Field               | Type    | Description
creditos_por_perfil | integer | Credits charged per successful profile
perfis_processados  | integer | Number of profiles successfully processed
creditos_utilizados | integer | Total credits used in this batch
creditos_restantes  | integer | Remaining credits after this batch
tipo_plano          | string  | Current subscription plan

Error Responses

Insufficient Credits (402)

{
  "error": "Insufficient credits",
  "message": "Batch requires 20 credits but you have 15",
  "profiles_requested": 10,
  "credits_per_profile": 2,
  "credits_available": 15
}

Batch Size Exceeded (400)

{
  "error": "Batch size exceeds maximum",
  "message": "Maximum 10 profiles per batch, received 15",
  "max_batch_size": 10,
  "profiles_received": 15
}

Partial Batch Failure (200 with failed items)

{
  "success": true,
  "batch_summary": {
    "total_profiles": 3,
    "successful": 2,
    "failed": 1,
    "processing_time_ms": 8234.56
  },
  "results": [
    {
      "success": true,
      "profile_index": 0,
      "analise": {...}
    },
    {
      "success": false,
      "profile_index": 1,
      "input": {"primeiro_nome": "John", "empresa": "Unknown"},
      "error": "Profile not found",
      "error_code": "PROFILE_NOT_FOUND"
    },
    {
      "success": true,
      "profile_index": 2,
      "analise": {...}
    }
  ],
  "billing_info": {
    "creditos_utilizados": 2,
    "creditos_restantes": 198
  }
}

When to Use Batch Processing

✅ Use Batch Endpoints When:

  • Processing 2+ profiles
  • Time efficiency is important
  • Bulk CRM enrichment needed
  • Analyzing prospect lists
  • Building sales intelligence databases

❌ Use Single Endpoints When:

  • Analyzing only 1 profile
  • Real-time interactive analysis needed
  • Immediate feedback required
  • Testing/debugging

Rate Limits

  • No explicit rate limits currently enforced
  • Recommended: Maximum 5 concurrent batch requests (see the concurrency sketch below)
  • Best practice: Add 0.5-1s delay between large sequential batches
  • Batch size: Maximum 10 profiles per request
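
A minimal sketch of capping concurrency at 5 batch requests using a thread pool; the cap mirrors the recommendation above, and the helper plus the all_profiles, API_URL, and headers names from the earlier examples are illustrative, not part of the API:

from concurrent.futures import ThreadPoolExecutor
import requests

def submit_batch(batch):
    """Send one batch of up to 10 profiles; return the parsed response or None on error."""
    response = requests.post(API_URL, headers=headers, json={"profiles": batch})
    return response.json() if response.status_code == 200 else None

# Split the dataset into batches of 10, then keep at most 5 batch requests in flight.
batches = [all_profiles[i:i + 10] for i in range(0, len(all_profiles), 10)]

with ThreadPoolExecutor(max_workers=5) as pool:
    batch_responses = list(pool.map(submit_batch, batches))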

Next Steps