POST /v1/chat/completions
An OpenAI-compatible endpoint that lets you use CheckThat AI's claim normalization and fact-checking capabilities with existing OpenAI-style integrations and tools. Because it follows the OpenAI Chat Completions API format, it integrates easily with applications and libraries already built for OpenAI's API.

Key Features

  • Drop-in replacement: Compatible with OpenAI chat completion clients
  • Fact-checking enhanced: Responses include claim analysis and verification
  • Streaming support: Optional streaming responses for real-time interactions
  • Multiple models: Support for various AI models through CheckThat AI

Request Parameters

api_key
string
required
Your CheckThat AI API key for authentication.
model
string
required
The model to use for generating responses. Use /models endpoint to see available options.
messages
array
required
Array of message objects representing the conversation history.
stream
boolean
default:"false"
Whether to stream the response back as it’s generated. Set to true for real-time streaming.
temperature
number
Controls randomness in the response. Lower values make output more focused and deterministic. Range: 0.0 to 2.0.
top_p
number
Alternative to temperature. Controls diversity via nucleus sampling. Range: 0.0 to 1.0.
max_tokens
integer
Maximum number of tokens to generate in the response.

Request Examples

cURL

curl -X POST 'https://api.checkthat-ai.com/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -d '{
    "api_key": "YOUR_API_KEY",
    "model": "gpt-4",
    "messages": [
      {
        "role": "system",
        "content": "You are a fact-checking assistant specialized in claim verification."
      },
      {
        "role": "user", 
        "content": "Is it true that vaccines cause autism?"
      }
    ],
    "temperature": 0.3,
    "max_tokens": 1000
  }'

JavaScript (OpenAI Node client)

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_CHECKTHAT_API_KEY',
  baseURL: 'https://api.checkthat-ai.com/v1'
});

const completion = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    {
      role: 'user',
      content: 'Fact-check this claim: Coffee consumption prevents diabetes'
    }
  ],
  temperature: 0.1,
  max_tokens: 800
});

console.log(completion.choices[0].message.content);

Python (OpenAI client)

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_CHECKTHAT_API_KEY",
    base_url="https://api.checkthat-ai.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a fact-checking expert."},
        {"role": "user", "content": "Evaluate this claim: 5G networks cause cancer"}
    ],
    temperature=0.2,
    max_tokens=1200
)

print(response.choices[0].message.content)

Response Format

Standard Response

{
  "id": "chatcmpl-123abc", 
  "object": "chat.completion",
  "created": 1694268190,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Based on extensive scientific research and multiple systematic reviews..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 234,
    "total_tokens": 279
  },
  "fact_check_metadata": {
    "claims_detected": 1,
    "confidence_score": 0.92,
    "sources_consulted": ["WHO", "CDC", "peer_reviewed_studies"]
  }
}
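The usage and fact_check_metadata fields can be read straight off the parsed response. A minimal sketch, using the hypothetical sample response above as inline data; fact_check_metadata is a CheckThat AI extension, so it is read defensively with .get() in case a response omits it:

```python
import json

# Hypothetical sample response, mirroring the standard response shape above.
raw = '''{
  "id": "chatcmpl-123abc",
  "object": "chat.completion",
  "created": 1694268190,
  "model": "gpt-4",
  "choices": [{"index": 0,
               "message": {"role": "assistant",
                           "content": "Based on extensive scientific research..."},
               "finish_reason": "stop"}],
  "usage": {"prompt_tokens": 45, "completion_tokens": 234, "total_tokens": 279},
  "fact_check_metadata": {"claims_detected": 1, "confidence_score": 0.92,
                          "sources_consulted": ["WHO", "CDC", "peer_reviewed_studies"]}
}'''

response = json.loads(raw)

# Standard OpenAI-style fields.
answer = response["choices"][0]["message"]["content"]

# CheckThat AI extension: may be absent on other providers, so use .get().
meta = response.get("fact_check_metadata", {})
claims = meta.get("claims_detected", 0)
confidence = meta.get("confidence_score")
sources = meta.get("sources_consulted", [])
```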

Streaming Response

When stream: true is set, responses are sent as Server-Sent Events:

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Based"}}]}

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" on"}}]}

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" scientific"}}]}

data: [DONE]
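Each event carries a delta fragment of the assistant message; concatenating the fragments in order reconstructs the full reply, and the data: [DONE] sentinel marks the end of the stream. A minimal parser for the event format shown above (the raw stream here is hypothetical sample data, not a live response):

```python
import json

# Hypothetical raw SSE stream, matching the event format shown above.
raw_stream = (
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Based"}}]}\n\n'
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" on"}}]}\n\n'
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" scientific"}}]}\n\n'
    'data: [DONE]\n\n'
)

def collect_content(stream: str) -> str:
    """Reassemble the assistant message from 'data:' SSE lines."""
    parts = []
    for line in stream.splitlines():
        if not line.startswith("data: "):
            continue                      # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":           # sentinel marking end of stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content", ""))
    return "".join(parts)

message = collect_content(raw_stream)     # "Based on scientific"
```

In practice the OpenAI client libraries handle this parsing for you when stream=True is passed; the sketch shows what arrives on the wire.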

Response Fields

id
string
required
Unique identifier for the chat completion response.
object
string
required
Object type, always “chat.completion” for this endpoint.
created
integer
required
Unix timestamp of when the completion was created.
model
string
required
The model used for generating the completion.
choices
array
required
Array of completion choices (typically contains one choice).
usage
object
Token usage statistics for the request.
fact_check_metadata
object
CheckThat AI specific metadata about claim analysis (unique to our platform).

Integration Benefits

Easy Migration

Drop-in replacement for OpenAI API calls in existing applications

Enhanced Output

All responses include fact-checking analysis and source verification

Library Support

Works with popular OpenAI client libraries without modification

Streaming Support

Real-time streaming for interactive applications and chatbots

Remember to use your CheckThat AI API key, not your OpenAI key, and set the correct base URL when configuring OpenAI client libraries.

Body

application/json
api_key
string
required
model
string
required
messages
OpenAIChatMessage · object[]
required
stream
boolean | null
default:false
temperature
number | null
top_p
number | null
max_tokens
integer | null

Response

Successful Response

The response is of type any.