An OpenAI-compatible endpoint that allows you to use CheckThat AI’s claim normalization and fact-checking capabilities with existing OpenAI-style integrations and tools.
This endpoint follows the OpenAI Chat Completions API format, making it easy to integrate with existing applications and libraries designed for OpenAI’s API.
Key Features
- Drop-in replacement: Compatible with OpenAI chat completion clients
- Fact-checking enhanced: Responses include claim analysis and verification
- Streaming support: Optional streaming responses for real-time interactions
- Multiple models: Support for various AI models through CheckThat AI
Request Parameters
api_key (string, required)
Your CheckThat AI API key for authentication.

model (string, required)
The model to use for generating responses. Use the /models endpoint to see available options.

messages (object[], required)
Array of message objects representing the conversation history.

Message object format:
- role (string, required): The role of the message author. Must be one of "system", "user", or "assistant".
- content (string, required): The content of the message.

stream (boolean, default: false)
Whether to stream the response back as it is generated. Set to true for real-time streaming.

temperature (number, optional)
Controls randomness in the response. Lower values make output more focused and deterministic. Range: 0.0 to 2.0.

top_p (number, optional)
Alternative to temperature. Controls diversity via nucleus sampling. Range: 0.0 to 1.0.

max_tokens (integer, optional)
Maximum number of tokens to generate in the response.
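Putting the parameters above together, a full request body can be sketched as follows. This is an illustrative payload only: "gpt-4" is a placeholder model name (check the /models endpoint for real options), and the values shown are arbitrary.

```python
import json

# Sketch of a request body using every parameter described above.
payload = {
    "api_key": "YOUR_API_KEY",      # CheckThat AI key, not an OpenAI key
    "model": "gpt-4",               # placeholder; see /models for options
    "messages": [                   # conversation history
        {"role": "system", "content": "You are a fact-checking assistant."},
        {"role": "user", "content": "Is it true that vaccines cause autism?"},
    ],
    "stream": False,                # True -> Server-Sent Events
    "temperature": 0.3,             # 0.0-2.0; lower = more deterministic
    "top_p": 1.0,                   # 0.0-1.0; nucleus-sampling alternative
    "max_tokens": 1000,             # cap on generated tokens
}

# Serialize to the JSON string that is sent as the POST body.
body = json.dumps(payload)
print(len(json.loads(body)["messages"]))  # 2
```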
Request Examples
curl -X POST 'https://api.checkthat-ai.com/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -d '{
    "api_key": "YOUR_API_KEY",
    "model": "gpt-4",
    "messages": [
      {
        "role": "system",
        "content": "You are a fact-checking assistant specialized in claim verification."
      },
      {
        "role": "user",
        "content": "Is it true that vaccines cause autism?"
      }
    ],
    "temperature": 0.3,
    "max_tokens": 1000
  }'
JavaScript (using OpenAI library)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_CHECKTHAT_API_KEY',
  baseURL: 'https://api.checkthat-ai.com/v1'
});

const completion = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    {
      role: 'user',
      content: 'Fact-check this claim: Coffee consumption prevents diabetes'
    }
  ],
  temperature: 0.1,
  max_tokens: 800
});

console.log(completion.choices[0].message.content);
Python (using OpenAI library)
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_CHECKTHAT_API_KEY",
    base_url="https://api.checkthat-ai.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a fact-checking expert."},
        {"role": "user", "content": "Evaluate this claim: 5G networks cause cancer"}
    ],
    temperature=0.2,
    max_tokens=1200
)

print(response.choices[0].message.content)
Standard Response
{
  "id": "chatcmpl-123abc",
  "object": "chat.completion",
  "created": 1694268190,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Based on extensive scientific research and multiple systematic reviews..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 234,
    "total_tokens": 279
  },
  "fact_check_metadata": {
    "claims_detected": 1,
    "confidence_score": 0.92,
    "sources_consulted": ["WHO", "CDC", "peer_reviewed_studies"]
  }
}
Streaming Response
When stream: true is set, responses are sent as Server-Sent Events:
data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Based"}}]}
data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" on"}}]}
data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" scientific"}}]}
data: [DONE]
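Each data: line carries a JSON chunk whose choices[0].delta.content holds the next text fragment; a client reassembles the reply by concatenating fragments until the data: [DONE] sentinel. A minimal parser for the sample stream above (a sketch; real clients should also handle chunks whose delta has no content field):

```python
import json

# Raw Server-Sent Events as shown in the streaming example above.
raw_events = [
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Based"}}]}',
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" on"}}]}',
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" scientific"}}]}',
    'data: [DONE]',
]

def assemble(events):
    """Concatenate delta content fragments until the [DONE] sentinel."""
    parts = []
    for line in events:
        data = line[len("data: "):]
        if data == "[DONE]":               # end-of-stream marker
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # tolerate empty deltas
    return "".join(parts)

print(assemble(raw_events))  # Based on scientific
```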
Response Fields
id (string)
Unique identifier for the chat completion response.

object (string)
Object type, always "chat.completion" for this endpoint.

created (integer)
Unix timestamp of when the completion was created.

model (string)
The model used for generating the completion.

choices (object[])
Array of completion choices (typically contains one choice).

Choice object properties:
- index (integer): Index of this choice in the choices array.
- message (object): The generated message content.
- finish_reason (string): Reason why generation stopped: "stop", "length", or "content_filter".

usage (object)
Token usage statistics for the request.
- prompt_tokens (integer): Number of tokens in the input prompt.
- completion_tokens (integer): Number of tokens in the generated completion.
- total_tokens (integer): Total tokens used (prompt + completion).

fact_check_metadata (object)
CheckThat AI-specific metadata about claim analysis (unique to our platform).
- claims_detected (integer): Number of factual claims identified in the conversation.
- confidence_score (number): Confidence level in the fact-checking analysis (0.0 to 1.0).
- sources_consulted (string[]): List of authoritative sources referenced during fact-checking.
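Because fact_check_metadata is a CheckThat AI extension to the OpenAI response format, typed OpenAI client objects may not expose it directly; one option (a sketch, not a guaranteed client feature) is to parse the raw JSON body yourself:

```python
import json

# The "Standard Response" example from above, abbreviated to the fields
# used here.
raw_response = '''{
  "id": "chatcmpl-123abc",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant",
                 "content": "Based on extensive scientific research..."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 45, "completion_tokens": 234, "total_tokens": 279},
  "fact_check_metadata": {
    "claims_detected": 1,
    "confidence_score": 0.92,
    "sources_consulted": ["WHO", "CDC", "peer_reviewed_studies"]
  }
}'''

data = json.loads(raw_response)

# The answer text lives where any OpenAI-style client expects it...
answer = data["choices"][0]["message"]["content"]

# ...while the fact-checking extension sits at the top level of the body.
meta = data["fact_check_metadata"]
print(meta["claims_detected"], meta["confidence_score"])  # 1 0.92
```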
Integration Benefits
- Easy Migration: Drop-in replacement for OpenAI API calls in existing applications
- Enhanced Output: All responses include fact-checking analysis and source verification
- Library Support: Works with popular OpenAI client libraries without modification
- Streaming Support: Real-time streaming for interactive applications and chatbots

Remember to use your CheckThat AI API key, not your OpenAI key, and to set the correct base URL when configuring OpenAI client libraries.