Overview
The CheckThat AI Python SDK provides a seamless interface to access 11+ models from leading AI providers through a single, unified API. Built as a drop-in replacement for the OpenAI SDK, it includes integrated fact-checking and claim normalization capabilities.
Features
Unified LLM Access: Access all the latest models from OpenAI, Anthropic, Google Gemini, xAI, and Together AI through a single API
Claim Normalization: Standardize and structure claims for analysis
Fact-Checking: Built-in claim verification and evidence sourcing
OpenAI Compatible: Drop-in replacement for the OpenAI Python SDK
Async Support: Full async/await support for high-performance applications
Type Safety: Complete type hints for a better development experience
Claim Refinement: Iterative improvement of response accuracy with configurable quality thresholds
Evaluation Metrics: Built-in support for G-Eval, bias detection, hallucination detection, and more
Always Up-to-Date: Access to the newest models as soon as they're released
Installation
Requirements: Python 3.8+ with full type hint support
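Install the SDK from PyPI (assuming the package is published as checkthat-ai, matching the checkthat_ai import used throughout this guide):
pip install checkthat-ai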
Quick Start
Basic Usage
The simplest way to get started is using your OpenAI API key:
import os
from checkthat_ai import CheckThatAI

# Initialize the client with your OpenAI API key
api_key = os.environ.get("OPENAI_API_KEY")
client = CheckThatAI(api_key=api_key)

# IMPORTANT: Always check available models first
models = client.models.list()
print("Available models:", models)

# Use exactly like OpenAI's client with latest models
response = client.chat.completions.create(
    model="gpt-5-2025-08-07",  # Use latest available models
    messages=[
        {"role": "user", "content": "Fact-check this claim: The Earth is flat"}
    ]
)
print(response.choices[0].message.content)
Environment Setup
Set up your environment variables for the providers you want to use:
# ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GEMINI_API_KEY="your-gemini-key"
export XAI_API_KEY="your-xai-key"
export TOGETHER_API_KEY="your-together-key"
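To sanity-check which of these keys are actually visible to your Python process, a quick illustrative loop:
import os

# Report which provider keys are set in the current environment
for var in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY",
            "XAI_API_KEY", "TOGETHER_API_KEY"):
    print(f"{var}: {'set' if os.getenv(var) else 'missing'}")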
Client Configuration
Synchronous Client
from checkthat_ai import CheckThatAI

client = CheckThatAI(
    api_key="your-api-key",                       # Required: Provider API key
    base_url="https://api.checkthat-ai.com/v1",   # Optional: Custom base URL
    timeout=30.0,                                 # Optional: Request timeout in seconds
    max_retries=3,                                # Optional: Max retry attempts
    default_headers={"User-Agent": "MyApp/1.0"},  # Optional: Custom headers
)
Asynchronous Client
import asyncio
from checkthat_ai import AsyncCheckThatAI

async def main():
    client = AsyncCheckThatAI(
        api_key="your-api-key",
        timeout=30.0,
        max_retries=3
    )
    response = await client.chat.completions.create(
        model="gpt-5-2025-08-07",  # Use latest models
        messages=[
            {"role": "user", "content": "What is quantum computing?"}
        ]
    )
    print(response.choices[0].message.content)
    await client.close()  # Important: close the client when done

# Run the async function
asyncio.run(main())
Chat Completions
Basic Chat
response = client.chat.completions.create(
    model="gpt-5-2025-08-07",  # Use latest models
    messages=[
        {"role": "system", "content": "You are a helpful fact-checking assistant."},
        {"role": "user", "content": "Is coffee consumption linked to increased longevity?"}
    ],
    temperature=0.1,  # Lower temperature for fact-checking
    max_tokens=1000
)
print(response.choices[0].message.content)
Streaming Responses
response = client.chat.completions.create(
    model="gpt-5-2025-08-07",  # Use latest models
    messages=[{"role": "user", "content": "Tell me about climate change"}],
    stream=True,
    temperature=0.7
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
Async Streaming
import asyncio
from checkthat_ai import AsyncCheckThatAI

async def stream_response():
    client = AsyncCheckThatAI(api_key="your-api-key")
    stream = await client.chat.completions.create(
        model="claude-sonnet-4-20250514",  # Use latest models
        messages=[{"role": "user", "content": "Explain artificial intelligence"}],
        stream=True
    )
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    await client.close()  # Close the client when done

asyncio.run(stream_response())
Multi-Provider Usage
Using Different Providers
You can switch between providers seamlessly by using different API keys and model identifiers:
OpenAI Models
import os
from checkthat_ai import CheckThatAI

# Use OpenAI models
openai_client = CheckThatAI(api_key=os.getenv("OPENAI_API_KEY"))
response = openai_client.chat.completions.create(
    model="gpt-5-2025-08-07",  # Latest OpenAI model
    messages=[{"role": "user", "content": "Hello from OpenAI!"}]
)
Model Discovery
# Get all available models
models = client.models.list()

# Print models by provider
for provider in models["models_list"]:
    print(f"\n{provider['provider']} Models:")
    for model in provider["available_models"]:
        print(f"  - {model['name']}: {model['model_id']}")
Structured Output Generation
Generate structured, type-safe responses using Pydantic models with the dedicated parse() method:
Basic Structured Output
from checkthat_ai import CheckThatAI
from pydantic import BaseModel, Field
from typing import List

class MathStep(BaseModel):
    step_number: int = Field(description="The step number")
    explanation: str = Field(description="What happens in this step")
    equation: str = Field(description="The mathematical equation")

class MathSolution(BaseModel):
    problem: str = Field(description="The original problem")
    steps: List[MathStep] = Field(description="Step-by-step solution")
    final_answer: str = Field(description="The final answer")

client = CheckThatAI(api_key="your-api-key")

# Use the parse() method for structured outputs
response = client.chat.completions.parse(
    model="gpt-5-2025-08-07",  # Use latest models that support structured outputs
    messages=[
        {"role": "system", "content": "You are a helpful math tutor."},
        {"role": "user", "content": "Solve: 2x + 5 = 13"}
    ],
    response_format=MathSolution
)

# Access the parsed response directly
solution = response.choices[0].message.parsed
print(f"Problem: {solution.problem}")
print(f"Answer: {solution.final_answer}")
for step in solution.steps:
    print(f"Step {step.step_number}: {step.explanation}")
Evaluation Metrics
The SDK supports various quality evaluation metrics:
G-Eval: General evaluation framework
Bias: Bias detection and analysis
Hallucinations: Hallucination detection
Hallucination Coverage: Coverage analysis of hallucinations
Factual Accuracy: Fact-checking accuracy
Relevance: Content relevance assessment
Coherence: Response coherence evaluation
# Access available metrics programmatically
from checkthat_ai._types import AVAILABLE_EVAL_METRICS
print("Available metrics:", AVAILABLE_EVAL_METRICS)

# Use metrics with structured outputs
response = client.chat.completions.parse(
    model="gpt-5-2025-08-07",
    messages=[...],
    response_format=YourModel,
    refine_claims=True,
    refine_metrics=["factual_accuracy", "hallucinations"]  # Specify metrics to use
)
Supported Models
The SDK provides access to all the latest models from multiple providers:
OpenAI: GPT-5, GPT-5 nano, o3, o4-mini, GPT-4o, and more
Anthropic: Claude Sonnet 4, Claude Opus 4.1, Claude 3.5 Sonnet, and more
Google: Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 1.5 Pro, and more
xAI: Grok 4, Grok 3, Grok 3 Mini, and more
Together AI: Llama 3.3 70B, DeepSeek R1 Distill Llama 70B, and more
⚠️ Important: Always Check Available Models
Before using any model, query the /v1/models endpoint to get the current list of available models:
# Get the most up-to-date list of available models
models = client.models.list()
print("Available models:")
for provider in models["models_list"]:
    print(f"\n{provider['provider']}:")
    for model in provider["available_models"]:
        print(f"  - {model['name']} ({model['model_id']})")
Model availability depends on:
Your API keys and provider access
Current provider service status
Regional availability
Your subscription tier with each provider
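Given these constraints, a defensive sketch (assuming the models.list() payload shape shown above) selects a preferred model only when it is actually available:
# Flatten the discovery payload into a set of usable model ids
available = {
    model["model_id"]
    for provider in models["models_list"]
    for model in provider["available_models"]
}

preferred = "gpt-5-2025-08-07"
model_id = preferred if preferred in available else next(iter(available))  # fall back to any listed model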
Advanced Usage
Context Management
import asyncio
import contextlib
from checkthat_ai import AsyncCheckThatAI

@contextlib.asynccontextmanager
async def get_client():
    client = AsyncCheckThatAI(api_key="your-api-key")
    try:
        yield client
    finally:
        await client.close()

# Usage
async def main():
    async with get_client() as client:
        response = await client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}]
        )
        print(response.choices[0].message.content)

asyncio.run(main())
Custom Headers
client = CheckThatAI(
    api_key="your-api-key",
    default_headers={
        "User-Agent": "MyFactCheckApp/1.0",
        "X-App-Version": "1.2.3"
    }
)

# Add request-specific headers
response = client.chat.completions.create(
    model="gpt-5-2025-08-07",  # Use latest models
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={"X-Request-ID": "unique-request-id"}
)
Conversation Management
class ConversationManager:
    def __init__(self, client, model="gpt-5-2025-08-07"):
        self.client = client
        self.model = model
        self.messages = []

    def add_message(self, role, content):
        self.messages.append({"role": role, "content": content})

    def get_response(self, user_input):
        self.add_message("user", user_input)
        response = self.client.chat.completions.create(
            model=self.model,
            messages=self.messages,
            temperature=0.1
        )
        assistant_message = response.choices[0].message.content
        self.add_message("assistant", assistant_message)
        return assistant_message

# Usage
conv = ConversationManager(client)
conv.add_message("system", "You are a fact-checking assistant.")
response1 = conv.get_response("Is the Earth round?")
response2 = conv.get_response("What evidence supports this?")
Error Handling
The SDK uses the same exception types as the OpenAI SDK for compatibility, plus custom CheckThat AI exceptions:
import time

from openai import (
    OpenAIError,
    APIError,
    APITimeoutError,
    RateLimitError,
    BadRequestError,
    AuthenticationError,
    PermissionDeniedError,
    NotFoundError,
    ConflictError,
    UnprocessableEntityError,
    InternalServerError
)
from checkthat_ai._exceptions import InvalidModelError, InvalidResponseFormatError

def robust_completion(client, messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-5-2025-08-07",
                messages=messages,
                timeout=30.0
            )
            return response
        except InvalidModelError as e:
            print(f"Invalid model: {e}")
            # List available models
            models = client.models.list()
            print("Available models:")
            for provider in models["models_list"]:
                print(f"  {provider['provider']}: {len(provider['available_models'])} models")
            break
        except InvalidResponseFormatError as e:
            print(f"Response format error: {e}")
            break
        except RateLimitError as e:
            print(f"Rate limited, waiting... (attempt {attempt + 1})")
            time.sleep(2 ** attempt)  # Exponential backoff
        except APITimeoutError:
            print(f"Request timed out (attempt {attempt + 1})")
        except AuthenticationError as e:
            print(f"Authentication failed: {e}")
            break  # Don't retry auth errors
        except APIError as e:
            print(f"API error: {e}")
            if attempt == max_retries - 1:
                raise
    return None
Custom Exceptions
InvalidModelError: Raised when trying to use an unsupported or unavailable model
InvalidResponseFormatError: Raised when there's an issue with the structured output format
Best Practices
1. Resource Management
# Always close async clients
async def good_practice():
    client = AsyncCheckThatAI(api_key="your-api-key")
    try:
        response = await client.chat.completions.create(...)
        return response
    finally:
        await client.close()

# Or use context managers
async def better_practice():
    async with AsyncCheckThatAI(api_key="your-api-key") as client:
        response = await client.chat.completions.create(...)
        return response
2. API Key Security
import os
from pathlib import Path

def get_api_key():
    # Try environment variable first
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        # Try loading from secure file
        key_file = Path.home() / ".config" / "checkthat" / "api_key"
        if key_file.exists():
            api_key = key_file.read_text().strip()
    if not api_key:
        raise ValueError("API key not found. Set OPENAI_API_KEY environment variable.")
    return api_key

client = CheckThatAI(api_key=get_api_key())
3. Model Selection
def get_optimal_model(task_type: str, complexity: str) -> str:
    """Select the best model based on task requirements."""
    if task_type == "fact_checking":
        if complexity == "high":
            return "gpt-5-2025-08-07"  # Most accurate for complex claims
        return "claude-sonnet-4-20250514"  # Good balance
    elif task_type == "creative_writing":
        return "claude-opus-4.1-2025-05-20"  # Creative tasks
    elif task_type == "code_generation":
        return "gpt-5-2025-08-07"  # Strong coding capabilities
    elif task_type == "analysis":
        return "gemini-2.5-pro-002"  # Good analytical reasoning
    else:
        return "gpt-5-2025-08-07"  # Default reliable choice

# Usage
model = get_optimal_model("fact_checking", "high")
response = client.chat.completions.create(model=model, ...)
4. Monitoring and Logging
import logging
import time

from checkthat_ai import CheckThatAI

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class MonitoredClient:
    def __init__(self, api_key):
        self.client = CheckThatAI(api_key=api_key)
        self.request_count = 0
        self.total_tokens = 0

    def chat_completion(self, **kwargs):
        start_time = time.time()
        self.request_count += 1
        try:
            response = self.client.chat.completions.create(**kwargs)
            # Track usage
            if hasattr(response, "usage"):
                self.total_tokens += response.usage.total_tokens
            duration = time.time() - start_time
            logger.info(f"Request {self.request_count} completed in {duration:.2f}s")
            return response
        except Exception as e:
            logger.error(f"Request {self.request_count} failed: {e}")
            raise
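For reference, an illustrative call through the wrapper above (MonitoredClient and chat_completion are names from this example, not the SDK itself):
# Every request is now counted, timed, and logged
monitored = MonitoredClient(api_key="your-api-key")
response = monitored.chat_completion(
    model="gpt-5-2025-08-07",
    messages=[{"role": "user", "content": "Hello!"}]
)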
Migration from OpenAI SDK
If you're already using the OpenAI SDK, migration is straightforward:
Before (OpenAI)
from openai import OpenAI

client = OpenAI(api_key="your-openai-key")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
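After (CheckThat AI)
Only the import and the client class change; the call itself stays the same (a minimal sketch based on the drop-in compatibility described in this guide):
from checkthat_ai import CheckThatAI

client = CheckThatAI(api_key="your-openai-key")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)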
Zero Breaking Changes: The API is 100% compatible with OpenAI SDK patterns.
Support and Contributing
Getting Help
When reporting issues, please include your Python version, SDK version, and a minimal reproducible example.