You can use our API to send individual queries or hold long-running conversations with chat models. You do not need to configure a system prompt for claim normalization tasks, or even for regular chat queries: our backend API endpoints are configured with our custom system prompts to handle both generic and claim normalization tasks. Queries run against a model of your choice, and you are welcome to use any model from multiple providers.
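As a minimal sketch of a single query, assuming the non-streaming response exposes the same OpenAI-style `choices` shape as the streaming chunks shown later on this page:

```python
from checkthat_ai import CheckThatAI
import os

# Initialize the client with the API key for your chosen provider
client = CheckThatAI(api_key=os.getenv("OPENAI_API_KEY"))

# No system prompt required; the backend applies its own custom prompts
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Normalize this claim: coffee completely cures headaches"}
    ]
)

# Assumption: non-streaming responses expose choices[0].message.content,
# mirroring the streaming delta shape used elsewhere in these docs
print(response.choices[0].message.content)
```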
Retrieve a list of available models using the `client.models.list()` method:
```python
from checkthat_ai import CheckThatAI
import os

# Initialize client with your provider API key
client = CheckThatAI(api_key=os.getenv("OPENAI_API_KEY"))

# Get all available models
models = client.models.list()

# Print models by provider
for provider in models.models_list:
    print(f"\n{provider['provider']} Models:")
    for model in provider['available_models']:
        print(f"  - {model['name']}: {model['model_id']}")
```
Models List Response
{ "models_list": [ { "provider": "OpenAI", "available_models": [ { "name": "GPT-4o", "model_id": "gpt-4o", "description": "Most capable GPT-4 model, optimized for chat and code" }, { "name": "GPT-5", "model_id": "gpt-5", "description": "Latest GPT-5 model with enhanced reasoning" } ] } ]}
To stream tokens as they are generated, pass `stream=True` to `chat.completions.create()` and iterate over the returned chunks:

```python
from checkthat_ai import CheckThatAI
import os

client = CheckThatAI(api_key=os.getenv("OPENAI_API_KEY"))

# Enable streaming with stream=True
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Tell me about the latest developments in renewable energy"}
    ],
    stream=True,
    temperature=0.7,
    max_tokens=1500
)

# Process streaming chunks
print("Streaming response: ", end="", flush=True)
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print("\n")  # New line when complete
```
Streaming Benefits: Streaming responses provide a better user experience for long-form content, allow real-time interaction, and can reduce perceived latency in chat applications.
Memory Management: When using streaming, especially with async operations, ensure you properly close clients and handle exceptions to prevent memory leaks.
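As a minimal sketch of that cleanup pattern, assuming the SDK exposes an OpenAI-style async client; the `AsyncCheckThatAI` name and its `close()` coroutine are assumptions here, so check the SDK reference for the actual interface:

```python
import asyncio
import os

# Hypothetical async client; assumes the SDK mirrors the OpenAI-style
# AsyncOpenAI interface (the name AsyncCheckThatAI is an assumption)
from checkthat_ai import AsyncCheckThatAI

async def stream_chat() -> None:
    client = AsyncCheckThatAI(api_key=os.getenv("OPENAI_API_KEY"))
    try:
        stream = await client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Summarize recent renewable energy news"}],
            stream=True,
        )
        async for chunk in stream:
            if chunk.choices[0].delta.content:
                print(chunk.choices[0].delta.content, end="", flush=True)
    finally:
        # Close the client even if streaming raises, so HTTP connections
        # are released and long-running apps do not leak resources
        await client.close()

asyncio.run(stream_chat())
```

If the client supports it, an `async with` block is an equivalent alternative that performs the same cleanup automatically.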