IronaAI analyzes an array of messages and a list of available LLMs, then recommends the best-suited LLM to respond, letting you handle the LLM call however you like.
## When to use model_select
Use `model_select` if you want to integrate IronaAI into an existing project. The lightweight `ironaai` package minimizes dependency conflicts and works smoothly with your existing code for making LLM requests.
```python
from ironaai import IronaAI  # assumes the ironaai package exposes this client class

client = IronaAI()

selected_models = client.chat.completions.model_select(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the golden ratio."}
    ],
    models=['openai/gpt-4o', 'anthropic/claude-3-5-sonnet-20240620']
)

# Example response output:
# selected_models = 'openai/gpt-4o'
```
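Since `model_select` only returns a model identifier and leaves the LLM call to you, a common next step is to route that identifier to your own provider code. A minimal sketch, assuming the `provider/model` ID format shown above; the `call_openai` and `call_anthropic` handlers are hypothetical placeholders for your existing request code:

```python
# Sketch: dispatch a model ID returned by model_select to your own LLM code.
# The handler functions below are placeholders, not part of the ironaai package.

def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split an ID like 'openai/gpt-4o' into (provider, model)."""
    provider, _, model = model_id.partition("/")
    return provider, model

def call_openai(model: str, messages: list[dict]) -> str:
    # Placeholder for your existing OpenAI request code
    return f"[openai:{model}] response"

def call_anthropic(model: str, messages: list[dict]) -> str:
    # Placeholder for your existing Anthropic request code
    return f"[anthropic:{model}] response"

def route_call(model_id: str, messages: list[dict]) -> str:
    """Send the request to whichever provider model_select recommended."""
    provider, model = parse_model_id(model_id)
    if provider == "openai":
        return call_openai(model, messages)
    if provider == "anthropic":
        return call_anthropic(model, messages)
    raise ValueError(f"Unknown provider: {provider}")
```

With this in place, `route_call(selected_models, messages)` forwards the recommendation to the matching handler.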
## When to use create
Use `create` if you're starting a new project and want to skip writing extra code for LLM calls. Most examples here use `create`, but you can switch to `model_select` anytime.
```python
from ironaai import IronaAI  # assumes the ironaai package exposes this client class

client = IronaAI()

completion_response, selected_models = client.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the golden ratio."}
    ],
    models=['openai/gpt-4o', 'anthropic/claude-3-5-sonnet-20240620']
)
```