diff --git a/docs/docs/usage-guide/changing_a_model.md b/docs/docs/usage-guide/changing_a_model.md
index f5d9b333..e6926a99 100644
--- a/docs/docs/usage-guide/changing_a_model.md
+++ b/docs/docs/usage-guide/changing_a_model.md
@@ -379,7 +379,7 @@ To bypass chat templates and temperature controls, set `config.custom_reasoning_
 
 ```toml
 [config]
-reasoning_efffort = "medium" # "low", "medium", "high"
+reasoning_effort = "medium" # "low", "medium", "high"
 ```
 
 With the OpenAI models that support reasoning effort (eg: o4-mini), you can specify its reasoning effort via `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
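
For reference, a minimal sketch of how the corrected key would be used once this docs fix lands; placing it in a repo-level `.pr_agent.toml` and bumping the value to `"high"` are illustrative assumptions, not part of this change:

```toml
# Sketch of a repo-level configuration (e.g. .pr_agent.toml) using the corrected key name.
# Overriding the default "medium" with "high" is shown only for illustration.
[config]
reasoning_effort = "high" # ask supported OpenAI reasoning models (e.g. o4-mini) for deeper reasoning
```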