Can't use OpenAI o3-mini
config:

```toml
[llm]
model = "o3-mini"
base_url = "https://api.openai.com/v1"
api_key = "sk-proj-valid_key"
#max_completion_tokens = 4096  # ---> (tried this as well)
#max_tokens = 4096             # ---> (tried commenting the line out)
temperature = 0.0
```
```
| ERROR | app.llm:ask_tool:260 - API error: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}
```
I confirm that it works with the gpt-4o model.
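The error message itself points at the fix: OpenAI's reasoning models (the o1/o3 family) reject `max_tokens` and require `max_completion_tokens` instead, while older chat models such as gpt-4o still take `max_tokens`. A minimal sketch of the kind of parameter switch a patch would need — `token_limit_kwargs` is a hypothetical helper name, and the model-prefix check is an assumption, not the project's actual code:

```python
def token_limit_kwargs(model: str, limit: int) -> dict:
    """Return the token-limit keyword argument accepted by the given model.

    Assumption: model names starting with "o1" or "o3" are OpenAI
    reasoning models, which only accept `max_completion_tokens`.
    """
    if model.startswith(("o1", "o3")):
        return {"max_completion_tokens": limit}
    return {"max_tokens": limit}


# The resulting dict can be spread into the chat-completions call,
# e.g. client.chat.completions.create(model=model, messages=...,
#                                     **token_limit_kwargs(model, 4096))
print(token_limit_kwargs("o3-mini", 4096))  # {'max_completion_tokens': 4096}
print(token_limit_kwargs("gpt-4o", 4096))   # {'max_tokens': 4096}
```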
Use the code from PR #411.
Reproduced locally with the deepseek model; Google search requires a proxy to be set up.
```toml
[llm]
model = "deepseek-chat"
base_url = "https://api.deepseek.com"
#api_key = "sk-cpstrqpeumbxgdojvibrgtmmkrhsgmqafvwywfflzwchopat"
api_key = "sk-123"
max_tokens = 4096
temperature = 0.0

[llm.vision]
model = "deepseek-chat"
base_url = "https://api.deepseek.com"
api_key = "sk-123"
```
I wrote a complete tutorial; you can refer to it here: https://mp.weixin.qq.com/s/G1wbK_7SmjMDC_zQ1xx7dA