Save raw responses for base models #284
Conversation
Added a `raw_responses` attribute to the smolagents models. At every call, the model's raw response object is appended to this list.
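A minimal sketch of the idea, assuming an OpenAI-style client (the attribute name `raw_responses` comes from this PR; the surrounding class is illustrative, not the actual smolagents implementation):

```python
class OpenAIServerModel:
    """Illustrative model wrapper; not the real smolagents class."""

    def __init__(self, model_id, client):
        self.model_id = model_id
        self.client = client
        # New in this PR: keep every raw response object around.
        self.raw_responses = []

    def __call__(self, messages, **kwargs):
        response = self.client.chat.completions.create(
            model=self.model_id, messages=messages, **kwargs
        )
        # Store the full response before parsing discards any metadata.
        self.raw_responses.append(response)
        return response.choices[0].message.content
```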
@matthewcarbone the LLM outputs are already all saved at the agent level under …
@aymeric-roucher checking with:

```python
model = AzureOpenAIServerModel(
    model_id=...,
    api_key=api_key,
    api_version=...,
    azure_endpoint=base_url,
)
agent = MultiStepAgent(tools=[], model=model, add_base_tools=False, max_steps=1)
result = agent.run("Why is the sky blue?")
```

It does not appear that the full … (see #282). Also, I'm not specifically talking about the text output; I'm talking about the metadata from the call, which in my case is accessed via …
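For context, a sketch of the kind of metadata at stake, assuming an Azure OpenAI chat completions payload (these field names come from the Azure API, not from smolagents, and how they surface on the SDK object may vary):

```python
# Illustrative only: moderation metadata carried by the raw Azure
# response, which is lost once only the message text is extracted.
for choice in response.choices:
    # content_filter_results is part of Azure's response payload;
    # it is assumed here to be reachable as an attribute.
    print(choice.content_filter_results)
print(response.usage)  # token usage also lives on the raw response
```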
@aymeric-roucher it also occurred to me we could turn the other attributes (…). Let me know if you're on board, or if you feel I'm missing something!
@aymeric-roucher moving the discussion back here so as to not bog down the separate discussion in #270 😄
I certainly agree, but I think the first step is to store everything at "maximum fidelity". The issue of serializing at different levels of fidelity could be challenging, since the response format of every LLM API can be different, right? How do you want to proceed here? I can try to modify this PR accordingly and store things at the agent level, but that might be difficult to do without breaking changes at this stage. Thoughts?
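One hedged way to square "maximum fidelity" storage with heterogeneous response formats (a sketch assuming nothing about smolagents internals; `to_serializable` is a hypothetical helper):

```python
def to_serializable(raw_response):
    """Best-effort serialization of an arbitrary provider response.

    Hypothetical helper: keeps the full payload when the SDK object
    supports it, and degrades gracefully otherwise.
    """
    if hasattr(raw_response, "model_dump"):  # Pydantic v2 objects (e.g. openai SDK)
        return raw_response.model_dump()
    if hasattr(raw_response, "to_dict"):     # some SDKs expose to_dict()
        return raw_response.to_dict()
    if isinstance(raw_response, dict):       # plain JSON responses
        return raw_response
    return repr(raw_response)                # last resort: lossy string form
```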
@aymeric-roucher not trying to poke too much, but this repo is being developed at a breakneck pace and I just don't want this to get lost 😄 Any further thoughts on this? I think it would probably be easiest to store things at the …
Hey @matthewcarbone, I'm working on improving our logging system (to have a separate logger, etc.) and including your ideas of "storing everything".
@clefourrier is there anything I can do to contribute? I'm strongly interested in learning the system here. Also, feel free to close this PR if you feel it conflicts with the changes you're going to make. No need for it to clog up the open PRs list 👍
I'll probably ask you to take a look once it feels nice, if you've got the bandwidth :)
Sure, sounds good to me! Feel free to close this if you wish. If there's currently an open PR or issue discussing this, please do point me to it!
It's here, but still a WIP.
It seems the discussion is now in: … I'm closing this PR in favor of that. Please feel free to reopen if you think this could add something different. And thank you!
Currently, it seems that the full response metadata is not saved anywhere in the agent logs or in the model. I could be missing something, but from the way `response` is parsed here, it would seem that all information, e.g. content filter results, is lost during the calls. I would very much like to access this information as an extra safety layer. In addition, I think it would probably be good to allow users to access this, at least at the model level.

I've implemented this very simple change, which does seem to work in my local testing! Really nothing too crazy, and I'm happy to modify the PR in whatever way makes sense if the maintainers feel it needs additions/changes. Now, you can access the model's raw responses via `model.raw_responses`. I think it might make more sense to implement this at the agent level, but I figure it's a start.
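A hedged usage sketch of what the PR enables, mirroring the `AzureOpenAIServerModel`/`MultiStepAgent` snippet earlier in the thread (`raw_responses` is the attribute added here; what gets printed is illustrative):

```python
agent = MultiStepAgent(tools=[], model=model, add_base_tools=False, max_steps=1)
result = agent.run("Why is the sky blue?")

# After the run, every provider response object is still available,
# not just the parsed text that made it into the agent's logs.
for raw in model.raw_responses:
    print(type(raw))  # e.g. the SDK's ChatCompletion object
```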