Implemented the ability to train rewards in preference comparison against multiple policies #529
Changes from 28 commits
This is fine, even though you could just do integer division of `num_steps // num_agents` manually as a user. I might not have bothered to add this feature, for two reasons: I don't see that passing this flag is clearly more readable than doing the above, and I don't see why a user wishing to specify the total number of steps would want the total to be exactly `num_steps` rather than a multiple of `num_agents`. However, now that you've done it I don't oppose having it, as it doesn't preclude users from still splitting it manually. You might want to add input validation inside `MixtureOfTrajectoryGenerators._partition` to make sure that each agent gets at least one step of training (e.g. raise if `steps < n`).
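For illustration, a minimal sketch of what such a partition with the suggested validation might look like (the function name and signature here are hypothetical; the actual `MixtureOfTrajectoryGenerators._partition` in the PR may differ):

```python
def partition_steps(steps: int, n: int) -> list[int]:
    """Split a total step budget across n agents while preserving the total.

    Raises ValueError if some agent would receive zero steps.
    """
    if steps < n:
        raise ValueError(
            f"Cannot give each of {n} agents at least one step "
            f"out of a budget of {steps}."
        )
    base, remainder = divmod(steps, n)
    # Hand the first `remainder` agents one extra step so the list sums to `steps`.
    return [base + 1] * remainder + [base] * (n - remainder)
```

This keeps the sum exactly equal to `steps` while never leaving an agent with zero training steps.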
I made this the default behavior for the following reason: in RL research papers we often want a specific budget of environment interactions, so I felt the default should respect that budget. However, I do agree that this could be handled at the script level, which would reduce the complexity of the business logic a bit.
That makes sense, thanks for your clarification. I just figured that for a case of e.g. 7 agents, 2113 steps instead of 2114 (a multiple of 7) would not really make a difference, but maybe it does to some users.
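Concretely, with these numbers plain integer division silently drops the remainder of the budget:

```python
num_steps, num_agents = 2113, 7
per_agent = num_steps // num_agents   # 301 steps per agent
total_used = per_agent * num_agents   # 2107: six steps of the budget go unused
leftover = num_steps - total_used     # 6
```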
I don't think this would make a difference to most users either. I'm mainly concerned with what the default behavior should be, and I believe it should be to split training steps equally among the agents, so that `num_steps` records the actual budget of environment interactions across all agents. We could handle this at the script level, but I suspect that downstream users of the core API would also just end up implementing something like `num_steps // num_agents` themselves.
Having the default behavior preserve the total training timestep budget seems good to me. Integer division (`//`) vs. something more clever that handles remainders seems unimportant -- we usually have hundreds of thousands of timesteps and single-digit numbers of agents.
There'll be hilariously little diversity between agents in this case, but not much we can do there. (Support loading different agents I guess? But that seems overkill for what's a rare use case.)
Not "single agent" in the sense that it's... zero agents? This is a bit counterintuitive.
We can't save checkpoints if there are multiple agents? That's a little sad.