Add labels as a criteria for volume policy #8713

Open

wants to merge 2 commits into base: main
Conversation

shubham-pampattiwar
Collaborator

Thank you for contributing to Velero!

Please add a summary of your change

Implementation for design #8503

Does your change fix a particular issue?

Fixes #8256



codecov bot commented Feb 20, 2025

Codecov Report

Attention: Patch coverage is 73.77049% with 16 lines in your changes missing coverage. Please review.

Project coverage is 59.51%. Comparing base (9235fe1) to head (1801a42).
Report is 11 commits behind head on main.

Files with missing lines Patch % Lines
pkg/backup/item_backupper.go 23.07% 7 Missing and 3 partials ⚠️
internal/resourcepolicies/resource_policies.go 78.94% 4 Missing ⚠️
pkg/podvolume/backupper.go 0.00% 2 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #8713   +/-   ##
=======================================
  Coverage   59.51%   59.51%           
=======================================
  Files         371      371           
  Lines       40179    40241   +62     
=======================================
+ Hits        23911    23949   +38     
- Misses      14775    14798   +23     
- Partials     1493     1494    +1     


kaovilai
kaovilai previously approved these changes Feb 20, 2025
Member

@kaovilai left a comment

lgtm. This matches the current design.

sseago
sseago previously approved these changes Feb 20, 2025
@@ -122,11 +135,14 @@ func (p *Policies) match(res *structuredVolume) *Action {
return nil
}

func (p *Policies) GetMatchAction(res any) (*Action, error) {
func (p *Policies) GetMatchAction(res any, client crclient.Client) (*Action, error) {
Contributor

We had an internal discussion. We feel it is not a good idea to include a dynamic retrieval in the resource policy filtering module; originally it only did static match comparisons --- static object data against static policy conditions. Reasons:

  • The resource policy filtering is a very fundamental module that may be used by any component, even 3rd-party plugins, and an API server connection may not always be available
  • We cannot always assume that the object we are checking comes from the current API server, e.g., if we use the filter during restore, the PVC should come from the backed-up data, not from the cluster

Contributor

Therefore, we suggest checking the PVC labels from the PVC object instead of the PV object. The existing checks on the PV object stay as is.
Let us know if that works.

Collaborator

@Lyndon-Li Everything else is checked on PV (size, nfs, etc). This was added to look up the PVC for the PV. The problem is that we expect that users will generally have labels on PVCs but not on PVs since the PVCs are created explicitly, while the PVs are provisioned for them. Not only that, non-privileged users aren't even allowed to edit PVs -- so if the label selector is applied to the PVs instead of the PVCs, then the usefulness of this feature is significantly reduced.

Also, these policies don't really make sense on restore -- we're deciding how to back up the volumes (fs-backup vs snapshot vs. no data backup) -- at restore, we're stuck with the backup-time decisions and can only restore via the backup method.
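The label-matching behavior being discussed can be illustrated with a small, stdlib-only sketch. This is not Velero's implementation (the PR's commits indicate it uses the apimachinery labels selector's Matches method); the `matchLabels` helper below only mirrors the semantics of a plain `matchLabels`-style selector, and the sample labels are made up:

```go
package main

import "fmt"

// matchLabels reports whether every key/value pair in selector is present
// in lbls, mirroring the semantics of a Kubernetes matchLabels selector.
func matchLabels(selector, lbls map[string]string) bool {
	for k, v := range selector {
		if lbls[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// Labels a user might place on a PVC (PVs are typically provisioned
	// automatically and carry no user labels, hence matching on the PVC).
	pvcLabels := map[string]string{"app": "database", "env": "prod"}

	fmt.Println(matchLabels(map[string]string{"app": "database"}, pvcLabels)) // true
	fmt.Println(matchLabels(map[string]string{"app": "cache"}, pvcLabels))    // false
}
```

An empty selector matches every volume under these semantics, which is consistent with how label selectors behave elsewhere in Kubernetes.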

Collaborator

@shubham-pampattiwar Unless there's a way to do this from the PVC object -- but that may not be possible, since we need to find the first matching policy, so we need to go through all the policies in order for each volume.

Collaborator

@Lyndon-Li Talking to @shubham-pampattiwar just now, I think we found another way to do this that doesn't require passing in a client. In fact, in most cases we already have a PVC obj in memory that we can use.

Collaborator Author

@Lyndon-Li I discussed this with @sseago: what if we just introduce a new struct and pass that to the GetMatchAction function?

type VolumeFilterData struct {
    pv  PersistentVolume
    vol Volume
    pvc PersistentVolumeClaim
}
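A minimal, self-contained sketch of that idea, with placeholder types standing in for the corev1 objects (the type and field names here are illustrative, not Velero's actual API):

```go
package main

import "fmt"

// Placeholder stand-ins for the Kubernetes core/v1 types; illustrative only.
type PersistentVolume struct{ Name string }
type Volume struct{ Name string }
type PersistentVolumeClaim struct{ Labels map[string]string }

// VolumeFilterData bundles everything a policy match might need, so no
// API-server client has to be passed into GetMatchAction; any field may be nil.
type VolumeFilterData struct {
	PV  *PersistentVolume
	Vol *Volume
	PVC *PersistentVolumeClaim
}

// pvcLabels returns the claim's labels, or nil when no PVC was supplied.
func pvcLabels(d VolumeFilterData) map[string]string {
	if d.PVC == nil {
		return nil
	}
	return d.PVC.Labels
}

func main() {
	d := VolumeFilterData{
		PV:  &PersistentVolume{Name: "pv-1"},
		PVC: &PersistentVolumeClaim{Labels: map[string]string{"app": "db"}},
	}
	fmt.Println(pvcLabels(d)["app"]) // prints "db"
}
```

Because the struct carries whole objects rather than extracted fields, callers that already hold a PVC in memory can populate it directly, and later filters (e.g. on PVC name) need no signature change.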

Contributor

@Lyndon-Li Feb 25, 2025

First, we agree on introducing a new struct for GetMatchAction.
Second, to keep it flexible, it would be better to pick only the required fields from Volume, PersistentVolume, and PersistentVolumeClaim and add them to the new struct. There is no need to couple it to those k8s objects, since we don't need all of their fields.

Sorry if we didn't make the suggestion clear.

Collaborator Author

it's better to pick the required fields from Volume, PersistentVolume and PersistentVolumeClaim and add them to this new struct

@Lyndon-Li We already extract the necessary information for filtering and store it in the structuredVolume struct. Given that, I’m unsure about the need to extract the same information again only to reassign it to structuredVolume. Could you clarify the intent behind this change?

If the goal is to refactor how conditions are passed to the volume policy engine, I believe that would be outside the scope of this PR. Let’s align on whether this change is essential here or if it should be handled separately.

Contributor

I get the point that the upper level of the engine filters k8s resources, so it is reasonable to pass k8s resources; I agree with that. I also agree that, given the current code, adding another struct very similar to structuredVolume would be duplication.
But we also should not have expected the engine to need to filter multiple objects, and filtering them separately may not be reasonable. So designing a logical struct that connects/represents these objects (not only volumes, but all objects) may be something we can do better.

For now, let's go with your proposal #8713 (comment).

Collaborator

@Lyndon-Li The other advantage of passing in the whole k8s resource is that if we later decide to allow filtering on PVC name, for example, we won't need to change the struct we pass in -- only the internals of the filtering, so this will be a more stable API.

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add changelog file

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

handle err

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

use labels selector.matches

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

make update

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

remove fetching pvc from volume policy filtering

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>

add more ut coverage

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Collaborator

@sseago left a comment

I think this is going in the right direction, but a couple of suggestions in my review comments.

Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Successfully merging this pull request may close these issues.

RFE: Extend Volume Policy criteria to include label selector