
fix: Use storage class from annotation for host-assisted PVC operation #3585

Open · wants to merge 1 commit into base: main
Conversation

TheRealSibasishBehera

What this PR does / why we need it:
Modify the `getStorageClassCorrespondingToSnapClass` function to use the storage class from the annotation for host-assisted PVC operations.
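For illustration, a minimal sketch of the intended lookup order, assuming the storage class is read from a DataVolume annotation (the annotation key, helper name, and fallback logic below are assumptions, not the patch verbatim):

```go
// Sketch only: the annotation key, helper name, and fallback are assumptions
// for illustration, not copied from the actual patch.
package sketch

import (
	"context"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const annSourceStorageClass = "cdi.kubevirt.io/storage.class" // hypothetical key

func storageClassForHostAssistedSource(ctx context.Context, c client.Client,
	dvAnnotations map[string]string, driver string) (string, error) {
	// 1. Prefer a storage class explicitly requested via the DataVolume annotation.
	if sc, ok := dvAnnotations[annSourceStorageClass]; ok && sc != "" {
		return sc, nil
	}
	// 2. Fall back to the previous behavior: any storage class whose provisioner
	//    matches the snapshot's CSI driver (first match wins).
	scList := &storagev1.StorageClassList{}
	if err := c.List(ctx, scList); err != nil {
		return "", err
	}
	for _, sc := range scList.Items {
		if sc.Provisioner == driver {
			return sc.Name, nil
		}
	}
	return "", fmt.Errorf("no storage class found for CSI driver %q", driver)
}
```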

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #3530

Special notes for your reviewer:

Release note:

NONE

modify the `getStorageClassCorrespondingToSnapClass` function to use the storage class from the annotation for host-assisted PVC operations.

Fixes: kubevirt#3530

Signed-off-by: Sibasish Behera <[email protected]>
@kubevirt-bot added the release-note-none (Denotes a PR that doesn't merit a release note), dco-signoff: yes (Indicates the PR's author has DCO signed all their commits), and size/M labels on Jan 8, 2025
@kubevirt-bot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign aglitke for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@TheRealSibasishBehera (Author)

cc: @akalenyu

@akalenyu (Collaborator) commented Jan 8, 2025

Thank you for the PR! 🙏

What do you think about capturing the volumesnapshotclass->storageclass mapping in the volumesnapshotclass annotations instead? This way, the admin configures it once at the cluster level and subsequent restores should work seamlessly for any DV.
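As a rough sketch of that idea (the annotation key below is hypothetical, not an existing CDI annotation), a restore could resolve the storage class directly from the VolumeSnapshotClass:

```go
// Hypothetical sketch: the annotation key is an assumption, not an existing CDI API.
package sketch

import (
	"context"

	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v6/apis/volumesnapshot/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const annStorageClassForRestore = "cdi.kubevirt.io/restore.storage.class" // hypothetical key

// storageClassFromSnapClass returns the storage class recorded on the
// VolumeSnapshotClass annotation, if the admin has set one.
func storageClassFromSnapClass(ctx context.Context, c client.Client, snapClassName string) (string, bool, error) {
	vsc := &snapshotv1.VolumeSnapshotClass{}
	if err := c.Get(ctx, types.NamespacedName{Name: snapClassName}, vsc); err != nil {
		return "", false, err
	}
	sc := vsc.Annotations[annStorageClassForRestore]
	return sc, sc != "", nil
}
```

An admin would then set the annotation once per VolumeSnapshotClass, for instance with `kubectl annotate volumesnapshotclass <name> <key>=<storage-class>`, and any DV restore referencing that class would pick it up.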

@kubevirt-bot (Contributor)

@TheRealSibasishBehera: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-cdi-linter | dcf7cfb | link | false | /test pull-cdi-linter |
| pull-cdi-generate-verify | dcf7cfb | link | true | /test pull-cdi-generate-verify |
| pull-containerized-data-importer-e2e-destructive | dcf7cfb | link | true | /test pull-containerized-data-importer-e2e-destructive |

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@@ -450,7 +450,12 @@ func (r *SnapshotCloneReconciler) createTempHostAssistedSourcePvc(dv *cdiv1.Data
 	return nil
 }
 
-func (r *SnapshotCloneReconciler) getStorageClassCorrespondingToSnapClass(driver string) (string, error) {
+func (r *SnapshotCloneReconciler) getStorageClassCorrespondingToSnapClass(dv *cdiv1.DataVolume, driver string) (string, error) {
Collaborator

So this only covers the non-CSI path (initLegacyClone); I assume you're more interested in the CSI one?

@TheRealSibasishBehera (Author)

The changes I made are for the cases where TempHostAssistedSourcePvc is created, i.e., the CSI driver is not present, or the CSI drivers for the source and target are mismatched. IIRC both are triggered by initLegacyClone; the first one is my use case.

Happy to work on any similar cases. Are you referring to the remaining case, where the CSI driver for the target PVC and the snapshot match? It tries a direct snapshot-to-target-PVC restore, but fails because of a CSI incompatibility issue?
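Roughly, the two conditions described above amount to something like this simplified paraphrase (placeholder names, not the actual CDI reconciler code):

```go
package sketch

// needsHostAssistedClone is a simplified paraphrase of when the temporary
// host-assisted source PVC is needed; not the actual CDI code.
func needsHostAssistedClone(sourceDriver, targetDriver string, csiDriverPresent bool) bool {
	if !csiDriverPresent {
		// No CSIDriver object for the provisioner: a smart (CSI) clone is impossible.
		return true
	}
	if sourceDriver != targetDriver {
		// The snapshot and the target PVC are backed by different CSI drivers.
		return true
	}
	return false
}
```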

@akalenyu (Collaborator) Jan 8, 2025

Oh right, I forgot the issue details. Let me stress again that what you're describing is very unusual today.

> the CSI driver is not present

> It tries a direct snapshot-to-target-PVC restore, but fails because of a CSI incompatibility issue?

Yes, here's a ref to this path:

func getStorageClassNameForTempSourceClaim(ctx context.Context, vsc *snapshotv1.VolumeSnapshotContent, client client.Client) (string, error) {
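Conceptually, that path has to pick a storage class by matching the driver recorded on the VolumeSnapshotContent; a rough sketch (not the actual implementation) of why that pick can be ambiguous when several storage classes share a provisioner:

```go
package sketch

import (
	"context"
	"fmt"

	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v6/apis/volumesnapshot/v1"
	storagev1 "k8s.io/api/storage/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// storageClassForSnapContent is a rough sketch, not the actual CDI code:
// it resolves a storage class from the CSI driver recorded on the
// VolumeSnapshotContent. When several storage classes share that provisioner,
// whichever is listed first wins; that is the ambiguity an annotation-based
// override would remove.
func storageClassForSnapContent(ctx context.Context, c client.Client,
	vsc *snapshotv1.VolumeSnapshotContent) (string, error) {
	scList := &storagev1.StorageClassList{}
	if err := c.List(ctx, scList); err != nil {
		return "", err
	}
	for _, sc := range scList.Items {
		if sc.Provisioner == vsc.Spec.Driver {
			return sc.Name, nil
		}
	}
	return "", fmt.Errorf("no storage class matches CSI driver %q", vsc.Spec.Driver)
}
```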

@TheRealSibasishBehera (Author)

Thanks! I will work on this

@akalenyu (Collaborator) Jan 9, 2025

So, just to poke at the setup a little bit again: the provisioner has VolumeSnapshot support (which I thought implied it's CSI), but there's no CSIDriver object?

@TheRealSibasishBehera (Author)

> Thank you for the PR! 🙏
>
> What do you think about capturing the volumesnapshotclass->storageclass mapping in the volumesnapshotclass annotations instead? This way, the admin configures it once at the cluster level and subsequent restores should work seamlessly for any DV.

Yes, it sounds good that the configuration can be set once.

If I understand correctly, we would refer to the VolumeSnapshotClass from the source snapshot during restores. However, I am wondering about the scenario where the VolumeSnapshotClass is deleted after the snapshot is created. In that case we can't refer to the appropriate storage class from the VolumeSnapshotClass annotation.

@akalenyu (Collaborator) commented Jan 8, 2025

>> Thank you for the PR! 🙏
>> What do you think about capturing the volumesnapshotclass->storageclass mapping in the volumesnapshotclass annotations instead? This way, the admin configures it once at the cluster level and subsequent restores should work seamlessly for any DV.
>
> Yes, it sounds good that the configuration can be set once.
>
> If I understand correctly, we would refer to the VolumeSnapshotClass from the source snapshot during restores. However, I am wondering about the scenario where the VolumeSnapshotClass is deleted after the snapshot is created. In that case we can't refer to the appropriate storage class from the VolumeSnapshotClass annotation.

I think that is either unlikely or impossible when a snapshot that was provisioned by it still exists
(that snapshot cannot be deleted forever in these cases).

@TheRealSibasishBehera (Author)

> I think that is either unlikely or impossible when a snapshot that was provisioned by it still exists (that snapshot cannot be deleted forever in these cases)

Got it, I’ll update the implementation

Also, do you think this ability needs to be documented anywhere?

@akalenyu (Collaborator) commented Jan 9, 2025

>> I think that is either unlikely or impossible when a snapshot that was provisioned by it still exists (that snapshot cannot be deleted forever in these cases)
>
> Got it, I’ll update the implementation
>
> Also, do you think this ability needs to be documented anywhere?

Yes, feel free to drop a hint at:

# Data Volume cloning from a VolumeSnapshot source
## Introduction

Successfully merging this pull request may close these issues.

Random Storage Class Selection During Host-Assisted Snapshot Restore