generate-release-meta: make kubevirt image ref use tag on FCOS #2817

Closed · wants to merge 1 commit

Conversation

bgilbert (Contributor)

On FCOS, we want users to pull using image tags. The tags will be configured by code in coreos/fedora-coreos-pipeline#515, presumably by manually executing skopeo etc. since cosa doesn't have code for that. Thus, since we can't read the tag out of meta.json directly, have the release metadata generator predict what the tag will be and put it in the release metadata image field. Put the digest-based pullspec in the digest-ref field added in coreos/stream-metadata-go#46.

On RHCOS, we want to pin to specific digests, so put the digest-based pullspec in both fields.
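
To illustrate the intended result, a sketch (the repo names and tag scheme below are made up, not what the pipeline actually pushes):

```python
# Illustrative release-metadata fragments; repo and tag values are invented.
fcos_kubevirt = {
    # Predicted tag-based pullspec for users to pull.
    "image": "quay.io/fedora/fedora-coreos-kubevirt:stable",
    # Digest-based pullspec, read from meta.json.
    "digest-ref": "quay.io/fedora/fedora-coreos-kubevirt@sha256:...",
}
rhcos_kubevirt = {
    # RHCOS pins to a specific digest in both fields.
    "image": "example.registry.redhat.io/rhcos-kubevirt@sha256:...",
    "digest-ref": "example.registry.redhat.io/rhcos-kubevirt@sha256:...",
}
```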

cc @rmohr @dustymabe

bgilbert (Contributor, Author)

It'd be more architecturally correct to add a cosa wrapper command that creates and uploads the multi-arch manifest and updates meta.json, so generate-release-meta doesn't need to make assumptions. Thoughts?
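
A minimal sketch of what such a wrapper might look like (the function name and meta.json key are hypothetical; the `podman manifest` subcommands are real):

```python
# Hypothetical sketch of a cosa wrapper that pushes a multi-arch manifest
# and records the result in meta.json; name and keys are invented.
import json
import subprocess

def push_kubevirt_manifest(dest: str, per_arch_refs: list[str], meta_path: str):
    # podman manifest create/add/push are real podman subcommands.
    subprocess.run(["podman", "manifest", "create", dest], check=True)
    for ref in per_arch_refs:
        subprocess.run(["podman", "manifest", "add", dest, ref], check=True)
    subprocess.run(["podman", "manifest", "push", dest,
                    f"docker://{dest}"], check=True)

    # Record the pushed pullspec so generate-release-meta can read it
    # from meta.json instead of predicting the tag.
    with open(meta_path) as f:
        meta = json.load(f)
    meta.setdefault("kubevirt", {})["image"] = dest  # key name assumed
    with open(meta_path, "w") as f:
        json.dump(meta, f, indent=2)
```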

The commit message:

On FCOS, we want users to pull using image tags.  However, the pipeline
doesn't move the tag during build, but during release, by pushing the tag
behind the back of cosa.  Predict what that tag will be and put it in the
release metadata `image` field.  Put the digest-based pullspec in the new
`digest-ref` field.

On RHCOS, we want to pin to specific digests, so put the digest-based
pullspec in both fields.

xref coreos/stream-metadata-go#46
xref coreos/fedora-coreos-pipeline#515
openshift-ci bot commented Apr 15, 2022

@bgilbert: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/rhcos | 94ef8e6 | link | true | `/test rhcos` |


dustymabe (Member)

> It'd be more architecturally correct to add a cosa wrapper command that creates and uploads the multi-arch manifest and updates meta.json, so generate-release-meta doesn't need to make assumptions.

yeah.. I think that would be preferable.

Review thread on these lines:

```python
arch_dict["media"].setdefault("kubevirt", {}).setdefault("image", {})
arch_dict["media"]["kubevirt"]["image"].update(input_.get("kubevirt", {}))
if "image" in i:
if args.distro == 'rhcos':
```
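
Reading the snippet in context, the change under discussion presumably amounts to something like this (a sketch, not the actual patch; `i`, `version`, and the repo handling are assumptions):

```python
# Sketch of the conditional under review; not the actual patch.
i = arch_dict["media"]["kubevirt"]["image"]  # assumption: `i` aliases this dict
if "image" in i:
    digest_ref = i["image"]  # assumption: meta.json records a digest pullspec
    if args.distro == 'rhcos':
        # RHCOS: pin to the digest in both fields.
        i["digest-ref"] = digest_ref
    else:
        # FCOS: predict the tag the pipeline will push; keep the digest too.
        repo = digest_ref.split('@')[0]
        i["image"] = f"{repo}:{version}"  # `version` assumed available
        i["digest-ref"] = digest_ref
```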
cgwalters (Member) · Apr 18, 2022

Per coreos/stream-metadata-go#46 (comment), I think I'd say...let's make this a general flag like `--use-container-tags` or something which would default to true; we can then choose to flip it on or off for specific releases (distros) as we choose.
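
Something like this, presumably (hypothetical sketch of the flag proposed above; not an existing cosa option):

```python
# Hypothetical argparse wiring for the proposed flag; not existing cosa code.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--use-container-tags",
                    action=argparse.BooleanOptionalAction, default=True,
                    help="publish tag-based pullspecs in the `image` field "
                         "(disable to pin digests in both fields)")
args = parser.parse_args()

if args.use_container_tags:
    print("image field will use a predicted tag")
else:
    print("image field will pin the digest")
```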

Honestly too at a higher level...I think getting container image references from stream metadata is just weird and awkward. Is anyone actually going to do that versus just pulling directly the published tags?

Member

> Honestly too at a higher level...I think getting container image references from stream metadata is just weird and awkward. Is anyone actually going to do that versus just pulling directly the published tags?

I think so, but maybe I'm wrong.

To me the answer is that you use the published tags until something goes wrong. Then you need to start investigating, so you start digging. Maybe the image tag changed since you first started having the issue and you need access to an older build to do the investigation.

Maybe as a result of this exercise you stop using the published tags and start using digests because you want a little more control like @rmohr describes over in coreos/stream-metadata-go#46 (comment)

Either way I think it's important to provide the convenience factor around a container in a registry with `:latest` semantics, but also to expose the underlying data for people to use if they choose the "less convenient, but more controllable" route, exactly as we do with GCP today.
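
As a concrete picture of that split, a hypothetical consumer (the stream URL is the real FCOS one, but the kubevirt field layout here is an assumption based on coreos/stream-metadata-go#46):

```python
# Hypothetical stream-metadata consumer; field layout is assumed.
import json
import urllib.request

url = "https://builds.coreos.fedoraproject.org/streams/stable.json"
with urllib.request.urlopen(url) as f:
    stream = json.load(f)

# Assumed location of the kubevirt containerdisk entry.
img = stream["architectures"]["x86_64"]["images"]["kubevirt"]
convenient = img["image"]       # floating tag: like a GCP image family
pinned = img["digest-ref"]      # digest pullspec: full control
print(convenient, pinned)
```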

Member

Right, I completely agree. I think we do want digested pull specs in the stream metadata.

Having floating tags in the stream metadata...sure, but I'm saying we could probably just omit them too.

One important detail in all of this to consider is "source of truth". I think what we'd probably want to start is a controller that uses the stream metadata as source of truth, and synchronizes the container tags.

(But...for the oscontainer I think it's the opposite, xref #2685)
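
A minimal sketch of such a sync pass, assuming stream metadata as the source of truth (everything here is hypothetical except the `skopeo copy` invocation):

```python
# Hypothetical tag-sync pass: make the registry tag match stream metadata.
import json
import subprocess
import urllib.request

STREAM_URL = "https://builds.coreos.fedoraproject.org/streams/stable.json"

def sync_tag(tag_ref: str, digest_ref: str) -> None:
    # Retag by copying the digest pullspec onto the tag; `skopeo copy`
    # with docker:// transports is a real invocation.
    subprocess.run(["skopeo", "copy",
                    f"docker://{digest_ref}", f"docker://{tag_ref}"],
                   check=True)

with urllib.request.urlopen(STREAM_URL) as f:
    stream = json.load(f)

img = stream["architectures"]["x86_64"]["images"]["kubevirt"]  # assumed layout
sync_tag(img["image"], img["digest-ref"])
```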

Member

Oh I see. You were thinking that putting the container image floating tag into the stream metadata was not worth it. I was thinking you were arguing against having the digested pull specs, which you are clearly in support of.

I'd still like to have the floating image tag there, similar to the GCP image family. It's theoretically possible we could move the image in the future to some other registry (or another org), and it would be good, I think, to have people consume that info from the stream metadata. Maybe in practice people don't do that, but at least the ones who do will get migrated easily.

Member

Yes, I think we're in total agreement except you added this new paragraph with which I disagree 😄

> It's theoretically possible we could move the image in the future to some other registry (or another org), and it would be good, I think, to have people consume that info from the stream metadata. Maybe in practice people don't do that, but at least the ones who do will get migrated easily.

I don't think we can rely on that. If we move things, we would need to mirror to the old location for quite some time. It's an API.

Another way to look at this, and I'm sure @rmohr will agree: the whole point of the kubevirt "containerdisk" thing is that pulling a VM image is pulling a container image. Stream metadata is currently a FCOS¹-specific invention, but the kubevirt goal is that multiple distributions/operating systems could publish containerdisks, and kubevirt can just consume them directly and natively.

¹ Well and as of recently including RHCOS, so "FCOS and derivatives"

Contributor Author

> I think getting container image references from stream metadata is just weird and awkward. Is anyone actually going to do that versus just pulling directly the published tags?

Maybe not, but that's okay. The purpose of stream metadata is to provide our recommendations for what images users should run and how they should obtain them. Analogously, the design has always included hardcoded strings for GCP, DigitalOcean, and Packet image aliases. Even if a user examines the stream metadata by hand and then hardcodes the tag in their own configs, we've successfully directed them to the correct information source.

Contributor Author

Yeah, I agree that we can't rely on the stream metadata to rapidly move registries. But I don't think that use case is very important as a reason to put the tag in stream metadata.

rmohr (Member) · Apr 19, 2022

By the way, for a little more context on what we do with "generic" containerdisk builds maintained by the kubevirt community: https://quay.io/organization/containerdisks. The images are built automatically based on the various OS-release locations. For instance, centos-stream is built pretty often: https://quay.io/repository/containerdisks/centos-stream?tab=tags.

Note that here too, while providing floating tags like `9` and `8`, we also provide fixed tags for all builds, so that people have clear entrypoints to the versions currently in use. In practice the floating tags are mostly useful for testing (trying future versions early), not for production. So floating tags are mostly a convenience for early testing, since they ease the auto-discovery part of CI pipelines, but even there we don't recommend them for production workloads.

Note: I know that OpenShift has image streams with some release and rollback logic based on images pushed with floating tags, but to my knowledge the k8s and container community in general discourages floating tags (like `:latest`) for production workloads.

Contributor Author

xref the discussion starting at coreos/stream-metadata-go#46 (comment) for why we prefer floating tags.

bgilbert (Contributor, Author)

Discussed this OOB with @jlebon and we'd like to go for a more structured approach where all tags are arch-independent and are recorded in meta.json. Details in coreos/fedora-coreos-pipeline#515 (comment).

@bgilbert bgilbert closed this Apr 19, 2022
@bgilbert bgilbert deleted the kubevirt branch April 19, 2022 21:13