Publish the KubeVirt image on quay #515
Conversation
Signed-off-by: Roman Mohr <[email protected]>
We should put this in the coreos Quay org, not coreos-assembler. (Arguably coreos-assembler shouldn't be in coreos-assembler either, but historical reasons.)
Signed-off-by: Roman Mohr <[email protected]>
Well, for consistency we should be pushing the main image (what is now …). There's also #359 (comment).

Should it be something like quay.io/coreos/kubevirt:stable to keep the fcos name for things like the osimage? Or would you prefer more complex tag names?

Let's take a look at these things from a high level and try to hammer out some strategy on organization/naming: coreos/fedora-coreos-tracker#1171
```
--upload \
--registry quay.io/coreos \
--name fedora-coreos \
--tag ${params.STREAM}
```
buildextend-kubevirt doesn't push a multi-arch manifest, and the architecture isn't being included in the tag name here, so this isn't multi-arch aware.

Also, this code moves the tag at build time, rather than release time. Updating the ${params.STREAM} tag should be deferred until release.Jenkinsfile, so we don't commit to promoting a new release prematurely.

A good solution for both issues might be to drop --tag here, and in release.Jenkinsfile, create and push a multi-arch manifest that references the version-specific tags for each arch we're building for.
Sounds good. Will do that.
Proposed cosa changes for putting the tag in stream metadata are in coreos/coreos-assembler#2817. They make some assumptions about what the pipeline code here will do; see the PR description over there for context.
Discussed this with @jlebon OOB. We think the right path forward is:

- Build the ociarchive in build.Jenkinsfile, but do not upload it here.
- In release.Jenkinsfile, invoke a cosa command that does the following:
  - Locally creates a multi-arch manifest (e.g. podman manifest create)
  - Walks the meta.json files for all arches, imports the KubeVirt ociarchives, and adds them to the manifest (e.g. podman manifest add)
  - Uploads the manifest to one or more tags specified on the command line (e.g. podman manifest push)
  - For each arch that includes a KubeVirt image, adds the destination image tags to the arch's meta.json
- release.Jenkinsfile will then re-upload the meta.jsons and generate release metadata.
- For each arch, the release metadata generator should select a tag from the list in meta.json (perhaps the first tag for simplicity) and generate the image pullspec from that.

This allows us to support multi-arch via fully arch-independent tags, and also properly encodes the tags in meta.json rather than having the release metadata generator synthesize them as coreos/coreos-assembler#2817 does.

Note that it also removes the automatic generation of version-arch tags performed by kubevirt.py. That way we don't have any arch-specific tags at all, and the tagging is ultimately controlled by the pipeline rather than by cosa code. For now, we think there shouldn't be any version-specific tags at all: they complicate garbage collection, they provide an affordance that encourages pinning, and they're not strictly necessary for pinning (because the image digest will still be available in stream metadata). If that turns out to be a problem in practice (e.g. because the container registry GCs unreferenced images too quickly), the pipeline can always add versioned tags later.
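As a rough sketch of the command proposed above (this is not the eventual cosa CLI; the paths, the meta.json key, and the repo below are all assumptions):

```
# Hypothetical sketch of the proposed cosa step; the build paths and the
# assumed meta.json layout (images.kubevirt.path) are illustrative only.
REPO=quay.io/coreos/fedora-coreos-kubevirt
LIST=kubevirt-manifest

podman manifest create "${LIST}"
for arch in x86_64 aarch64; do
    builddir="builds/latest/${arch}"
    ociarchive=$(jq -r '.images.kubevirt.path' "${builddir}/meta.json")
    # Import the per-arch ociarchive into local storage, then add it
    img=$(podman pull -q "oci-archive:${builddir}/${ociarchive}")
    podman manifest add "${LIST}" "${img}"
done
# Push to each tag specified on the command line (here just the stream name),
# then record the pushed tag(s) back into each arch's meta.json.
podman manifest push --all "${LIST}" "docker://${REPO}:stable"
```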
@rmohr, could you take a shot at implementing this?
Some more info that might be helpful:

Check out this bit for existing code in release.Jenkinsfile which pushes the oscontainer to Quay.io. I think we'll eventually need something similar to what's being proposed here for the oscontainer (we're not currently adding those to meta.json or release or stream metadata, but we probably will in the future, and of course the multi-arch bit is just as relevant).

> release.Jenkinsfile will then re-upload the meta.jsons and generate release metadata.

Note this bit will already take care of that part. So we just need to make sure the new bits happen before that.
Would it be ok if I still upload the images where the ociarchive is already available, but without creating a tag? In the later stage I would then just create a multi-arch image based on the meta.json file which I get, and only upload a stream-name-tagged multi-arch manifest referencing the previously pushed digest-only images.

Edit: it would keep the meta.json generation where it is right now, and would only promote the final image by tagging it into a multi-arch manifest which can then be referenced easily.

I'm okay with that approach, assuming we're not at risk of the untagged container being garbage-collected between build and release. The release stage would still need to modify the meta.json though, to add the tag(s).
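A rough sketch of that digest-only flow (the repo and digest placeholders are illustrative, and this assumes the registry retains untagged images between build and release):

```
# Build time: push each arch's image by digest only, with no tag (illustrative).
skopeo copy oci-archive:fedora-coreos-kubevirt.x86_64.ociarchive \
    docker://quay.io/coreos/fedora-coreos-kubevirt@sha256:<x86_64-digest>

# Release time: promote by creating a stream-tagged manifest that
# references the previously pushed digest-only images.
podman manifest create kubevirt-release
podman manifest add kubevirt-release \
    docker://quay.io/coreos/fedora-coreos-kubevirt@sha256:<x86_64-digest>
podman manifest push --all kubevirt-release \
    docker://quay.io/coreos/fedora-coreos-kubevirt:stable
```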
Something like this could enable us to run kubevirt tests on the images as well.
With that design, we'd need to decide whether the digest pullspecs in the stream metadata should refer to the arch-specific images or to the multi-arch manifest.
@jlebon, any thoughts on this change?
Good question.
Hmm, I wasn't aware you could even push an image without any tags. I was trying this locally but hitting:

```
$ podman push quay.io/jlebon/pet@sha256:3cc9f1da962d1a9492664497ad8b18a01ee12ccd621e1a5cb7b902c62a5b1429 --remove-signatures
Error: Copying this image requires changing layer representation, which is not possible (image is signed or the destination specifies a digest)
```
Anyway, also OK overall with the approach if we get it working. Slightly concerned about the inherent race we'd be introducing but I suspect it shouldn't be an issue in practice (and if it is, we can add some temporary tags which the release job can clean up after pushing the manifest image).
> With that design, we'd need to decide whether the digest pullspecs in the stream metadata should refer to the arch-specific images or to the multi-arch manifest.
One minor argument I just thought of for the latter: you can derive from it the former, but not the other way around IIUC. So providing the multi-arch manifest is strictly more information.
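For instance, the arch-specific digests can be read straight out of the manifest list (the repo name here is illustrative):

```
skopeo inspect --raw docker://quay.io/coreos/fedora-coreos-kubevirt:stable \
    | jq '.manifests[] | {digest, platform}'
```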
> Anyway, also OK overall with the approach if we get it working. Slightly concerned about the inherent race we'd be introducing but I suspect it shouldn't be an issue in practice (and if it is, we can add some temporary tags which the release job can clean up after pushing the manifest image).
Turns out I can push fine (skopeo copy oci-archive:builds/35.20220420.dev.0/x86_64/fedora-coreos-35.20220420.dev.0-kubevirt.x86_64.ociarchive docker://quay.io/rmohr/fedora-coreos@sha256:a3d79f558f2bf032abd9e36b7ae9b2a1799696acc13b53c1cfcec5b1fa6af720) but the image disappears almost immediately from the registry. I will follow your initial suggestion. I just need to re-download the oci-archives to then push them with skopeo (there seems to be no way to reference oci archives from remote locations directly). It is not super-nice but doable. Thanks for the patience.
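For reference, that re-download-then-push step might look roughly like this (BUILDS_URL and the temporary tag are illustrative assumptions, following the temporary-tag idea quoted above):

```
# Fetch the ociarchive back from the build artifacts, then push it with skopeo.
curl -LO "${BUILDS_URL}/fedora-coreos-35.20220420.dev.0-kubevirt.x86_64.ociarchive"
skopeo copy \
    oci-archive:fedora-coreos-35.20220420.dev.0-kubevirt.x86_64.ociarchive \
    docker://quay.io/coreos/fedora-coreos-kubevirt:temp-x86_64
```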
A first implementation in coreos/coreos-assembler#2828. Let me know what you think.
Closing this one since I am not actively working on it. Doing some cleanup. Thanks everyone for your help.
Follow-up of #514 for discussing the final location of the KubeVirt container image on quay.
The idea is that people can reference the image directly based on the stream they want to select:
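For example (the exact repo and tag names were still under discussion in the thread above, so this pullspec is illustrative):

```
podman pull quay.io/coreos/fedora-coreos-kubevirt:stable
```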