Platform Request: kubevirt #1126
Is it really a requirement to wrap the qcow2 in a container? I can see how it would be useful in some cases, but most VM disk images are available as a qcow2 or raw disk image. Can Kubevirt not import a qcow2 directly (even if it then wrapped it in a container and stored it in a local registry)?
Since it is fully k8s native, it is the preferred and general way to exchange images in kubevirt and make them available. Container registries in public and private clusters are the common denominator which allows us unified delivery, auditing and mirroring flows. A sub-project in KubeVirt also supports importing various sources, including qcow2 over http. None of them are optimal though, and each has its own downsides (for http import, for example, people would have to provide shasums directly to ensure integrity). The most compatible way is a container registry accessible from a cluster. Think about it like the global AMI store. The kubevirt community also started creating a general containerdisk store in quay (https://quay.io/organization/containerdisks). It is backed by tooling which scrapes release sites to pick up newly released images and make them available in a unified way. I think this is a great example of where the superiority of the containerDisk shows: once I know where a disk is, it removes all variation in how to identify and verify it and how to detect and find updates.
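Since integrity comes from referencing images by digest, one way to pin a containerdisk is to resolve its digest first. A small sketch (the repository and tag are just examples from the containerdisks org mentioned above, and jq is only used for readability):

```bash
# Resolve the current manifest digest of a containerdisk (example image/tag)
skopeo inspect docker://quay.io/containerdisks/fedora:latest | jq -r '.Digest'

# The image can then be referenced immutably, e.g.
#   quay.io/containerdisks/fedora@sha256:<digest printed above>
```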
In my opinion if a containerdisk is required then we at least need to create a new artifact for this (i.e. we can't just ship the openstack qcow2 like we do today and go on with life), so we probably need a new platform. I guess an alternative is that we still just ship the openstack qcow2, but we document how to create a containerdisk out of it and then interact with kubevirt to install it.
Yes, having a containerdisk is one of my top priorities. coreos/coreos-assembler#2750 has the create and publish flow already (target locations and credentials to push there are of course not there).
Thanks for filing this @rmohr! I think the additional answers prompted by the template are helpful. My 2c on this is: let's just create a new platform ID and artifact for it. This means duplicating some code in Ignition and Afterburn, but long-term would be cleaner. Some random points:
That all sounds reasonable to me (but it is not a must for kubevirt; we don't have the race issue since we only have config drive). One question though: for HyperShift we want to have an RHCOS image for OpenShift 4.11. If we would introduce a new … Just since I am new in this area, pointing to coreos/coreos-assembler@f0e6c52 again on what exactly I mean, to ensure we talk about the same thing (basically
That sounds great!
I think there have been some mix-ups in some of those answers, but reading through it I think that the environment looks like this:
@rmohr is that a correct summary?
I'm not convinced we need to define an entirely new platform for the KubeVirt use case. My understanding is that the containerdisk format is just a transport mechanism for the OpenStack qcow2 disk image. End users will ultimately be booting guest VMs in OpenStack, so they won't require additional support from the likes of Ignition or Afterburn to bootstrap the VM. (They can use existing support for OpenStack in Ignition/Afterburn.) Shouldn't it be fine to just wrap the OpenStack qcow2 image in a container format without any additional changes? What else is needed to support the use case of delivering the container image? (Edit: This comment was sitting in my browser before I saw the new comments above... so this question might be moot)
Yes
Yes
Yes
Yes, but if you use cloud-init, the hostname will be provided by the platform metadata over these drives too (basically in addition to dhcp at the same time, therefore the overlap).
Correct.
In principle yes. I think the mix-up comes from the fact that in some scenarios we send the same info on multiple channels at the same time :)
@miabbott just to clarify: kubevirt has nothing to do with OpenStack. It is 100% built on kubernetes. It is not an "abstraction layer" from k8s to openstack.
Understood. So it's conceivable that we may want to produce multiple containerdisks for different virt platforms in the future?
It sounds like this is really a new platform, which happens to provide OpenStack-compatible configuration mechanisms. We already support other platforms that made a similar choice. If so, I agree with @jlebon that we should define a new Ignition platform ID rather than trying to reuse openstack. We should not ship a KubeVirt image that identifies itself as the openstack platform.
Yes that describes it perfectly.
The main issue with that is probably that we then provide an image as part of the openstack platform which effectively can't be consumed by openstack :)
Could you point me roughly to the locations where these tools would need to be extended?
Yeah, that's fair.
For Ignition, you can probably do something similar to coreos/ignition@1f710f7 (and also add docs in supported-platforms.md). For Afterburn, you can do something like coreos/afterburn@542ee1b.
Thanks, I will check that out. One last question: if openstack adds new features which you pick up, they may have to be duplicated for kubevirt. Is this a concern?
@bgilbert done. Now I need to figure out how to test it all together :)
Updated the PRs with test results.
@miabbott I am not sure I understand that question. Could you elaborate?
I think I was confusing myself with some of the details from the original RFE that specifically mentioned shipping the OpenStack qcow2 in the containerdisk format. So I was making the assumption we'd have different containerdisks for different hypervisors/virt platforms (i.e. vSphere, RHEV, OpenStack, etc). Looking at the implementation in coreos/coreos-assembler#2750 and the docs in https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk-workflow-example, it's clearer that the qcow2 shipped in the containerdisk is generic. Regardless, I think you are getting the right information from others on this thread, so please continue the good work :)
Not for now. We have some existing code duplication which this will make slightly worse, but that's a problem for another day.
@rmohr We'll also need to update stream-metadata-rust to add the container-disk link.
Opened a PR: coreos/stream-metadata-rust#24. Thanks.
And I just realized that docs should be updated too, and in particular the stream metadata rationale.
We now have https://quay.io/repository/fedora/fedora-coreos-kubevirt to upload our KubeVirt images.
Is this going to be backported to 4.13?
Considering this is a new artifact, we are currently not planning to backport it to 4.13.
For Fedora CoreOS we'll ship to quay.io/fedora/fedora-coreos-kubevirt. See coreos/fedora-coreos-tracker#1126 (comment)
The kubevirt artifact was added to the FCOS pipeline to build in coreos/fedora-coreos-pipeline#860
@dustymabe is this the last step to have a kubevirt FCOS at https://quay.io/repository/fedora/fedora-coreos-kubevirt?
It's there! The images are getting pushed as new builds come in. For example … I'll comment here when that happens.
Thanks! I am already testing testing-devel.
Docs landed in coreos/fedora-coreos-docs#528
@dustymabe, I have been testing testing-devel and rawhide and it looks like neither of them has the qemu guest agent, which is quite useful for kubevirt. Do you know if they can be built with it installed and activated? The one I was using for testing got it correctly configured:
In CoreOS we only have one image across all platforms, so adding the qemu guest agent would make it appear everywhere. This has come up a few times; see #74 as well as coreos/afterburn#458. You could probably comment on #74 with what specific functionality you see missing.
But why is qemu-guest-agent present at https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20230303.3.0/x86_64/fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2.xz ?
Well, you made me double check, but:
But again, the thing that's really important to understand here is that for us, what's in the disk images (qcow2, AMI) is 95% just a "shell" around the container image, which is what we use for OS updates. (Yes, today FCOS uses ostree natively, but it's helpful to still think of the OS content this way.) IOW you can do something like:
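(A sketch, assuming the FCOS container image can be run directly with podman; the stream tag and the package filter are just examples.)

```bash
# Run the FCOS container image and list its packages, filtering for qemu bits
# (the grep runs on the host side of the pipe)
podman run --rm quay.io/fedora/fedora-coreos:stable rpm -qa | grep -i qemu
```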
And whatever you see there is exactly the same stuff that is in the disk image when you boot it.
I think I cooked in the qemu-guest-agent, pushed it to my quay.io/ellorent repo and forgot about it, sorry about this.
The fix for this went into |
In order to implement support for a new cloud platform in Fedora CoreOS, we need to know several things about the platform. Please try to answer as many questions as you can.
KubeVirt is an extension to kubernetes which allows managing VMs side-by-side with container workloads. It just recently entered the CNCF incubation phase (https://www.cncf.io/projects/kubevirt/) and is used for various virtualization products based on kubernetes. A non-exhaustive list of the most famous vendors:
KubeVirt aims to be as feature-rich as solutions like OpenStack or oVirt, allowing the whole infrastructure stack to converge on pure k8s, for unified API paradigms and simpler-to-manage stacks when working with k8s-based infrastructure.
KubeVirt is the official name. Where needed, lowercase kubevirt is used. In the kubernetes API, KubeVirt has its own kubevirt.io group name (kubevirt.io/v1/namespaces/mynamespace/virtualmachineinstances/myvm). From a technical perspective, in the kubernetes world everything is grouped under kubevirt.io.

KubeVirt supports a broad range of boot config sources:
If no user-data is present, the VM has to be configured manually via ssh, VNC or the like. If no ssh keys are given, people can access the VMs via console or vnc to do initial setup.

DHCP is sufficient. cloud-init network config v1 can be used (and is automatically populated if cloud-init is used for bringing in user-data).
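For reference, a minimal cloud-init network config v1 snippet that just asks for DHCP on the first NIC could look like this (the interface name is illustrative):

```yaml
# cloud-init network config, version 1 (illustrative)
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: dhcp
```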
It is helpful if a console is provided on the first virtio-serial device. It is not mandatory to get a properly working VM, but it is very common that our users connect to this console via kubectl virt console {myvm} for various debugging tasks. VNC consoles are also popular. We connect a small VGA device to a qemu VNC server by default. Users can opt out of the VGA device.

There exist a few ways to indicate readiness, all optional:
We support the qemu-guest-agent and recommend it (it gives an overall better integration experience, also for services building on top, since there is first-class API support for retrieving e.g. IP information of additional devices which can be used for routing, ...). We also support ssh-key injection and readiness probes based on the guest agent.
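As an illustration of the guest-agent-based readiness probes mentioned above, a VM spec can carry something like the following sketch (the timings are arbitrary examples; check the KubeVirt docs for the exact schema of your version):

```yaml
# Inside spec.template.spec of a VirtualMachine (illustrative values)
readinessProbe:
  guestAgentPing: {}
  initialDelaySeconds: 120
  periodSeconds: 10
```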
We have containerDisks which basically are qcow2 files wrapped in containers and pushed to arbitrary container registries.
A very simple example of how to create one would be a Dockerfile like the sketch below:
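(A minimal sketch; the qcow2 file name is a placeholder.)

```dockerfile
# Wrap a qcow2 disk image into a containerDisk; KubeVirt expects it under /disk/
# UID/GID 107 is the qemu user inside the virt-launcher environment
FROM scratch
ADD --chown=107:107 fedora-coreos.qcow2 /disk/
```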
which can be built and pushed like this:
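(Again a sketch; the registry path and tag are placeholders for wherever the image should live.)

```bash
# Build and push the containerDisk image (example destination)
podman build -t quay.io/example/fedora-coreos-containerdisk:latest .
podman push quay.io/example/fedora-coreos-containerdisk:latest
```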
The containerDisks can then be imported and used in different KubeVirt-enabled clusters in various ways (one common way, referencing the image directly from a VM spec, is sketched after this list). A non-exclusive list:

ContainerDisks can be hosted on private and public registries and freely mirrored, while integrity can be ensured by referencing the container digests.
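For instance, a VM can reference a containerDisk image directly as an ephemeral disk. A minimal sketch (the image tag and resource sizes are just examples; the Fedora CoreOS containerdisk repo mentioned earlier in this thread is quay.io/fedora/fedora-coreos-kubevirt):

```yaml
# Illustrative KubeVirt VirtualMachine using a containerDisk volume
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fcos-example
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/fedora/fedora-coreos-kubevirt:stable
```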
KubeVirt is to a certain degree bound to the network model of k8s. In k8s every pod gets a different IP/MAC on each pod start. Responsible for this are the CNIs (container network interfaces). As long as the VMs are ephemeral, IP assignment to the VM works perfectly. It gets a little trickier when we talk about persistent root disks. There the guests very often don't identify eth0 again after reboot because of the changed MAC address, and DHCP is not performed. We have ways to work around this with different network models, but if the guest can handle this, it gives the best user experience.

KubeVirt is compatible with the openstack images. For visibility and discoverability, for technical processes and users, it would be helpful to have a containerDisk for kubevirt published and documented, as well as having it listed in the release and stream JSON files, getting its own sections and platform entries. I would however prefer to keep the openstack ignition ID in the guest.

A new kubevirt platform ID is introduced in the following PRs: