Good morning and good afternoon. Thank you for attending the CXL Consortium Introduction to the Compute Express Link CXL Fabric Manager webinar. Today's webinar is presented by Vincent Haché, Director of Systems Architecture at Rambus, and I will hand it off to Vincent to begin the webinar.
Thanks, Elsa. Good morning and afternoon, everyone. Very excited to get an opportunity to present the work we've done on CXL 2.0 Fabric Management. My name is Vincent Haché and I'm Director of Systems Architecture for Rambus' CXL Data Center products.
Today, we're going to start off with a quick overview of CXL: the challenges that motivated its definition and what it has to offer. Then we're going to take a look at the system management requirements, the types of conventions and deployments we expected CXL to be used in, which formed the motivation for the management architecture we've defined. Then there are two key components to fabric management that we will review. The first is the fabric manager: what is it, and what are its roles and responsibilities? The second is an important concept embodied by a module within the CXL fabric management architecture, the component command interface; we'll take a look at what that module does. With those two concepts understood, we'll review the management architecture again. We'll take a look at the list of management command sets defined in CXL 2.0 and its ECNs to give you an idea of the types of operations that can be performed. Then we'll finish with a bit of a deep dive into MLD management and how MLDs are configured for deployment. That is one of the advanced areas of fabric management, so it should tie a lot of these concepts together.
There are five key resources that define the content covered in this webinar. The first, obviously, is the CXL 2.0 specification. That's where a lot of the groundwork is laid: in-band management through what we'll look at as a mailbox, the introduction of the fabric management API, and details on the MCTP transport. There are a number of updates and fixes applied to those details in the CXL 2.0 Errata. There's also the Type 3 Management Using MCTP CCI ECN, which enables MCTP-based device management and generalizes some key concepts defined in the specification that were previously switch-specific. DMTF has also published two important documents covered in this webinar: the CXL FM API over MCTP binding specification and the CXL Type 3 Device CCI over MCTP binding specification, both of which lay out the MCTP-specific details for running MCTP-based management commands.
So CXL was developed to address some challenges we found in the development of next-generation data centers, the key one being a drive for faster data processing and increases in performance. Industry also saw an increasing demand for heterogeneous computing environments and the disaggregation of servers. And of course, a very important capability that CXL has helped address is the need for increased memory capacity and bandwidth. So it's been developed as an open, industry-supported, cache-coherent interconnect. It leverages PCIe electricals and introduces three mix-and-match transport protocols. It provides a low-latency, cache-coherent interconnect between CPUs, accelerators, and other components, and it has removed a lot of the complexity from cache-coherent interfaces.
There are three types of devices defined in the CXL specification, and those device categories are defined based on which of the three mix-and-match protocols a particular component supports. What we call the Type 1 device is used for caching devices and accelerators. These are devices like a SmartNIC, or devices that need to run remote atomics. They support the .io protocol, which is used by every device type for configuration and management, and the .cache protocol. The Type 2 device is the category used for accelerators that have host-addressable memory: GPUs, FPGAs, any sort of dense-computation component. These devices support all three protocols: again, .io for configuration and management, plus .cache and .mem. And then the third type of device is the memory device: memory buffers used for the expansion of memory bandwidth and capacity, as well as for introducing novel memory media types such as persistent memory. The protocols these devices support are .io and .mem.
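For readers following along in code, here is a minimal sketch of that device-type-to-protocol mapping; the names and flag encoding are illustrative conveniences, not taken from the specification.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag encoding of the three CXL protocols. */
enum cxl_proto {
    CXL_IO    = 1u << 0,  /* configuration and management; all types */
    CXL_CACHE = 1u << 1,
    CXL_MEM   = 1u << 2,
};

/* Protocols supported by each device type, per the description above. */
static const uint8_t CXL_TYPE1 = CXL_IO | CXL_CACHE;            /* SmartNICs   */
static const uint8_t CXL_TYPE2 = CXL_IO | CXL_CACHE | CXL_MEM;  /* GPUs, FPGAs */
static const uint8_t CXL_TYPE3 = CXL_IO | CXL_MEM;              /* memory      */

static bool supports(uint8_t type, enum cxl_proto p) { return (type & p) != 0; }
```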
So if we take a look at the types of deployments we see in industry and the types of systems we would like CXL employed in as a way of defining the requirements for a management framework, one example, on the left, is an integrated server using a management transport convention set up by NVMe, where a PCIe-connected baseboard management controller is able to tunnel management traffic through a CPU's root complex to a device. It packages that management traffic as MCTP packets transported over PCIe VDMs. In a rack-mount appliance in a disaggregated data center deployment, you would have something like this JBOM, which stands for "just a bunch of memory," an extension of the convention set out by the JBOD, "just a bunch of disks," where cable ports are presented at the front of an appliance and connect into a CXL switch. That switch fans out to multiple endpoints, in this case Type 3 memory devices. The appliance supports pooling, disaggregation, and dynamic composition of systems. So there's, again, a BMC on board handling the assignment of resources and the management of the endpoints. In this example, that BMC is SMBus-connected to the CXL switch.
Looking ahead, however, CXL has also defined new types of devices, and we need a management framework that addresses the management needs there as well. An extension of the rack-mount appliance example is one that deploys multi-logical devices, or MLDs, a type of memory device that presents multiple functional interfaces over a single physical link shared among multiple hosts. In this case, unlike the previous rack-mount appliance diagram, we have Type 3 devices whose resources can be shared among multiple hosts. We also have new types of devices, like multi-headed devices, where the Type 3 device itself has multiple physical interfaces so that more than one host can share its memory resources. These are the types of system configurations that the management architecture we developed needs to address. So let's take a look at a couple of the key concepts we've defined and see how it all plays together.
So first, what is a fabric manager? There's no firm definition of a fabric manager; it's more of a conceptual term. It refers to the application-specific logic that's composing disaggregated systems, allocating pooled resources, managing platforms, things like that. And it can take many forms. That logic can reside in a BMC in a rack-mount appliance. It could be management software running in a host. Some CXL switches may choose to implement embedded processors and run embedded firmware, and in those scenarios, if the firmware running in the switch is making decisions about the deployment of resources, then it would be considered a fabric manager.
We intentionally defined a flexible framework so that CXL could be used in a variety of applications. It would be limiting for the standard to have been designed just for enterprise, data center, or server deployments. We've defined it flexibly so that it can be used in embedded, automotive, and industrial applications. So there's a lot of flexibility in having a loose definition of a fabric manager. We've also ensured that most management capabilities are optional, to decrease the burden of implementation for deployment. The only time a fabric manager is strictly required is when some of the advanced system operations are desired, one example being the use of MLDs. An MLD isn't going to make decisions about how to deploy its resources autonomously; an FM needs to reside somewhere in the system, and it is responsible for assigning the individual logical devices from an MLD to hosts so that they may access the resources. Similar, but at a much coarser granularity of resource assignment, is a memory pooling application where whole endpoints are being moved between hosts. In that type of deployment, an FM is responsible for binding the switch ports to the various host hierarchies.
The next concept to understand is a component command interface.
So commands that are sent to components (a component being a Type 1, Type 2, or Type 3 device, or a switch) are processed by a module within the device we refer to as the component command interface, or CCI. There are two types defined. There's the mailbox CCI, which is presented through memory registers in a device's PCIe MMIO space, and there are MCTP-based CCIs, which are presented as MCTP endpoints. This is not a queued interface; it's one command at a time. There is a mechanism for supporting lengthy operations without tying up the CCI, however: those operations run as background operations. A device is allowed to implement multiple CCIs, each with a varying degree of capability, so one CCI could be privileged and support very special administration-type commands, whereas another could be used just for debugging or general use. The command opcodes sent to a CCI are two bytes: the upper byte is the command set, and the lower byte is the command within that set. A fabric manager is able to discover the list of commands supported by a component by reading that component's command effects log; we'll take a look at the contents of the command effects log in a few slides.
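As an illustration of that opcode layout, here is a small sketch in C; the helper names are ours, not the spec's.

```c
#include <stdint.h>

/* A CCI opcode is two bytes: command set in the upper byte, command
 * within that set in the lower byte. */
static inline uint16_t cci_opcode(uint8_t set, uint8_t cmd)
{
    return (uint16_t)(((uint16_t)set << 8) | cmd);
}
static inline uint8_t cci_command_set(uint16_t op) { return (uint8_t)(op >> 8); }
static inline uint8_t cci_command(uint16_t op)     { return (uint8_t)(op & 0xFF); }
```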
So if we look at the mailbox CCI in detail, it's located in PCIe MMIO space, as previously mentioned. There are two types of mailboxes. There's what's called the primary mailbox, which is a privileged mailbox intended for low-level privileged operations and designed for use by the driver. The secondary mailbox is optional, and it's designed more for log and event record access. It doesn't support interrupts, and it doesn't support background operations. It's defined so that utilities running on the host can send commands to the device without interfering with the driver's operation. Command inputs are written to a command payload register space, and outputs are read from the same region. A mailbox can optionally generate MSI or MSI-X interrupts to notify the host of things like command completion or background operation completion. On the right here, we've got some excerpts from the CXL 2.0 specification that show how a driver discovers the location of the mailbox and starts to operate it. First, in PCIe configuration space, there's what is called a DVSEC, or Designated Vendor-Specific Extended Capability, a configuration structure that CXL uses quite a bit; there are a few DVSECs defined in the CXL 2.0 specification for a variety of purposes. The DVSEC that uses ID 8 is the Register Locator DVSEC, which advertises the BAR number and the address within that BAR of, in this case, a memory device register block; that block uses register identifier 3. So the first thing the driver will do is read that DVSEC and find the location of the CXL memory device registers. When it jumps to that offset in PCI memory space, it will find a device capability array: an array of various device capabilities, where each capability header indicates the ID of the capability. ID 2 is used for the primary mailbox and ID 3 for the secondary mailbox, along with an offset within the memory device register region giving the location of that capability's structure. So the driver reads the address of the mailbox registers, and at that location it finds the mailbox structure, which you see there on the bottom right. The first register within that structure is the mailbox Capabilities Register, which advertises things like support for interrupts. Then there's the mailbox Control Register, which allows things like interrupts to be configured; the Command Register, where the opcode is programmed to initiate the command; and the mailbox Status Register, which indicates whether the mailbox is ready for use as well as the completion status of any ongoing command. Background command status is reported separately: there's a dedicated register for that status, and it includes a return code as well as a percent-complete field. And then there's the Command Payload Register region, where the inputs are written to and the outputs of the command are read from.
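To make that discovery flow concrete, here is a hedged sketch in C. The pci_cfg_read32 helper is hypothetical, the field layouts are abbreviated from the description above, and only the first register block entry is checked; treat it as an outline of the walk, not a working driver.

```c
#include <stdint.h>

/* Hypothetical platform helper, assumed for illustration only. */
extern uint32_t pci_cfg_read32(unsigned bdf, unsigned off);

#define PCI_EXT_CAP_ID_DVSEC      0x23
#define CXL_DVSEC_REG_LOCATOR     8    /* Register Locator DVSEC ID */
#define CXL_REG_BLOCK_ID_MEMDEV   3    /* memory device register block */
#define CXL_CAP_ID_PRIMARY_MBOX   2    /* capability IDs in the device */
#define CXL_CAP_ID_SECONDARY_MBOX 3    /* capability array, per the talk */

/* Walk the PCIe extended capability list for the Register Locator DVSEC
 * and pull out the BAR number and offset of the memory device register
 * block. The driver would then scan the device capability array at that
 * location for CXL_CAP_ID_PRIMARY_MBOX to find the mailbox registers. */
static int find_memdev_regs(unsigned bdf, unsigned *bar, uint64_t *off)
{
    for (unsigned pos = 0x100; pos; pos = pci_cfg_read32(bdf, pos) >> 20) {
        if ((pci_cfg_read32(bdf, pos) & 0xFFFF) != PCI_EXT_CAP_ID_DVSEC)
            continue;
        if ((pci_cfg_read32(bdf, pos + 8) & 0xFFFF) != CXL_DVSEC_REG_LOCATOR)
            continue;
        uint32_t lo = pci_cfg_read32(bdf, pos + 0xC);  /* entry 0, low dword  */
        uint32_t hi = pci_cfg_read32(bdf, pos + 0x10); /* entry 0, high dword */
        if (((lo >> 8) & 0xFF) != CXL_REG_BLOCK_ID_MEMDEV)
            continue;
        *bar = lo & 0x7;                                /* Register BIR */
        *off = ((uint64_t)hi << 32) | (lo & 0xFFFF0000u);
        return 0;
    }
    return -1;
}
```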
Next, the MCTP-based CCI. Being MCTP-based, this is a packet-based interface. The way this type of CCI is discovered is that the Fabric Manager first discovers all MCTP endpoints, using the spec-defined MCTP discovery process. As part of that, the supported MCTP message types are checked on every MCTP endpoint. CCIs will be the MCTP endpoints that advertise support for the CXL MCTP message types. These message type values are defined in the DMTF-published binding specifications: type number 7 is used for FM API commands, and type number 8 is used for general commands and memory device commands. There's no limitation on the physical interfaces over which this MCTP packet traffic can be passed; it's supported over any interface for which an MCTP binding spec is defined. On the right, there is a description of what the MCTP packets look like. The top-right diagram is taken from the FM API over MCTP binding specification. The packet starts off with a physical-medium-specific header; in the case of a PCIe VDM, this would be a TLP header. Then there's the MCTP transport header, which carries things like the destination MCTP endpoint ID, the source endpoint ID, and the fields required for breaking a message up across multiple transactions. And then there's the CXL FM API message body. That part of the packet is where the transport header defined in the CXL 2.0 specification is used, and you'll see that on the bottom right: the command opcode, the message tag, payload length, return code, and payload are the fields that live in that message body, in the format shown here.
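The message body described above might be modeled like this in C; exact field widths and bit positions come from the CXL spec and the DMTF bindings, so treat this layout as an approximation of the format, not a normative definition.

```c
#include <stdint.h>

/* Approximate layout of the CCI message body carried inside an MCTP
 * message, per the fields named above. Verify against the spec. */
#pragma pack(push, 1)
struct cxl_cci_message {
    uint8_t  category;      /* request vs. response, in the low nibble */
    uint8_t  tag;           /* message tag, echoed back in the response */
    uint8_t  rsvd;
    uint16_t opcode;        /* command set (upper byte) | command (lower) */
    uint8_t  pl_len[3];     /* 21-bit payload length plus flag bits */
    uint16_t return_code;   /* meaningful in responses */
    uint16_t vendor_status; /* vendor-specific extended status */
    uint8_t  payload[];     /* command input or output payload */
};
#pragma pack(pop)
```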
So I mentioned earlier that time-consuming management operations run in the background; these are called background commands. Whether a command is a background command is decided in the specification when the command is defined; it's not something chosen on a device-to-device basis. A CCI can only support one background operation at a time, and it's recommended that for devices with multiple CCIs, only one background operation run throughout the device at a time, which translates to a recommendation that only one of the CCIs on the device support background commands. The command that initiates the background operation receives an immediate response, and that's the indication that the background operation has started. If we take a look at the ladder diagram on the right and take the example of the Transfer Firmware command, which is defined as a background command: when the FM initiates it, the component begins an internal process to program the firmware using the arguments provided in the Transfer Firmware command and provides an immediate success completion, indicating that a background operation has begun. At that point, the FM is allowed to check status, either through the register or using the check-status opcode if it's an MCTP-based CCI. As the internal process runs, there's a requirement that the percent-complete value be updated at least every two seconds, so the FM gets an indication of the progress of the process as it runs. In this example, it checks a couple of times while the process is running. When the process is 100% complete, the component will optionally send an interrupt to the FM, if the FM has enabled interrupts on the interface. Then, when the FM checks the status following completion, it will receive a successful completion for the background operation and can check the return code for the background command.
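A polling loop for a background operation could look like the following sketch; mbox_read64 is a hypothetical MMIO accessor, and the register offset and bit positions are our reading of the Background Command Status register, so verify them against the specification.

```c
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical MMIO accessor for mailbox registers. */
extern uint64_t mbox_read64(uint32_t offset);

#define MBOX_BG_CMD_STATUS 0x18u  /* assumed register offset */

/* Poll a running background operation. The device must refresh the
 * percent-complete field at least every two seconds, so polling at
 * that cadence is sufficient in the absence of interrupts. */
static bool wait_for_background_op(uint16_t *ret_code)
{
    for (;;) {
        uint64_t st  = mbox_read64(MBOX_BG_CMD_STATUS);
        unsigned pct = (st >> 16) & 0x7F;     /* percent complete (assumed bits) */
        if (pct == 100) {
            *ret_code = (uint16_t)(st >> 32); /* background return code (assumed) */
            return *ret_code == 0;            /* 0 indicates success */
        }
        sleep(2);
    }
}
```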
So if we start to plug the spec terminology we've defined, these key concepts, into one of the examples from our requirements, in this case the rack-mount appliance, this is what that system would look like. In the hosts, we have device drivers that are accessing mailbox CCIs in both the switch and the Type 3 devices. The baseboard management controller has an FM running in it, and that FM is accessing MCTP-based CCIs in both the switch and all four Type 3 endpoints. Now, one thing that is assumed, though not explicitly defined in the spec, is that the FM is not operating alone. A concept, or an entity, that we refer to in systems like this is the system orchestrator: a higher-level body of logic managing an entire data center, managing multiple FMs, and sending high-level commands down to each FM; something that might look like Redfish, as one example. It's the FM's job to break a high-level command like that up into the CXL-specific commands required to complete the high-level operation.
So now let's take a look at the types of management command sets and the sorts of operations they support.
The command sets are split into three different ranges. There are the general device opcodes, from 0000h to 3FFFh, and the class-specific opcodes, from 4000h to BFFFh; the class-specific opcodes overlap, in that the interpretation of the opcode changes depending on the class of component receiving it. And then there is a range from C000h to FFFFh for vendor-specific opcodes.
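Those ranges are simple to express in code; a minimal classification helper might look like this.

```c
#include <stdint.h>

enum opcode_class { OP_GENERAL, OP_CLASS_SPECIFIC, OP_VENDOR };

/* Classify an opcode by the three ranges described above. */
static enum opcode_class classify_opcode(uint16_t op)
{
    if (op <= 0x3FFF) return OP_GENERAL;        /* 0000h..3FFFh */
    if (op <= 0xBFFF) return OP_CLASS_SPECIFIC; /* 4000h..BFFFh */
    return OP_VENDOR;                           /* C000h..FFFFh */
}
```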
The general component command sets, as I said, use 0000h to 3FFFh, and they're applicable to all classes of devices: Type 1, Type 2, and Type 3 devices, and switches. These are the command sets used to discover and configure generic capabilities; they're used in the discovery and management of all those types of devices. The command sets we find in this range are the Information and Status command set, which includes commands like Identify and checking the status of the CCI; the Events command set, used to read and clear event records and configure interrupts; the Firmware Update command set, used to program and activate firmware images; the Timestamp command set, used to program the timestamp used by the component; and the Logs command set, used to access component logs.
Now, I mentioned before that the command effects log is an important log for discovery: it's used to discover the supported commands in a CCI. The contents of this log are illustrated here. This is the command effects log entry structure; every entry is a dword in length. The first two bytes define the command opcode, and the second two bytes define the effects of that command. The presence of an entry with an associated opcode indicates to the fabric manager that the component supports that opcode; if an opcode is not listed in the command effects log, then the CCI does not support it. The command effects of a command are defined in the specification of the command, so they will not vary device to device. The question is, why do we define command effects at all? They're set up so that system software can make policy-based decisions. You can see the types of effects advertised here: configuration change after cold reset, immediate configuration change, immediate data change. They indicate when a command will be impactful to the operation of a component in a way that system software would need to be aware of. This allows system software to make policy-based decisions about which commands are allowed to run without having to keep on top of spec updates and maintain something along the lines of an allow list or a deny list of specific opcode values. It can decide on the types of command effects that are permitted and not have to worry about new opcodes being defined. One of the other command effects reported is "secondary mailbox supported," which provides a mechanism to discover the capabilities of the secondary mailbox through the primary mailbox.
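A command effects log entry and the policy check described above might look like this in C; the flag names follow the effects named in the talk, but the exact bit positions are assumptions to verify against the specification.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One command effects log entry: a dword holding a 2-byte opcode and
 * 2 bytes of effect flags. */
struct cel_entry {
    uint16_t opcode;
    uint16_t effects;
};

#define CEL_CONFIG_CHANGE_COLD_RESET (1u << 0)  /* assumed bit positions */
#define CEL_IMMEDIATE_CONFIG_CHANGE  (1u << 1)
#define CEL_IMMEDIATE_DATA_CHANGE    (1u << 2)
#define CEL_SECONDARY_MBOX_SUPPORTED (1u << 7)

/* A CCI supports an opcode only if it appears in the log; policy code
 * can additionally veto opcodes whose advertised effects are denied. */
static bool opcode_allowed(const struct cel_entry *log, size_t n,
                           uint16_t opcode, uint16_t denied_effects)
{
    for (size_t i = 0; i < n; i++)
        if (log[i].opcode == opcode)
            return (log[i].effects & denied_effects) == 0;
    return false; /* not listed: not supported by this CCI */
}
```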
The memory device command sets fall within the class-specific range, 4000h to BFFFh, and they're applicable to Type 2 and Type 3 devices: devices that support the .mem protocol. This range includes all the commands specific to the management of memory media, the types of things that system firmware will use during boot and kernel drivers will use after boot. It includes an Identify command set, used to identify the memory device's capabilities; Capacity Configuration and Label Storage, used to manage the labels that are used for persistent memory; Health Info and Alerts, which reports media state and temperature and provides health alerts; Media and Poison Management, used for management of the media itself, reading out poison lists, and clearing and managing poisoned lines of memory; the Sanitize command set, used to securely clear memory when it's no longer in use; the Persistent Memory Data-at-Rest Security command set, used to set security parameters and lock and unlock memory; Security Passthrough, used for the pass-through of SFSC commands; and SLD QoS Telemetry, used to manage the QoS properties of an individual device.
The FM API command sets also fall within the class-specific opcode range. These are applicable to CXL switches and MLDs, and they include binding commands, commands for the assignment of LDs, low-level port control, things like that: the sorts of commands an FM would use to manage switch-attached disaggregated resources. The first command set in this group is the Physical Switch command set, used to identify the capabilities of the switch, check physical port status, and reset ports. The Virtual Switch command set is used to bind and unbind ports and LDs in multi-VCS switches, a VCS being a virtual CXL switch. The MLD Port command set is used for tunneling commands down to MLDs; we'll take a look at command tunneling in detail in some subsequent slides. And the MLD Component command set is processed by the MLDs themselves and is used for capacity allocation and configuring QoS among the LDs within an MLD.
So let's take a look at how all of these opcodes are used to manage an MLD.
So MLDs, as a quick introduction: they use a single physical interface and present multiple functional interfaces, up to 16 logical devices, each assignable to a different host. The different functional interfaces are addressed with an LD-ID; over .mem, that's a 4-bit field that runs from 0 to F. LD-ID FFFFh is mandatory and is reserved as the FM-owned LD, a management target with no associated memory resources. The FM-owned LD is only .io accessible, because .io carries a 16-bit LD identifier, whereas .mem traffic carries only four LD-ID bits; over .mem, there's simply no way to address the FM-owned LD. The MLD must implement a CCI for each LD it presents and an additional one for the FM-owned LD. The FM-owned LD is used to manage the whole component itself and is designed for access by the FM exclusively, whereas the requirement for a CCI per LD is there for host drivers running on the system. The FM is allowed to tunnel commands to an MLD through a switch as needed. It's also able to tunnel to the individual LDs when it's provisioning an MLD for deployment.
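That LD-ID addressing rule reduces to a couple of one-line checks; LD_ID_FM_OWNED simply mirrors the mandatory FFFFh value described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* CXL.io carries a 16-bit LD-ID, and FFFFh is the mandatory FM-owned
 * LD; CXL.mem carries only 4 LD-ID bits, so the FM-owned LD is
 * reachable over .io only. */
#define LD_ID_FM_OWNED 0xFFFFu

static bool is_fm_owned_ld(uint16_t ldid)  { return ldid == LD_ID_FM_OWNED; }
static bool mem_addressable(uint16_t ldid) { return ldid <= 0xFu; }
```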
So if we take a look at the command tunneling scheme we defined: an FM does not require a direct connection to an MLD in order to manage it; it can tunnel through a switch. When it tunnels through the switch, the FM uses, for example, an MCTP-capable interface and connects to an MCTP-based CCI. In the MLD Port command set, there's a command defined called Tunnel Management Command. The arguments for that request are the port that the MLD is connected to and the actual command to be tunneled to that port. The switch will process that, extract the request in the payload of the Tunnel Management Command, and pass that command down to the MLD itself, where the FM-owned LD will process it. In this example, it's a Set LD Allocations request. The FM-owned LD provides a response and sends it back to the switch, and the switch bundles it back up into a Tunnel Management Command response.
This tunneling framework can also be used in cases where the FM has a direct connection to an MLD and would like to manage the individual LDs within it. In this case, it passes another Tunnel Management Command request, but the input argument this time is an LD identifier; it's not interpreted as a port ID. The command being passed here is Set LSA. The FM-owned LD receives the Tunnel Management Command and extracts the Set LSA request, which gets processed by the targeted LD. The response gets bundled back up and passed to the FM in a Tunnel Management Command response.
These layers of tunneling can be combined. An FM that is only connected to the switch may package a Tunnel Management Command request within another Tunnel Management Command request. That gets passed to the switch, which extracts the payload, which includes another layer of tunneling carrying a Set LSA request that is then passed to the FM-owned LD. The Set LSA request is extracted and processed by LD number 1, and the response is bundled up and passed back to the FM.
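To illustrate the nesting, here is a sketch of building a tunneled request in C; the opcode value and payload layout are assumptions for illustration, so check the MLD Port command set in the spec for the real encodings, and note that error handling is omitted.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Assumed opcode value, for illustration only. */
#define OP_TUNNEL_MGMT_CMD 0x5300u

struct cci_request {
    uint16_t opcode;
    uint16_t payload_len;
    uint8_t  payload[];
};

/* Wrap an inner request in a Tunnel Management Command. `target` is a
 * switch port number (FM to switch) or an LD-ID (FM to the MLD's
 * FM-owned LD). Error handling omitted for brevity. */
static struct cci_request *tunnel(uint8_t target, const struct cci_request *inner)
{
    size_t inner_len = sizeof(*inner) + inner->payload_len;
    struct cci_request *outer = malloc(sizeof(*outer) + 2 + inner_len);
    outer->opcode      = OP_TUNNEL_MGMT_CMD;
    outer->payload_len = (uint16_t)(2 + inner_len);
    outer->payload[0]  = target;  /* port ID or LD-ID in this sketch */
    outer->payload[1]  = 0;       /* reserved */
    memcpy(&outer->payload[2], inner, inner_len);
    return outer;
}

/* Two layers, as in the last example: the switch unwraps the outer
 * request for port 2, and the MLD's FM-owned LD unwraps the inner one
 * for LD 1:
 *     struct cci_request *req = tunnel(2, tunnel(1, set_lsa_req));
 */
```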
So if we take a look at what MLD management would look like in a disaggregated system during provisioning and deployment, we'll start off with host A and host B powered off. It is expected that the switch, the endpoint, and the Fabric Manager will power up and configure the system before the hosts are powered on.
The FM begins by detecting the MCTP-based CCI in the switch. It's going to run Identify, discover that it's a switch, discover the number of physical ports, and discover that there is an MLD port; it uses Get Physical Port State for that.
Once it discovers that there's an MLD port, it starts tunneling management commands down that MLD port. The first command it tunnels is Identify, to understand the capabilities of the MLD. Then it sends a Set LD Allocations command to partition the memory resources within the MLD. You can see now that the memory resources within this MLD have been split into two regions.
Then the FM is going to bind LD0 to host A and bind LD1 to host B. At this point, the MLD has been provisioned, deployed, and it's ready for the hosts to boot.
So when they come up, they'll have access to the resources of the LDs that have been bound to them.
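Pulling the walkthrough together, the provisioning sequence might be scripted like this; every function here is a hypothetical wrapper around the FM API command named in its comment, not a spec-defined C API.

```c
#include <stdint.h>

/* Hypothetical wrappers around the FM API commands from the talk. */
extern int fm_identify_switch(void);                  /* Identify (switch) */
extern int fm_get_physical_port_state(uint8_t port);  /* Get Physical Port State */
extern int fm_tunnel_identify(uint8_t mld_port);      /* tunneled Identify */
extern int fm_tunnel_set_ld_allocations(uint8_t mld_port);
extern int fm_bind(uint8_t vcs, uint8_t vppb, uint8_t port, uint16_t ldid);

/* The provisioning sequence from the walkthrough, run while both hosts
 * are still powered off. */
static void provision(uint8_t mld_port)
{
    fm_identify_switch();                   /* switch type, port count */
    fm_get_physical_port_state(mld_port);   /* confirms an MLD is linked up */
    fm_tunnel_identify(mld_port);           /* learn the MLD's capabilities */
    fm_tunnel_set_ld_allocations(mld_port); /* split memory across LDs */
    fm_bind(0, 0, mld_port, 0);             /* LD 0 into host A's hierarchy */
    fm_bind(1, 0, mld_port, 1);             /* LD 1 into host B's hierarchy */
    /* Hosts may now boot and access their assigned LDs. */
}
```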
Quick summary.
So, the key concepts that we covered: we've defined a flexible architecture in order to serve a variety of applications. The management framework is available over many physical interfaces. An FM is any logic that initiates management commands, and a CCI is the management command target in components. We took a look at all of the management command sets available, and we took a look at an example of MLD management. So with that, I will turn it back to Elsa and see if we have any questions.
Yes. Thank you, Vince. We will now begin the Q&A portion of the webinar, so please share your questions in the question box. We do have a few questions, so I will start with the first one. Can sanitize operate on securely locked memory?
That's a good question. I don't see any restrictions in the sanitize command relevant to security. There is a secure erase command, however. So I would need to review the overlap of the two offline to say for sure.
The next question is: in one of the diagrams you showed, a host is talking to the mailbox CCI of the switch, and the FM also talks to a CCI using SMBus. Why does the host driver need to talk to the CCI? Is it to request some action from the FM, and how will the FM know if there is any request from the host using the CCI mailbox?
I believe that's referring to this diagram here. The host access to a switch CCI and the FM access to a switch CCI are not for the purpose of a host communicating with an FM. It is expected that any host-to-FM interaction is going to be application-specific and falls outside the scope of CXL; that's something that would typically be handled through the system orchestrator. If the host had information relevant to the actions an FM was executing, that information would be parsed and processed by the system orchestrator. So the communication path, as defined today, is that host A and host B would communicate through the system orchestrator. The host A and host B access to the CCI is for things like reading out error logs from the CXL switch: less system-level configuration and deployment, and more just low-level error handling and detection. That's the general idea. There's also no strict requirement for that; a CXL switch does not require a CCI. This is just a representative example.
Does the FM need to communicate with devices using CXL.io? Could you give some examples of the cases where it needs to? And does the FM need to send any CXL.io to a host in any case, for error reporting?
So there's no CXL.io path defined from FM to host. That said, one of the examples of an FM deployment is embedded firmware running in a CXL switch, and some sort of proprietary, vendor-specific command could be defined for the CXL switch mailbox that enables a communication path between a host and the switch. But that's proprietary, not a spec-defined CXL.io path. There's no requirement for FM and host interaction, so no path has been defined. As to whether the FM needs to communicate with devices using CXL.io: well, the mailbox CCI is present in PCIe MMIO space, so from a transport perspective, the host is sending .io transactions to interact with the mailbox CCI.
How does the switch know that the port is an MLD port to report it to FM?
Yes. During link negotiation, both ends of a link advertise their capabilities: the link speeds they support, the flit modes they support, and whether or not they support MLD. It is not mandatory for a CXL switch to support an MLD port. The requirement for a CXL switch to support MLDs is that the switch is responsible for applying the LD-ID tag. The .mem transaction coming from a host doesn't carry an LD-ID; the host doesn't know which LD from an MLD has been assigned to it, so that .mem traffic arrives with a blank LD-ID field, and it's the switch's responsibility to tag the LD-ID onto it. So that's an optional feature of a switch. That capability is advertised during what's defined as alternate protocol negotiation: both ends of a link advertise that they are MLD-capable, and so the switch will know at link-up time whether a port has negotiated in MLD mode. That's how it discovers an MLD is present to report to the FM.
Are background commands allowed to return data in the mailbox payload? If so, how do they share the output payload with the regular command output payload running in parallel?
That's a very good question; someone was paying attention. They are not. Background commands are not allowed to have output payloads. The structure of a command set that includes background commands is different from the standard input-parameters, output-parameters style of command: the operation is broken up into a command that initiates the process, a command that checks the status of the process, and a command that retrieves the output. So yes, the short answer is no, they are not allowed to return output payloads, and you've identified the reason why: there would be issues with background commands overwriting the output payloads of non-background commands running in parallel.
You recommend that only one CCI support background commands in a single device, but the spec defines background commands per opcode, regardless of which CCI is used. So how can that be done?
Yeah, so let me give you an example. In the CXL switch on the screen, there are three CCIs presented. The Transfer Firmware command for updating CXL switch firmware is a background command. The recommendation is that only one of those CCIs support Transfer Firmware. So the command effects log entries that host A and host B pull from their mailbox CCIs will not include the Transfer Firmware opcode, whereas the MCTP-based CCI will include Transfer Firmware in its command effects log when the FM reads it.
You mentioned the host will boot later on, after the switch and FM are up. Does that mean the FM will discover all devices before the host comes up?
In general, yes. I mean, it's not as black and white as that: in a large data center with a lot of appliances and a lot of hosts, it's going to change. You might have some hosts up that already have resources deployed, and then another host goes down and resources are assigned to it before it boots back up. That's going to depend on the process of system composition defined for a data center. There's no hard requirement; that's just a representative example.
Your slide shows GbE as an FM physical IF. Is there an industry consensus building around GbE or are there other interfaces also being widely considered? If so, which ones?
Right. Everything above the CCI-to-FM interface falls outside the scope of CXL. CXL does not specify that GbE is used as the transport to an orchestrator. To reiterate an earlier comment, the orchestrator is not a spec-defined term; this is all just a possible implementation. There are no restrictions or requirements on the FM's capabilities or its interaction model with a higher-level body of logic like a system orchestrator.
Is the switch an external switch just like Cisco or Arista?
I can't comment on Cisco's or Arista's plans, but in this diagram, it is assumed, well, if we talk about what a JBOM rack-mount appliance might look like in the future, that the Type 3 devices would be individual FRUs: memory modules in a form factor that can be hot-removed and hot-added while the appliance is up. The CXL switch is a dedicated component on the JBOM platform, and those CXL links run through some sort of external connector. So it is expected that the CXL switch is an SoC on its own.
Thank you, Vincent, for sharing your expertise. We will address all the questions we received today in a future blog, so please follow the CXL Consortium on Twitter and LinkedIn for updates. The presentation recording will also be available on the CXL Consortium's YouTube channel and slides will be available on the CXL Consortium website. We would like to encourage all our viewers interested to learn more about CXL to join the CXL Consortium, download the evaluation copy of the CXL 2.0 specification, and engage with us on Twitter, LinkedIn, and YouTube. Once again, thank you for attending CXL Consortium's introduction to the CXL Fabric Manager webinar.