Hi, my name is Erich Hanke, and I am Senior Principal Engineer of Memory and Storage Products at IntelliProp. A recent focus, and what I'll be showing you today, is fabric-attached memory and coherent host connections. Specifically, I'll be giving you an overview demo of our CXL fabric adapter, fabric-attached memory, and fabric manager software.
Before I get into the demo, I want to give you a quick overview of the building blocks, and then I'll show the dynamic fabric management software running live.
This is the prototype hardware being used to configure the fabric. We have the Sphinx CXL fabric adapter, the IntelliProp discrete switch, codenamed Hermes, an ARM host node named Orthus, fabric-attached memory resources, switch boxes, and media boxes.
Today's demo will be on a much smaller fabric topology than the one shown here, but this is an example of a supported topology: multiple CXL host nodes, an ARM-based fabric manager, an ARM SoC node, multiple levels of switching, and fabric-attached memory devices. One of the nice features of this fabric topology is its ability to support multiple routes through the fabric. This topology is a 3x3 HyperX, and if you look on the left-hand side, you can see multiple links originating from each of the nodes into the switches and media boxes, and even all the way through to the ZMMs. So if a cable is unplugged, alternative routes can be taken through the fabric and your system does not crash.

The topology is configured either statically or dynamically by the Zephyr fabric manager, using in-band configuration packets originating from the Sphinx CXL fabric adapter. At the start of the day, the fabric manager crawls out across the fabric to discover fabric-attached components and build a native model of the fabric topology. That model can then be graphed using a tool called NetworkX, which is what we used to draw the fabric graph on the left-hand side. Next I will share a VNC screen and walk through a dynamic configuration of a small fabric, and I'll show you how we allocate a portion of the fabric-attached memory as a block device and present it to the CXL host.
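As a rough illustration of that graphing step, here is a minimal NetworkX sketch, assuming placeholder component names rather than the fabric manager's actual discovery output:

```python
# Minimal sketch: building and drawing a fabric graph with NetworkX.
# All component names here are illustrative placeholders, not the
# Zephyr fabric manager's real data model.
import networkx as nx
import matplotlib.pyplot as plt

fabric = nx.Graph()
fabric.add_edges_from([
    ("host0", "switch0"), ("host0", "switch1"),   # redundant host uplinks
    ("switch0", "zmm0"), ("switch1", "zmm0"),
    ("switch0", "zmm1"), ("switch1", "zmm1"),
])

# Multiple edge-disjoint routes mean one unplugged cable does not
# disconnect the host from its fabric-attached memory.
print(list(nx.edge_disjoint_paths(fabric, "host0", "zmm1")))

nx.draw(fabric, with_labels=True, node_color="lightblue")
plt.show()
```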
So now you can see my screen over VNC. In the upper left-hand corner and the left middle, you can see two logins to a debug shell on the fabric-attached memory devices. In the lower left-hand corner, you can see the SOL console running on the CXL host. The CXL host is running Ubuntu 20.04 and a custom 5.13 kernel. Then we have these three other terminals, in which I'm going to run some tools. The first thing that I'm going to do is launch our Zephyr fabric manager. Here I'm going to run the Linux local management services. Now, in this bottom terminal, I'm going to run a tool called lsgenz. Here you can see the three components that make up this fabric: one is a bridge, and two are memory components. The middle memory component is configured not only as a memory device, but also as a switch.
Now I can draw the topology using a NetworkX tool. You can see a very simple topology where the yellow PFM, or primary fabric manager, is connected to ZMM0, which is then connected to ZMM1. The green lines indicate that the links between all the components are up. ZMM0 is acting not only as a memory device, but also as a switch, so traffic running from the PFM to ZMM1 takes the path through ZMM0.
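In NetworkX terms, this small demo fabric is just a three-node chain; a quick sketch (again with placeholder names) shows why PFM-to-ZMM1 traffic is forwarded through ZMM0:

```python
import networkx as nx

# The small demo fabric: PFM -- ZMM0 -- ZMM1, where ZMM0 acts as
# both a memory device and a switch. Names are placeholders.
chain = nx.Graph([("pfm", "zmm0"), ("zmm0", "zmm1")])

# The only route from the PFM to ZMM1 runs through ZMM0.
print(nx.shortest_path(chain, "pfm", "zmm1"))  # ['pfm', 'zmm0', 'zmm1']
```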
Next, I'm going to run a tool called lstopo. You can see we don't have any of the ZMM resources configured or connected to this CXL host just yet. To do so, I'm going to run a tool called post-zephyr and give a 32-gigabyte block device to this host. You can see this is the config file that was posted to the Zephyr fabric manager. The Zephyr fabric manager then pushed this configuration into the tables and configuration of the host.
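The transcript doesn't show the config file or the fabric manager's API, but conceptually the step looks something like the sketch below; the endpoint URL, port, and JSON fields are illustrative assumptions, not post-zephyr's real interface:

```python
# Hypothetical sketch of posting an allocation config to the Zephyr
# fabric manager over HTTP. Endpoint, port, and schema are assumed
# for illustration only.
import json
import urllib.request

config = {
    "resource": "zmm1",     # assumed name of the fabric-attached memory device
    "present_as": "block",  # expose the allocation as a block device
    "size_gb": 32,          # the 32 GB carve-out from the demo
    "host": "cxl-host0",    # assumed name of the receiving CXL host
}

req = urllib.request.Request(
    "http://localhost:8080/allocate",  # placeholder endpoint
    data=json.dumps(config).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```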
I can run lstopo again, and you can see we now have a 32-gigabyte block device presented to the host.
I can also run lsblk, and you can now see the 32-gigabyte device show up. From here I can do any sort of block-device operation on it.
I'm going to run a little benchmark doing some random 4K reads on that block device. You can see it's now running, using the CXL.mem interface to send the traffic through the fabric adapter down to the fabric-attached memory resource.
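For reference, a workload like this could be reproduced with fio; this is a sketch, and the device path below is a placeholder for whatever name the fabric block device receives on the host:

```python
# Sketch: a random 4K read benchmark driven through fio, similar in
# spirit to the one in the demo. Requires fio installed and root
# access; /dev/sdX is a placeholder for the fabric block device.
import subprocess

subprocess.run([
    "fio",
    "--name=zmm-randread",
    "--filename=/dev/sdX",  # placeholder device path
    "--rw=randread",        # random reads
    "--bs=4k",              # 4 KiB block size
    "--ioengine=libaio",
    "--direct=1",           # bypass the page cache
    "--runtime=30",
    "--time_based",
    "--group_reporting",
], check=True)
```

So that was a quick demo of the fabric topology, running the Zephyr fabric manager to in-band configure this small fabric. Please come by and check out the larger demo that we have running at SC21, and check out the CXL booth and the IntelliProp booth, which is just one or two doors down. Thank you very much and have a great day.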