Compute Express Link (CXL) is closer to becoming a reality, and vendors are laying the groundwork for companies to take advantage of the new processor-to-device interconnect, including IntelliProp, which designs CXL-specific chips.
CXL is an open industry standard designed to enable higher-performance, higher-capacity connections between memory and different types of processors. Built on the PCIe physical layer, the standard is aimed at allowing processors to connect to memory — even in external arrays — and at letting different generations of memory, such as DDR4 and DDR5, be used in the same pool.
IntelliProp, based in Longmont, Colo., has been making chips for memory and data storage since 1999. The company is also focused on CXL, and today unveiled a network-attached memory (NAM) system built around a new CXL chip. Released as a field-programmable gate array card, NAM can be inserted into servers or arrays to connect them to memory arrays, bypassing a server's normal memory capacity limits.
John Spiers, a 30-year storage veteran who became IntelliProp CEO just last month, discussed how CXL can affect the memory, storage and hyperscaler markets in the coming years.
IntelliProp is already producing CXL-ready products, but you’re waiting for the next generation of CPUs before going to market. How close are users to seeing CXL in action?
John Spiers: I think what is slowing things down is that this is memory. Memory requires ASICs [application-specific integrated circuits] and chips. SAN and NAS and other shared-resource technologies can be done with off-the-shelf server parts and drives. But [memory-related products like] CXL require specialty chips to make them work, and it takes time to build a chip.
There are dozens of companies building chips now, and starting later this year and into next, you’ll see lots of products announced around CXL — and momentum will kick in.
IntelliProp’s NAM system features a new CXL chip — is that chip competitive with Intel or AMD offerings?
Spiers: No, it’s a partnership. It is a fully functional CXL chip that allows shared memory and an expansion of memory outside of the server. Our chip has a processor, switch and memory controller, all based on the CXL specification. When Intel launches its new processor later this year, it will enable the first servers with CXL on the motherboard. These new servers can connect to IntelliProp’s chip and NAM, and then can pool memory across servers.
Is CXL something for every data center or is it really an AI, machine learning, high-performance computing and hyperscaler interconnect?
Spiers: I see it across the industry. The memory utilization problem is the same in small IT shops as it is in large hyperscalers, just like storage utilization is a problem. That’s why a lot of these guys started adopting SAN and NAS.
They’re going to adopt [CXL] as well, for the same reasons: Get utilization rates up, get efficiency up, and get performance up.
Will CXL increase possible workloads for memory and potentially increase the cost — for example, an all-DRAM Oracle Exadata buying up more memory?
Spiers: Google going from 40% utilization to 80% or 90% roughly doubles its available memory without spending more money on DRAM. If, in your example, Oracle decided to do an all-DRAM Exadata, it would increase the performance of databases in the cloud and to customers by a thousandfold, but it would be extremely expensive. Extreme-performance customers and use cases exist where cost is not a factor, so they pay for that performance and won’t drive memory prices up overall.
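The utilization arithmetic Spiers describes can be sketched in a few lines. The fleet size and function name below are illustrative assumptions, not figures from IntelliProp or Google:

```python
# Sketch (hypothetical numbers): how raising memory utilization increases
# effective capacity without buying more DRAM.

def effective_memory_tb(installed_tb: float, utilization: float) -> float:
    """Memory actually doing useful work at a given utilization rate."""
    return installed_tb * utilization

installed = 100.0  # TB of DRAM across a hypothetical server fleet
before = effective_memory_tb(installed, 0.40)  # stranded-memory status quo
after = effective_memory_tb(installed, 0.80)   # pooled across servers via CXL

print(before, after, after / before)  # 40.0 80.0 2.0
```

Going from 40% to 80% utilization doubles the memory available for workloads on the same hardware spend, which is the efficiency argument Spiers makes for CXL pooling.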
Will Intel exiting the Optane business negatively affect CXL or are there enough workarounds with other persistent memory options?
Spiers: MemVerge built a product that used Optane persistent memory and presented it as ordinary memory; CXL will replace that approach, instead making flash look like memory. A faster tier of flash, or storage-class memory, could be incorporated into CXL, but whether Optane survives and gets pulled into the CXL ecosystem, I don’t know. The industry will be after a memory technology cheaper than DRAM. Getting DRAM utilization up is key because it is so darn expensive.
Could CXL affect how future servers and arrays are designed?
Spiers: A server could be created with almost no DRAM. For arrays and flash storage overall, vendors use DRAM in their controller for caching. With CXL, they can expand DRAM outside the controller head and incorporate CXL into the controller, having a huge DRAM cache in front of the disks. From here, vendors can figure out how much cache to put in front of their storage, so the customers never see disk latency.
Would this introduce a new job title into the enterprise, like CXL admin, or will these skills be transferable from existing storage admins?
Spiers: There will be a management framework associated with CXL, similar to the storage management industry. There are stacks of software out there sold with storage arrays to manage them. All the things you see in storage, tiering and policy management, for example, you will see similar features for memory as well. I think there will be new jobs and new companies that pop up to focus on managing memory.
Will CXL shift the performance bottleneck away from compute to networking?
Spiers: I think it will. If you’re smart about how you do caching, I think storage performance will improve tremendously. You can optimize all different memory and storage tiers to make applications run seamlessly without bottlenecks.
Editor’s note: This Q&A has been edited for clarity and conciseness.
Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware and private clouds. He previously worked at StorageReview.com.