For more details, please send an email to info@vvdntech.com.

Xilinx T1 Telco Accelerator Card

Industry’s 1st Telco Convergence NIC with O-RAN Fronthaul Offload and L1 Offload.

Overview

The Xilinx T1 Telco Accelerator Card is a multi-function, small form factor PCIe card that performs both O-RAN fronthaul and 5G NR Layer 1 acceleration in a single PCIe card. Engineered for telco-grade reliability (NEBS), the T1 card delivers a 10x performance increase over a software-only implementation, with a greater than 20x reduction in solution cost. The T1 card is built around the Xilinx Zynq UltraScale+ MPSoC and RFSoC. It provides two 25G SFP28 interfaces and an x16 PCIe interface bifurcated into two x8 interfaces, in a dual-slot FHHL (full-height, half-length) form factor. A 100G CMAC interface between the ZU21DR and the ZU19EG handles data transfer between the two devices.

Key Features

  • Xilinx ZU21DR RFSoC for 5G baseband processing offload
  • Xilinx ZU19EG MPSoC for 5G O-RAN fronthaul termination
  • 64-bit, 4GB DDR4 memory interfaced to the PL
  • 32-bit, 2GB DDR4 memory interfaced to the PS
  • Two SFP28 cages for 25G
  • x16 PCIe interface bifurcated into two x8 interfaces to the host through the PCIe edge-finger connector
  • x16 standard NIC card form factor (112mm x 168mm)
  • JTAG Connectors for Debugging


Reference Design on T1

  • VVDN's L1 offload design with BBDEV standard APIs, which can easily interface with a compliant 5G L1 software stack.
  • VVDN's O-RAN split option 7-2x fronthaul design with a 2x25G Ethernet/eCPRI interface and DPDK standard APIs for interfacing with a compliant 5G L1 software stack.

Applications

  • DU/CU
  • 5G/LTE base stations
  • Data center acceleration
  • Finance
  • Networking
  • Security

Specifications

Dimensions
The card is compliant with the PCIe CEM 3.0 specification as a dual-slot, full-height PCIe add-in card.
  • Height: 111.15mm, PCB thickness: 1.57mm, Length: 167.65mm
  • Cooling enclosure installed
  • Assembly thickness: 18.71mm

PCIe Connector/Data-Rates
  • Gen 1: 2.5 GT/s
  • Gen 2: 5 GT/s
  • Gen 3: 8 GT/s
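
As a rough, generation-level guide (not a T1-specific figure), a Gen3 x8 link carries 8 GT/s x 8 lanes x 128b/130b encoding, or roughly 7.9 GB/s per direction, so the two bifurcated x8 links on the x16 edge connector together present about 15.8 GB/s each way before protocol overhead.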

DDR4 Specifications
DDR4 RAM configuration:
  • ZU19EG PS: 32-bit w/ECC, 2400 MHz DDR4, 2 GB
  • ZU19EG PL: 64-bit w/ECC, 2400 MHz DDR4, 4 GB
  • ZU21DR PS: 32-bit w/ECC, 2400 MHz DDR4, 2 GB
  • ZU21DR PL: 64-bit w/ECC, 2400 MHz DDR4, 4 GB

Network Interfaces
Two 25G SFP28 interfaces and an x16 PCIe interface bifurcated into two x8 interfaces.

USB Maintenance Port
The T1 card includes a covered micro-USB maintenance port at the I/O bracket.

Operating System Compatibility
The following operating systems are supported:
  • CentOS 7.4/7.5
  • RHEL 7.4/7.5
  • Ubuntu 16.04.4

Operating and Storage Temperature Conditions
  • Operating temperature: C to 40°C
  • Storage temperature: -40°C to 75°C
  • Operating humidity: 10% to 85%
  • Storage humidity: 5% to 90%

VVDN IPs on T1

VVDN's O-RAN split option 7-2x fronthaul design is available with a DPDK API framework for interfacing with a 5G L1 stack over the 5G O-RAN fronthaul interface.

  • U-plane and C-plane messages are fully handled in RTL
    • IQ compression and decompression: BFP and MC
    • IQ bit-width support from 9 bits to 16 bits
    • C/U-plane Tx and Rx window management
  • S-plane: timestamp insertion and extraction in RTL
    • Can achieve Class C performance
    • IEEE 1588 and SyncE support
  • Software support for M-plane traffic
  • DPDK-based API for transferring IQ samples to and from the host (a minimal sketch follows this list)
  • Easily scalable architecture to support multiple fronthaul interfaces
  • Prioritization based on message type
  • Support for 10G and 25G interfaces
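
Below is a minimal host-side sketch of that DPDK-based IQ transfer path. It assumes the card's fronthaul data path is exposed to the host as DPDK ethdev port 0 and that the card's RTL has already terminated the O-RAN framing and decompressed the IQ payloads; the port ID, ring sizes, and pool sizes are generic illustrative values, not T1-specific settings.

```c
/* Host-side RX sketch: poll IQ payloads delivered by the card over DPDK.
 * Assumptions (not T1-specific): the accelerator appears as ethdev port 0,
 * one RX/TX queue pair is enough, and payload handling is left as a stub. */
#include <stdint.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

#define PORT_ID  0
#define BURST_SZ 32

int main(int argc, char **argv)
{
	struct rte_eth_conf port_conf = {0};
	struct rte_mempool *pool;
	struct rte_mbuf *bufs[BURST_SZ];
	uint16_t nb_rx, i;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Mbuf pool backing the receive descriptor ring. */
	pool = rte_pktmbuf_pool_create("iq_pool", 8192, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
	if (pool == NULL)
		return -1;

	/* One RX and one TX queue on the fronthaul-facing port. */
	if (rte_eth_dev_configure(PORT_ID, 1, 1, &port_conf) < 0 ||
	    rte_eth_rx_queue_setup(PORT_ID, 0, 1024,
				   rte_eth_dev_socket_id(PORT_ID), NULL, pool) < 0 ||
	    rte_eth_tx_queue_setup(PORT_ID, 0, 1024,
				   rte_eth_dev_socket_id(PORT_ID), NULL) < 0 ||
	    rte_eth_dev_start(PORT_ID) < 0)
		return -1;

	for (;;) {
		/* Poll the RX ring for IQ payloads pushed up by the card. */
		nb_rx = rte_eth_rx_burst(PORT_ID, 0, bufs, BURST_SZ);
		for (i = 0; i < nb_rx; i++) {
			const uint8_t *iq = rte_pktmbuf_mtod(bufs[i],
							     const uint8_t *);
			(void)iq; /* hand the samples to the L1 stack here */
			rte_pktmbuf_free(bufs[i]);
		}
	}
	return 0;
}
```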

VVDN's L1 offload design is available with BBDEV APIs, which can be easily interfaced with a baseband (L1) software stack; a minimal usage sketch follows the list below.

  • DPDK baseband device (BBDEV) library framework to provide workloads for offload.
  • Poll-mode driver based on Xilinx QDMA to submit data to the HW accelerator.
  • One core is available for LDPC encode operations and three for decode operations.
  • BBDEV API for LDPC and rate-matching hardware acceleration.
  • The user can submit the CB payload to the BBDEV driver after CRC attachment.
  • The BBDEV driver fills the corresponding descriptor ring, and the hardware accelerator places the encoded/decoded data in the dequeue ring after the respective operation.
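
The sketch below shows how one code block might be pushed through that enqueue/dequeue flow using the DPDK BBDEV API. The device ID, queue ID, and LDPC parameters (basegraph, lifting size, Ncb, E) are illustrative assumptions rather than T1-specific values, and it assumes the accelerator is already registered as BBDEV device 0 through the QDMA-based poll-mode driver, with queue 0 configured and started for LDPC encode.

```c
/* Sketch of one LDPC-encode operation over the DPDK BBDEV API.
 * Assumptions (not T1-specific): BBDEV device 0 / queue 0 is already
 * configured and started for LDPC encode, the op pool was created with
 * rte_bbdev_op_pool_create(..., RTE_BBDEV_OP_LDPC_ENC, ...), and the
 * input mbuf carries one CB with its CRC already attached. */
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_bbdev.h>
#include <rte_bbdev_op.h>

#define DEV_ID   0
#define QUEUE_ID 0

static int ldpc_encode_one_cb(struct rte_mempool *op_pool,
			      struct rte_mbuf *cb_in,   /* CB + CRC */
			      struct rte_mbuf *cb_out)  /* encoded output */
{
	struct rte_bbdev_enc_op *op;
	uint16_t enq, deq = 0;
	int ok;

	if (rte_bbdev_enc_op_alloc_bulk(op_pool, &op, 1) != 0)
		return -1;

	/* Describe the code block; numeric values are illustrative 5G NR
	 * parameters (BG1, Zc = 384), not settings mandated by the card. */
	op->ldpc_enc.input.data    = cb_in;
	op->ldpc_enc.input.offset  = 0;
	op->ldpc_enc.input.length  = rte_pktmbuf_data_len(cb_in);
	op->ldpc_enc.output.data   = cb_out;
	op->ldpc_enc.output.offset = 0;
	op->ldpc_enc.basegraph     = 1;
	op->ldpc_enc.z_c           = 384;
	op->ldpc_enc.n_cb          = 25344;   /* 66 * Zc */
	op->ldpc_enc.n_filler      = 0;
	op->ldpc_enc.q_m           = 4;       /* 16-QAM */
	op->ldpc_enc.rv_index      = 0;
	op->ldpc_enc.code_block_mode = 1;     /* per-CB operation */
	op->ldpc_enc.cb_params.e   = 8448;    /* rate-matched output bits */

	/* Enqueue to the driver's descriptor ring, then poll the dequeue
	 * ring until the accelerator returns the encoded block. */
	enq = rte_bbdev_enqueue_ldpc_enc_ops(DEV_ID, QUEUE_ID, &op, 1);
	while (deq < enq)
		deq += rte_bbdev_dequeue_ldpc_enc_ops(DEV_ID, QUEUE_ID,
						      &op, enq - deq);

	ok = (enq == 1 && op->status == 0);
	rte_bbdev_enc_op_free_bulk(&op, 1);
	return ok ? 0 : -1;
}
```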
