PCI-Express to VME bridge
This wiki page describes the PCI-Express to VME bridge HDL module. It was originally implemented by MEN Mikro Elektronik GmbH on the A25 Single Board Computer (SBC) and later open-sourced in the context of CERN's VME Single Board Computers supply contract.
The PCIe-to-VME bridge translates read and write operations in the PCIe address space into read and write transactions on the VME bus. It acts as a PCIe Endpoint on one side and as a VME bus Master on the other. The bridge can generate VME single cycles and block transfers. The following access types are currently supported:
- VME single cycles: A16, A24, A32 with any of the D8, D16, D32 data widths
- VME block transfers: A16, A24, A32 with any of the D8, D16, D32 plus the A32D64 multiplexed block transfer (MBLT)
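For reference, these cycle types map onto the standard VMEbus address modifier (AM) codes sketched below. The values are the generic non-privileged data codes from the VME64 specification, not something specific to this bridge, and the enum is purely illustrative.

```c
/* Standard (non-privileged data) VMEbus address modifier codes for the
 * cycle types listed above. These are generic VME64 values; how the
 * bridge encodes them internally is a detail of its register map. */
enum vme_am {
    VME_AM_A16      = 0x29,  /* A16 single cycle          */
    VME_AM_A24      = 0x39,  /* A24 single cycle          */
    VME_AM_A24_BLT  = 0x3B,  /* A24 block transfer        */
    VME_AM_A32      = 0x09,  /* A32 single cycle          */
    VME_AM_A32_BLT  = 0x0B,  /* A32 block transfer        */
    VME_AM_A32_MBLT = 0x08,  /* A32D64 multiplexed BLT    */
    VME_AM_CRCSR    = 0x2F,  /* A24 CR/CSR configuration  */
};
```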
The VME block transfers are executed by a built-in Direct Memory Access (DMA) engine, which moves blocks of data between the system memory and the VME bus while bypassing the CPU. In general this is a faster and more efficient way of exchanging multiple data words, as the CPU is free to continue its normal operation until the DMA engine has completed the programmed task. The bridge also supports some of the features added in the VME64x extensions: it is able to use the geographical addressing pins and generate a special type of A24 access to read and write the CR/CSR configuration space of VME slaves installed in the same crate. However, none of the fast transfer modes (2eVME, 2eSST) is currently implemented.
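To make the geographical addressing concrete, here is a minimal C sketch of how a slot's CR/CSR window is located. The 512 KiB-per-slot layout comes from the VME64x specification; the `bar4` pointer and the helper names are assumptions for illustration only (the bridge exposes its CR/CSR window through BAR4, see the table in the next section).

```c
#include <stdint.h>

/* VME64x assigns each slot a 512 KiB (0x80000-byte) window inside the
 * 16 MiB CR/CSR space; the window base is the board's geographical
 * address (slot number) times the window size. */
#define CRCSR_SLOT_SIZE 0x80000UL

static inline uint32_t crcsr_slot_base(unsigned int slot)
{
    return (uint32_t)slot * CRCSR_SLOT_SIZE;
}

/* Illustrative helper: 'bar4' is assumed to point at the bridge's mapped
 * BAR4 window (CR/CSR access). A byte read through it is translated by
 * the bridge into an A24 CR/CSR cycle on the VME bus. */
static uint8_t crcsr_read8(volatile const uint8_t *bar4,
                           unsigned int slot, uint32_t offset)
{
    return bar4[crcsr_slot_base(slot) + offset];
}
```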
If you're looking for a VME-bus Slave, see the project VME64x to Wishbone Core.
Supported FPGA platforms
Manufacturer | Family | Hardware board | Synthesized FPGA bitstream |
---|---|---|---|
Intel | Cyclone IV | MEN A25 VME Single Board Computer | A25 binaries |
HDL architecture
The internal HDL architecture of the PCIe-to-VME bridge is presented in the figure above. It is built around the Wishbone (WB) bus, an open-source computer bus often used in FPGA designs. The bridge is split into several VHDL modules communicating through the central Wishbone Interconnect block. Each of these modules has its own function and can either control other modules (as a Wishbone Master), be controlled (as a Wishbone Slave), or have both interfaces:
- PCIe2WB - PCI Express x4 version 1.1 Endpoint with both WB Master and WB Slave interfaces. The firmware allows selecting x1, x2 or x4 PCIe operation.
- VMEbus - VME Master with both WB Master (for DMA transfers) and WB Slave (for single-cycle accesses) interfaces
- Flash - WB Slave module that interfaces the Flash chip outside the FPGA
- SRAM - WB Slave module that interfaces the SRAM chip outside the FPGA
- Version ROM - WB Slave module with FPGA memory blocks initialized at synthesis time with various information about the firmware (the so-called chameleon table)
The PCIe2WB module is in fact a wrapper for an auto-generated Intel IP core. This IP core customizes the PCI Express block hardened in the Cyclone IV FPGA chip. Currently we use it as a four-lane PCIe version 1.1 interface with the vendor ID and device ID specified by MEN. It provides several memory windows assigned to five Base Address Registers (BARs). These memory windows are then mapped to Wishbone addresses, which allows accessing the WB Slave devices as well as generating the different VME accesses (see the table below).
BAR no. | Window | Offset (hex) |
---|---|---|
BAR0 | Version ROM | 0x0 |
BAR0 | Flash | 0x200 |
BAR0 | VMEbus - registers | 0x10000 |
BAR0 | VMEbus - A16D16 access | 0x20000 |
BAR0 | VMEbus - A16D32 access | 0x30000 |
BAR1 | SRAM | 0x0 |
BAR2 | VMEbus - A24D16 access | 0x0 |
BAR2 | VMEbus - A24D32 access | 0x1000000 |
BAR3 | VMEbus - A32 access | 0x0 |
BAR4 | VMEbus - CR/CSR access | 0x0 |
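As an illustration of how these windows can be used, the sketch below maps BAR0 through the Linux sysfs resource file and performs a single A16D16 read. This is a hedged example, not the supported programming interface: the PCI address `0000:01:00.0` is a placeholder, and in practice the kernel drivers released with the bridge would be used instead of raw sysfs access.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* BAR0 window offsets from the table above. */
#define BAR0_VME_A16D16 0x20000UL
#define BAR0_MAP_LEN    0x40000UL  /* covers all BAR0 windows listed above */

int main(void)
{
    /* Placeholder PCI address: substitute the bus/device/function under
     * which the bridge actually enumerates on your system. */
    int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint8_t *bar0 = mmap(NULL, BAR0_MAP_LEN,
                                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (bar0 == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* A 16-bit read at BAR0 + 0x20000 + addr is turned by the bridge
     * into an A16D16 single cycle at 'addr' on the VME bus. */
    uint16_t addr = 0x1000;  /* example VME address */
    uint16_t val = *(volatile uint16_t *)(bar0 + BAR0_VME_A16D16 + addr);
    printf("A16D16 read @0x%04x = 0x%04x\n", addr, val);

    munmap((void *)bar0, BAR0_MAP_LEN);
    close(fd);
    return 0;
}
```

Note that VME is a big-endian bus, so raw accesses through these windows may require byte swapping on a little-endian host; this is worth checking against the driver sources before relying on direct reads like the one above.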
The VMEbus module can act as both a VME Master and a VME Slave. It provides several Wishbone address spaces for the various types of access (A16, A24, A32, CR/CSR). These WB windows are directly mapped to the PCIe memory windows. The behavior of this module depends on the detected VME crate slot where the board is installed: if slot 1 is detected, the bridge generates the system clock and reset signals for the VME backplane, and the bus arbiter module is activated. The VHDL module can also read the VME interrupt lines of all 7 levels and is equipped with a DMA engine.

The DMA engine performs block transfers and multiplexed block transfers between the VME bus and the system memory without any active control from the CPU during the transfer. The controller is configured by writing a set of linked buffer descriptors (at most 112) to the SRAM area. Each descriptor carries the complete information about a transfer: the source and destination devices, the size of the transfer, the source and destination addresses, and, for VME transactions, the address modifier and data width. Additionally, one can specify whether the DMA engine should automatically increment the source or destination address for each transferred word in the block, e.g. when transferring data words to SRAM. The source and destination devices can be any of the VME bus, PCIe2WB or SRAM modules.
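The exact bit-level layout of a buffer descriptor is defined by the bridge's register map and is not reproduced here; the C struct below is only an illustrative sketch whose field names and widths mirror the description above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of one DMA buffer descriptor. The real bit-level
 * layout lives in the bridge's register map; these field names and
 * types are assumptions chosen to match the prose description. */
enum dma_device { DEV_VME, DEV_PCIE2WB, DEV_SRAM };

struct dma_descriptor {
    enum dma_device src_dev;   /* source: VME bus, PCIe2WB or SRAM        */
    enum dma_device dst_dev;   /* destination: VME bus, PCIe2WB or SRAM   */
    uint32_t src_addr;         /* source address                          */
    uint32_t dst_addr;         /* destination address                     */
    uint32_t size;             /* transfer size                           */
    uint8_t  vme_am;           /* VME address modifier (VME endpoints)    */
    uint8_t  vme_dwidth;       /* VME data width: D8/D16/D32/D64          */
    bool     src_increment;    /* increment source address per word?      */
    bool     dst_increment;    /* increment destination address per word? */
    uint32_t next;             /* link to the next descriptor, or 0       */
};
```

Up to 112 such descriptors can be chained in the SRAM area before starting the engine; disabling address incrementing on one side is useful, for example, when repeatedly reading the same VME register into a growing buffer.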
Sources
Documentation
Articles
- Solving Vendor Lock-in in VME Single Board Computers Through Open-Sourcing of the PCIe-VME64x bridge, ICALEPCS 2017.
- CERN Upgrades Particle Accelerator with an FPGA-based PCIe-to-VME64x Bridge, Embedded Systems Engineering, January 2019 (no link available).
- Des cartes processeurs VME au top jusqu’en 2032… grâce à un pont PCIe-VME64x sur FPGA (VME processor boards going strong until 2032… thanks to an FPGA-based PCIe-VME64x bridge), L'embarqué magazine, March 2019
- CERN: Teilchenbeschleuniger wird mit Open Source VME-CPU-Boards aufgerüstet (CERN: particle accelerator upgraded with open-source VME CPU boards), June 2019
Contacts
- Grzegorz Daniluk - CERN
Project Status
Date | Event |
---|---|
03-2017 | Start of PCIe-VME64x bridge insourcing project at CERN |
10-2017 | PCIe-VME64x bridge presented during ICALEPCS 2017 conference |
02-2018 | Version 3.11 of HDL, drivers and testbench released |
02-2018 | PCIe-VME64x bridge HDL and drivers are validated |
04-2018 | MEN A25 Single Board Computer board declared operational at CERN |
05-2018 | Version 3.15 of HDL released with improvements for BLT and MBLT performance |
07-2022 | Plans to extend the core to support 2eSST cycles |
04-2023 | The core will be used in a new VME64x Single Board Computer (SBC) |