gdrcopy
A low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.
While GPUDirect RDMA is meant for direct access to GPU memory from third-party devices, it is possible to use these same APIs to create perfectly valid CPU mappings of the GPU memory.
The advantage of a CPU driven copy is the very small overhead involved. That might be useful when low latencies are required.
GDRCopy offers the infrastructure to create user-space mappings of GPU memory, which can then be manipulated as if it were plain host memory (caveats apply).
A simple by-product is a copy library with the following characteristics:
- Very low overhead, as the copy is driven by the CPU. As a reference, a cudaMemcpy can currently incur 6-7 us of overhead.
- An initial memory-pinning phase is required, which is potentially expensive: 10 us to 1 ms depending on the buffer size.
- Fast host-to-device (H-D) copies, thanks to write-combining: H-D bandwidth is 6-8 GB/s on an Ivy Bridge Xeon, but it is subject to NUMA effects.
- Slow device-to-host (D-H) copies, because the GPU BAR, which backs the mappings, cannot be prefetched, so burst read transactions are not generated over PCIe.
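The pin/map/copy workflow described above can be sketched with the library's C API (function names as declared in gdrapi.h). This is an illustrative sketch with error checking omitted; it requires a CUDA-capable GPU and the gdrdrv kernel module to actually run, so treat it as an outline rather than a drop-in program:

```c
/* Sketch of the typical GDRCopy sequence: pin, map, copy, unmap, unpin.
 * Error handling is omitted for brevity. */
#include <string.h>
#include <cuda_runtime.h>
#include "gdrapi.h"

#define SIZE (64 * 1024)  /* one 64 KiB GPU page */

int main(void)
{
    void *d_ptr;
    cudaMalloc(&d_ptr, SIZE);             /* regular CUDA device memory */

    gdr_t g = gdr_open();                 /* open a handle to gdrdrv */
    gdr_mh_t mh;
    /* Pin the GPU buffer; the address should be GPU-page aligned
     * (see the alignment notes later in this document). */
    gdr_pin_buffer(g, (unsigned long)d_ptr, SIZE, 0, 0, &mh);

    void *map_ptr;
    gdr_map(g, mh, &map_ptr, SIZE);       /* CPU mapping of GPU memory */

    static char src[SIZE];
    memset(src, 0xab, SIZE);
    /* CPU-driven H->D copy through the BAR1 mapping */
    gdr_copy_to_mapping(mh, map_ptr, src, SIZE);

    gdr_unmap(g, mh, map_ptr, SIZE);
    gdr_unpin_buffer(g, mh);
    gdr_close(g);
    cudaFree(d_ptr);
    return 0;
}
```

The same mapping can be read back with gdr_copy_from_mapping, though, as noted above, D-H reads through the BAR are much slower than H-D writes.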
The library comes with two tests:
- sanity, which contains unit tests for the library and the driver.
- copybw, a minimal application which measures the R/W bandwidth.
GPUDirect RDMA requires NVIDIA Tesla or Quadro class GPUs based on Kepler, Pascal, Volta, or Turing; see GPUDirect RDMA. For more technical information, please refer to the official GPUDirect RDMA design document.
The device driver requires GPU display driver >= 418.40 on ppc64le and >= 331.14 on other platforms. The library and tests require CUDA >= 6.0. Additionally, the sanity test requires check >= 0.9.8 and subunit.
```shell
# On RHEL
$ sudo yum install check check-devel subunit subunit-devel
# On Debian
$ sudo apt install check libsubunit0 libsubunit-dev
```
Developed and tested on RHEL 7.x and Ubuntu 18.04. The supported architectures are Linux x86_64 and ppc64le.
Root privileges are necessary to load/install the kernel-mode device driver.
We provide three ways to build and install GDRCopy.
RPM packages:
```shell
$ cd packages
$ CUDA=<cuda-install-top-dir> ./build-rpm-packages.sh
$ sudo rpm -Uvh gdrcopy-kmod-<version>.<platform>.rpm
$ sudo rpm -Uvh gdrcopy-<version>.<platform>.rpm
$ sudo rpm -Uvh gdrcopy-devel-<version>.<platform>.rpm
```

deb packages:
```shell
$ cd packages
$ CUDA=<cuda-install-top-dir> ./build-deb-packages.sh
$ sudo dpkg -i gdrdrv-dkms_<version>_<platform>.deb
$ sudo dpkg -i gdrcopy_<version>_<platform>.deb
```

From source:
```shell
$ make PREFIX=<install-to-this-location> CUDA=<cuda-install-top-dir> all install
$ sudo ./insmod.sh
```
Execute the provided tests:
```shell
$ sanity
Running suite(s): Sanity
100%: Checks: 11, Failures: 0, Errors: 0

$ copybw
testing size: 4096
rounded size: 65536
device ptr: 5046c0000
bar_ptr: 0x7f8cff410000
info.va: 5046c0000
info.mapped_size: 65536
info.page_size: 65536
page offset: 0
user-space pointer: 0x7f8cff410000
BAR writing test...
BAR1 write BW: 9549.25MB/s
BAR reading test...
BAR1 read BW: 1.50172MB/s
unmapping buffer
unpinning buffer
closing gdrdrv
```
Depending on the platform architecture, e.g. where the GPUs sit in the PCIe topology, performance may suffer if the processor driving the copy is not the one hosting the GPU, for example on a multi-socket server.
In the example below, the K40m and K80 GPUs are hosted by socket 0 and socket 1, respectively. By explicitly setting the OS process and memory affinity, it is possible to run the test on the optimal processor:
```shell
$ GDRCOPY_ENABLE_LOGGING=1 GDRCOPY_LOG_LEVEL=0 LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH \
  numactl -N 0 -l copybw -d 0 -s $((64 * 1024)) -o $((0 * 1024)) -c $((64 * 1024))
GPU id:0 name:Tesla K40m PCI domain: 0 bus: 2 device: 0
GPU id:1 name:Tesla K80 PCI domain: 0 bus: 132 device: 0
GPU id:2 name:Tesla K80 PCI domain: 0 bus: 133 device: 0
selecting device 0
testing size: 65536
rounded size: 65536
device ptr: 2305ba0000
bar_ptr: 0x7fe60956c000
info.va: 2305ba0000
info.mapped_size: 65536
info.page_size: 65536
page offset: 0
user-space pointer: 0x7fe60956c000
BAR writing test, size=65536 offset=0 num_iters=10000
DBG: sse4_1=1 avx=1 sse=1 sse2=1
DBG: using AVX implementation of gdr_copy_to_bar
BAR1 write BW: 9793.23MB/s
BAR reading test, size=65536 offset=0 num_iters=100
DBG: using SSE4_1 implementation of gdr_copy_from_bar
BAR1 read BW: 787.957MB/s
unmapping buffer
unpinning buffer
closing gdrdrv
```
or on the other one:
```shell
$ GDRCOPY_ENABLE_LOGGING=1 GDRCOPY_LOG_LEVEL=0 LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH \
  numactl -N 1 -l copybw -d 0 -s $((64 * 1024)) -o $((0 * 1024)) -c $((64 * 1024))
GPU id:0 name:Tesla K40m PCI domain: 0 bus: 2 device: 0
GPU id:1 name:Tesla K80 PCI domain: 0 bus: 132 device: 0
GPU id:2 name:Tesla K80 PCI domain: 0 bus: 133 device: 0
selecting device 0
testing size: 65536
rounded size: 65536
device ptr: 2305ba0000
bar_ptr: 0x7f2299166000
info.va: 2305ba0000
info.mapped_size: 65536
info.page_size: 65536
page offset: 0
user-space pointer: 0x7f2299166000
BAR writing test, size=65536 offset=0 num_iters=10000
DBG: sse4_1=1 avx=1 sse=1 sse2=1
DBG: using AVX implementation of gdr_copy_to_bar
BAR1 write BW: 6812.08MB/s
BAR reading test, size=65536 offset=0 num_iters=100
DBG: using SSE4_1 implementation of gdr_copy_from_bar
BAR1 read BW: 669.825MB/s
unmapping buffer
unpinning buffer
closing gdrdrv
```
GDRCopy works with regular CUDA device memory only, as returned by cudaMalloc. In particular, it does not work with CUDA managed memory.
gdr_pin_buffer() accepts any address returned by cudaMalloc and its family. In contrast, gdr_map() requires that the pinned address be aligned to the GPU page size. Neither the CUDA Runtime nor the Driver API guarantees that GPU memory allocation functions return aligned addresses, so users are responsible for properly aligning the addresses passed to the library.
On POWER9, where the CPU and GPU are connected via NVLink, CUDA 9.2 and GPU driver v396.37 are the minimum requirements for full performance. GDRCopy works with earlier CUDA and GPU driver versions, but the achievable bandwidth is substantially lower.
To report issues or suspected bugs in NVIDIA software, we recommend the bug filing system available to NVIDIA registered developers on the developer site.
If you are not a member, you can sign up.
Once a member, you can submit issues using this form. Be sure to select GPUDirect in the "Relevant Area" field.
You can later track their progress using the My Bugs link on the left of this view.