TVM Performance Evaluation and Analysis (Part 3)
Figure 1. TVM’s WebGPU backend comes close to native GPU performance when deploying models to the web.
Figure 2. WebGPU is used to write compute shaders for primitive operators in deep neural networks.
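To make this concrete, below is a minimal sketch (not from the original post) of how a primitive operator can be lowered to a WebGPU compute shader with TVM's tensor-expression API. It assumes a TVM build with WebGPU codegen enabled and a version that still exposes the TE schedule API (`te.create_schedule`); the operator, thread binding, and target strings are illustrative.

```python
import tvm
from tvm import te

# A simple primitive operator (element-wise add) written as a tensor expression.
n = 1024
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# GPU-style schedule: bind the loop to workgroup/thread indices.
s = te.create_schedule(C.op)
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))

# Lower the device code to a WebGPU compute shader; the host side targets
# WebAssembly so the module can be driven from TVM's JS runtime.
target = tvm.target.Target("webgpu", host="llvm -mtriple=wasm32-unknown-unknown-wasm")
mod = tvm.build(s, [A, B, C], target=target)
print(mod.imported_modules[0].get_source())  # dump the generated shader source
```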
Figure 3. Building a WebGPU runtime inside TVM’s JS runtime.
Figure 4. Comparing the execution of a full computational graph via TVM’s WebGPU backend and native targets.
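For the native side of such a comparison, the graph can be built and timed with TVM's graph executor, roughly as sketched below. This is a hypothetical setup using a test network bundled with TVM; the target string and timing parameters are assumptions, and the WebGPU numbers would be measured separately in the browser.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.relay import testing
from tvm.contrib import graph_executor

# Stand-in model: a ResNet-18 workload shipped with TVM for testing.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Build for a native GPU target ("cuda", "metal", "vulkan", ... as available).
target = "cuda"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))

# Time end-to-end graph execution on the native target.
timer = module.module.time_evaluator("run", dev, number=10, repeat=3)
print("native mean runtime: %.2f ms" % (timer().mean * 1e3))
```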
Figure 5. 2D convolution with data layout in NCHW4c and weight layout in OIHW4o4i. Left: The input tensor in NCHW4c layout. One moving filter of the kernel is colored in blue. One element of the input and kernel is colored in grey. Middle: The packed input and kernel in the grey block. Right: The output in NCHW4c layout. Inside the element depicted, there are four packed elements in the channel sub-dimension.
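As a small illustration of the packed data layout (not from the original post), the NumPy sketch below repacks an NCHW activation tensor into NCHW4c by splitting the channel dimension into blocks of four; the weight layout OIHW4o4i packs the output and input channels of the kernel in the same way.

```python
import numpy as np

def nchw_to_nchw4c(x: np.ndarray) -> np.ndarray:
    """Repack NCHW into NCHW4c: split C into C//4 outer channels plus an
    innermost sub-dimension of 4 packed channel elements."""
    n, c, h, w = x.shape
    assert c % 4 == 0, "channel count must be divisible by the packing factor"
    # (N, C, H, W) -> (N, C//4, 4, H, W) -> (N, C//4, H, W, 4)
    return x.reshape(n, c // 4, 4, h, w).transpose(0, 1, 3, 4, 2)

x = np.arange(1 * 8 * 2 * 2, dtype="int8").reshape(1, 8, 2, 2)
packed = nchw_to_nchw4c(x)
print(packed.shape)  # (1, 2, 2, 2, 4): four channel elements packed per position
```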
Figure 6. Workflow of running quantized models.
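A hedged sketch of what this workflow can look like with TVM's Relay quantization pass: the float32 model is quantized to int8 before compilation and then built like any other Relay module. The model is a bundled test network standing in for an imported one, and the qconfig knobs shown are illustrative and vary between TVM releases.

```python
import tvm
from tvm import relay
from tvm.relay import testing, quantize

# Float32 Relay model, standing in for one imported from a framework.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Quantize the graph to int8; calibration mode and scale are illustrative.
with quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
    qmod = quantize.quantize(mod, params)

# The quantized module is then compiled and deployed like any other.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(qmod, target="llvm")
```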
Figure 7. A full deep learning compiler stack to support machine learning workloads for diverse hardware backends.
Figure 8. Golang interface over the TVM runtime.
Figure 9. Import, Compile, Integrate and Deploy.
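The four stages in Figure 9 map onto a short Python script, sketched below under the assumption of an ONNX model with a single input named "input"; the file names, input shape, and the `.numpy()` accessor depend on the model and TVM version.

```python
import onnx
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Import: load a framework model (file name and input shape are placeholders).
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

# Compile: build for the deployment target and export a shared library.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
lib.export_library("deploy_lib.so")

# Integrate and deploy: load the exported module in the target process and run it.
dev = tvm.cpu(0)
loaded = tvm.runtime.load_module("deploy_lib.so")
module = graph_executor.GraphModule(loaded["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```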