DLPrimitives Blog :: Releases
http://blog.dlprimitives.org/
Development Blog

PyTorch OpenCL Backend - Simplified
http://blog.dlprimitives.org/post/11

<div style="direction:ltr">
<p>Installing the OpenCL backend for PyTorch is now really simple.</p>
<ol>
<li>Install the nightly CPU build of PyTorch in a virtual environment</li>
<li>Clone the <code>dlrpim_backend</code> repository and check out the <code>true_out_of_tree_support</code> branch</li>
<li>Update the submodules</li>
<li><p>Run a few commands inside the repository:</p>
<pre><code>mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=$VIRTUAL_ENV/lib/python3.8/site-packages/torch/share/cmake/Torch ..
make
cd ..
</code></pre></li>
<li><p>Run MNIST training:</p>
<pre><code>python mnist.py --device=ocl:0
</code></pre></li>
</ol>
<p>That's it.</p>
<p>Stay tuned.</p>
</div>

Inference of ONNX Models using DLPrimitives
http://blog.dlprimitives.org/post/10

<div style="direction:ltr">
<p>I have been working on inference of ONNX models using DLPrimitives. It isn't a simple task, since the ONNX operator set is very rich and many things can be implemented in different ways.</p>
<p>After many revisions and improvements I managed to validate multiple ImageNet-pretrained networks from PyTorch and MXNet, plus a few based on TensorFlow (see the notes on TF issues below).</p>
<p>How do you create a dlprimitives network from an ONNX model?</p>
<pre><code>// load and parse the ONNX model
dp::ONNXModel model;
model.load(onnx_path);

// create the network
dp::Context ctx(device_id);
dp::Net net(ctx);

// load the parameters
net.load_model(model);
</code></pre>
<p>And you are ready to go.</p>
<p>I validated the following networks and frameworks:</p>
<ul>
<li>PyTorch, op-sets 9, 11 and 13; nets: <code>alexnet</code>, <code>vgg16</code>, <code>resnet18</code>, <code>resnext50_32x4d</code>, <code>wide_resnet50_2</code>, <code>efficientnet_b0</code>, <code>efficientnet_b4</code>, <code>regnet_y_400mf</code>, <code>squeezenet1_0</code>, <code>mobilenet_v2</code>, <code>densenet121</code></li>
<li>MXNet: <code>vgg11_bn</code>, <code>alexnet</code>, <code>mobilenetv2_0.25</code>, <code>mobilenet0.25</code>, <code>densenet121</code>, <code>resnet18_v1</code>, <code>squeezenet1.0</code></li>
<li>TensorFlow: op-sets 9 and 11, limited initial support, channels-first format only: <code>resnet50</code>, <code>densenet121</code></li>
</ul>
<p>Some PyTorch networks don't pass yet because a few operators are still missing. The situation with TensorFlow is more complicated, and only a few networks worked correctly.</p>
<h2>TensorFlow</h2>
<p>When I started validating pretrained Keras networks I discovered a very surprising thing: TensorFlow uses asymmetric padding in some cases. Since in TF/Keras you don't specify the padding explicitly but rather give a vague "same" or "valid" hint, the computed padding can differ between the start and the end of the image.</p>
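<p>To make this concrete, here is a minimal Python sketch of the "same" padding rule for a single dimension, ignoring dilation (the helper name is just for illustration; it is not part of dlprimitives or TF):</p>
<pre><code>import math

# Illustrative helper, not a dlprimitives or TF API.
def same_padding_1d(in_size, kernel, stride):
    # "same" mode: the output size is ceil(in_size / stride)
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_begin = total // 2         # the smaller half goes first...
    pad_end = total - pad_begin    # ...and the extra pixel goes to the end
    return pad_begin, pad_end

# A 3x3 kernel with stride 2 on a 224-pixel edge needs 1 pixel of
# padding in total, and TF puts all of it at the end: prints (0, 1)
print(same_padding_1d(224, 3, 2))
</code></pre>
<p>Whenever the total padding is odd, the two sides differ, which is exactly what a single symmetric padding parameter cannot express.</p>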
<p>Interestingly, cuDNN does not even provide an asymmetric padding option for convolutions. Looking into the code, TF handles such cases by padding the tensor manually (which is actually a huge waste of memory and memory bandwidth).</p>
<p>So supporting these convolutions will require implementing a new, simple padding layer, just to make sure dlprimitives can run inference of TF models.</p>
<p>To be continued...</p>
</div>

Documentation is Online
http://blog.dlprimitives.org/post/3

<div style="direction:ltr">
<p>I published the recent documentation online:</p>
<p><a href="http://dlprimitives.org/docs">http://dlprimitives.org/docs</a></p>
</div>