This repository was archived by the owner on Feb 16, 2019. It is now read-only.
Radha Giduthuri edited this page Dec 6, 2017 · 11 revisions

See the development workflow page for developer instructions.


Here are some useful tutorial documents:


Getting Started with the Neural Network Inference Sample Application on ROCm

Setup and run annInferenceServer

  1. Recommended server configuration: an EPYC(tm) processor with multiple Vega-based GPUs
  2. Install Ubuntu 16.04 64-bit
  3. Install ROCm from the AMD repositories
  4. Check out, build, and install amdovx-modules
  5. Add /opt/rocm/bin to the PATH environment variable
  6. Add /opt/rocm/lib to the LD_LIBRARY_PATH environment variable
  7. Run /opt/rocm/bin/annInferenceServer
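Steps 5–7 above can be sketched as a short shell session; the paths assume a default ROCm install under /opt/rocm, and the final launch line is commented out since it only works on a machine with the server installed:

```shell
# Make the ROCm binaries and libraries visible (steps 5 and 6).
export PATH=/opt/rocm/bin:$PATH
export LD_LIBRARY_PATH=/opt/rocm/lib:${LD_LIBRARY_PATH:-}

# Step 7: launch the inference server so annInferenceApp clients can connect.
# /opt/rocm/bin/annInferenceServer
```

Adding these exports to ~/.bashrc makes them persist across sessions.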

Setup and run annInferenceApp

  1. Use another workstation for annInferenceApp
  2. Build annInferenceApp
  3. Run annInferenceApp
    • Connect to the server (use port 28282)
    • Upload a Caffe model (such as ResNet-50 for ImageNet)
      • Select the .prototxt file, the .caffemodel file, input dimensions, and other optional parameters
      • Click Upload & Compile
    • Run Inference
      • Select a synset txt file with one output label name per line
      • Select input image folder (such as ImageNet validation data)
      • Click Run
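For the synset txt file mentioned above, the expected format is one label per line, ordered to match the network's output indices. A minimal sketch with three hypothetical labels (real ImageNet synset files have 1000 lines, one per class):

```shell
# Create a tiny example synset file; line N names output class N.
cat > synset_words.txt <<'EOF'
goldfish
tabby cat
golden retriever
EOF

# The line count must equal the network's number of output classes.
wc -l < synset_words.txt
```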
