This repository was archived by the owner on Feb 16, 2019. It is now read-only.
HOME
Radha Giduthuri edited this page Dec 6, 2017 · 11 revisions
See development workflow page for developer instructions.
Here are some useful tutorial documents:
Set up and run annInferenceServer
- Recommended server configuration: EPYC(tm) processor with multiple Vega-based GPUs
- Install Ubuntu 16.04 64-bit
- Install ROCm from AMD repositories
- amdovx-modules: checkout, build, and install
- Add `/opt/rocm/bin` to the `PATH` environment variable
- Add `/opt/rocm/lib` to the `LD_LIBRARY_PATH` environment variable
- Run `/opt/rocm/bin/annInferenceServer`
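The environment steps above can be sketched as a shell snippet. This is a minimal sketch assuming a default ROCm install under `/opt/rocm`; the guarded launch at the end is illustrative, not part of the original instructions:

```shell
# Make the ROCm binaries and shared libraries visible to the shell and loader
export PATH="$PATH:/opt/rocm/bin"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/rocm/lib"

# Start the inference server only if it is actually installed
# (backgrounded here purely for illustration)
if command -v annInferenceServer >/dev/null 2>&1; then
    annInferenceServer &
fi
```

These `export` lines can also be added to `~/.bashrc` so the settings persist across sessions.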
Set up and run annInferenceApp
- Use another workstation for `annInferenceApp`
- Build `annInferenceApp`
- Run `annInferenceApp`
- Connect to the server (use port 28282)
- Upload a Caffe model (such as ResNet-50 for ImageNet)
  - Select `.prototxt`, `.caffemodel`, input dimensions, and other optional parameters
  - Click `Upload & Compile`
- Run inference
  - Select a synset text file with one output label name per line
  - Select an input image folder (such as ImageNet validation data)
  - Click `Run`
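As a concrete illustration of the inputs the app asks for, the snippet below creates a toy synset label file and an empty image folder. The file name `synset_words.txt`, the folder name `imagenet_val`, and the three labels are arbitrary examples, not names the app requires:

```shell
# Synset file: one output label name per line, in the same order as the
# model's output classes (these three labels are only an example)
cat > synset_words.txt <<'EOF'
tench
goldfish
great white shark
EOF

# Input image folder, e.g. where ImageNet validation images would go
mkdir -p imagenet_val

# The number of lines must match the number of model output classes
wc -l < synset_words.txt
```

In the app, point the synset selector at `synset_words.txt` and the image-folder selector at `imagenet_val` (or your real validation set) before clicking `Run`.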