When I follow the instructions in the Docker setup, I always get an out-of-memory error, even though I am running on an AWS P3 instance with a Tesla V100.
Is this expected, or is something wrong in my setup?
My config:
tensorflow-gpu: 1.10.0
keras: 2.0.9
Error from vgg_normalised.py line 38:
OOM when allocating tensor of shape [3] and type float
[[Node: vgg_encoder/preprocess/Const_1 = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [3] values: -103.939 -116.779 -123.68>], _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
Thanks!