OOM issue #22

@ecilay

Description

When I followed the instructions as specified in the Docker setup, it always gives an out-of-memory error. But I am already using an AWS P3 instance, which has a Tesla V100.
Is this expected, or is something wrong in my setup?

My config:
tensorflow-gpu: 1.10.0
keras: 2.0.9

Error from vgg_normalised.py line 38:

OOM when allocating tensor of shape [3] and type float
[[Node: vgg_encoder/preprocess/Const_1 = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [3] values: -103.939 -116.779 -123.68>, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

Thanks!
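For context: an OOM on a shape-[3] constant usually means the GPU's memory was already exhausted before this op ran, typically because TensorFlow 1.x reserves nearly all GPU memory at session creation, or another process is already occupying the V100. A minimal configuration sketch of the standard TF 1.x mitigation, assuming the project constructs its own `tf.Session` (the exact session-creation site in this repo is not shown here):

```python
import tensorflow as tf

# By default, TF 1.x grabs almost all GPU memory when the session starts.
# allow_growth makes it allocate memory on demand instead.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Alternatively, cap the fraction of GPU memory this process may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.8

sess = tf.Session(config=config)
```

Checking `nvidia-smi` before launching can also confirm whether another process (for example, a stale container) is already holding the GPU's memory.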
