Would this work at scale? #210

@seabass011

Description

I'm wondering whether I could use autofaiss + pyspark to store ~100 billion vectors. I read in a thread on the faiss repo that Milvus is essentially faiss with distribution built in, so I was wondering if I could instead use autofaiss and distribute the resulting index data across a bunch of nodes myself.

Do you think this is a reasonable approach if I need to store that many vectors?
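For context, the pattern such a setup would rely on is scatter-gather search: partition the vectors into shards (one per node), search each shard independently for its top-k, and merge the per-shard results. Here's a minimal pure-Python illustration of that merge logic, with brute-force distance standing in for a real per-node faiss index (all names here are hypothetical, not autofaiss APIs):

```python
import heapq
import math
import random

def l2(a, b):
    # Plain Euclidean distance between two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search_shard(shard, query, k):
    # Brute-force stand-in for a per-node faiss index search.
    # Returns the k nearest (distance, id) pairs within this shard.
    return heapq.nsmallest(k, ((l2(v, query), i) for i, v in shard))

def distributed_search(shards, query, k):
    # Scatter: query every shard; gather: merge per-shard top-k.
    # Merging each shard's top-k yields the exact global top-k.
    partial = [hit for shard in shards for hit in search_shard(shard, query, k)]
    return heapq.nsmallest(k, partial)

random.seed(0)
dim, n, n_shards = 4, 1000, 8
vectors = [[random.random() for _ in range(dim)] for _ in range(n)]
# Partition (id, vector) pairs round-robin across shards, the way a
# pyspark job might partition them across nodes.
shards = [[(i, v) for i, v in enumerate(vectors) if i % n_shards == s]
          for s in range(n_shards)]
query = [0.5] * dim
hits = distributed_search(shards, query, k=5)
```

At 100B vectors the hard parts are the ones this sketch skips: building each shard's ANN index, routing queries, and keeping latency down during the gather step, which is roughly the machinery Milvus provides out of the box.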
