documentation/vector_indexing.myco

[[Neural nets|Modern technology]] has allowed converting many [[things]] to [[vectors]], so that things related to other things can be found by retrieving the records with the highest dot product or cosine similarity with a query, or the lowest L2 distance from it. This can be done exactly through brute force, but that is obviously not particularly efficient (a minimal baseline is sketched after the list below). [[Algorithms]] exist which allow sublinear runtime scaling wrt. record count, with some possibility of missing the best (as determined by brute force) match. The main techniques are:
* graph-based
* product quantization (lossy compression)
* inverted lists (split vectors into clusters, search a subset of the clusters)
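For reference, the exact version is just a full scan under whichever metric is in use. A minimal numpy sketch (the function and parameter names here are illustrative, not any particular library's API):

```python
import numpy as np

def brute_force_search(queries, database, k=10, metric="dot"):
    """Exact top-k search by scoring every record against every query."""
    if metric == "dot":
        scores = queries @ database.T                      # higher is better
    elif metric == "cosine":
        q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
        db = database / np.linalg.norm(database, axis=1, keepdims=True)
        scores = q @ db.T                                  # higher is better
    elif metric == "l2":
        # Negated squared L2 distance, so "higher is better" holds for all metrics.
        scores = (2.0 * (queries @ database.T)
                  - np.sum(database ** 2, axis=1)[None, :]
                  - np.sum(queries ** 2, axis=1)[:, None])
    else:
        raise ValueError(metric)
    # argpartition picks the top k without fully sorting every row.
    top = np.argpartition(-scores, k - 1, axis=1)[:, :k]
    order = np.argsort(-np.take_along_axis(scores, top, axis=1), axis=1)
    return np.take_along_axis(top, order, axis=1)

rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 128)).astype(np.float32)
q = rng.normal(size=(3, 128)).astype(np.float32)
print(brute_force_search(q, db, k=5, metric="l2"))
```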
IVFADC (short for "inverted file with asymmetric distance computation", for some reason), which is just inverted lists combined with product quantization of the residuals, was historically the most common way to search large vector datasets. However, recall can be very bad in some circumstances, most notably when query and dataset vectors are drawn from significantly different distributions: see [[https://arxiv.org/abs/2305.04359]] and [[https://kay21s.github.io/RoarGraph-VLDB2024.pdf]]. The latter explains this phenomenon as the nearest neighbours being spread across many more (and more widely distributed) clusters (cells) than they are for in-distribution queries.
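A rough sketch of the idea follows. This is not any particular library's implementation: the class, parameter names and defaults are made up, and real implementations add much more (training on samples, compact list storage, SIMD lookup tables, re-ranking with full-precision vectors, etc.).

```python
import numpy as np
from sklearn.cluster import KMeans

class ToyIVFADC:
    """Inverted lists + product quantization of residuals, IVFADC-style."""

    def __init__(self, nlist=64, m=8, nbits=8):
        self.nlist = nlist      # number of coarse clusters / inverted lists
        self.m = m              # number of PQ subspaces
        self.ksub = 1 << nbits  # codewords per subspace

    def train_and_add(self, xb):
        n, d = xb.shape
        assert d % self.m == 0
        self.dsub = d // self.m
        # Coarse quantizer: k-means over the full vectors.
        self.coarse = KMeans(n_clusters=self.nlist, n_init=4).fit(xb)
        assign = self.coarse.labels_
        # Product-quantize the residuals (vector minus its coarse centroid):
        # one independent k-means codebook per subspace.
        residuals = xb - self.coarse.cluster_centers_[assign]
        self.codebooks, codes = [], np.empty((n, self.m), dtype=np.int32)
        for j in range(self.m):
            sub = residuals[:, j * self.dsub:(j + 1) * self.dsub]
            km = KMeans(n_clusters=self.ksub, n_init=2).fit(sub)
            self.codebooks.append(km.cluster_centers_)
            codes[:, j] = km.labels_
        # Inverted lists: per cluster, the ids and PQ codes of its members.
        self.lists = [(np.flatnonzero(assign == c), codes[assign == c])
                      for c in range(self.nlist)]

    def search(self, q, k=10, nprobe=4):
        # Only the nprobe closest coarse clusters are visited at all.
        cdist = np.linalg.norm(self.coarse.cluster_centers_ - q, axis=1)
        ids, dists = [], []
        for c in np.argsort(cdist)[:nprobe]:
            list_ids, list_codes = self.lists[c]
            if len(list_ids) == 0:
                continue
            # Asymmetric distance computation: the query residual is NOT quantized;
            # per subspace, precompute its distance to every codeword (a lookup table),
            # so each stored code's distance is just a sum of m table entries.
            r = q - self.coarse.cluster_centers_[c]
            tables = np.stack([
                np.sum((self.codebooks[j] - r[j * self.dsub:(j + 1) * self.dsub]) ** 2, axis=1)
                for j in range(self.m)
            ])                                             # shape (m, ksub)
            ids.append(list_ids)
            dists.append(tables[np.arange(self.m), list_codes].sum(axis=1))
        if not ids:
            return np.empty(0, dtype=np.int64)
        ids, dists = np.concatenate(ids), np.concatenate(dists)
        return ids[np.argsort(dists)[:k]]

index = ToyIVFADC(nlist=64, m=8)
index.train_and_add(db)                    # db/q from the brute-force example above
print(index.search(q[0], k=5, nprobe=8))
```

The nlist/nprobe and m/nbits knobs trade recall against speed and memory; the out-of-distribution failure mode above shows up as needing a much larger nprobe to reach the same recall, since the true neighbours are spread over more cells.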
Graph-based approaches (e.g. HNSW, DiskANN) aim to build graphs such that a greedy search toward vertices closer (by the vector distance metric) to the query rapidly converges, most of the time, on the best-matching vertex. These generally offer better search-time/recall tradeoffs but have worse build times and are in some sense more [[cursed]] algorithmically.
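The search side is roughly the same across these indexes; the methods differ mainly in how the graph is built. A sketch of greedy/best-first search over a toy neighbour graph (the brute-force k-NN construction below is deliberately naive, where real indexes prune and diversify edges, and the "beam" parameter and names are made up):

```python
import heapq
import numpy as np

def build_knn_graph(xb, num_neighbours=16):
    # Toy construction: connect every vector to its exact nearest neighbours.
    # Real graph indexes prune/diversify edges so greedy search converges quickly.
    sq = np.sum(xb ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (xb @ xb.T)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :num_neighbours]

def greedy_search(xb, graph, q, k=10, beam=32, entry=0):
    """Best-first search: repeatedly expand the closest unexpanded vertex,
    keeping a bounded pool ("beam") of the best vertices seen so far."""
    dist = lambda i: float(np.sum((xb[i] - q) ** 2))
    visited = {entry}
    frontier = [(dist(entry), entry)]    # min-heap of vertices still to expand
    best = [(-dist(entry), entry)]       # max-heap (negated) of the current beam
    while frontier:
        d, v = heapq.heappop(frontier)
        if len(best) >= beam and d > -best[0][0]:
            break                        # nothing in the frontier can improve the beam
        for u in graph[v]:
            if u in visited:
                continue
            visited.add(u)
            du = dist(u)
            if len(best) < beam or du < -best[0][0]:
                heapq.heappush(frontier, (du, u))
                heapq.heappush(best, (-du, u))
                if len(best) > beam:
                    heapq.heappop(best)
    # Sort the beam by true distance and return the k closest.
    return [u for _, u in sorted((-nd, u) for nd, u in best)][:k]

rng = np.random.default_rng(0)
db = rng.normal(size=(2_000, 64)).astype(np.float32)
graph = build_knn_graph(db)
print(greedy_search(db, graph, db[0] + 0.01, k=5))
```

The beam width plays the same recall-vs-speed role that nprobe does for inverted lists: a larger beam visits more vertices and misses the true nearest neighbour less often.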