title | published | url | video_id | channel_id | id | text | start | end
---|---|---|---|---|---|---|---|---|
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1370.6
|
have all of these different data points the little crosses and then we have these three
| 1,370.6 | 1,380.8 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1376.7199999999998
|
other points which are going to be our cluster centroids.
| 1,376.72 | 1,388.28 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1380.8
|
So around, or based at, each of our centroids we expand a catchment radius around each of
| 1,380.8 | 1,395.36 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1388.28
|
those and as you can see here where each of those circles collides it creates the edge
| 1,388.28 | 1,399.08 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1395.36
|
of what are going to be our almost like catchment cells.
| 1,395.36 | 1,406.14 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1399.08
|
This is called a Voronoi diagram or, it's a really hard word, a Dirichlet tessellation.
| 1,399.08 | 1,410.8 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1406.1399999999999
|
I don't know if that's the correct pronunciation, but I think it sounds pretty cool, so I thought
| 1,406.14 | 1,412.48 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1410.8
|
I'd throw that in there.
| 1,410.8 | 1,419.12 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1412.48
|
So we create these cells, and any data point within one of those cells will
| 1,412.48 | 1,428 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1419.1200000000001
|
be allocated to that given centroid and then when you search within a specific cell you
| 1,419.12 | 1,433.36 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1428.0
|
pass your xq query vector in there, and that xq value will be compared
| 1,428 | 1,438.96 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1433.3600000000001
|
to every single cluster centroid but not the other values within that cluster or the other
| 1,433.36 | 1,446.04 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1438.96
|
clusters only the cluster centroids and then from that you find out which centroid is the
| 1,438.96 | 1,454.08 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1446.04
|
closest to your query vector and then what we do is we restrict our search scope to only
| 1,446.04 | 1,462.56 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1454.08
|
the data points within that cluster or that cell and then we calculate the nearest vector
| 1,454.08 | 1,467.68 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1462.56
|
so at this point we have all the vectors only within that cell and we compare all of those
| 1,462.56 | 1,469.4 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1467.68
|
to our query vector.
| 1,467.68 | 1,473.28 |
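As a rough illustration of the search process described above, here is a minimal numpy sketch on toy 2D data (the data, the number of centroids, and the L2 metric are all made up for illustration, not taken from the video):

```python
import numpy as np

np.random.seed(0)
xb = np.random.rand(1000, 2).astype('float32')       # data points (the "little crosses")
centroids = np.random.rand(3, 2).astype('float32')   # toy cluster centroids

# allocate every data point to its nearest centroid (its cell)
assign = np.linalg.norm(xb[:, None] - centroids[None], axis=2).argmin(axis=1)

xq = np.random.rand(2).astype('float32')              # query vector
# compare xq to the centroids only, and find the closest cell
nearest_cell = np.linalg.norm(centroids - xq, axis=1).argmin()

# restrict the search scope to that one cell and compare xq to the vectors inside it
members = np.where(assign == nearest_cell)[0]
best = members[np.linalg.norm(xb[members] - xq, axis=1).argmin()]
```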
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1469.4
|
Now there is one problem with this which is called the edge problem.
| 1,469.4 | 1,478.4 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1473.28
|
Now we're just showing this in two-dimensional space obviously in reality for example the
| 1,473.28 | 1,484.92 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1478.4
|
data set we're using we have 128 dimensions so dimensionally the edge problem is kind
| 1,478.4 | 1,491.08 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1484.92
|
of complicated when you think about it in the hundreds of dimensions but what this is
| 1,484.92 | 1,497.88 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1491.08
|
is: say with our query we find our query vector right on the edge of one of
| 1,491.08 | 1,504 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1497.8799999999999
|
the cells, and if we set the nprobe value, so I mentioned nprobe here, that's how many
| 1,497.88 | 1,508.44 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1504.0
|
cells we search if that is set to one it means that we're going to restrict our search to
| 1,504 | 1,516.56 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1508.4399999999998
|
only that cell, even though if you look at this, I'm trying
| 1,508.44 | 1,523.16 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1516.56
|
to think so this one for sure is closer to our query vector than any of the magenta data
| 1,516.56 | 1,531.32 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1523.1599999999999
|
points, and possibly also this one and this one, and maybe even this one, but we're
| 1,523.16 | 1,538.44 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1531.32
|
not going to consider any of those because we're restricting our search only to this
| 1,531.32 | 1,546.04 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1538.44
|
cell so we're only going to look at you know these data points and also these over here
| 1,538.44 | 1,553.96 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1546.04
|
so that's the edge problem, but we can get around that by not just searching one
| 1,546.04 | 1,559.56 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1553.96
|
cell but by searching quite a few, so in this case our nprobe value is eight and that means
| 1,553.96 | 1,566.76 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1559.56
|
we're going to search eight of the nearest centroids, or centroid cells, as sketched below.
| 1,559.56 | 1,574.64 |
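Continuing the toy numpy sketch from earlier, probing several of the nearest cells rather than one looks like this (the video's figure uses nprobe = 8; with only three toy centroids the value here is purely illustrative):

```python
nprobe = 2   # the figure uses 8; kept small because the toy example only has 3 cells
probe_cells = np.linalg.norm(centroids - xq, axis=1).argsort()[:nprobe]
members = np.where(np.isin(assign, probe_cells))[0]
best = members[np.linalg.norm(xb[members] - xq, axis=1).argmin()]
```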
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1566.76
|
And that's how IVF works. Let's go ahead and implement that in code, so the first thing we need to do is set
| 1,566.76 | 1,583.2 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1574.64
|
an nlist value, which is the number of centroids that we will have within our data,
| 1,574.64 | 1,588.4 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1583.2
|
and then this time, so this is a little bit different, we need to set the final vector
| 1,583.2 | 1,593.12 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1588.4
|
search that we're going to do, so this is kind of split into two different operations,
| 1,588.4 | 1,599.2 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1593.1200000000001
|
right so we're searching based on clusters and then we're actually comparing the full
| 1,593.12 | 1,603.88 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1599.2
|
vectors within the selected clusters so we need to define how we're going to do that
| 1,599.2 | 1,610.48 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1603.88
|
that final search between our full vectors and our query vector, so what we do is write
| 1,603.88 | 1,618.16 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1610.48
|
faiss.IndexFlatIP, we're going to use IP here, you can use L2 as well, and we pass our dimensionality to
| 1,610.48 | 1,622.88 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1618.16
|
it so we're just initializing a flat index there and then what we're going to do is feed
| 1,618.16 | 1,632.24 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1622.88
|
that into our IVF index, so our IVF index is faiss.IndexIVFFlat, because we're using
| 1,622.88 | 1,639.08 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1632.24
|
the flat indexes, the flat vectors, there. We need to pass our quantizer, so this step
| 1,632.24 | 1,646.64 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1639.08
|
here, the other step in the search process, the dimensionality, and also our nlist value
| 1,639.08 | 1,652.64 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1646.64
|
of how many cells or clusters we're going to have in there, as in the sketch below.
| 1,646.64 | 1,659.4 |
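A minimal sketch of the setup described here, assuming the dimensionality d = 128 mentioned earlier; the nlist value of 50 is a placeholder rather than the value used in the video:

```python
import faiss

d = 128        # dimensionality of the dataset used in the video
nlist = 50     # number of cells/centroids (placeholder value)

quantizer = faiss.IndexFlatIP(d)                  # flat index used for the final full-vector comparison
index = faiss.IndexIVFFlat(quantizer, d, nlist)   # IVF index built on top of that quantizer
```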
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1652.64
|
And with this, because we're clustering data, we need to do something else, so in fact let me show you: if we write
| 1,652.64 | 1,665.1 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1659.4
|
index.is_trained, we get False; if we wrote this for any of our other indexes, this
| 1,659.4 | 1,668.84 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1665.1000000000001
|
would have been true because they don't need to be trained because we're not doing clustering
| 1,665.1 | 1,674.96 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1668.8400000000001
|
or any other form of training or optimization there so what we need to do is we need to
| 1,668.84 | 1,681.04 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1674.96
|
train our index before we use it, so we write index.train and we just pass all of our vectors
| 1,674.96 | 1,690.44 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1681.04
|
into that, but it's very quick so it's not really an issue, and then we do index.add and pass our data, as sketched below.
| 1,681.04 | 1,700.44 |
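Putting the training and add steps into code, assuming xb is the array of dataset vectors used earlier in the notebook:

```python
index.is_trained    # False: the IVF index has to learn its centroids first
index.train(xb)     # cluster the data into nlist cells (quick)
index.is_trained    # now True
index.add(xb)       # add the vectors, each allocated to its nearest cell
```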
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1690.44
|
And then one thing I want to show you: we have our nprobe value;
| 1,690.44 | 1,709.98 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1700.44
|
we'll set it to one for now, so we search one cell, and to search we write D, I = index.search as we have
| 1,700.44 | 1,720.48 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1709.98
|
every other time, and execute. Okay, so, I mean, super fast: 3.32 milliseconds.
| 1,709.98 | 1,732.08 |
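The search step itself, as a sketch; xq is the query vector used earlier in the notebook, and k (the number of neighbours to return) is an assumed variable:

```python
index.nprobe = 1             # search only the single nearest cell
D, I = index.search(xq, k)   # distances and ids of the k nearest vectors
```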
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1720.48
|
I think that's maybe the fastest other than the poorly performing, low quality HNSW index, so let's
| 1,720.48 | 1,745.32 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1732.08
|
see how that's performed, so we write np.in1d with the baseline and I, and you see it's not
| 1,732.08 | 1,753.24 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1745.32
|
too bad to be fair, like 50/50 almost, so that's actually pretty good.
| 1,745.32 | 1,760.2 |
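The recall check described here, as a sketch; baseline is assumed to hold the ids returned by an exact flat index for the same query, and k is the number of results:

```python
import numpy as np

recall = np.in1d(baseline, I).sum() / k
print(recall)   # roughly 0.5 with nprobe = 1, as described above
```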
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1753.24
|
But what we can do if we want it to be even better is increase the nprobe value, so let's go up to four, so
| 1,753.24 | 1,765.68 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1760.2
|
that's increased the query time quite a bit, so from like three to 125 milliseconds, which is now super
| 1,760.2 | 1,772.12 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1765.68
|
slow actually, but now we're getting perfect results; we can maybe decrease that to two.
| 1,765.68 | 1,777.2 |
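Adjusting nprobe as described is just a matter of resetting the attribute and searching again:

```python
index.nprobe = 4             # more cells probed: better recall, slower search
D, I = index.search(xq, k)

index.nprobe = 2             # the middle ground tried next in the video
D, I = index.search(xq, k)
```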
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1772.1200000000001
|
so now it's faster that could have been a one-off sometimes occasionally you get a really
| 1,772.12 | 1,786.28 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1777.2
|
slow search, it just happens sometimes, so with this nprobe setting it's super fast and
| 1,777.2 | 1,794.04 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1786.28
|
super accurate, so that's a very good index as well. So these are the stats I got
| 1,786.28 | 1,799.56 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1794.04
|
in terms of recall and search time in milliseconds for different nprobe values and different
| 1,794.04 | 1,807.48 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1799.56
|
nlist values, so again it's just about balancing it. Again, index size: the only
| 1,799.56 | 1,811 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1807.48
|
thing that affects your index size here is obviously the size of your data and the nlist
| 1,807.48 | 1,815.74 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1811.0
|
value, but you can increase the nlist value a lot and the index size hardly increases,
| 1,811 | 1,824.68 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1815.74
|
so this is like increasing by 100 kilobytes per doubling of the nlist value.
| 1,815.74 | 1,833.8 |
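One way to check how the index size grows with nlist, as a sketch (write_index and getsize are standard faiss/os calls; the filename is arbitrary):

```python
import os
import faiss

faiss.write_index(index, "ivf.index")
print(os.path.getsize("ivf.index"))   # bytes on disk; grows only slightly as nlist doubles
```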
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1824.68
|
That is basically nothing. So that's it for this video, and we covered quite a lot, so I'm
| 1,824.68 | 1,840.4 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1833.8
|
gonna leave it there, but I think all these indexes are super useful and quite interesting,
| 1,833.8 | 1,845.2 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1840.4
|
and figuring out just playing around with them like you see I've done loads with these
| 1,840.4 | 1,851.56 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1845.2
|
these graphs, just seeing what is faster, what is slower, where the good quality is. I'm
| 1,845.2 | 1,857.04 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1851.56
|
just playing around with the parameters, and seeing what you can get out of it is super useful
| 1,851.56 | 1,863.84 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1857.04
|
for actually understanding these now what I do want to do going forward is actually
| 1,857.04 | 1,868.32 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1863.8400000000001
|
explore each one of these indexes in more depth, because we've only covered them at a
| 1,863.84 | 1,876.88 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1868.32
|
very high level at the moment, so in future videos and articles we're going to go
| 1,868.32 | 1,884.64 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1876.8799999999999
|
into more depth and explore them a lot more, which should be pretty interesting I think, so that's
| 1,876.88 | 1,890.88 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1884.6399999999999
|
it for this video thank you very much for watching and I will see you in the next one
| 1,884.64 | 1,900.16 |
Choosing Indexes for Similarity Search (Faiss in Python)
|
2021-08-09 15:04:10 UTC
|
https://youtu.be/B7wmo_NImgM
|
B7wmo_NImgM
|
UCv83tO5cePwHMt1952IVVHw
|
B7wmo_NImgM-t1890.88
| 1,890.88 | 1,900.16 |
|
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t0.0
|
Hi and welcome to the video.
| 0 | 4 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t2.0
|
Here we're going to have a look at
| 2 | 6 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t4.0
|
how we can use NSP
| 4 | 8 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t6.0
|
or Next Sentence Prediction
| 6 | 10 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t8.0
|
to train a BERT model.
| 8 | 12 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t10.0
|
Now in
| 10 | 14 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t12.0
|
a previous video I covered
| 12 | 16 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t14.0
|
how NSP works but I didn't
| 14 | 18 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t16.0
|
really cover how you actually train
| 16 | 20 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t18.0
|
a model using it. So
| 18 | 22 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t20.0
|
that's what we're going to do here.
| 20 | 24 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t22.0
|
So we're going to jump
| 22 | 26 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t24.0
|
straight into it and
| 24 | 28 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t26.0
|
we have this notebook.
| 26 | 30 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t28.0
|
Here is the data that we're going
| 28 | 32 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t30.0
|
to be using. I will load that
| 30 | 34 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t32.0
|
in a moment, but the first thing I
| 32 | 36 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t34.0
|
want to do before doing that is
| 34 | 38 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t36.0
|
import and initialise everything we
| 36 | 40 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t38.0
|
need. So
| 38 | 42 |
Training BERT #4 - Train With Next Sentence Prediction (NSP)
|
2021-05-27 16:15:39 UTC
|
https://youtu.be/x1lAcT3xl5M
|
x1lAcT3xl5M
|
UCv83tO5cePwHMt1952IVVHw
|
x1lAcT3xl5M-t40.0
|
obviously when we are downloading that
| 40 | 44 |
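A sketch of the imports and initialisation this refers to; the bert-base-uncased checkpoint is an assumption, as the video may use a different one:

```python
from transformers import BertTokenizer, BertForNextSentencePrediction

# Download and initialise the tokenizer and the BERT model with an NSP head
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
```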