🚀 The feature
Is there a way to have the decoder take in a tensor of indices that are already on the GPU and return a tensor of the same shape? For example, if `inds` is a tensor of indices of shape `(..., 1)`, then `I = decoder[inds]` would return `I` of shape `(..., H, W, C)`.
Motivation, pitch
No response
Indices need to be on CPU, and because of the underlying implementation, I doubt that having indices on GPU will ever speed up decoding.
For the expected shape: we don't support arbitrary input dimensions for the indices, but you should be able to flatten the indices tensor, call decoder.get_frames_at_indices(flat_indices), and then reshape the output tensor to the original shape, as sketched below.
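A minimal sketch of that workaround, assuming `decoder` is an already-constructed decoder exposing the `get_frames_at_indices()` method named above and that it returns a plain tensor of frames whose first dimension indexes the requested frames (if it returns a batch object instead, apply the reshape to its underlying data tensor):

```python
import torch

# inds: integer indices with arbitrary leading dimensions, e.g. shape (..., 1).
inds = torch.randint(0, 100, (4, 3, 1))

# Indices must live on CPU; flatten away the leading dimensions.
flat_indices = inds.flatten().cpu().tolist()

# Decode all requested frames in one call; returns N frames stacked along dim 0
# (the exact per-frame layout, e.g. (C, H, W) vs (H, W, C), depends on the decoder's configuration).
frames = decoder.get_frames_at_indices(flat_indices)

# Restore the original leading shape, replacing the trailing index dim
# with the per-frame dimensions, e.g. (..., H, W, C).
frames = frames.reshape(*inds.shape[:-1], *frames.shape[1:])
```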
Out of curiosity, why does your tensor of indices have some leading dimension?