Apart from the raw data itself, is there a plan to start creating vector embeddings for every tweet (using different embedding models) and serve them via Supabase / a cloud vector DB as well?

Is that under the purview of this project, or do you think individual researchers should do that themselves? In my opinion, it makes sense to reduce the inertia for people to participate in semantic research by providing standardized embeddings out of the box, so people can build even faster.

I'm asking because I'm interested in building something on top of the vector embeddings, and it seemed to make sense to do it directly on top of this repo. For concreteness, I've sketched below roughly what I have in mind.
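A minimal sketch of the kind of pipeline I'm imagining, assuming a Supabase project with pgvector enabled and a hypothetical `tweet_embeddings` table (`tweet_id text primary key, model text, embedding vector(384)`). The table schema, model choice, and field names are all my assumptions, not anything this repo currently defines:

```python
import os

from sentence_transformers import SentenceTransformer
from supabase import create_client

# Hypothetical model choice; the project could standardize on one or several.
MODEL_NAME = "all-MiniLM-L6-v2"  # produces 384-dim embeddings
model = SentenceTransformer(MODEL_NAME)

# Assumes SUPABASE_URL / SUPABASE_KEY are set for the hosting project.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])


def embed_and_upsert(tweets: list[dict]) -> None:
    """Embed a batch of tweets and upsert the vectors into Supabase.

    Each tweet dict is assumed to have "id" and "text" keys.
    """
    vectors = model.encode([t["text"] for t in tweets])
    rows = [
        {
            "tweet_id": t["id"],
            "model": MODEL_NAME,
            "embedding": v.tolist(),  # pgvector columns accept JSON arrays
        }
        for t, v in zip(tweets, vectors)
    ]
    supabase.table("tweet_embeddings").upsert(rows).execute()
```

Running this once per embedding model (keyed by the `model` column) would give researchers a standardized set of vectors to query without each of them re-embedding the whole archive.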