feat: add support for ollama RAG providers #1427
Conversation
- Added provider, model, and endpoint configuration options for the RAG service
- Updated the RAG service to support both OpenAI and Ollama providers
- Added Ollama embedding support and dependencies
- Improved environment variable handling for RAG service configuration

Signed-off-by: wfhtqp@gmail.com <wfhtqp@gmail.com>
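The configuration described above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the environment variable names (`RAG_PROVIDER`, `RAG_MODEL`, `RAG_ENDPOINT`), the default models, and the endpoints are assumptions.

```python
import os

# Hypothetical sketch of provider/model/endpoint selection via environment
# variables; variable names and defaults are assumptions, not the PR's code.
DEFAULTS = {
    "openai": {"model": "text-embedding-3-small",
               "endpoint": "https://api.openai.com/v1"},
    "ollama": {"model": "nomic-embed-text",
               "endpoint": "http://localhost:11434"},
}

def load_rag_config(env=os.environ):
    """Resolve the RAG embedding configuration from the environment."""
    provider = env.get("RAG_PROVIDER", "openai").lower()
    if provider not in DEFAULTS:
        raise ValueError(f"Unsupported RAG provider: {provider}")
    return {
        "provider": provider,
        "model": env.get("RAG_MODEL", DEFAULTS[provider]["model"]),
        "endpoint": env.get("RAG_ENDPOINT", DEFAULTS[provider]["endpoint"]),
    }
```

With this shape, switching providers is a matter of exporting `RAG_PROVIDER=ollama` (and optionally overriding the model and endpoint), with per-provider defaults filling in the rest.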
The LLM is still needed for parsing, so this should also be configurable.
LGTM, but you might need to fix the Python lint error: `pre-commit run --color=always --files $(find ./py -type f -name "*.py")`
Thank you for the contribution! We should update the documentation so everyone knows how to customize the RAG provider; adding a few examples would be even better. Also, will abruptly switching providers render the previous provider's embeddings invalid? We should address that.
Should we tell users in the documentation that they need to delete the previous data when changing providers?
Can we do this automatically? |
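One way to handle this automatically, sketched here as a hypothetical approach rather than anything from this PR: persist the provider, model, and embedding dimension alongside the index, and trigger a reindex whenever the active configuration no longer matches. The file name `rag_index_meta.json` and all function names are assumptions for illustration.

```python
import json
import os

# Assumed metadata location; purely illustrative.
META_FILE = "rag_index_meta.json"

def needs_reindex(provider: str, model: str, dim: int,
                  meta_path: str = META_FILE) -> bool:
    """Return True if the stored embeddings were built with a different
    provider, model, or embedding dimension than the current config."""
    if not os.path.exists(meta_path):
        return True  # no metadata yet: first run, build the index
    with open(meta_path) as f:
        meta = json.load(f)
    return (meta.get("provider"), meta.get("model"), meta.get("dim")) \
        != (provider, model, dim)

def save_meta(provider: str, model: str, dim: int,
              meta_path: str = META_FILE) -> None:
    """Record which provider/model produced the current embeddings."""
    with open(meta_path, "w") as f:
        json.dump({"provider": provider, "model": model, "dim": dim}, f)
```

The service would call `needs_reindex(...)` at startup and rebuild (or at least warn) on a mismatch, which avoids silently mixing embeddings from incompatible vector spaces.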
Due to …
Can you use …
OK, thanks for your help |
#1407