[BUG] Issue with Ollama and context. #1026

Open
mc9625 opened this issue Feb 2, 2025 · 7 comments
Labels
bug Something isn't working

Comments

@mc9625

mc9625 commented Feb 2, 2025

Describe the bug
I have installed the latest version (1.8.1) on two brand-new setups: Ubuntu 24.0.1 and Raspbian OS. On both I have installed Ollama as a local service (not a Docker container). Everything works fine when I chat (e.g. I get a proper reply from the model), but no context information is passed to the prompt: the # Context part of the SystemMessage is missing. I have checked the memory page and I am able to find the declarative memories.

To Reproduce
Steps to reproduce the behavior:

  1. Install Ollama as local service
  2. Upload a document or Upload via URL
  3. Ask anything related to the uploaded content
  4. Check the system message
@mc9625 mc9625 added the bug Something isn't working label Feb 2, 2025
@AlessandroSpallina
Member

What embedder do you have configured?
If you have a proper embedder, check the similarity score you find in the memory tab; the default threshold is 0.7.
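
To illustrate the suggestion above, here is a minimal, self-contained sketch (not Cheshire Cat's actual recall code; function names and vectors are hypothetical) of how a similarity threshold filters recalled chunks, and why a chunk a human would judge relevant can still be dropped when its score falls below 0.7:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recall(query_vec, chunks, threshold=0.7):
    # Keep only chunks whose similarity to the query clears the threshold.
    # `chunks` is a list of (text, embedding_vector) pairs.
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in chunks]
    return [(score, text) for score, text in scored if score >= threshold]
```

With this kind of filter, a chunk scoring 0.6 against the query is silently discarded, so the # Context section of the system message ends up empty even though the answer is in the uploaded document.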

@AlessandroSpallina
Member

any update @mc9625 ?

@digitARTI

Hi all, same problem here! Can we help debug?

@pieroit
Member

pieroit commented Feb 28, 2025

Set a proper embedder and test it in the memory page in the admin.

@LucaRainone

I had the same problem, and the reason was the score being below 0.7, even though, to a "human" eye, the question was perfectly relevant to the document (it got a score of 0.6).

By adding more details to the input/question, the score increased, and the system correctly included it in the context.

So, it was simply an embedding model issue in my case (easily debugged in the memory panel). I tried mxbai-embed-large and nomic-embed-text.

I could work on a plugin that includes some context regardless of the threshold (something like: "always include at least N chunks of declarative memories"). High risk of hallucinations, but at least in-topic, I think. Useful if you have few documents, as I did in my playground and first tests.

In addition: does it make sense to allow modifying the threshold for declarative memories in the embedding settings? (I can work on it if you agree.)
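
The "always include at least N chunks" idea could be sketched framework-free like this (a hypothetical helper, not an existing Cheshire Cat hook; the names and defaults are illustrative assumptions):

```python
def recall_with_floor(scored_chunks, threshold=0.7, min_chunks=2):
    """Keep chunks scoring at or above `threshold`, but always return at
    least `min_chunks` of the best-scoring ones, even below the threshold.
    `scored_chunks` is a list of (score, text) pairs."""
    ranked = sorted(scored_chunks, key=lambda pair: pair[0], reverse=True)
    kept = [pair for pair in ranked if pair[0] >= threshold]
    if len(kept) < min_chunks:
        # Fall back to the top-ranked chunks regardless of the threshold.
        kept = ranked[:min_chunks]
    return kept
```

As noted above, the trade-off is hallucination risk: below-threshold chunks are at least on-topic, but the LLM may lean on weakly related text.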

@lucagobbi
Collaborator

> I had the same problem, and the reason was the score being below 0.7, even though, to a "human" eye, the question was perfectly relevant to the document (it got a score of 0.6).
>
> By adding more details to the input/question, the score increased, and the system correctly included it in the context.
>
> So, it was simply an embedding model issue in my case (easily debugged in the memory panel). I tried mxbai-embed-large and nomic-embed-text.
>
> I could work on a plugin that includes some context regardless of the threshold (something like: "always include at least N chunks of declarative memories"). High risk of hallucinations, but at least in-topic, I think. Useful if you have few documents, as I did in my playground and first tests.
>
> In addition: does it make sense to allow modifying the threshold for declarative memories in the embedding settings? (I can work on it if you agree.)

Have you checked the C.A.T. — Cat Advanced Tools plugin? It allows you to configure thresholds and K results for the three main memory collections, plus other useful settings for this type of implementation.

P.S.: are you the guy from Gitbar? Big fan of you guys! Keep it up! 🚀

@LucaRainone

> Have you checked the C.A.T. — Cat Advanced Tools plugin? It allows you to configure thresholds and K results for the three main memory collections, plus other useful settings for this type of implementation.

Yo'h. I was looking for this. Thank you!

> P.S.: are you the guy from Gitbar? Big fan of you guys! Keep it up! 🚀

❤️
