
Re-implement token caching for Vercel AI SDK usage. #58

Closed · bhouston opened this issue Mar 3, 2025 · 1 comment

Comments

bhouston (Member) commented Mar 3, 2025

We previously had token caching implemented when using the Anthropic SDK directly, but now that we are moving to the Vercel AI SDK, we need to re-implement token caching through the Vercel AI SDK. The mechanism is provider-specific; the Anthropic approach is described here: https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic
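For reference, a minimal sketch of what this could look like with the AI SDK's Anthropic provider, based on the cache-control section of the docs linked above. The model id and prompt text are placeholders, and older AI SDK releases use `experimental_providerMetadata` instead of `providerOptions`:

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-3-7-sonnet-20250219'),
  messages: [
    {
      role: 'system',
      content: 'Long, stable system prompt and tool descriptions...',
      // Ask Anthropic to cache the prompt prefix up to and including this message.
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    { role: 'user', content: 'What should the agent do next?' },
  ],
});

// Cache usage is reported back in the provider metadata, e.g.
// { cacheCreationInputTokens: ..., cacheReadInputTokens: ... }
console.log(result.providerMetadata?.anthropic);
```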

bhouston (Member, Author) commented Mar 3, 2025

There is commented-out code in toolAgent.ts that added the proper cache_control data for the direct Anthropic integration. This would have to be redone for the Vercel AI SDK integration, but it should be quite similar.
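For comparison, this is the general shape of the direct Anthropic SDK approach (an illustrative sketch, not the actual toolAgent.ts code), where `cache_control` is attached to the content block that ends the cacheable prefix:

```ts
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

const response = await client.messages.create({
  model: 'claude-3-7-sonnet-20250219',
  max_tokens: 1024,
  system: [
    {
      type: 'text',
      text: 'Long, stable system prompt and tool descriptions...',
      // Marks the prompt prefix up to this block as cacheable.
      cache_control: { type: 'ephemeral' },
    },
  ],
  messages: [{ role: 'user', content: 'What should the agent do next?' }],
});

// usage includes cache_creation_input_tokens / cache_read_input_tokens.
console.log(response.usage);
```

In the AI SDK version, the same hint moves into `providerOptions.anthropic.cacheControl`, so porting should mostly be a matter of mapping where the cache markers are attached.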
