feat (provider/perplexity): rewrite provider and support sources #4864

Merged · 26 commits · Feb 13, 2025
5 changes: 5 additions & 0 deletions .changeset/forty-readers-clap.md
@@ -0,0 +1,5 @@
---
'@ai-sdk/perplexity': major
---

feat (provider/perplexity): rewrite provider and support sources
6 changes: 6 additions & 0 deletions .changeset/rude-actors-fly.md
@@ -0,0 +1,6 @@
---
'@ai-sdk/provider-utils': patch
'ai': patch
---

chore (ai): move mockId test helper into provider utils
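
For context on this changeset, here is a rough sketch of how the relocated `mockId` helper is consumed from its new home in `@ai-sdk/provider-utils/test`; the exact signature and id format are assumptions inferred from the import changes in the test files below.

```ts
import { mockId } from '@ai-sdk/provider-utils/test';

// Assumed behavior: mockId() returns a deterministic id generator so that
// generated ids ('id-0', 'id-1', ...) stay stable across test runs.
const generateId = mockId();

console.log(generateId()); // 'id-0' (assumed format)
console.log(generateId()); // 'id-1'
```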
35 changes: 18 additions & 17 deletions content/providers/01-ai-sdk-providers/70-perplexity.mdx
@@ -77,12 +77,26 @@ const { text } = await generateText({
});
```

Perplexity models can be used in the `streamText` and `streamUI` functions (see
[AI SDK Core](/docs/ai-sdk-core) and [AI SDK RSC](/docs/ai-sdk-rsc)).
### Sources

### Provider Metadata
Websites that have been used to generate the response are included in the `sources` property of the result:

The Perplexity provider includes additional experimental metadata in the response through `providerMetadata`:
```ts
import { perplexity } from '@ai-sdk/perplexity';
import { generateText } from 'ai';

const { text, sources } = await generateText({
model: perplexity('sonar-pro'),
prompt: 'What are the latest developments in quantum computing?',
});

console.log(sources);
```

### Provider Options & Metadata

The Perplexity provider includes additional metadata in the response through `providerMetadata`.
Additional configuration options are available through `providerOptions`.

```ts
const result = await generateText({
@@ -99,10 +113,6 @@ console.log(result.providerMetadata);
// Example output:
// {
// perplexity: {
// citations: [
// 'https://www.sfchronicle.com',
// 'https://www.cbsnews.com/sanfrancisco/',
// ],
// usage: { citationTokens: 5286, numSearchQueries: 1 },
// images: [
// { imageUrl: "https://example.com/image1.jpg", originUrl: "https://elsewhere.com/page1", height: 1280, width: 720 },
@@ -114,7 +124,6 @@

The metadata includes:

- `citations`: Array of URLs used as sources for the response
- `usage`: Object containing `citationTokens` and `numSearchQueries` metrics
- `images`: Array of image URLs when `return_images` is enabled (Tier-2 users only)

@@ -132,14 +141,6 @@ You can enable image responses by setting `return_images: true` in the provider
| `sonar-pro` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `sonar` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |

### Key Features

- **Real-time Web Search**: Both models provide grounded responses using real-time web search
- **Citations**: Sonar Pro provides 2x more citations than standard Sonar
- **Data Privacy**: No training on customer data
- **Self-serve API**: Immediate access with scalable pricing
- **Advanced Queries**: Support for complex queries and follow-up questions

<Note>
Please see the [Perplexity docs](https://docs.perplexity.ai) for detailed API
documentation and the latest updates.
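
Putting the documented pieces above together, a minimal sketch that combines `providerOptions` and `providerMetadata`; the `search_recency_filter` and `return_images` keys come from the docs and examples in this PR, and the metadata shape follows the example output shown above.

```ts
import { perplexity } from '@ai-sdk/perplexity';
import { generateText } from 'ai';

const { text, sources, providerMetadata } = await generateText({
  model: perplexity('sonar-pro'),
  prompt: 'What has happened in San Francisco recently?',
  providerOptions: {
    perplexity: {
      search_recency_filter: 'week',
      return_images: true, // Tier-2 users only, per the docs above
    },
  },
});

console.log(sources); // websites used to ground the response
console.log(providerMetadata?.perplexity?.usage); // { citationTokens, numSearchQueries }
console.log(providerMetadata?.perplexity?.images); // populated when return_images is enabled
```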
11 changes: 4 additions & 7 deletions examples/ai-core/src/e2e/perplexity.test.ts
@@ -1,28 +1,25 @@
import 'dotenv/config';
import { expect } from 'vitest';
import {
perplexity as provider,
PerplexityErrorData,
} from '@ai-sdk/perplexity';
import { perplexity as provider } from '@ai-sdk/perplexity';
import {
createFeatureTestSuite,
createLanguageModelWithCapabilities,
} from './feature-test-suite';
import { APICallError } from '@ai-sdk/provider';

const createChatModel = (modelId: string) =>
createLanguageModelWithCapabilities(provider.chat(modelId));
createLanguageModelWithCapabilities(provider(modelId));

createFeatureTestSuite({
name: 'perplexity',
models: {
invalidModel: provider.chat('no-such-model'),
invalidModel: provider('no-such-model'),
languageModels: [createChatModel('sonar-pro'), createChatModel('sonar')],
},
timeout: 30000,
customAssertions: {
errorValidator: (error: APICallError) => {
expect((error.data as PerplexityErrorData).code).toBe(
expect((error.data as any).code).toBe(
'Some requested entity was not found',
);
},
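
The test changes above reflect the rewritten provider's call shape: models are now created by calling the provider directly instead of going through a `.chat()` factory. A small illustrative sketch of the difference:

```ts
import { perplexity } from '@ai-sdk/perplexity';

// Before the rewrite (OpenAI-compatible based provider):
// const model = perplexity.chat('sonar-pro');

// After the rewrite, the provider instance is called directly with a model id:
const model = perplexity('sonar-pro');
```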
31 changes: 31 additions & 0 deletions examples/ai-core/src/generate-object/perplexity.ts
@@ -0,0 +1,31 @@
import 'dotenv/config';
import { perplexity } from '@ai-sdk/perplexity';
import { generateObject, generateText } from 'ai';
import { z } from 'zod';

async function main() {
const result = await generateObject({
model: perplexity('sonar-pro'),
prompt: 'What has happened in San Francisco recently?',
providerOptions: {
perplexity: {
search_recency_filter: 'week',
},
},
output: 'array',
schema: z.object({
title: z.string(),
summary: z.string(),
}),
});

console.log(result.object);
console.log();
console.log('Token usage:', result.usage);
console.log('Finish reason:', result.finishReason);
console.log('Metadata:', result.providerMetadata);
}

main().catch((error: Error) => {
console.error(JSON.stringify(error, null, 2));
});
8 changes: 7 additions & 1 deletion examples/ai-core/src/generate-text/perplexity.ts
@@ -5,11 +5,17 @@ import { generateText } from 'ai';
async function main() {
const result = await generateText({
model: perplexity('sonar-pro'),
prompt: 'Invent a new holiday and describe its traditions.',
prompt: 'What has happened in San Francisco recently?',
providerOptions: {
perplexity: {
search_recency_filter: 'week',
},
},
});

console.log(result.text);
console.log();
console.log('Sources:', result.sources);
console.log('Token usage:', result.usage);
console.log('Finish reason:', result.finishReason);
console.log('Metadata:', result.providerMetadata);
10 changes: 7 additions & 3 deletions examples/ai-core/src/stream-text/perplexity.ts
@@ -5,16 +5,20 @@ import 'dotenv/config';
async function main() {
const result = streamText({
model: perplexity('sonar-pro'),
prompt:
'List the top 5 San Francisco news from the past week.' +
'You must include the date of each article.',
prompt: 'What has happened in San Francisco recently?',
providerOptions: {
perplexity: {
search_recency_filter: 'week',
},
},
});

for await (const textPart of result.textStream) {
process.stdout.write(textPart);
}

console.log();
console.log('Sources:', await result.sources);
console.log('Token usage:', await result.usage);
console.log('Finish reason:', await result.finishReason);
console.log(
4 changes: 3 additions & 1 deletion examples/next-openai/app/api/use-chat-sources/route.ts
@@ -1,4 +1,5 @@
import { vertex } from '@ai-sdk/google-vertex';
import { perplexity } from '@ai-sdk/perplexity';
import { streamText } from 'ai';

// Allow streaming responses up to 30 seconds
@@ -8,7 +9,8 @@ export async function POST(req: Request) {
const { messages } = await req.json();

const result = streamText({
model: vertex('gemini-1.5-flash', { useSearchGrounding: true }),
// model: vertex('gemini-1.5-flash', { useSearchGrounding: true }),
model: perplexity('sonar-pro'),
messages,
});

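
The tail of this route handler is collapsed in the diff. A hedged sketch of how the handler would typically finish, returning the streamed result to the client; the `sendSources` flag is an assumption used here so that the source parts rendered by the page component below actually reach the client.

```ts
import { perplexity } from '@ai-sdk/perplexity';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: perplexity('sonar-pro'),
    messages,
  });

  // sendSources is an assumption: forward source parts to the useChat client.
  return result.toDataStreamResponse({ sendSources: true });
}
```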
43 changes: 22 additions & 21 deletions examples/next-openai/app/use-chat-sources/page.tsx
@@ -21,27 +21,28 @@ export default function Chat() {
{messages.map(message => (
<div key={message.id} className="whitespace-pre-wrap">
{message.role === 'user' ? 'User: ' : 'AI: '}
{message.parts.map((part, index) => {
if (part.type === 'text') {
return <div key={index}>{part.text}</div>;
}

if (part.type === 'source') {
return (
<span key={`source-${part.source.id}`}>
[
<a
href={part.source.url}
target="_blank"
className="text-sm font-bold text-blue-500 hover:underline"
>
{part.source.title}
</a>
]
</span>
);
}
})}
{message.parts
.filter(part => part.type !== 'source')
.map((part, index) => {
if (part.type === 'text') {
return <div key={index}>{part.text}</div>;
}
})}
{message.parts
.filter(part => part.type === 'source')
.map(part => (
<span key={`source-${part.source.id}`}>
[
<a
href={part.source.url}
target="_blank"
className="text-sm font-bold text-blue-500 hover:underline"
>
{part.source.title ?? new URL(part.source.url).hostname}
</a>
]
</span>
))}
</div>
))}

1 change: 1 addition & 0 deletions examples/next-openai/package.json
@@ -13,6 +13,7 @@
"@ai-sdk/deepseek": "0.1.9",
"@ai-sdk/openai": "1.1.10",
"@ai-sdk/google-vertex": "2.1.13",
"@ai-sdk/perplexity": "0.0.8",
"@ai-sdk/ui-utils": "1.1.13",
"@ai-sdk/react": "1.1.13",
"@vercel/blob": "^0.26.0",
2 changes: 1 addition & 1 deletion packages/ai/core/generate-text/generate-text.test.ts
@@ -1,10 +1,10 @@
import { LanguageModelV1CallOptions } from '@ai-sdk/provider';
import { mockId } from '@ai-sdk/provider-utils/test';
import { jsonSchema } from '@ai-sdk/ui-utils';
import assert from 'node:assert';
import { z } from 'zod';
import { Output } from '.';
import { ToolExecutionError } from '../../errors';
import { mockId } from '../test/mock-id';
import { MockLanguageModelV1 } from '../test/mock-language-model-v1';
import { MockTracer } from '../test/mock-tracer';
import { tool } from '../tool/tool';
2 changes: 1 addition & 1 deletion packages/ai/core/generate-text/stream-text.test.ts
@@ -10,14 +10,14 @@ import {
convertAsyncIterableToArray,
convertReadableStreamToArray,
convertResponseStreamToArray,
mockId,
} from '@ai-sdk/provider-utils/test';
import { jsonSchema } from '@ai-sdk/ui-utils';
import assert from 'node:assert';
import { z } from 'zod';
import { ToolExecutionError } from '../../errors/tool-execution-error';
import { StreamData } from '../../streams/stream-data';
import { createDataStream } from '../data-stream/create-data-stream';
import { mockId } from '../test/mock-id';
import { MockLanguageModelV1 } from '../test/mock-language-model-v1';
import { createMockServerResponse } from '../test/mock-server-response';
import { MockTracer } from '../test/mock-tracer';
Expand Down
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
import {
convertArrayToReadableStream,
convertAsyncIterableToArray,
mockId,
} from '@ai-sdk/provider-utils/test';
import { generateText, streamText } from '../generate-text';
import { wrapLanguageModel } from '../middleware/wrap-language-model';
import { mockId } from '../test/mock-id';
import { MockLanguageModelV1 } from '../test/mock-language-model-v1';
import { extractReasoningMiddleware } from './extract-reasoning-middleware';

6 changes: 4 additions & 2 deletions packages/ai/test/index.ts
@@ -1,6 +1,8 @@
export { convertArrayToReadableStream } from '@ai-sdk/provider-utils/test';
export {
convertArrayToReadableStream,
mockId,
} from '@ai-sdk/provider-utils/test';
export { MockEmbeddingModelV1 } from '../core/test/mock-embedding-model-v1';
export { mockId } from '../core/test/mock-id';
export { MockLanguageModelV1 } from '../core/test/mock-language-model-v1';
export { mockValues } from '../core/test/mock-values';

1 change: 0 additions & 1 deletion packages/perplexity/package.json
@@ -30,7 +30,6 @@
}
},
"dependencies": {
"@ai-sdk/openai-compatible": "0.1.9",
"@ai-sdk/provider": "1.0.7",
"@ai-sdk/provider-utils": "2.1.7"
},
@@ -0,0 +1,28 @@
// Vitest Snapshot v1, https://vitest.dev/guide/snapshot.html

exports[`convertToPerplexityMessages > assistant messages > should convert an assistant message with text content 1`] = `
[
{
"content": "Assistant reply",
"role": "assistant",
},
]
`;

exports[`convertToPerplexityMessages > system messages > should convert a system message with text content 1`] = `
[
{
"content": "System initialization",
"role": "system",
},
]
`;

exports[`convertToPerplexityMessages > user messages > should convert a user message with text parts 1`] = `
[
{
"content": "Hello World",
"role": "user",
},
]
`;
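
These snapshots exercise the provider's internal message converter. A rough, illustrative sketch of the call they correspond to; `convertToPerplexityMessages` is internal to `@ai-sdk/perplexity` and the input follows the `LanguageModelV1Prompt` part shapes, so treat the import path and the combined prompt as assumptions.

```ts
// Hypothetical import path; the helper is internal to the provider package.
// import { convertToPerplexityMessages } from './convert-to-perplexity-messages';

const messages = convertToPerplexityMessages([
  { role: 'system', content: 'System initialization' },
  { role: 'user', content: [{ type: 'text', text: 'Hello World' }] },
  { role: 'assistant', content: [{ type: 'text', text: 'Assistant reply' }] },
]);

// Expected shape, per the snapshots above:
// [
//   { role: 'system', content: 'System initialization' },
//   { role: 'user', content: 'Hello World' },
//   { role: 'assistant', content: 'Assistant reply' },
// ]
```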