
Commit

Merge pull request #125 from restackio/productionEndpoints
update production demo with endpoints
aboutphilippe authored Jan 11, 2025
2 parents 0def8e1 + 539d5c1 commit 83cec6e
Showing 11 changed files with 103 additions and 70 deletions.
112 changes: 60 additions & 52 deletions production_demo/README.md
@@ -1,6 +1,6 @@
# Restack AI - Production Example

This repository contains a simple example project to help you scale with the Restack AI.
This repository contains a simple example project to help you scale with Restack AI.
It demonstrates how to scale reliably to millions of workflows on a local machine with a local LLM provider.

## Walkthrough video
@@ -60,85 +60,93 @@ And for each child workflow, for each step you can see how long the function sta

## Prerequisites

- Python 3.8 or higher
- Python 3.10 or higher
- Poetry (for dependency management)
- Docker (for running the Restack services)
- Local LLM provider (we use LMStudio and a Meta Llama 3.1 8B Instruct 4bit model in this example)

## Usage
## Start LM Studio for local LLM provider

0. Start LM Studio and start local server with Meta Llama 3.1 8B Instruct 4bit model
Start a local server with the Meta Llama 3.1 8B Instruct 4bit model

https://lmstudio.ai

1. Run Restack local engine with Docker:
## Prerequisites

```bash
docker run -d --pull always --name restack -p 5233:5233 -p 6233:6233 -p 7233:7233 ghcr.io/restackio/restack:main
```
- Docker (for running Restack)
- Python 3.10 or higher

2. Open the web UI to see the workflows:
## Start Restack

```bash
http://localhost:5233
```
To start Restack, use the following Docker command:

3. Clone this repository:
```bash
docker run -d --pull always --name restack -p 5233:5233 -p 6233:6233 -p 7233:7233 ghcr.io/restackio/restack:main
```

```bash
git clone https://github.com/restackio/examples-python
cd examples-python/examples/production_demo
```
## Start Python shell

4. Install dependencies using Poetry:
```bash
poetry env use 3.10 && poetry shell
```

```bash
poetry env use 3.12
```
## Install dependencies

```bash
poetry install
```

```bash
poetry env info # Optional: copy the interpreter path to use in your IDE (e.g. Cursor, VSCode, etc.)
```

```bash
poetry run dev
```

```bash
poetry shell
```
## Run workflows

```bash
poetry install
```
### from UI

```bash
poetry env info # Optional: copy the interpreter path to use in your IDE (e.g. Cursor, VSCode, etc.)
```
You can run workflows from the UI by clicking the "Run" button.

5. Run the services:
![Run workflows from UI](./ui-endpoints.png)

```bash
poetry run dev
```
### from API

This will start the Restack service with the defined workflows and functions.
You can run one workflow from the API by using the generated endpoint:

6. In a new terminal, schedule the workflow:
`POST http://localhost:6233/api/workflows/ChildWorkflow`

```bash
poetry shell
```
or multiple workflows by using the generated endpoint:

```bash
poetry run workflow
```
`POST http://localhost:6233/api/workflows/ExampleWorkflow`

This will schedule the `ExampleWorkflow` and print the result.
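As a sketch, the generated endpoint can also be called from plain Python with the standard library. The JSON body shape is an assumption here: a payload mirroring the workflow's pydantic input model (`ExampleWorkflowInput` and its `amount` field), which may differ from the actual API contract.

```python
import json
import urllib.request

# Assumed request shape: POST /api/workflows/<WorkflowName> with a JSON body
# matching the workflow's pydantic input model (here ExampleWorkflowInput).
URL = "http://localhost:6233/api/workflows/ExampleWorkflow"

def build_request(amount: int) -> urllib.request.Request:
    payload = json.dumps({"input": {"amount": amount}}).encode()
    return urllib.request.Request(
        URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def run_workflow(amount: int) -> str:
    # Requires the Restack engine from the Docker step to be listening on 6233.
    with urllib.request.urlopen(build_request(amount)) as response:
        return response.read().decode()
```

Calling `run_workflow(50)` only works while the Restack engine started above is running.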
### from any client

7. Optionally, schedule the workflow to run on an interval:
You can run workflows with any client connected to Restack, for example:

```bash
poetry run schedule
```

executes `schedule_workflow.py`, which connects to Restack and executes the `ChildWorkflow` workflow.
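For example, a minimal stand-alone client sketch modeled on `schedule_scale.py` from this commit; the timestamped id helper and the default name are illustrative, not taken from the repo.

```python
import asyncio
from datetime import datetime, timezone

def make_workflow_id(name: str) -> str:
    # Illustrative convention: a timestamp prefix keeps ids unique across runs.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{stamp}-{name}"

async def schedule_child(name: str = "John Doe") -> None:
    # Deferred imports: these need the demo's environment (restack_ai
    # installed, the engine running, and the repo's src/ package importable).
    from restack_ai import Restack
    from src.workflows.child import ChildWorkflowInput

    client = Restack()
    await client.schedule_workflow(
        workflow_name="ChildWorkflow",
        workflow_id=make_workflow_id("ChildWorkflow"),
        input=ChildWorkflowInput(name=name),
    )
```

Run it with `asyncio.run(schedule_child())` from inside the Poetry shell.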

```bash
poetry run scale
```

executes `schedule_scale.py`, which connects to Restack and executes the `ExampleWorkflow` workflow.

```bash
poetry run interval
```

executes `schedule_interval.py`, which connects to Restack and executes the `ChildWorkflow` workflow every second.
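A conceptual sketch of that interval loop; `schedule_interval.py` itself is not shown in this diff, so the id naming and the one-second pacing here are assumptions.

```python
import asyncio

def interval_ids(prefix: str, count: int) -> list[str]:
    # One unique workflow id per tick (assumed naming, for illustration only).
    return [f"{prefix}-interval-{i}" for i in range(1, count + 1)]

async def schedule_every_second(count: int) -> None:
    from restack_ai import Restack  # needs the demo's environment

    client = Restack()
    for workflow_id in interval_ids("ChildWorkflow", count):
        # ChildWorkflowInput's name field has a default, so no input is passed.
        await client.schedule_workflow(
            workflow_name="ChildWorkflow",
            workflow_id=workflow_id,
        )
        await asyncio.sleep(1)  # pace: one scheduled run per second
```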

8. Optionally, schedule a parent workflow to run 50 child workflows all at once:
## Deploy on Restack Cloud

```bash
poetry run scale
```
To deploy the application on Restack, you can create an account at [https://console.restack.io](https://console.restack.io)

## Project Structure

@@ -149,7 +157,7 @@ https://lmstudio.ai
- `services.py`: Sets up and runs the Restack services
- `schedule_workflow.py`: Example script to schedule and run a workflow
- `schedule_interval.py`: Example script to schedule and run a workflow every second
- `schedule_scale.py`: Example script to schedule and run 100 workflows at once
- `schedule_scale.py`: Example script to schedule and run 50 workflows at once

# Deployment

3 changes: 2 additions & 1 deletion production_demo/pyproject.toml
@@ -11,9 +11,10 @@ packages = [{include = "src"}]

[tool.poetry.dependencies]
python = ">=3.10,<4.0"
restack-ai = "^0.0.48"
restack-ai = "^0.0.52"
watchfiles = "^1.0.0"
openai = "^1.57.2"
pydantic = "^2.10.5"

[tool.poetry.dev-dependencies]
pytest = "6.2" # Optional: Add if you want to include tests in your example
3 changes: 3 additions & 0 deletions production_demo/schedule_scale.py
@@ -2,6 +2,8 @@
import time
from restack_ai import Restack

from src.workflows.workflow import ExampleWorkflowInput

async def main():

client = Restack()
@@ -10,6 +12,7 @@ async def main():
await client.schedule_workflow(
workflow_name="ExampleWorkflow",
workflow_id=workflow_id,
input=ExampleWorkflowInput(amount=50)
)

exit(0)
8 changes: 6 additions & 2 deletions production_demo/src/functions/evaluate.py
@@ -1,8 +1,12 @@
from restack_ai.function import function, FunctionFailure, log
from openai import OpenAI
from pydantic import BaseModel

class EvaluateInput(BaseModel):
generated_text: str

@function.defn()
async def llm_evaluate(generated_text: str) -> str:
async def llm_evaluate(input: EvaluateInput) -> str:
try:
client = OpenAI(base_url="http://192.168.4.142:1234/v1/",api_key="llmstudio")
except Exception as e:
@@ -12,7 +16,7 @@ async def llm_evaluate(generated_text: str) -> str:
prompt = (
f"Evaluate the following joke for humor, creativity, and originality. "
f"Provide a score out of 10 for each category for your score.\n\n"
f"Joke: {generated_text}\n\n"
f"Joke: {input.generated_text}\n\n"
f"Response format:\n"
f"Humor: [score]/10"
f"Creativity: [score]/10"
9 changes: 7 additions & 2 deletions production_demo/src/functions/function.py
@@ -2,8 +2,13 @@

tries = 0

from pydantic import BaseModel

class ExampleFunctionInput(BaseModel):
name: str

@function.defn()
async def example_function(input: str) -> str:
async def example_function(input: ExampleFunctionInput) -> str:
try:
global tries

@@ -12,7 +17,7 @@ async def example_function(input: str) -> str:
raise FunctionFailure(message="Simulated failure", non_retryable=False)

log.info("example function started", input=input)
return f"Hello, {input}!"
return f"Hello, {input.name}!"
except Exception as e:
log.error("example function failed", error=e)
raise e
9 changes: 7 additions & 2 deletions production_demo/src/functions/generate.py
@@ -1,8 +1,13 @@
from restack_ai.function import function, FunctionFailure, log
from openai import OpenAI

from pydantic import BaseModel

class GenerateInput(BaseModel):
prompt: str

@function.defn()
async def llm_generate(prompt: str) -> str:
async def llm_generate(input: GenerateInput) -> str:

try:
client = OpenAI(base_url="http://192.168.4.142:1234/v1/",api_key="llmstudio")
@@ -16,7 +21,7 @@ async def llm_generate(prompt: str) -> str:
messages=[
{
"role": "user",
"content": prompt
"content": input.prompt
}
],
temperature=0.5,
3 changes: 2 additions & 1 deletion production_demo/src/services.py
@@ -10,7 +10,7 @@
from src.functions.evaluate import llm_evaluate

from src.workflows.workflow import ExampleWorkflow, ChildWorkflow

import webbrowser


async def main():
@@ -43,5 +43,6 @@ def run_services():
def watch_services():
watch_path = os.getcwd()
print(f"Watching {watch_path} and its subdirectories for changes...")
webbrowser.open("http://localhost:5233")
run_process(watch_path, recursive=True, target=run_services)

9 changes: 6 additions & 3 deletions production_demo/src/workflows/child.py
@@ -1,18 +1,21 @@
from datetime import timedelta

from pydantic import BaseModel, Field
from restack_ai.workflow import workflow, import_functions, log

with import_functions():
from src.functions.function import example_function
from src.functions.generate import llm_generate
from src.functions.evaluate import llm_evaluate

class ChildWorkflowInput(BaseModel):
name: str = Field(default='John Doe')

@workflow.defn()
class ChildWorkflow:
@workflow.run
async def run(self):
async def run(self, input: ChildWorkflowInput):
log.info("ChildWorkflow started")
await workflow.step(example_function, input="first", start_to_close_timeout=timedelta(minutes=2))
await workflow.step(example_function, input=input, start_to_close_timeout=timedelta(minutes=2))

await workflow.sleep(1)

16 changes: 10 additions & 6 deletions production_demo/src/workflows/workflow.py
@@ -1,25 +1,29 @@
import asyncio
from datetime import timedelta

from pydantic import BaseModel, Field
from restack_ai.workflow import workflow, log, workflow_info, import_functions
from .child import ChildWorkflow
from .child import ChildWorkflow, ChildWorkflowInput

with import_functions():
from src.functions.generate import llm_generate
from src.functions.generate import llm_generate

class ExampleWorkflowInput(BaseModel):
amount: int = Field(default=50)

@workflow.defn()
class ExampleWorkflow:
@workflow.run
async def run(self):
async def run(self, input: ExampleWorkflowInput):
# use the parent run id to create child workflow ids
parent_workflow_id = workflow_info().workflow_id

tasks = []
for i in range(50):
for i in range(input.amount):
log.info(f"Queue ChildWorkflow {i+1} for execution")
task = workflow.child_execute(
ChildWorkflow,
workflow_id=f"{parent_workflow_id}-child-execute-{i+1}"
workflow_id=f"{parent_workflow_id}-child-execute-{i+1}",
input=ChildWorkflowInput(name=f"child workflow {i+1}")
)
tasks.append(task)
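The hunk ends before the queued tasks are awaited; presumably they are gathered. A self-contained sketch of this fan-out/fan-in pattern, with a stand-in coroutine in place of the real `workflow.child_execute` call:

```python
import asyncio

async def child(i: int) -> str:
    # Stand-in for workflow.child_execute(ChildWorkflow, ...); returns the
    # child's result.
    await asyncio.sleep(0)
    return f"child workflow {i}"

async def run_all(amount: int) -> list[str]:
    # Fan out: queue every child; fan in: await all results in order.
    tasks = [child(i + 1) for i in range(amount)]
    return await asyncio.gather(*tasks)
```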

Binary file added production_demo/ui-endpoints.png
1 change: 0 additions & 1 deletion quickstart/src/services.py
@@ -4,7 +4,6 @@
from src.client import client
from src.workflows.workflow import GreetingWorkflow
from watchfiles import run_process
from restack_ai.restack import ServiceOptions
import webbrowser

async def main():
