using AutoGen to set up a multi-agent team of specialized LLMs that mimics a product development and research team ...
- install Python
- clone this repo and change into its directory
- create a Python virtual environment:
python -m venv venv
- activate the venv:
. venv/bin/activate
- the prompt should now show (venv) ...
- install the required Python libraries:
pip install -r requirements.txt
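Once the install finishes, a quick sanity check can confirm the key packages resolved. The package names below are assumptions — match them to whatever is actually in requirements.txt:

```python
import importlib.util

def check_packages(pkgs):
    """Return {package: bool} for whether each top-level package is importable."""
    return {p: importlib.util.find_spec(p) is not None for p in pkgs}

# Hypothetical package names -- adjust to your requirements.txt.
for pkg, ok in check_packages(["autogen", "jupyter"]).items():
    print(f"{pkg}: {'ok' if ok else 'MISSING'}")
```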
- (optional) set up Jupyter extensions
- run the notebooks:
cd notebook # to save the notebooks here
jupyter notebook
- tbd (use LM Studio)
- still need to figure out how to run multiple LLMs at once: local-ai didn't work, and LM Studio doesn't allow it natively
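One workaround sketch for the multiple-LLM problem: run each model behind its own local OpenAI-compatible server and give every agent its own endpoint. The ports and model names below are assumptions — LM Studio's server defaults to port 1234, and the second endpoint could be any other OpenAI-compatible runtime (llama.cpp server, Ollama, etc.):

```python
# Build per-agent llm_config dicts in the shape AutoGen expects.
# Everything here (ports, model names) is a placeholder assumption.

def local_llm_config(base_url, model):
    """Build an AutoGen-style llm_config for a local OpenAI-compatible server."""
    return {
        "config_list": [{
            "model": model,
            "base_url": base_url,
            "api_key": "not-needed",  # local servers typically ignore the key
        }],
        "cache_seed": None,  # disable response caching while experimenting
    }

coder_config = local_llm_config("http://localhost:1234/v1", "deepseek-coder")
pm_config = local_llm_config("http://localhost:8080/v1", "mistral-7b-instruct")

# Hypothetical usage (requires pyautogen, not imported here):
# coder = autogen.AssistantAgent("coder", llm_config=coder_config)
# pm = autogen.AssistantAgent("product_manager", llm_config=pm_config)
```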
- this is where we will get the coder LLM (our AI software engineer) to run the code.
- I already have the Dockerfile in the root directory from the CLI run:
- now we build the image (this step takes a while)
docker build -t autogen-project .
```
(venv) tcm autonomous-product-team$ docker build -t autogen-project .
[+] Building 125.9s (5/9) docker:desktop-linux
 => [internal] load build definition from Dockerfile 0.0s
 => => transferring dockerfile: 341B 0.0s
 => [internal] load .dockerignore 0.0s
 => => transferring context: 2B 0.0s
 => [internal] load metadata for docker.io/library/python:3.9.18-slim 3.6s
 => [auth] library/python:pull token for registry-1.docker.io 0.0s
 => [1/4] FROM docker.io/library/python:3.9.18-slim@sha256:96be08c44307e781fd9ce8e05b49c969b4cb902ec23594f904739c58d 122.3s
 => => resolve docker.io/library/python:3.9.18-slim@sha256:96be08c44307e781fd9ce8e05b49c969b4cb902ec23594f904739c58da3 0.0s
 => =>.....
```
- now we run the image:
- `docker run -it -p 3010:3010 --rm --network host autogen-project`
- `-it` runs the container interactively
- `--rm` removes the container when it stops
- `--network host` lets the container reach localhost, where the actual LLM is running (much faster on Apple Metal than inside a small container)
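The run flags above pair with AutoGen's code-execution settings: the coder agent's generated code can be executed inside the image we just built. A minimal sketch, assuming the pyautogen-style `code_execution_config` dict (check your installed version's docs for the exact keys):

```python
# Point AutoGen's code executor at the image built above.
# The keys follow pyautogen's code_execution_config; treat them as an
# assumption and verify against your installed version.

code_execution_config = {
    "work_dir": "coding",             # host dir where generated code is written
    "use_docker": "autogen-project",  # reuse the image from the build step
}

# Hypothetical usage (requires pyautogen, not imported here):
# user_proxy = autogen.UserProxyAgent(
#     "user_proxy",
#     human_input_mode="NEVER",
#     code_execution_config=code_execution_config,
# )
```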