> [!IMPORTANT]
> **Beta Notice:** This project is in rapid development and may not be stable for production use.
This repository provides a flexible system for building and orchestrating agents and workflows. It offers two modes:
- **Client Mode**: tasks call out to a remote API client (e.g., your `client.py` functions).
- **Local Mode**: tasks run directly in the local environment, utilizing `run_agents(...)` and local logic.
It also supports loading YAML or JSON workflows to define multi-step tasks.
The framework distills agent orchestration into three distinct pieces:
- Agents
- Tasks
- Workflows
The Agent can be configured with:
- Model Provider (e.g., OpenAI, Llama, etc.)
- Tools (e.g., specialized functions)
Users can define tasks (like `sentiment`, `translate_text`, etc.) in local or client mode. They can also upload workflows (in YAML or JSON) to orchestrate multiple steps in sequence.
- Install the latest release:

  ```shell
  pip install --upgrade iointel
  ```

- Set the required environment variable: `OPENAI_API_KEY` or `IO_API_KEY` for the default OpenAI-based `ChatOpenAI` model.
- Optional environment variables:
  - `LOGGING_LEVEL` to configure logging verbosity: `DEBUG`, `INFO`, etc.
  - `OPENAI_API_BASE_URL` or `IO_API_BASE_URL` to point to an OpenAI-compatible API implementation, like `https://api.intelligence.io.solutions/api/v1`
  - `OPENAI_API_MODEL` or `IO_API_MODEL` to pick a specific LLM model as the "agent brain", like `meta-llama/Llama-3.3-70B-Instruct`
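For example, a typical setup against the IO Intelligence endpoint might look like this (the key value is a placeholder; substitute your own credential):

```shell
# Placeholder key: replace with your real credential.
export OPENAI_API_KEY="your-api-key"
# Optional: point at an OpenAI-compatible endpoint and pick a model.
export OPENAI_API_BASE_URL="https://api.intelligence.io.solutions/api/v1"
export OPENAI_API_MODEL="meta-llama/Llama-3.3-70B-Instruct"
# Optional: raise logging verbosity.
export LOGGING_LEVEL="DEBUG"
```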
- Agents can have a custom model (e.g., `ChatOpenAI`, a Llama-based model, etc.).
- Agents can have tools attached, which are specialized functions accessible during execution.
- Agents can have a custom Persona Profile configured.
- A task is a single step in a workflow, e.g., `schedule_reminder`, `sentiment`, `translate_text`, etc.
- Tasks are managed by the `Workflow` class in `workflow.py`.
- Tasks can be chained for multi-step logic into a workflow (e.g., `Workflow(text="...").translate_text().sentiment().run_tasks()`).
- **Local Mode**: the system calls `run_agents(...)` directly in your local environment.
- **Client Mode**: the system calls out to remote endpoints in a separate API. With `client_mode=True`, each task (e.g., `sentiment`) triggers a client function (`sentiment_analysis(...)`) instead of local logic.

This allows you to switch between running tasks locally or delegating them to a server.
Note: this part is under active development and might not always function!
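Conceptually, the mode switch is just a dispatch on `client_mode`. The sketch below uses stand-in functions (not the library's actual internals) to illustrate the idea:

```python
def call_remote(task_name: str, **kwargs) -> str:
    # Stand-in for a remote client function, e.g. sentiment_analysis(...)
    return f"remote:{task_name}"

def run_agents(task_name: str, **kwargs) -> str:
    # Stand-in for local execution via run_agents(...)
    return f"local:{task_name}"

def run_task(task_name: str, client_mode: bool = False, **kwargs) -> str:
    """Dispatch a task to the remote API or local logic based on client_mode."""
    if client_mode:
        return call_remote(task_name, **kwargs)
    return run_agents(task_name, **kwargs)
```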
- You can define multi-step workflows in YAML or JSON.
- The endpoint `/run-file` accepts a file (via multipart form data):
  - It first tries parsing the payload as JSON.
  - If that fails, it tries parsing the payload as YAML.
- The file is validated against a `WorkflowDefinition` Pydantic model.
- Each step has a `type` (e.g., `"sentiment"`, `"custom"`) and optional parameters (like `agents`, `target_language`, etc.).
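The JSON-first, YAML-fallback parsing order can be sketched as follows (assuming PyYAML is available; the real endpoint then validates the result against `WorkflowDefinition`):

```python
import json

def parse_workflow_payload(raw: str) -> dict:
    """Try JSON first, then fall back to YAML, mirroring /run-file's order."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        import yaml  # assumption: PyYAML installed
        return yaml.safe_load(raw)
```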
```python
from iointel import Agent

my_agent = Agent(
    name="MyAgent",
    instructions="You are a helpful agent.",
    # one can also pass a custom model via `model=ChatOpenAI(some, args)`
    # or pass args to ChatOpenAI() as kwargs to Agent()
)
```
```python
from iointel import PersonaConfig, Agent

my_persona = PersonaConfig(
    name="Elandria the Arcane Scholar",
    age=164,
    role="an ancient elven mage",
    style="formal and slightly archaic",
    domain_knowledge=["arcane magic", "elven history", "ancient runes"],
    quirks="often references centuries-old events casually",
    bio="Once studied at the Grand Academy of Runic Arts",
    lore="Elves in this world can live up to 300 years",
    personality="calm, wise, but sometimes condescending",
    conversation_style="uses 'thee' and 'thou' occasionally",
    description="Tall, silver-haired, wearing intricate robes with arcane symbols",
    emotional_stability=0.85,
    friendliness=0.45,
    creativity=0.68,
    curiosity=0.95,
    formality=0.1,
    empathy=0.57,
    humor=0.99,
)

agent = Agent(
    name="ArcaneScholarAgent",
    instructions="You are an assistant specialized in arcane knowledge.",
    persona=my_persona,
)

print(agent.instructions)
```
In Python code, you can create tasks by instantiating the `Workflow` class and chaining task methods:
```python
from iointel import Workflow

tasks = Workflow(text="This is the text to analyze", client_mode=False)
(
    tasks
    .sentiment(agents=[my_agent])
    .translate_text(target_language="french")  # a second step
)
results = tasks.run_tasks()
print(results)
```
Because `client_mode=False`, everything runs locally.
Local mode:

```python
tasks = Workflow(text="Breaking news: local sports team wins!", client_mode=False)
tasks.summarize_text(max_words=50).run_tasks()
```
Client mode:

```python
tasks = Workflow(text="Breaking news: local sports team wins!", client_mode=True)
tasks.summarize_text(max_words=50).run_tasks()
```

Now, `summarize_text` calls the client function (e.g., `summarize_task(...)`) instead of local logic.
Note: this part is under active development and might not always function!
1. Create a YAML or JSON file specifying the workflow:

   ```yaml
   name: "My YAML Workflow"
   text: "Large text to analyze"
   workflow:
     - type: "sentiment"
     - type: "summarize_text"
       max_words: 20
     - type: "moderation"
       threshold: 0.7
     - type: "custom"
       name: "special-step"
       objective: "Analyze the text"
       instructions: "Use advanced analysis"
       context:
         extra_info: "some metadata"
   ```
2. Upload it via the `/run-file` endpoint (multipart file upload). The server reads it as JSON or YAML and runs the tasks sequentially in local mode.
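A toy sketch of what "running the tasks sequentially" means: each step's handler is applied to the text in order. The handlers here are placeholders, not the framework's real agent-backed implementations:

```python
def run_steps(text: str, workflow: list) -> dict:
    """Apply each step's handler to the text in order (illustrative only)."""
    handlers = {
        # Placeholder handlers standing in for real agent-backed tasks.
        "sentiment": lambda t, **p: "positive",
        "summarize_text": lambda t, max_words=50, **p: " ".join(t.split()[:max_words]),
    }
    results = {}
    for step in workflow:
        params = {k: v for k, v in step.items() if k != "type"}
        results[step["type"]] = handlers[step["type"]](text, **params)
    return results
```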
```python
tasks = Workflow("Breaking news: new Python release!", client_mode=False)
tasks.summarize_text(max_words=30).run_tasks()
```
Returns a summarized result.
```python
tasks = Workflow("Tech giant acquires startup for $2B", client_mode=False)
(tasks
    .translate_text(target_language="spanish")
    .sentiment()
)
results = tasks.run_tasks()
```

This runs two steps in sequence:
1. Translate to Spanish,
2. Sentiment analysis.
```python
tasks = Workflow("Analyze this special text", client_mode=False)
tasks.custom(
    name="my-unique-step",
    objective="Perform advanced analysis",
    instructions="Focus on entity extraction and sentiment",
    agents=[my_agent],
    **{"extra_context": "some_val"},
)
results = tasks.run_tasks()
```
A `"custom"` task can reference a custom function in the `CUSTOM_WORKFLOW_REGISTRY` or fall back to a default behavior.
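As a sketch of how such a registry could be populated (the registry object and registration decorator below are assumptions for illustration; check the library source for the real mechanism):

```python
# Hypothetical: a registry mapping custom step names to handler functions.
CUSTOM_WORKFLOW_REGISTRY = {}

def register_custom_task(name: str):
    """Decorator that records a handler under the given step name."""
    def decorator(fn):
        CUSTOM_WORKFLOW_REGISTRY[name] = fn
        return fn
    return decorator

@register_custom_task("my-unique-step")
def my_unique_step(text: str, **context) -> dict:
    # Placeholder analysis logic.
    return {"note": f"analyzed {len(text)} characters"}
```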
Note: this part is under active development and might not always function!
```shell
curl -X POST "https://api.intelligence.io.solutions/api/v1/workflows/run-file" \
  -F "yaml_file=@path/to/workflow.yaml"
```
Please refer to the [IO.net documentation](https://docs.io.net/docs/exploring-ai-agents) for particular endpoints and their documentation.
See the LICENSE file for license rights and limitations (Apache 2.0).