This repository introduces a new paradigm for conversational programming built on agent functions, which leverage GPT-based reasoning to perform semi-autonomous tasks. These functions enable flexible, intuitive operations beyond what traditional functions offer, especially in scenarios requiring fuzzy logic, intuition, or handling of incomplete or malformed input data.
A `conversation_fragment` represents a segment of a GPT-powered conversation, either as input (query) or output (response).
- Further categorization into query and response types.
- Investigate inline tags governing conversation flow (noted by Supratik and Richard).
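A minimal sketch of what a `conversation_fragment` might look like; the class name, fields, and `kind` values here are illustrative assumptions, not a fixed API:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationFragment:
    """Hypothetical conversation_fragment: a segment of a GPT conversation."""
    text: str                                  # the raw fragment text
    kind: str = "query"                        # "query" or "response"
    tags: list = field(default_factory=list)   # inline tags governing flow

    def is_query(self) -> bool:
        # Distinguish input (query) from output (response) fragments.
        return self.kind == "query"

frag = ConversationFragment("Pick the best option.", kind="query", tags=["decision"])
print(frag.is_query())  # True
```

The `tags` field is one possible home for the inline flow-control tags noted above.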
An `agent_function` is a GPT-powered function designed to perform a well-defined task autonomously, with minimal code and high flexibility.
- Accepts `conversation_fragment` objects as input, alongside other data types.
- Parses input types to construct the final query dynamically.
- Produces output as a `conversation_fragment` or a specific structured response.
- Supports Chain of Thought reasoning, breaking down complex problems into manageable subtasks.
- Automatically logs reasoning processes for debugging and review.
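The properties above could be sketched as a small wrapper class. Everything here is an assumption about shape, not a real API: the LLM call is a pluggable stub, and `AgentFunction`, `build_query`, and `log` are illustrative names.

```python
class AgentFunction:
    """Sketch of an agent_function: builds a query, calls an LLM, logs it."""

    def __init__(self, base_query: str, llm=None):
        self.base_query = base_query
        self.llm = llm or (lambda q: "")   # pluggable LLM backend (stubbed)
        self.log = []                      # automatic log for debugging/review

    def build_query(self, *inputs) -> str:
        # Parse heterogeneous inputs (fragments, strings, numbers) into one query.
        parts = [self.base_query] + [str(i) for i in inputs]
        return "\n".join(parts)

    def __call__(self, *inputs) -> str:
        query = self.build_query(*inputs)
        self.log.append(query)             # record what was actually asked
        return self.llm(query)

# Usage with a trivial stand-in backend that uppercases the query:
echo = AgentFunction("Summarize:", llm=lambda q: q.upper())
result = echo("hello")
```

Swapping `llm` for a real GPT client would leave the query-building and logging logic unchanged.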
- Flexible input handling: other input types can be parsed together at the function's start to form a structured query.
- Structured output formatting: queries can specify the expected output format, e.g.: "The output should be an integer in brackets, e.g., '[2]', '[0]', etc." The function then parses the structured response accordingly.
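Parsing the bracketed-integer format from the example above could look like this; the helper name is ours, and the regex simply mirrors the `'[2]'` convention in the text:

```python
import re

def parse_bracketed_int(response: str) -> int:
    """Extract an integer wrapped in brackets, e.g. '[2]' -> 2."""
    match = re.search(r"\[(-?\d+)\]", response)
    if match is None:
        # Signal a parse failure so the caller can retry or extend the query.
        raise ValueError(f"no bracketed integer in: {response!r}")
    return int(match.group(1))

print(parse_bracketed_int("The answer is [2]."))  # 2
```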
- Enables step-by-step problem-solving.
- Automatically decomposes complex tasks into subtasks.
- Summarizes reasoning and decisions for logging and debugging.
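The decompose-solve-summarize loop above can be sketched as follows. Both `decompose` and `solve` are stand-ins (here trivial lambdas, with `eval` playing the role of a GPT call); the function and key names are assumptions:

```python
def chain_of_thought(task: str, decompose, solve) -> dict:
    """Break a task into subtasks, solve each, and summarize the reasoning."""
    subtasks = decompose(task)                      # step 1: decomposition
    steps = [(sub, solve(sub)) for sub in subtasks] # step 2: solve each subtask
    # Step 3: a compact reasoning summary suitable for logging and debugging.
    summary = "; ".join(f"{s} -> {r}" for s, r in steps)
    return {"steps": steps, "summary": summary}

out = chain_of_thought(
    "add one to 3, then double it",
    decompose=lambda t: ["3 + 1", "4 * 2"],  # stand-in for GPT decomposition
    solve=lambda s: str(eval(s)),            # stand-in for a GPT solve call
)
```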
- If parsing fails, retries with an extended query before throwing an exception.
- Explicit handling of responses that are too long and get truncated:
- Attempt to process truncated results.
- Issue a follow-up query for a more concise response.
- Modify the base query to request shorter responses if truncation happens frequently.
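The retry policy described above (retry once with an extended query before raising) might be sketched like this; the extension string and all names are illustrative assumptions:

```python
import re

def call_with_retry(llm, query, parse,
                    extension="\nRespond only in the requested format."):
    """Try once; on a parse failure, retry with an extended query, then give up."""
    try:
        return parse(llm(query))
    except ValueError:
        # Restate the expected format and retry exactly once.
        return parse(llm(query + extension))

def parse(response):
    m = re.search(r"\[(\d+)\]", response)
    if not m:
        raise ValueError(response)
    return int(m.group(1))

# A stub backend that only obeys the format when reminded of it:
flaky_calls = {"n": 0}
def flaky_llm(q):
    flaky_calls["n"] += 1
    return "[7]" if "format" in q else "seven"

result = call_with_retry(flaky_llm, "How many?", parse)
```

A second failure falls through `parse` and raises, matching the "before throwing an exception" behavior.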
- Special failure assessment agent functions can be created for critical tasks.
- Cross-validation across multiple LLMs for comparison.
- Built-in support for both web retrieval and user-defined databases.
- Failure rate tracking built into the `agent_function` class.
- Automatic failure categorization (potentially GPT-assisted).
- GPT-based auto-correction of queries based on failure rate and type.
- Versioned statistics—failure rates are tracked separately for different function versions.
- Agent functions should generalize across multiple LLMs without requiring code duplication.
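Per-version failure tracking with categorized failures could be recorded roughly like this; the counter layout and class name are assumptions about how the `agent_function` class might store them:

```python
from collections import defaultdict

class FailureStats:
    """Sketch of versioned failure statistics for an agent function."""

    def __init__(self):
        self.calls = defaultdict(int)  # version -> total calls
        # version -> failure category -> count (categories may be GPT-assigned)
        self.failures = defaultdict(lambda: defaultdict(int))

    def record(self, version, category=None):
        # Every call is counted; a category marks the call as a failure.
        self.calls[version] += 1
        if category is not None:
            self.failures[version][category] += 1

    def failure_rate(self, version):
        total = self.calls[version]
        failed = sum(self.failures[version].values())
        return failed / total if total else 0.0

stats = FailureStats()
stats.record("v1")                          # one success
stats.record("v1", category="parse_error")  # one categorized failure
print(stats.failure_rate("v1"))  # 0.5
```

Keeping counters keyed by version is what allows rates to be compared across function versions.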
Agent functions can:
- Process data and generate results
- Make branching decisions to determine program flow
- Pick an item from a small list (single-step decision-making)
- Correct grammatical or formatting errors
- Reasoning step before decision-making (standardized `conversation_fragment` structure)
- Error correction with optional reasoning (asks for clarification if ambiguous)
- Simple True/False decision-making (with reasoning)
- Pick an item from a list with reasoning
- Process a large dataset item-by-item with reasoning, including:
- Categorization
- Summarization
- Other forms of processing
- Intelligent selection of retrieval sources based on short previews.
- Summarization and compression of retrieved content for more efficient use.
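The item-by-item processing pattern above (categorize each item and keep the reasoning) can be sketched with a trivial stub in place of a GPT-backed categorizer; every name here is illustrative:

```python
def process_items(items, categorize):
    """Process a dataset one item at a time, keeping per-item reasoning."""
    results = []
    for item in items:
        category, reasoning = categorize(item)
        results.append({"item": item, "category": category, "reasoning": reasoning})
    return results

def stub_categorize(text):
    # Trivial stand-in for a GPT call: classify by length, explain why.
    category = "short" if len(text) < 10 else "long"
    return category, f"{len(text)} chars -> {category}"

processed = process_items(["hi", "a much longer item"], stub_categorize)
```

The same loop shape covers summarization or other per-item processing by swapping the categorizer.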
- `BranchingConversation` class for managing multi-step agent-driven dialogues.
- Versioning system for conversation fragments and agent functions:
- Every function using conversation fragments should be versioned.
- Support calling specific function versions.
- Ability to compare performance across multiple versions.
- Reduce code duplication while integrating versioning across multiple functions.
- Integrate system messages into generated prompts.
- Add notes/history fields to conversations/fragments:
- Track edits, transformations, agent interactions, and lineage.
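Combining the versioning and history ideas above, a fragment could carry its own lineage; the field names and the immutable-edit style are assumptions about the proposed design:

```python
from dataclasses import dataclass, field

@dataclass
class VersionedFragment:
    """Sketch: a conversation fragment with a version number and history trail."""
    text: str
    version: int = 1
    history: list = field(default_factory=list)  # edits, transformations, lineage

    def edited(self, new_text: str, note: str):
        # Produce a new version rather than mutating, so old versions
        # remain callable and comparable.
        return VersionedFragment(
            text=new_text,
            version=self.version + 1,
            history=self.history + [note],
        )

v1 = VersionedFragment("Pick an option")
v2 = v1.edited("Pick exactly one option", note="tightened wording")
```

Because `edited` returns a fresh object, specific versions stay available for side-by-side performance comparison.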
- OpenAI Function Calling & Assistants API: Everything You Need to Know
- Agenta: Open-Source LLMOps Platform (https://agenta.ai/)
- Pezzo: Open-Source AI Developer Platform (https://github.com/pezzolabs/pezzo)