
Allows CustomFunction to be set as an Ending Node. #1591

Merged 1 commit on Jan 26, 2024

Conversation

@mokeyish (Contributor) commented Jan 23, 2024

closes #1590

With this PR, we can set CustomFunction as an Ending Node.

[image]

@HenryHengZJ (Contributor)

interesting, what is your use case to have custom function as ending node?

@mokeyish (Contributor, Author)

@HenryHengZJ This is the case: my chatflow uses an LLM to generate SQL, passes it to a CustomFunction that sends it to the database, and finally constructs the final JSON result from that CustomFunction.
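That pipeline (LLM → SQL → database → JSON) might look roughly like the following sketch. Everything here is a hypothetical stand-in, not code from the PR: `runQuery` is a placeholder for whatever database client the flow uses, and the field names are invented.

```typescript
// Hypothetical sketch of a CustomFunction body used as an ending node.

interface QueryRow {
    [column: string]: unknown
}

// Placeholder for a real database call (e.g. via `pg` or `mysql2`).
// Here it just returns canned rows so the sketch is self-contained.
async function runQuery(sql: string): Promise<QueryRow[]> {
    return [{ id: 1, name: 'example' }]
}

// Build the final JSON payload that the ending node returns to the caller.
// `sqlFromLlm` stands in for the SQL string produced by the upstream LLM chain.
async function buildResult(sqlFromLlm: string): Promise<string> {
    const rows = await runQuery(sqlFromLlm)
    return JSON.stringify({ query: sqlFromLlm, rowCount: rows.length, rows })
}
```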

@dkindlund (Contributor)

@HenryHengZJ and @mokeyish , could this PR also solve this additional use case?:
#1489

@Jaredude (Contributor) commented Jan 25, 2024

This is interesting. It allows Flowise to return a custom API output! I can see a number of useful benefits for consumption by other APIs or custom chat handling. For a current project I'm working on, I could return additional attributes in the payload and then perform extra logic on the client based on output beyond the "text" value returned to the chat bot.

```js
{
  text: "I just found you a great product! I'm sending you there now!",
  additionalResource: "https://example.com/cool/product",
  savedSearchId: "aie9-nnie4-87ugoi-817"
}
```

I feel like it might make sense to include an additional API entry point distinct from prediction. Similar to internal-prediction, an additional variable could be added to buildChatflow: isCustomOutput. By doing this, we should be able to simplify the logic in buildChatflow as well, reducing the changes to packages/server/src/index.ts and allowing additional validation logic to be added in the future. For example, a flow could be defined as custom output and only allowed to be called from the new entry point, not from chat by mistake. That's not needed to launch a feature like this, but someone certainly could make the mistake of trying to chat with a custom-output flow.

Perhaps something along the lines of `/api/v1/custom-prediction/:id` would work?

@mokeyish Minor TS adjustment on the change to `packages/components/nodes/utilities/CustomFunction/CustomFunction.ts`. The value of `isEndingNode` on line 131 ends up with the iffy type `boolean | 0 | undefined` instead of just `boolean`. While `0` and `undefined` are both falsy, typecasting to boolean with ternary logic makes the boolean explicit. We can also leave off the `Object.keys().length` check, since we're already checking that `nodeData.outputs` exists and it can only be `undefined` or an object; an object with no attributes will still return an array from `Object.values()`.

```ts
const isEndingNode: boolean = nodeData.outputs && Object.values(nodeData.outputs).includes('EndingNode') ? true : false
```
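For illustration, both the ternary form above and a ternary-free alternative infer a plain `boolean`. This is a standalone sketch with invented type names, not the code that ultimately landed in the PR:

```typescript
// Minimal stand-ins for the relevant parts of the node data shape.
interface NodeOutputs {
    [key: string]: string
}
interface NodeDataSketch {
    outputs?: NodeOutputs
}

// Ternary form from the comment above: the `? true : false` collapses the
// `boolean | undefined` union down to a plain boolean.
function isEndingNodeTernary(nodeData: NodeDataSketch): boolean {
    return nodeData.outputs && Object.values(nodeData.outputs).includes('EndingNode') ? true : false
}

// Equivalent without a ternary: a nullish fallback to an empty object also
// lets TypeScript infer `boolean` directly from `includes()`.
function isEndingNodeConcise(nodeData: NodeDataSketch): boolean {
    return Object.values(nodeData.outputs ?? {}).includes('EndingNode')
}
```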

Also, I'm happy to help out on this PR if needed.

@mokeyish mokeyish force-pushed the patch-2 branch 2 times, most recently from feefcba to 9ef85e4 Compare January 25, 2024 06:14
@mokeyish (Contributor, Author)

@Jaredude Thanks, I've updated it. This version is more concise and still infers the boolean type.

[image]

@mokeyish (Contributor, Author)

> @HenryHengZJ and @mokeyish , could this PR also solve this additional use case?: #1489

It seems like it should be possible. Can you pull the code from this branch and test it?

@HenryHengZJ (Contributor)

@dkindlund I think for your use case, we can't do that yet, because you can't take the output from an agent to a custom function, only from an LLMChain at the moment.

@mokeyish Exact same thought. That's why we are working on a workflow feature which allows a more sequential flow. That way you have a separate endpoint `api/v1/workflow/` and you can import a chatflow inside a workflow to do additional stuff. We want to keep chatflow for chat-based interaction, and workflow for non-chat-based pipelines and processing.

Having said that, as a near-term solution, we can use a custom function as an ending node.

@Jaredude (Contributor)

While going through debugging, I found one issue: the code runs twice.

`run` is just a proxy for `init`, so the end result is that all of the custom JS code runs twice for every request.

[screenshot]

This only happens with a CustomJS that is an ending node; any CustomJS that's not an ending node will not execute twice.

@HenryHengZJ (Contributor)

@Jaredude good spot, that's because we are hitting both the `init` and `run` functions

@HenryHengZJ (Contributor) left a review comment:

In the `init` function, add a check to see whether the output is Ending Node; if yes, don't execute the sandbox stuff. This way we can prevent the code from executing twice when both `init` and `run` are called.

@Jaredude (Contributor)

This should be an easy workaround for the double-run issue. Place the following at the top of `init`:

```ts
if (nodeData?.outputs?.output === 'EndingNode' && !options.isRun) return
```

and for `run`, call `init` like this:

```ts
return await this.init(nodeData, input, { ...options, isRun: true })
```

Example: in the first CustomJS, which feeds its output onward, `init` is called directly without `options.isRun`. On the second CustomJS, `init` is called but returns immediately since it's an ending node. The call to `run` (made because it's an ending node) then adds `options.isRun`, and the ending-node code runs.
[screenshot]

I have only done basic testing on this with Lorem Ipsum data so far. I'll run it through some real-world flows and see if any issues surface.

@Jaredude (Contributor)

@mokeyish better yet, I think we can dual-purpose the logic at the top...

```ts
const isEndingNode = nodeData?.outputs?.output === 'EndingNode'
if (isEndingNode && !options.isRun) return
```

Then the logic on line 1659 is already taken care of. I'll add comments on the PR code.
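Putting the guard and the dual-purpose flag from these comments together, the pattern might look like the sketch below. The class shape, the `isRun` option name, and the return values are illustrative stand-ins, not the merged Flowise code:

```typescript
// Illustrative sketch of the double-execution guard discussed above.
// Flowise calls init() first and, for ending nodes, run() as well; run()
// proxies to init() with an `isRun` flag so the sandbox code executes once.

interface RunOptions {
    isRun?: boolean
    [key: string]: unknown
}
interface EndingNodeData {
    outputs?: { output?: string }
}

class CustomFunctionSketch {
    executions = 0 // counts how many times the "sandbox" body actually runs

    async init(nodeData: EndingNodeData, input: string, options: RunOptions): Promise<string | undefined> {
        const isEndingNode = nodeData?.outputs?.output === 'EndingNode'
        // Ending nodes skip the initial init() call; only the run()-driven
        // call (options.isRun === true) executes the sandbox body.
        if (isEndingNode && !options.isRun) return undefined
        this.executions += 1
        return `ran:${input}`
    }

    async run(nodeData: EndingNodeData, input: string, options: RunOptions): Promise<string | undefined> {
        return await this.init(nodeData, input, { ...options, isRun: true })
    }
}
```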

@mokeyish (Contributor, Author)

@HenryHengZJ @Jaredude please review again.

@Jaredude (Contributor) left a review comment:

I ran this through two separate LLM Chain flows and received the expected outcome. Here's a screenshot.
[screenshot]

I also tested validations such as no end node, multiple JS functions, JS code that errors, and also existing node validation (e.g. LLM Chain with an Output Prediction and no end node). Everything is working as expected!

@HenryHengZJ (Contributor)

awesome, thanks guys!

@HenryHengZJ HenryHengZJ merged commit 601a4d6 into FlowiseAI:main Jan 26, 2024
2 checks passed
@mokeyish mokeyish deleted the patch-2 branch January 26, 2024 13:28
Successfully merging this pull request may close these issues.

[FEATURE] Allows any node to be set as the end node.