Python: Passing output of step to next in SequentialChain #1476
Comments
What is the output? Why do you think the output is not passing from the first function to the second? What happens if you name both of the variables in your prompts $input?
The output is: "A multinational technology company known for its innovative hardware and software products, including smartphones, laptops, and operating systems." With LangChain I get a description that looks more appropriate for the product: "Regal Linens is a premium provider of high-quality linens for hotels, restaurants, and home use, offering a wide range of colors and sizes." There is no major difference when I use $input as the prompt variable. That is one reason I wanted some kind of trace to see what is being sent to the LLM in the second step.
Hi @lemillermicrosoft - I retried with $input and now seem to get the proper result. I was reasonably sure I had tried it before, but in any case, using $input in both prompts does seem to chain properly. I suggest adding this to the documentation; I don't believe I could have found it without going into the library source. I also have a more general SequentialChain that I am trying to rewrite for SK, where I need to define named output context variables so that I can pass things between elements in the chain more flexibly (e.g. pass the output of stage 1 to stages 2, 3, and 4, and the output of stage 3 to stage 5). Is this supported in Semantic Kernel? If so, can you please share a Python sample? In LangChain this is done by specifying an output_key in the LLMChain.
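One way to approximate LangChain's output_key behavior is to invoke the functions one at a time and store each result under a named context variable that later prompts can reference. A rough sketch, assuming the pre-1.0 semantic_kernel Python SDK (create_new_context and invoke_async; names may differ in later versions) and hypothetical stage1..stage5 semantic functions:

```python
import semantic_kernel as sk

# Sketch only: `kernel` is an already-configured sk.Kernel, and stage1..stage5
# are semantic functions whose prompts reference {{$input}}, {{$stage1_out}},
# or {{$stage3_out}} as needed. All names here are hypothetical.
async def run_chain(kernel, stage1, stage2, stage3, stage4, stage5):
    context = kernel.create_new_context()
    context["input"] = "Regal Linens"

    # Stage 1: store its output under a named variable (like output_key).
    r1 = await stage1.invoke_async(context=context)
    context["stage1_out"] = str(r1)

    # Stages 2-4 can all read {{$stage1_out}} from the shared context.
    await stage2.invoke_async(context=context)
    r3 = await stage3.invoke_async(context=context)
    await stage4.invoke_async(context=context)

    # Save stage 3's output so stage 5 can read {{$stage3_out}}.
    context["stage3_out"] = str(r3)
    r5 = await stage5.invoke_async(context=context)
    return str(r5)
```

Because the stage prompts reference named variables rather than relying on the implicit $input hand-off, any later stage can pick up any earlier stage's output from the shared context.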
I agree we can do a better job here. Here are some links to where this is documented:
Closing this issue. Thanks for the inputs. Summary: SimpleSequentialChain: works fine when using $input as the variable.
I want to rewrite a LangChain SimpleSequentialChain (in Python) as a chain of semantic functions that are called sequentially, where the output of the first function is passed as the input context of the second function.
How do I do this? Here is my code snippet (I tried using $input as well as other names for the context variables in the prompts below):
The code runs and produces some output, but it does not appear to be passing the output of the first function to the second. Also, what is the procedure to debug/trace how these functions are called, which variables are passed, and what is sent to the LLM (gpt-3.5-turbo chat completions in this case)?
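For reference, a minimal sketch of such a two-step chain with the early semantic_kernel Python SDK (add_chat_service, create_semantic_function, and run_async are from the pre-1.0 API and may differ in later versions; the model name and API key below are placeholders):

```python
import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

async def main():
    kernel = sk.Kernel()
    # Placeholder model and API key -- substitute your own values.
    kernel.add_chat_service("chat", OpenAIChatCompletion("gpt-3.5-turbo", "OPENAI_API_KEY"))

    # Both prompts use {{$input}}, so the kernel can pipe the first
    # function's output straight into the second one.
    describe = kernel.create_semantic_function(
        "Write a one-sentence description of the company: {{$input}}",
        max_tokens=200,
    )
    tagline = kernel.create_semantic_function(
        "Write a short marketing tagline based on this description: {{$input}}",
        max_tokens=60,
    )

    # run_async chains the functions, feeding each output into the next $input.
    # For a simple trace, invoke the functions one at a time instead and
    # print each intermediate result before passing it to the next step.
    result = await kernel.run_async(describe, tagline, input_str="Regal Linens")
    print(str(result))

asyncio.run(main())
```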