langchain How to invoke GraphCypherQAChain.from_llm() with multiple variables

uajslkp6 · posted 5 months ago · in: Other

Checked other resources

  • I added a very descriptive title to this question.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

示例代码

prefix = """
Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.

Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
context: 
{context}
Examples: Here are a few examples of generated Cypher statements for particular questions:
"""
FEW_SHOT_PROMPT = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix="Question: {question}, \nCypher Query: ",
    input_variables=["question", "query", "context"],
)

graph_qa = GraphCypherQAChain.from_llm(
    cypher_llm=llm3,  # should use gpt-4 for production
    qa_llm=llm3,
    graph=graph,
    verbose=True,
    cypher_prompt=FEW_SHOT_PROMPT,
)
    
input_variables = {
    "question": args['question'],
    "context": "NA",
    "query": args['question']
}

graph_qa.invoke(input_variables) 

### Error Message and Stack Trace (if applicable)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[41], line 1
----> 1 graph_qa.invoke(input_variables)

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
    150 try:
    151     self._validate_inputs(inputs)
    152     outputs = (
--> 153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )
    161 except BaseException as e:

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/graph_qa/cypher.py:247, in GraphCypherQAChain._call(self, inputs, run_manager)
    243 question = inputs[self.input_key]
    245 intermediate_steps: List = []
--> 247 generated_cypher = self.cypher_generation_chain.run(
    248     {"question": question, "schema": self.graph_schema}, callbacks=callbacks
    249 )
    251 # Extract Cypher code if it is wrapped in backticks
    252 generated_cypher = extract_cypher(generated_cypher)

File ~/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    146     warned = True
    147     emit_warning()
--> 148 return wrapped(*args, **kwargs)

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:595, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    593     if len(args) != 1:
    594         raise ValueError("`run` supports only one positional argument.")
--> 595     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    596         _output_key
    597     ]
    599 if kwargs and not args:
    600     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    601         _output_key
    602     ]

File ~/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    146     warned = True
    147     emit_warning()
--> 148 return wrapped(*args, **kwargs)

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    346 """Execute the chain.
    347 
    348 Args:
   (...)
    369         `Chain.output_keys`.
    370 """
    371 config = {
    372     "callbacks": callbacks,
    373     "tags": tags,
    374     "metadata": metadata,
    375     "run_name": run_name,
    376 }
--> 378 return self.invoke(
    379     inputs,
    380     cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
    381     return_only_outputs=return_only_outputs,
    382     include_run_info=include_run_info,
    383 )

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:151, in Chain.invoke(self, input, config, **kwargs)
    145 run_manager = callback_manager.on_chain_start(
    146     dumpd(self),
    147     inputs,
    148     name=run_name,
    149 )
    150 try:
--> 151     self._validate_inputs(inputs)
    152     outputs = (
    153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )

File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:279, in Chain._validate_inputs(self, inputs)
    277 missing_keys = set(self.input_keys).difference(inputs)
    278 if missing_keys:
--> 279     raise ValueError(f"Missing some input keys: {missing_keys}")

ValueError: Missing some input keys: {'context'}

### Description

Hello,
I can't seem to invoke GraphCypherQAChain.from_llm() in a way that lets the FewShotPromptTemplate format correctly.
In particular, I introduced a variable 'context' in the template, which I intend to supply at invoke time.
However, even when I pass 'context' at invoke time, the FewShotPromptTemplate doesn't seem to receive it.
I am confused about how arguments are passed to the prompt versus the chain.
It seems the only input the QA chain accepts is 'query', i.e. graph_qa.invoke({'query': 'user question'}).
In that case, we cannot really have a dynamic few-shot prompt template.
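As the traceback shows, GraphCypherQAChain only forwards {"question", "schema"} to its internal cypher-generation chain, so an extra prompt variable like 'context' is never supplied at invoke time and input validation fails. A minimal pure-Python sketch of that validation logic (a hypothetical simplification, not LangChain's actual implementation) shows why pre-binding the variable avoids the error:

```python
import re

def extract_vars(template: str) -> set:
    """Find {placeholder} names in a format-style template."""
    return set(re.findall(r"{(\w+)}", template))

def format_prompt(template: str, partial_vars: dict, runtime_vars: dict) -> str:
    """Merge pre-bound (partial) and runtime variables, then validate,
    mimicking how a prompt template checks its input keys."""
    merged = {**partial_vars, **runtime_vars}
    missing = extract_vars(template) - set(merged)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")
    return template.format(**merged)

template = "context: {context}\nQuestion: {question}\nCypher Query:"

# The chain only supplies "question" (and "schema") at runtime, so
# "context" is missing unless it is bound in advance:
try:
    format_prompt(template, {}, {"question": "Who directed Inception?"})
except ValueError as e:
    print(e)  # Missing some input keys: {'context'}

# Binding "context" as a partial variable passes validation:
print(format_prompt(template, {"context": "NA"},
                    {"question": "Who directed Inception?"}))
```

With the real library, the equivalent workaround would be to pre-bind the variable on the prompt itself, e.g. FEW_SHOT_PROMPT.partial(context="NA") (partial() is defined on BasePromptTemplate, which FewShotPromptTemplate inherits), pass the result as cypher_prompt, and then call graph_qa.invoke({"query": args["question"]}).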
 
Please provide me with some direction here. 
Thank you

### System Info

langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.2.3
langchain-openai==0.1.6
langchain-text-splitters==0.0.1
langchainhub==0.1.15
bxfogqkk1#

I have the same problem :+1:

1szpjjfi2#

I suggest you use langgraph.

ercv8c1e3#

Thanks sgautam666, yes, I'm reading up on it now, and I've wanted to try it for a long time. In the meantime I managed to implement it with LCEL and runnables, but you can still run it if you want to use something else.
