It appears that making QwenHelper public would not resolve your issue: appending paths such as /chat/completions is internal logic of the DashScope SDK and is not handled within QwenHelper. Here are some suggestions:
Set up a "proxy" that maps the service paths of DashScope to your own paths, and pass the URL of this proxy as the baseUrl to QwenXxxModel.
Alternatively, encapsulate a module, such as langchain4j-your-llm-proxy, that implements the LanguageModel/ChatLanguageModel interfaces of langchain4j. Internally, this module should use your own logic to call the specific OpenAiXxxModel or QwenXxxModel.
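As a minimal sketch of the second suggestion (combined with a baseUrl override along the lines of the first), something like the following could work. The class name LlmProxyChatModel is hypothetical, the generate signature shown is langchain4j's pre-1.0 ChatLanguageModel interface, and the example assumes your proxy exposes the paths the OpenAI client appends (such as /chat/completions):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.output.Response;

import java.util.List;

// Hypothetical wrapper: a ChatLanguageModel backed by your own REST proxy.
public class LlmProxyChatModel implements ChatLanguageModel {

    private final ChatLanguageModel delegate;

    public LlmProxyChatModel(String proxyBaseUrl, String apiKey) {
        // Point the OpenAI-compatible client at the proxy instead of api.openai.com.
        // The proxy must serve the paths the client appends, e.g. /chat/completions.
        this.delegate = OpenAiChatModel.builder()
                .baseUrl(proxyBaseUrl) // e.g. your own ai-proxy endpoint
                .apiKey(apiKey)
                .build();
    }

    @Override
    public Response<AiMessage> generate(List<ChatMessage> messages) {
        // Insert any routing, auth, or logging logic here, then delegate.
        return delegate.generate(messages);
    }
}
```

Such a wrapper can then be passed anywhere langchain4j expects a ChatLanguageModel, while all URL handling stays in your own code.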
4 answers
carvr3hs1#
Hi, why do you want to use it? Making it public would make it very difficult for us to change this class.
1mrurvl12#
> Hi, why do you want to use it? Making it public would make it very difficult for us to change this class.

Thanks for the reply.
We use an llm-proxy REST service, a REST API that connects to different models.
The URLs look like:
https://xxxx//ai-proxy/aliyun/qwen
https://xxxx//ai-proxy/openai/gpt-4
But langchain4j automatically appends paths to the URL, e.g. ***/chat/completions and ***/aigc/text-generation/generation.
We created an OpenAiClient that modifies the URL and used Java reflection to replace it in OpenAiChatModel. QwenChatModel cannot be handled this way; do you have any other suggestions?
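For context, the reflection workaround described here follows roughly the generic pattern below. The field name passed in would be an assumption about OpenAiChatModel's internals and differs between langchain4j versions, and QwenChatModel has no equivalent field to override because the DashScope SDK composes its request paths internally:

```java
import java.lang.reflect.Field;

// Generic sketch of overriding a private field by reflection. Fragile:
// it depends on internal field names and may fail for final fields on
// newer JDKs.
public final class UrlOverride {

    public static void setPrivateField(Object target, String fieldName, Object value)
            throws ReflectiveOperationException {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true); // bypass private access
        field.set(target, value);
    }
}
```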
py49o6xq3#
@jiangsier-xyz could you please comment on this?