langchain4j [Feature] Support GPU for in-process embeddings

4dc9hkyq posted 5 months ago in: Other

All embedding models in https://github.com/langchain4j/langchain4j-embeddings use the com.microsoft.onnxruntime:onnxruntime dependency, which computes embeddings on the CPU. There is also com.microsoft.onnxruntime:onnxruntime_gpu, which can leverage the GPU. Users should be able to choose whether to run on the CPU or the GPU.
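For context, a minimal sketch of what selecting the GPU looks like at the ONNX Runtime Java level (this is not the langchain4j API; the model path and device index are placeholders, and it only works with the com.microsoft.onnxruntime:onnxruntime_gpu artifact on the classpath plus a CUDA-capable environment):

import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;

public class GpuSessionSketch {
    public static void main(String[] args) throws OrtException {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        try (OrtSession.SessionOptions options = new OrtSession.SessionOptions()) {
            // Request the CUDA execution provider on GPU 0; with the plain
            // onnxruntime (CPU-only) artifact this call would fail.
            options.addCUDA(0);
            try (OrtSession session = env.createSession("/path/to/model.onnx", options)) {
                System.out.println("Model inputs: " + session.getInputNames());
            }
        }
    }
}

The feature request essentially asks langchain4j to expose something equivalent, for example a builder option on the in-process embedding models that adds the CUDA execution provider before the session is created.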

wqnecbli1#

@langchain4j note that the com.microsoft.onnxruntime:onnxruntime dependency used by AllMiniLmL6V2EmbeddingModel breaks GraalVM, and I'm not able to create a native GraalVM binary. It first complains that onnxruntime is already on the GraalVM classpath and should be excluded from the application classpath using
buildArgs.add('-H:+AllowDeprecatedBuilderClassesOnImageClasspath')
but when I do this the GraalVM image is created, yet at runtime I get a ClassNotFoundException for an onnxruntime class. I suppose this comment could be carved out into a separate "Support GraalVM" ticket.

ttvkxqim2#

On sjivan's "Support GraalVM" issue: #1047, could you comment over there?

g0czyy6m4#

Since the llama.cpp API is OpenAI-compatible, you can use OpenAiEmbeddingModel from the langchain4j-open-ai module to connect to llama.cpp:

import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.openai.OpenAiEmbeddingModel;

// Point the OpenAI-compatible client at a local llama.cpp server
EmbeddingModel model = OpenAiEmbeddingModel.builder()
        .baseUrl("http://localhost:8080/v1") // llama.cpp server endpoint
        .apiKey("does not matter")           // llama.cpp ignores the API key
        .logRequests(true)
        .logResponses(true)
        .build();

model.embed("hello");
