text-generation-inference: OpenAI-format completions endpoint does not include finish reason or logprobs

htzpubme · posted 2 months ago

System Info

2.1.1

Information

  • Docker
  • The CLI directly

Tasks

  • An officially supported command
  • My own modifications

Reproduction

from text_generation import Client

# Query TGI through its native client
client_1 = Client(
    "http://localhost:8080",
)
response = client_1.generate(prompt="def helloworld(")
print(response)

# init the client but point it to TGI
client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")
text = ""
result = client.completions.create(
    model="text_generation_inference",
    prompt="def helloworld(",
    logprobs=True,
)
print(result)
# Leftovers from a streaming variant of the same call:
# text += chunk.text
# print(chunk.choices[0].logprobs, end="")
# print(chunk.usage)
print(text)

The result from text_generation is

generated_text='11111111111111111111' details=Details(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=20, seed=None, prefill=[], tokens=[Token(id=16, text='1', logprob=-2.1074219, special=False), Token(id=16, text='1', logprob=-0.8120117, special=False), Token(id=16, text='1', logprob=-0.1920166, special=False), Token(id=16, text='1', logprob=-0.13012695, special=False), Token(id=16, text='1', logprob=-0.08239746, special=False), Token(id=16, text='1', logprob=-0.06756592, special=False), Token(id=16, text='1', logprob=-0.059326172, special=False), Token(id=16, text='1', logprob=-0.045654297, special=False), Token(id=16, text='1', logprob=-0.045135498, special=False), Token(id=16, text='1', logprob=-0.035003662, special=False), Token(id=16, text='1', logprob=-0.031280518, special=False), Token(id=16, text='1', logprob=-0.03213501, special=False), Token(id=16, text='1', logprob=-0.026412964, special=False), Token(id=16, text='1', logprob=-0.025497437, special=False), Token(id=16, text='1', logprob=-0.026626587, special=False), Token(id=16, text='1', logprob=-0.024383545, special=False), Token(id=16, text='1', logprob=-0.025726318, special=False), Token(id=16, text='1', logprob=-0.02684021, special=False), Token(id=16, text='1', logprob=-0.020309448, special=False), Token(id=16, text='1', logprob=-0.019821167, special=False)], top_tokens=None, best_of_sequences=None)
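The per-token logprobs in the Details above can be aggregated client-side. A minimal sketch (using the first few logprob values as printed; with the real client they would come from `response.details.tokens[i].logprob`):

```python
import math

# Per-token logprobs copied from the Details output above (first four tokens).
token_logprobs = [-2.1074219, -0.8120117, -0.1920166, -0.13012695]

# The log-probability of the sequence is the sum of per-token logprobs;
# exponentiating gives the joint probability of those tokens.
total_logprob = sum(token_logprobs)
sequence_prob = math.exp(total_logprob)

print(f"total logprob: {total_logprob:.4f}")
print(f"sequence probability: {sequence_prob:.4f}")
```

This is the kind of downstream computation that becomes impossible when the OpenAI-format endpoint returns `logprobs=None`.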

The result from the OpenAI client is

Completion(id='', choices=[CompletionChoice(finish_reason='length', index=0, logprobs=None, text='LETLET\n )) ( (    \n\n\n\n\n\n\n\n\n\n(#)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n')], created=1720558694, model='deepseek-ai/deepseek-coder-6.7b-base', object='text_completion', system_fingerprint='2.1.1-sha-4dfdb48', usage=CompletionUsage(completion_tokens=100, prompt_tokens=6, total_tokens=106))
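Note that `choices[0].logprobs` comes back as `None` even though `logprobs=True` was passed. One detail worth checking: in OpenAI's legacy `/v1/completions` schema, `logprobs` is an integer (the number of top alternatives to return per token, up to 5), not the boolean flag used by the chat completions endpoint. A spec-conforming request body would look like this (a sketch, reusing the model name and prompt from the reproduction above):

```python
import json

# Request body per OpenAI's legacy /v1/completions schema: `logprobs` takes
# an integer (top-N alternatives per token, max 5), unlike the boolean flag
# accepted by the chat completions endpoint.
payload = {
    "model": "text_generation_inference",
    "prompt": "def helloworld(",
    "max_tokens": 20,
    "logprobs": 1,
}
print(json.dumps(payload, indent=2))
```

Whether TGI's completions route honors the integer form is a separate question, but it rules out a client-side schema mismatch when reporting the bug.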

Expected behavior

I would like the OpenAI-compatible endpoint to also return log probabilities, so that they can be used in completion use cases.

8xiog9wr 1#

Hi 👋
Thanks for the report! Could you be more specific in your script? For example, in

client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")

where do you import the OpenAI client from? If you mean the official OpenAI client, I think opening an issue there would be the right move 👍
