Ollama: problem with the response returned by the OpenAI-compatible API

zzzyeukh · posted 2 months ago in Other

What is the problem?
The response output is a tuple:
('The sentiment of "Hi Amit, thanks for the thoughtful birthday card!" is positive. The use of the word "thoughtful" suggests that the speaker appreciates the effort and care put into the card, and the exclamation mark at the end conveys a sense of gratitude and warmth. Overall, the sentiment is friendly, sincere, and appreciative.', 0)

What did you expect to see?

Just the response text...
The sentiment of "Hi Amit, thanks for the thoughtful birthday card!" is positive. The use of the word "thoughtful" suggests that the speaker appreciates the effort and care put into the card, and the exclamation mark at the end conveys a sense of gratitude and warmth. Overall, the sentiment is friendly, sincere, and appreciative.
For now I can access it with:

print(response[0]) or
print(response[0].strip())
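
Since llama_openai returns a (text, error_code) pair (see the function below), another option, as a minimal sketch, is to unpack the pair instead of indexing it:

text, error = llama_openai(prompt)
print(text)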

Steps to reproduce

Run this code:

def llama_openai(prompt,
                 add_inst=True,    # True by default; set it to False if you use a base model
                 model="llama2",
                 temperature=0.0,  # OpenAI's default is 1.0 or 0.7 depending on the model; OpenAI accepts 0.0 to 2.0, llama2 0.0 to 1.0
                 max_tokens=1024,
                 verbose=False
                 ):

    if add_inst:
        prompt = f"[INST]{prompt}[/INST]"

    if verbose:
        print(f"Prompt:\n{prompt}\n")
        print(f"model: {model}")

    error = 0

    try:
        response = client.chat.completions.create(
            messages=[
                {
                    'role': 'user',
                    'content': prompt,
                }
            ],
            model=model,
            max_tokens=max_tokens,
            temperature=temperature
        )

    except openai.APIError as e:
        # Handle API error here, e.g. retry or log
        print(f"Llama2: OpenAI API returned an API Error: {e}")
        error = 1
    except openai.APIConnectionError as e:
        # Handle connection error here
        print(f"Llama2: Failed to connect to OpenAI API: {e}")
        error = 2
    except openai.RateLimitError as e:
        # Handle rate limit error (we recommend using exponential backoff)
        print(f"Llama2: OpenAI API request exceeded rate limit: {e}")
        error = 3
    except ConnectionError as e:
        # Handle ConnectionError here
        print(f"Llama2: Connection refused; the server may not be running, there may be a network problem, or the IP address and port may be wrong: {e}")
        error = 4
    except Exception as e:
        # Handle all other exceptions
        print(f"Llama2: Unrecognized general error: {e}")
        error = 5

    # If no exception was raised, the else branch runs: return the model's answer and the error code, which should be zero
    else:
        return (response.choices[0].message.content, error)

    # If an exception was raised, return a default value and the error code
    return ("Default Value", error)
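
The function assumes an openai client already configured to talk to the local Ollama server. A minimal sketch of that setup (the base_url and api_key values here are assumptions, not part of the original report):

import openai

# Assumption: Ollama's OpenAI-compatible endpoint on the default local port;
# the api_key is required by the client library but is not checked by Ollama.
client = openai.OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")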

Then run a test:

prompt = """
What is the sentiment of:
Hi Amit, thanks for the thoughtful birthday card!
"""
response = llama_openai(prompt)
print(response)

And then, somehow, it returns this value:
('The sentiment of "Hi Amit, thanks for the thoughtful birthday card!" is positive. The use of the word "thoughtful" suggests that the speaker appreciates the effort and care put into the card, and the exclamation mark at the end conveys a sense of gratitude and warmth. Overall, the sentiment is friendly, sincere, and appreciative.', 0)
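
That tuple does not come from Ollama itself: the success path of llama_openai ends with return (response.choices[0].message.content, error), so printing the whole return value shows the (text, error_code) pair. A minimal Python-only sketch of the same behavior:

def f():
    return ("some text", 0)  # a (content, error_code) pair, like llama_openai

print(f())     # ('some text', 0)  <- the tuple repr seen above
print(f()[0])  # some text         <- just the string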

Was the problem introduced by a recent change?

I think so; it didn't happen with ollama v1.0.28... but I'm not sure.

OS

Linux

Architecture

amd64

Platform

  • No response

Ollama version

0.1.29

GPU

  • No response

GPU info

  • No response

CPU

  • No response

Other software

  • No response
raogr8fs #1

Hi Amit, nothing has changed in Ollama that should cause this behavior. Do you also see this when you use the CLI?
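
For comparison, the same prompt can be sent straight from the Ollama CLI (assuming the llama2 model is already pulled locally):

ollama run llama2 "What is the sentiment of: Hi Amit, thanks for the thoughtful birthday card!"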

hgb9j2n6 #2

Are you writing to me? You wrote: "Hi, Amit." I don't know who Amit is (maybe it's some English expression I don't understand). I only use Jupyter notebooks, in the course and for the tests.
