Routine checks
- I have confirmed there is no similar existing issue
- I have confirmed I am upgraded to the latest version
- I have read the project README in full and confirmed the current version cannot meet this need
- I am willing to follow up on this issue and to help with testing and feedback
- I understand and accept the above, and I understand that the maintainers' time is limited; issues that do not follow the rules may be ignored or closed directly
Feature description
Along with opening up gemini-1.5-pro-latest, Google has also opened access to gemini-1.0-ultra-latest. It would be great to have it supported!
Use case
The list models endpoint returns the following:
{
  "models": [
    {
      "name": "models/chat-bison-001",
      "version": "001",
      "displayName": "PaLM 2 Chat (Legacy)",
      "description": "A legacy text-only model optimized for chat conversations",
      "inputTokenLimit": 4096,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateMessage",
        "countMessageTokens"
      ],
      "temperature": 0.25,
      "topP": 0.95,
      "topK": 40
    },
    {
      "name": "models/text-bison-001",
      "version": "001",
      "displayName": "PaLM 2 (Legacy)",
      "description": "A legacy model that understands text and generates text as an output",
      "inputTokenLimit": 8196,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateText",
        "countTextTokens",
        "createTunedTextModel"
      ],
      "temperature": 0.7,
      "topP": 0.95,
      "topK": 40
    },
    {
      "name": "models/embedding-gecko-001",
      "version": "001",
      "displayName": "Embedding Gecko",
      "description": "Obtain a distributed representation of a text.",
      "inputTokenLimit": 1024,
      "outputTokenLimit": 1,
      "supportedGenerationMethods": [
        "embedText",
        "countTextTokens"
      ]
    },
    {
      "name": "models/gemini-1.0-pro",
      "version": "001",
      "displayName": "Gemini 1.0 Pro",
      "description": "The best model for scaling across a wide range of tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-1.0-pro-001",
      "version": "001",
      "displayName": "Gemini 1.0 Pro 001 (Tuning)",
      "description": "The best model for scaling across a wide range of tasks. This is a stable model that supports tuning.",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens",
        "createTunedModel"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-1.0-pro-latest",
      "version": "001",
      "displayName": "Gemini 1.0 Pro Latest",
      "description": "The best model for scaling across a wide range of tasks. This is the latest model.",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-1.0-pro-vision-latest",
      "version": "001",
      "displayName": "Gemini 1.0 Pro Vision",
      "description": "The best image understanding model to handle a broad range of applications",
      "inputTokenLimit": 12288,
      "outputTokenLimit": 4096,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.4,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/gemini-1.0-ultra-latest",
      "version": "001",
      "displayName": "Gemini 1.0 Ultra",
      "description": "The most capable model for highly complex tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/gemini-1.5-pro-latest",
      "version": "001",
      "displayName": "Gemini 1.5 Pro",
      "description": "Mid-size multimodal model that supports up to 1 million tokens",
      "inputTokenLimit": 1048576,
      "outputTokenLimit": 8192,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 2,
      "topP": 0.4,
      "topK": 32
    },
    {
      "name": "models/gemini-pro",
      "version": "001",
      "displayName": "Gemini 1.0 Pro",
      "description": "The best model for scaling across a wide range of tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-pro-vision",
      "version": "001",
      "displayName": "Gemini 1.0 Pro Vision",
      "description": "The best image understanding model to handle a broad range of applications",
      "inputTokenLimit": 12288,
      "outputTokenLimit": 4096,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.4,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/gemini-ultra",
      "version": "001",
      "displayName": "Gemini 1.0 Ultra",
      "description": "The most capable model for highly complex tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/embedding-001",
      "version": "001",
      "displayName": "Embedding 001",
      "description": "Obtain a distributed representation of a text.",
      "inputTokenLimit": 2048,
      "outputTokenLimit": 1,
      "supportedGenerationMethods": [
        "embedContent"
      ]
    },
    {
      "name": "models/aqa",
      "version": "001",
      "displayName": "Model that performs Attributed Question Answering.",
      "description": "Model trained to return answers to questions that are grounded in provided sources, along with estimating answerable probability.",
      "inputTokenLimit": 7168,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateAnswer"
      ],
      "temperature": 0.2,
      "topP": 1,
      "topK": 40
    }
  ]
}
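A listing like the one above can be filtered programmatically to find which entries are usable for chat-style calls: a model is relevant here only if its supportedGenerationMethods include generateContent. The sketch below runs against a small excerpt of the response (the excerpt and schema are taken from the dump above; no network call is made).

```python
import json

# Excerpt of the ListModels response above, trimmed to the relevant fields
listing = json.loads("""
{
  "models": [
    {"name": "models/gemini-1.0-ultra-latest",
     "supportedGenerationMethods": ["generateContent", "countTokens"]},
    {"name": "models/embedding-001",
     "supportedGenerationMethods": ["embedContent"]}
  ]
}
""")

# Keep only models that accept generateContent (i.e. chat/completion models)
chat_models = [m["name"] for m in listing["models"]
               if "generateContent" in m["supportedGenerationMethods"]]
print(chat_models)
```

Applied to the full response, this picks out gemini-1.0-ultra-latest alongside the other Gemini chat models while skipping the embedding and legacy PaLM entries.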
2 Answers
jucafojl1#
I gave it a try, and it seems that simply adding gemini-ultra to the model list is enough to use it directly.
rpppsulh2#
I also tried it, and adding it as a custom model does indeed work. Model name: gemini-1.0-ultra-latest
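To sanity-check a custom model name as the answers above suggest, one can build a generateContent request against the public v1beta endpoint. The helper below is a hypothetical sketch (the function name and its use of string formatting are mine); it only constructs the URL and request body in the shape the API expects and does not send anything, so the API key handling (e.g. a ?key= query parameter) is left out.

```python
def build_generate_request(model: str, prompt: str) -> tuple[str, dict]:
    """Hypothetical helper: build the v1beta generateContent URL and JSON body.

    `model` is a fully qualified name such as "models/gemini-1.0-ultra-latest".
    Authentication (API key) is intentionally omitted from this sketch.
    """
    url = f"https://generativelanguage.googleapis.com/v1beta/{model}:generateContent"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, body

url, body = build_generate_request("models/gemini-1.0-ultra-latest", "Hello")
print(url)
```

If the request succeeds against this URL, the custom model name is valid; a 404 from the API would indicate the account does not actually have access to the model.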