5 answers
rjee0c151#
Hi! We've received your issue; please be patient while we arrange for a technician to answer it as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version information, and error messages. You may also look for an answer in the API documentation, FAQ, past GitHub issues, and the AI community. Have a nice day!
ippsafx72#
Paddle's embedding layer does no hashing: for an input id = x it directly reads row x of the 2-D embedding table, so on a single machine the lookup can indeed become slow when the embedding table is very large.
If the vocabulary range is too large, you can set the number of rows to a relatively small value N and hash the ids into [0, N) yourself before querying the embedding table; a sketch is shown below.
Also, the fluid.layers.embedding API has been deprecated; use paddle.nn.Embedding instead.
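A minimal sketch of that workaround, assuming a hypothetical table size N and embedding dimension; the modulo hash and the `lookup()` helper are illustrative choices, not part of Paddle's API:

```python
import paddle

# Hypothetical sizes for illustration: the real id range may be huge,
# but we only allocate N rows and fold ids into [0, N) with a hash.
N = 100_000          # rows actually stored in the embedding table (assumption)
EMB_DIM = 64         # embedding dimension (assumption)

embedding = paddle.nn.Embedding(num_embeddings=N, embedding_dim=EMB_DIM)

def lookup(raw_ids):
    """Hash arbitrary integer ids into [0, N) and query the small table."""
    hashed = [hash(i) % N for i in raw_ids]      # simple modulo hash; collisions are possible
    ids = paddle.to_tensor(hashed, dtype='int64')
    return embedding(ids)                        # shape: [len(raw_ids), EMB_DIM]

# Ids far beyond N still map into the table.
vectors = lookup([7, 123_456_789, 3_000_000_000])
print(vectors.shape)   # [3, 64]
```

Note the trade-off: hashing bounds the table size but introduces collisions, so distinct ids may share one embedding row.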
uqjltbpv3#
@seemingwang Hi, I don't quite follow the statement that "on a single machine the lookup becomes slow when the embedding table is very large". If "an input id = x directly reads row x of the 2-D embedding table", the lookup should be an O(1) operation, so why would the latency depend on the size of the embedding table?
t1qtbnec4#
It's not only a matter of algorithmic complexity. If N is too large, the embedding table itself becomes very large; once it no longer fits in fast storage, cache hit rates drop and performance suffers, as the rough calculation below illustrates.
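As a rough illustration (the concrete numbers are assumptions, not from this thread): a float32 table occupies about rows × dim × 4 bytes, so the row count N directly controls whether lookups stay in cache or spill to main memory or beyond.

```python
# Back-of-envelope size of a float32 embedding table: rows * dim * 4 bytes.
def table_size_gb(rows, dim, bytes_per_value=4):
    return rows * dim * bytes_per_value / 1024**3

print(f"{table_size_gb(500_000, 128):.3f} GB")      # a few hundred thousand rows -> ~0.24 GB
print(f"{table_size_gb(100_000_000, 128):.1f} GB")  # 100M rows -> ~48 GB, far beyond any CPU cache
```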
kfgdxczn5#
@seemingwang So querying the embedding table involves cache behaviour? Is there a concrete reference range for "N is too large"? Would a table on the order of a few hundred thousand rows count as "too large"?