Apache Pig - how to get the number of matching elements between multiple bags?

uklbhaso · posted 2021-06-03 in Hadoop
Follow (0) | Answers (3) | Views (372)

I am a new Apache Pig user and I have a problem to solve.
I want to build a small search engine with Apache Pig. The idea is simple: I have a file that is the concatenation of several documents (one document per line). Here is an example with three documents:

1,word1 word4 word2 word1
2,word2 word6 word1 word5 word3
3,word1 word3 word4 word5

Then, for each document, I build a bag of words using the following lines of code:

docs = LOAD '$documents' USING PigStorage(',') AS (id:int, line:chararray);
B = FOREACH docs GENERATE line;
C = FOREACH B GENERATE TOKENIZE(line) as gu;

Next, I remove the duplicate entries from each bag:

filtered = FOREACH C {
    uniq = DISTINCT gu;
    GENERATE uniq;
}

Here is the result of this code:

DUMP filtered;

({(word1), (word4),  (word2)})
({(word2), (word6),  (word1), (word5), (word3)})
({(word1), (word3),  (word4), (word5)})

So for each document I have the bag of words I want.
Now, consider the user query, which is also given as a file:

word2 word7 word5

I turn the query into a bag of words as well:

query = LOAD '$query' AS (line_query:chararray);
bag_query = FOREACH query GENERATE TOKENIZE(line_query) AS quer;

DUMP bag_query;

The result is as follows:

({(word2), (word7), (word5)})

Now, my problem: I want to get the number of matches between the query and each document. In this example, I would like the following output:

1
2
1

I tried joining the two bags, but without success.
Can you help me?
Thank you.

lxkprmvk #1

Try using SetIntersect (a DataFu UDF: https://github.com/linkedin/datafu) together with SIZE to get the number of elements in the resulting bag.
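A minimal sketch of that suggestion, assuming the filtered and bag_query relations built in the question and a placeholder jar path (the next answer below gives a complete, tested script):

register /path/to/datafu.jar;   -- placeholder path to the DataFu jar
define SetIntersect datafu.pig.sets.SetIntersect();

-- pair each document bag with the single query bag
paired = CROSS filtered, bag_query;

-- SetIntersect expects its input bags to be sorted, so sort both first
matches = FOREACH paired {
    doc_sorted   = ORDER uniq BY token;
    query_sorted = ORDER quer BY token;
    GENERATE SIZE(SetIntersect(doc_sorted, query_sorted)) AS cnt;
}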

1wnzp6jl #2

As sneumann pointed out, you can use DataFu's SetIntersect for this.
Building on your example (note that word7 has been added to document 2 below), given these documents:

1,word1 word4 word2 word1
2,word2 word6 word1 word5 word3 word7
3,word1 word3 word4 word5

And this query:

word2 word7 word5

Then the following code gives you what you want:

define SetIntersect datafu.pig.sets.SetIntersect();

docs = LOAD 'docs' USING PigStorage(',') AS (id:int, line:chararray);
B = FOREACH docs GENERATE id, line;
C = FOREACH B GENERATE id, TOKENIZE(line) as gu;

filtered = FOREACH C {
  uniq = DISTINCT gu;
  GENERATE id, uniq;
}

query = LOAD 'query' AS (line_query:chararray);
bag_query = FOREACH query GENERATE TOKENIZE(line_query) AS query;
-- sort the bag of tokens, since SetIntersect requires it
bag_query = FOREACH bag_query {
  query_sorted = ORDER query BY token;
  GENERATE query_sorted;
}

result = FOREACH filtered {
  -- sort the tokens, since SetIntersect requires it
  tokens_sorted = ORDER uniq BY token;
  GENERATE id, 
           SIZE(SetIntersect(tokens_sorted,bag_query.query_sorted)) as cnt;
}

DUMP result;

The resulting values (document 2 now matches three query terms because of the added word7):

(1,1)
(2,3)
(3,1)

Here is a fully working example that you can paste into DataFu's unit tests for SetIntersect:

/**
register $JAR_PATH

define SetIntersect datafu.pig.sets.SetIntersect();

docs = LOAD 'docs' USING PigStorage(',') AS (id:int, line:chararray);
B = FOREACH docs GENERATE id, line;
C = FOREACH B GENERATE id, TOKENIZE(line) as gu;

filtered = FOREACH C {
  uniq = DISTINCT gu;
  GENERATE id, uniq;
}

query = LOAD 'query' AS (line_query:chararray);
bag_query = FOREACH query GENERATE TOKENIZE(line_query) AS query;
-- sort the bag of tokens, since SetIntersect requires it
bag_query = FOREACH bag_query {
  query_sorted = ORDER query BY token;
  GENERATE query_sorted;
}

result = FOREACH filtered {
  -- sort the tokens, since SetIntersect requires it
  tokens_sorted = ORDER uniq BY token;
  GENERATE id, 
           SIZE(SetIntersect(tokens_sorted,bag_query.query_sorted)) as cnt;
}

DUMP result;

 */
@Multiline
private String setIntersectTestExample;

@Test
public void setIntersectTestExample() throws Exception
{    
  PigTest test = createPigTestFromString(setIntersectTestExample);    

  writeLinesToFile("docs", 
                   "1,word1 word4 word2 word1",
                   "2,word2 word6 word1 word5 word3 word7",
                   "3,word1 word3 word4 word5");

  writeLinesToFile("query", 
                   "word2 word7 word5");

  test.runScript();

  super.getLinesForAlias(test, "filtered");
  super.getLinesForAlias(test, "query");
  super.getLinesForAlias(test, "result");
}

If you have any other use cases like this one, I'd love to hear about them :) We are always looking to add more useful UDFs to DataFu.

nafvub8i #3

If you would prefer not to use any UDFs, you can do it by pivoting the bag and keeping everything SQL-style.

docs = LOAD '/input/search.dat' USING PigStorage(',') AS (id:int, line:chararray);
C = FOREACH docs GENERATE id, TOKENIZE(line) as gu;
pivoted = FOREACH C {
    uniq = DISTINCT gu;
    GENERATE id, FLATTEN(uniq) as word;
};
filtered = FILTER pivoted BY word MATCHES '(word2|word7|word5)';
--dump filtered;
count_id_matched = FOREACH (GROUP filtered BY id) GENERATE group as id, COUNT(filtered) as count;

dump count_id_matched;

count_word_matched_in_docs = FOREACH (GROUP filtered BY word) GENERATE group as word, COUNT(filtered) as count;

dump count_word_matched_in_docs;
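
The regex above hard-codes the query terms. As a sketch, the same pivot-and-count approach could read the query from a file instead (the query path below is a placeholder, and the query is assumed to contain no duplicate words):

query = LOAD '/input/query.dat' AS (line_query:chararray);
query_words = FOREACH query GENERATE FLATTEN(TOKENIZE(line_query)) AS word;

-- keep only the document words that also appear in the query
joined = JOIN pivoted BY word, query_words BY word;

count_id_matched_from_query = FOREACH (GROUP joined BY pivoted::id)
                              GENERATE group AS id, COUNT(joined) AS count;

dump count_id_matched_from_query;

For the three documents in the question this should give the same counts as the regex version: (1,1), (2,2), (3,1).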
