Grouping data in a bag by identical values in Pig

gblwokeq asked on 2021-06-03 in Hadoop

I created the Pig script below to filter sentences mentioning movie titles (taken from a predefined data file of movie titles) out of a collection of web documents (Common Crawl), apply sentiment analysis to those sentences, and group the resulting sentiments by movie.

register ../commoncrawl-examples/lib/*.jar; 
set mapred.task.timeout= 1000;
register ../commoncrawl-examples/dist/lib/commoncrawl-examples-1.0.1-HM.jar;
register ../dist/lib/movierankings-1.jar;
register ../lib/piggybank.jar;
register ../lib/stanford-corenlp-full-2014-01-04/stanford-corenlp-3.3.1.jar;
register ../lib/stanford-corenlp-full-2014-01-04/stanford-corenlp-3.3.1-models.jar;
register ../lib/stanford-corenlp-full-2014-01-04/ejml-0.23.jar;
register ../lib/stanford-corenlp-full-2014-01-04/joda-time.jar;
register ../lib/stanford-corenlp-full-2014-01-04/jollyday.jar;
register ../lib/stanford-corenlp-full-2014-01-04/xom.jar;

DEFINE IsNotWord com.moviereviewsentimentrankings.IsNotWord;
DEFINE IsMovieDocument com.moviereviewsentimentrankings.IsMovieDocument;
DEFINE ToSentenceMoviePairs com.moviereviewsentimentrankings.ToSentenceMoviePairs;
DEFINE ToSentiment com.moviereviewsentimentrankings.ToSentiment;
DEFINE MoviesInDocument com.moviereviewsentimentrankings.MoviesInDocument;

DEFINE SequenceFileLoader org.apache.pig.piggybank.storage.SequenceFileLoader();

-- LOAD pages, movies and words
pages = LOAD '../data/textData-*' USING SequenceFileLoader as (url:chararray, content:chararray);
movies_fltr_grp = LOAD '../data/movie_fltr_grp_2/part-*' as (group: chararray,movies_fltr: {(movie: chararray)});

-- FILTER pages containing movie
movie_pages = FILTER pages BY IsMovieDocument(content, movies_fltr_grp.movies_fltr);

-- SPLIT pages containing movie in sentences and create movie-sentence pairs
movie_sentences = FOREACH movie_pages GENERATE flatten(ToSentenceMoviePairs(content, movies_fltr_grp.movies_fltr)) as (content:chararray, movie:chararray);

-- Calculate sentiment for each movie-sentence pair
movie_sentiment = FOREACH movie_sentences GENERATE flatten(ToSentiment(movie, content)) as (movie:chararray, sentiment:int);

-- GROUP movie-sentiment pairs by movie
movie_sentiment_grp_tups = GROUP movie_sentiment BY movie;

-- Reformat and print movie-sentiment pairs
movie_sentiment_grp = FOREACH movie_sentiment_grp_tups GENERATE group, movie_sentiment.sentiment AS sentiments:{(sentiment: int)};
describe movie_sentiment_grp;

A test run on a small subset of the web crawl showed that the script successfully gives me pairs of a movie title and a bag of integers (from 1 to 5, representing very negative, negative, neutral, positive and very positive). As a final step, I would like to transform this data into pairs of a movie title and a bag of tuples holding each distinct integer for that movie title together with its count. The describe movie_sentiment_grp at the end of the script returns:

movie_sentiment_grp: {group: chararray,sentiments: {(sentiment: int)}}

So basically I probably need to iterate over every element of movie_sentiment_grp, group the sentiments bag into groups of identical values, and then use the COUNT() function to get the number of elements in each group. However, I could not find anything on how to group a bag of integers into groups of identical values. Does anyone know how to do this?
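Concretely, the goal is to turn a row of the first shape below into the second (the movie title and the numbers are made up purely for illustration):

(SomeMovie,{(4),(4),(5),(2),(4)})
(SomeMovie,{(4,3),(5,1),(2,1)})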
Dummy solution:

movie_sentiment_grp_cnt = FOREACH movie_sentiment_grp{
    sentiments_grp = GROUP sentiments BY ?;
}

cu6pst1q1#

You're on the right track. movie_sentiment_grp is in the right format, and a FOREACH is the right construct, except that you cannot use a GROUP inside it. The solution is a UDF. Something like this:
myudfs.py


#!/usr/bin/python

@outputSchema('sentiments: {(sentiment:int, count:int)}')
def count_sentiments(bag):
    # Each element of the input bag arrives as a tuple such as (sentiment,),
    # so unpack the integer before counting occurrences of each value.
    res = {}
    for (s,) in bag:
        res[s] = res.get(s, 0) + 1
    return res.items()

This UDF is then used like this:

Register 'myudfs.py' using jython as myfuncs;

movie_sentiment_grp_cnt = FOREACH movie_sentiment_grp 
                          GENERATE group, myfuncs.count_sentiments(sentiments) ;
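
With the UDF registered this way, the output should take the shape declared in its @outputSchema; a describe of the result (a sketch, not verified on a live cluster) would look roughly like:

describe movie_sentiment_grp_cnt;
-- movie_sentiment_grp_cnt: {group: chararray, sentiments: {(sentiment: int, count: int)}}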

qvsjd97n2#

Check out the CountEach UDF from Apache DataFu. Given a bag, it produces a new bag of the distinct tuples, with a count appended to each corresponding tuple.
The example from the documentation should illustrate this:

DEFINE CountEachFlatten datafu.pig.bags.CountEach('flatten');

-- input: 
-- ({(A),(A),(C),(B)})
input = LOAD 'input' AS (B: bag {T: tuple(alpha:CHARARRAY, numeric:INT)});

-- output_flatten: 
-- ({(A,2),(C,1),(B,1)})
output_flatten = FOREACH input GENERATE CountEachFlatten(B);

For your case:

DEFINE CountEachFlatten datafu.pig.bags.CountEach('flatten');

movie_sentiment_grp_cnt = FOREACH movie_sentiment_grp GENERATE
     group,
     CountEachFlatten(sentiments);
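
Note that CountEach ships in the DataFu jar, which has to be registered before the DEFINE above, just like the other jars at the top of the script. A sketch, where the jar path and version are placeholders rather than anything taken from the original script:

-- placeholder path/version; point this at whatever DataFu build you actually have
register ../lib/datafu-1.2.0.jar;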
