Pig script to split a large txt file into parts based on a specified word

atmip9wb  posted on 2021-05-29 in Hadoop

I am trying to build a Pig script that takes in a textbook file and separates it into chapters, then compares the words in each chapter and returns only the words that appear in every chapter, along with their counts. The chapters are easily delimited by the "CHAPTER X" headings.
Here is what I have so far:

  lines = LOAD '../../Alice.txt' AS (line:chararray);
  lineswithoutspecchars = FOREACH lines GENERATE REPLACE(line,'([^a-zA-Z\\s]+)','') as line;
  words = FOREACH lineswithoutspecchars GENERATE FLATTEN(TOKENIZE(line)) as word;
  grouped = GROUP words BY word;
  wordcount = FOREACH grouped GENERATE group, COUNT(words);
  DUMP wordcount;

Sorry if this question is much simpler than what is usually asked on Stack Overflow; I searched around, but I may not have been using the right keywords. I am brand new to Pig and am trying to learn it for a new work assignment.
Thanks in advance!


yshpjwxd1#

It is a bit long, but you will get the result. You can trim the unnecessary relations to suit your file; appropriate comments are provided in the script.
Input file:

  Pig does not know whether integer values in baseball are stored as ASCII strings, Java
  serialized values, binary-coded decimal, or some other format. So it asks the load func-
  tion, because it is that functions responsibility to cast bytearrays to other types. In
  general this works nicely, but it does lead to a few corner cases where Pig does not know
  how to cast a bytearray. In particular, if a UDF returns a bytearray, Pig will not know
  how to perform casts on it because that bytearray is not generated by a load function.
  CHAPTER - X
  In a strongly typed computer language (e.g., Java), the user must declare up front the
  type for all variables. In weakly typed languages (e.g., Perl), variables can take on values
  of different type and adapt as the occasion demands.
  CHAPTER - X
  In this example, remember we are pretending that the values for base_on_balls and
  ibbs turn out to be represented as integers internally (that is, the load function con-
  structed them as integers). If Pig were weakly typed, the output of unintended would
  be records with one field typed as an integer. As it is, Pig will output records with one
  field typed as a double. Pig will make a guess and then do its best to massage the data
  into the types it guessed.

Pig script:

  A = LOAD 'file' as (line:chararray);
  B = FOREACH A GENERATE REPLACE(line,'([^a-zA-Z\\s]+)','') as line;
  -- We need to split on CHAPTER X, but the load above gives us one tuple per line, so
  -- group everything and convert that bag to a string, which yields a single tuple with _ as the delimiter.
  C = GROUP B ALL;
  D = FOREACH C GENERATE BagToString(B) as (line:chararray);
  -- Now there are no commas left, so convert our delimiter CHAPTER X to a comma. We do this because,
  -- when we pass the line to TOKENIZE, it will then split into separate chapter columns that we can RANK.
  -- (\\s+ tolerates the extra space left behind where the dash in "CHAPTER - X" was stripped out by B.)
  E = FOREACH D GENERATE REPLACE(line,'_CHAPTER\\s+X_',',') AS (line:chararray);
  F = FOREACH E GENERATE REPLACE(line,'_',' ') AS (line:chararray); -- remove the delimiter created by BagToString
  -- Create separate columns, one per chapter.
  G = FOREACH F GENERATE FLATTEN(TOKENIZE(line,',')) AS (line:chararray);
  -- We need to rank each chapter so that counting each word per chapter becomes easy.
  H = RANK G;
  J = FOREACH H GENERATE rank_G,FLATTEN(TOKENIZE(line)) as (line:chararray);
  J1 = GROUP J BY (rank_G, line);
  J2 = FOREACH J1 GENERATE COUNT(J) AS (cnt:long),FLATTEN(group.line) as (word:chararray),FLATTEN(group.rank_G) as (rnk:long);
  -- J2 now has no duplicate word within a chapter.
  -- So if we group it by word and filter for a count greater than 2, we are sure the word is present in all chapters.
  J3 = GROUP J2 BY word;
  J4 = FOREACH J3 GENERATE SUM(J2.cnt) AS (sumval:long),COUNT(J2) as (cnt:long),FLATTEN(group) as (word:chararray);
  J5 = FILTER J4 BY cnt > 2;
  J6 = FOREACH J5 GENERATE word,sumval;
  dump J6;
  -- Result as (word, count across chapters).
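One caveat about the filter: cnt > 2 works only because this sample input has exactly three chapters (two CHAPTER - X separators). Below is a minimal sketch, not part of the original answer, of how the threshold could be derived from the data instead of hard-coded. It reuses the relations G (one tuple per chapter) and J4 from the script above; the relation names chapters, chapter_count, J4_with_n, J5_dynamic and J6_dynamic are illustrative.

  -- Count the chapters: G holds one tuple per chapter after the TOKENIZE on ','.
  chapters      = GROUP G ALL;
  chapter_count = FOREACH chapters GENERATE COUNT(G) AS n;
  -- Attach the single-row chapter count to every word summary, then keep only the
  -- words whose number of distinct chapters (cnt) equals the total number of chapters.
  J4_with_n  = CROSS J4, chapter_count;
  J5_dynamic = FILTER J4_with_n BY cnt == n;
  J6_dynamic = FOREACH J5_dynamic GENERATE word, sumval;
  dump J6_dynamic;

The CROSS is cheap here because chapter_count contains a single tuple; it simply broadcasts the chapter count to every row of J4.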

Output:

  (a,8)
  (In,5)
  (as,6)
  (the,9)
  (values,4)
