I need to parse the log file below. In my script, a record should start at a timestamp such as 150324-21:06:32:937378 and run until the next timestamp. I tried using the library
org.apache.pig.piggybank.storage.MyRegExLoader
to load records with a custom format.
150324-21:06:32:937378 [mod=STB, lvl=INFO ]
top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72
Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie
Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st
Mem: 317108k total, 232588k used, 84520k free, 25960k buffers
Swap: 0k total, 0k used, 0k free, 110820k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19122 root 20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver
5859 root 20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer
150324-21:06:32:937378 [mod=STB, lvl=INFO ]
top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72
Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie
Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st
Mem: 317108k total, 232588k used, 84520k free, 25960k buffers
Swap: 0k total, 0k used, 0k free, 110820k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19122 root 20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver
5859 root 20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer
Here is the relevant snippet I am using:
raw_logs = LOAD './main*/*top_log*' USING org.apache.pig.piggybank.storage.MyRegExLoader('(?m)(?s)\\d*-\\d{2}:\\d{2}:\\d{2}\\:\\d*.*') AS line:chararray;
DUMP raw_logs;
Here is my output:
(150325-05:47:26:253050 [mod=STB, lvl=INFO ])
(150325-05:57:27:294069 [mod=STB, lvl=INFO ])
(150325-06:07:28:235302 [mod=STB, lvl=INFO ])
(150325-06:17:29:124282 [mod=STB, lvl=INFO ])
(150325-06:27:30:036264 [mod=STB, lvl=INFO ])
(150325-06:37:30:941804 [mod=STB, lvl=INFO ])
(150325-06:47:31:909712 [mod=STB, lvl=INFO ])
It should instead be two tuples:
(150324-21:06:32:937378 [mod=STB, lvl=INFO ]
top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72
Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie
Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st
Mem: 317108k total, 232588k used, 84520k free, 25960k buffers
Swap: 0k total, 0k used, 0k free, 110820k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19122 root 20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver
5859 root 20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer)
(150324-21:06:32:937378 [mod=STB, lvl=INFO ]
top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72
Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie
Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st
Mem: 317108k total, 232588k used, 84520k free, 25960k buffers
Swap: 0k total, 0k used, 0k free, 110820k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19122 root 20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver
5859 root 20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer)
Please let me know what regular expression I can use so that my script treats everything from the start of one timestamp up to the start of the next timestamp as a single record.
2 Answers

Answer 1:
Try the following regular expression with a matching group:
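The answer's original pattern is not preserved here. As an assumption (not necessarily what the answerer wrote), one pattern that matches from one timestamp up to, but not including, the next is a dotall lazy match with a lookahead; the class and method names below are hypothetical, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical standalone demo of the regex idea (outside Pig):
// each record runs from a YYMMDD-HH:MM:SS:microseconds timestamp
// until just before the next timestamp, which the lookahead leaves
// unconsumed for the following match.
public class TimestampRecordSplitter {
    private static final Pattern RECORD = Pattern.compile(
        "(?s)\\d{6}-\\d{2}:\\d{2}:\\d{2}:\\d{6}.*?"
        + "(?=\\d{6}-\\d{2}:\\d{2}:\\d{2}:\\d{6}|\\z)");

    public static List<String> split(String log) {
        List<String> records = new ArrayList<>();
        Matcher m = RECORD.matcher(log);
        while (m.find()) {
            records.add(m.group().trim());
        }
        return records;
    }

    public static void main(String[] args) {
        String log = "150324-21:06:32:937378 [mod=STB, lvl=INFO ]\n"
                   + "top - 21:06:33 up 3:41, 0 users\n"
                   + "150324-21:16:33:000001 [mod=STB, lvl=INFO ]\n"
                   + "top - 21:16:34 up 3:51, 0 users\n";
        System.out.println(split(log).size()); // prints 2
    }
}
```

Note that MyRegExLoader matches line by line, so even a correct multi-line pattern will not make it span records; this demo only shows the pattern against the whole file content.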
Answer 2:
I don't think this is possible with Pig alone. You will need a custom record reader that uses a regex to split the file on the timestamp that begins each record.
I hope the following link helps you write one: https://hadoopi.wordpress.com/2013/05/31/custom-recordreader-processing-string-pattern-delimited-records/
You may need to adjust some of its logic to get the timestamp on each line.
The result would look like this:
top - 02:10:39 up 0 min, 0 users, load average: 2.26, 0.54, 0.18 150323-02:10:37:619962 [mod=stb, lvl=info]
tasks: 133 total, 6 running, 127 sleeping, 0 stopped, 0 zombie 150323-02:10:37:619962 [mod=stb, lvl=info]
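The core of the custom-record-reader approach is deciding, line by line, whether a line starts a new record. A minimal sketch of that check, assuming the timestamp format from the question (the class and method names are hypothetical):

```java
import java.util.regex.Pattern;

// Hypothetical helper for a custom RecordReader: a line begins a new
// record if and only if it starts with a YYMMDD-HH:MM:SS:microseconds
// timestamp. Adjust the digit counts if your log format differs.
public class RecordBoundary {
    private static final Pattern TIMESTAMP_START =
        Pattern.compile("^\\d{6}-\\d{2}:\\d{2}:\\d{2}:\\d{6}\\b.*");

    public static boolean isRecordStart(String line) {
        return TIMESTAMP_START.matcher(line).matches();
    }
}
```

Inside the reader's nextKeyValue(), you would keep appending lines to a buffer until isRecordStart() fires for the next line, then emit the buffer as one record.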