Hadoop MapReduce - sum of totients (Euler's totient) and other math operations

uubf1zoe · posted 2021-06-01 in Hadoop

As part of my studies I am implementing the sum of totients (Euler's totient function) in different parallel-computing languages, and honestly I am struggling with MapReduce. The main goal is to benchmark them against one another in terms of running time, efficiency, and so on.
My code is now running and I get the correct output, but it is very slow and I would like to know why.
Is it because of my implementation, or because Hadoop MapReduce is not designed for this kind of workload? I also implemented a combiner because, as far as I know, it should optimise the code, but it did not. Sorry if this question seems silly, but I found nothing on the internet and I am tired of trying everything without any result.
My input file contains the values from 1 to 15000:

1 2 3 4 5 6 ... 14998 14999 15000

I am working on a cluster of 32 nodes, and my goal is to have each node compute a slice of my range (in the combiner) and then sum all of the combiners' partial sums in the reducer.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NewTotient {

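  // Highest common factor (i.e. gcd), computed with Euclid's algorithm.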
  public static long hcf(long x, long y)
  {
    long t;

    while (y != 0) {
      t = x % y;
      x = y;
      y = t;
    }
    return x;
  }

  public static boolean relprime(long x, long y)
  {
    return hcf(x, y) == 1;
  }

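  // Naive totient: counts the i in [1, n) with hcf(n, i) == 1, i.e. O(n) gcd calls per value.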
  public static long euler(long n)
  {
    long length, i;

    length = 0;
    for (i = 1; i < n; i++)
      if (relprime(n, i))
        length++;
    return length;
  }

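  // Splits each input line on spaces and emits every number under a single empty Text key
  // (the 'one' and 'word' fields below are unused).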
  public static class TotientMapper extends Mapper<LongWritable, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        for (String val : value.toString().split(" ")) {
            context.write(new Text(), new IntWritable(Integer.valueOf(val)));
        }
    }
  }

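  // Computes euler(...) for each mapped value but never calls context.write,
  // so nothing reaches the reducer ('Combine output records=0' in the counters below).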
  public static class TotientCombiner extends Reducer<Text,IntWritable,Text,IntWritable> {
    //private IntWritable result = new IntWritable();

    protected void reduce(Text key, Iterable<IntWritable> values, Context context)throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
              sum += NewTotient.euler(val.get());
          }
      }
  }

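  // Meant to add up the combiners' partial sums; note that sum starts at 1
  // rather than 0, and that the output key written is null.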
  public static class TotientReducer extends Reducer<Text,IntWritable,Text,IntWritable> {
    //private IntWritable result = new IntWritable();

    protected void reduce(Text key, Iterable<IntWritable> values, Context context)throws IOException, InterruptedException {
          int sum = 1;
          for (IntWritable val : values) {
              sum += val.get();
          }
          context.write(null, new IntWritable(sum));
      }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    System.out.println("\n\n__________________________________________________________\n"+"Starting Job\n"+"__________________________________________________________\n\n");
    final long startTime = System.currentTimeMillis();

    Job job = Job.getInstance(conf, "Sum of Totient");
    job.setJarByClass(NewTotient.class);
    job.setMapperClass(TotientMapper.class);
    job.setCombinerClass(TotientCombiner.class);
    job.setReducerClass(TotientReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    //job.setOutputKeyClass(Text.class);
    //job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.waitForCompletion(true);
    final double duration = (System.currentTimeMillis() - startTime)/1000.0;
    System.out.println("\n\n__________________________________________________________\n"+"Job Finished in " + duration + " seconds\n"+"__________________________________________________________\n\n");
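    // Note: waitForCompletion(true) was already called above; this second call
    // re-prints the job status and counters, hence the duplicated log below.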
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

In case it helps, here is the output for a dataset from 0 to 10 (so basically I am just summing the first 10 totients):

__________________________________________________________
Starting Job
__________________________________________________________

2018-04-02 06:09:27,583 INFO client.RMProxy: Connecting to ResourceManager at bwlf32/137.195.143.132:33312
2018-04-02 06:09:28,377 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-04-02 06:09:28,423 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/jo20/.staging/job_1522471222360_0016
2018-04-02 06:09:28,775 INFO input.FileInputFormat: Total input files to process : 1
2018-04-02 06:09:29,029 INFO mapreduce.JobSubmitter: number of splits:1
2018-04-02 06:09:29,101 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-04-02 06:09:29,288 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1522471222360_0016
2018-04-02 06:09:29,290 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-04-02 06:09:29,538 INFO conf.Configuration: resource-types.xml not found
2018-04-02 06:09:29,539 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-04-02 06:09:29,628 INFO impl.YarnClientImpl: Submitted application application_1522471222360_0016
2018-04-02 06:09:29,687 INFO mapreduce.Job: The url to track the job: http://bwlf32:33314/proxy/application_1522471222360_0016/
2018-04-02 06:09:29,688 INFO mapreduce.Job: Running job: job_1522471222360_0016
2018-04-02 06:09:37,849 INFO mapreduce.Job: Job job_1522471222360_0016 running in uber mode : false
2018-04-02 06:09:37,852 INFO mapreduce.Job:  map 0% reduce 0%
2018-04-02 06:09:44,960 INFO mapreduce.Job:  map 100% reduce 0%
2018-04-02 06:09:52,008 INFO mapreduce.Job:  map 100% reduce 100%
2018-04-02 06:09:52,022 INFO mapreduce.Job: Job job_1522471222360_0016 completed successfully
2018-04-02 06:09:52,178 INFO mapreduce.Job: Counters: 53
    File System Counters
        FILE: Number of bytes read=6
        FILE: Number of bytes written=414497
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=123
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=8
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=9126
        Total time spent by all reduces in occupied slots (ms)=9688
        Total time spent by all map tasks (ms)=4563
        Total time spent by all reduce tasks (ms)=4844
        Total vcore-milliseconds taken by all map tasks=4563
        Total vcore-milliseconds taken by all reduce tasks=4844
        Total megabyte-milliseconds taken by all map tasks=1168128
        Total megabyte-milliseconds taken by all reduce tasks=1240064
    Map-Reduce Framework
        Map input records=1
        Map output records=10
        Map output bytes=50
        Map output materialized bytes=6
        Input split bytes=102
        Combine input records=10
        Combine output records=0
        Reduce input groups=0
        Reduce shuffle bytes=6
        Reduce input records=0
        Reduce output records=0
        Spilled Records=0
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=157
        CPU time spent (ms)=2220
        Physical memory (bytes) snapshot=507772928
        Virtual memory (bytes) snapshot=3889602560
        Total committed heap usage (bytes)=347078656
        Peak Map Physical memory (bytes)=306073600
        Peak Map Virtual memory (bytes)=1945808896
        Peak Reduce Physical memory (bytes)=201699328
        Peak Reduce Virtual memory (bytes)=1943793664
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=21
    File Output Format Counters
        Bytes Written=0

__________________________________________________________
Job Finished in 26.225 seconds
__________________________________________________________

2018-04-02 06:09:52,182 INFO mapreduce.Job: Running job: job_1522471222360_0016
2018-04-02 06:09:52,193 INFO mapreduce.Job: Job job_1522471222360_0016 completed successfully
[... the same 53 counters are printed a second time, from the second waitForCompletion(true) call ...]

For comparison, the sequential Java version is much faster:

real    0m0.512s
user    0m0.279s
sys     0m0.142s

To be clear, I have to use this method of computation because it is slow enough to make the comparison between the different systems interesting. I do know the smarter idea of finding all the prime factors and their multiples and subtracting that count from n to get the totient value (the prime factors and their multiples are exactly the numbers whose gcd with n is not 1), but I am not allowed to speed the system up with a cleverer computation.
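For reference, the smarter approach mentioned above would look roughly like this as a plain Java method (a sketch of the standard product formula φ(n) = n·∏(1 − 1/p) over the distinct prime factors p of n; the name eulerFast is mine, and it is deliberately not used in the benchmark):

  // Totient via prime factorisation: phi(n) = n * prod_{p | n} (1 - 1/p).
  public static long eulerFast(long n) {
    long result = n;
    for (long p = 2; p * p <= n; p++) {
      if (n % p == 0) {                 // p is a prime factor of n
        while (n % p == 0) n /= p;      // strip every copy of p
        result -= result / p;           // result *= (1 - 1/p), in integer arithmetic
      }
    }
    if (n > 1) result -= result / n;    // a single prime factor > sqrt(n) may remain
    return result;
  }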

pobjuy32 (answer #1)

Here you are providing the whole input from the file on a single line. With the default input format the mapper gets one record per line (records are delimited by newlines), so since there is only one line, everything is handled by a single map task and the input is not processed in parallel at all. One thing you can do is put each input number on its own line instead of separating them with spaces, and change the mapper accordingly. Also, the combiner does not make much sense here, since no distinct keys are used in the map output.
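A minimal sketch of that suggestion, assuming the input file now holds one number per line (class names and the constant key are illustrative, not tested code):

  public static class TotientPerLineMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    private final static Text SUM_KEY = new Text("sum"); // one constant key for everything

    @Override
    protected void map(LongWritable offset, Text line, Context context) throws IOException, InterruptedException {
      // One number per line means one record per number, so a large input
      // can be divided among several map tasks.
      long n = Long.parseLong(line.toString().trim());
      // Do the expensive work in the mapper and emit the finished totient.
      context.write(SUM_KEY, new LongWritable(NewTotient.euler(n)));
    }
  }

  public static class TotientSumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
      long sum = 0;
      for (LongWritable v : values) {
        sum += v.get();
      }
      context.write(key, new LongWritable(sum)); // remember to actually write the result
    }
  }

Because the mapper already emits finished totients, the same reducer class can double as the combiner (job.setCombinerClass(TotientSumReducer.class)), since sums of partial sums are still sums; remember to switch the job's output value class to LongWritable.class as well. Note that a 15000-line file is still tiny by HDFS standards and may remain a single split, so something like NLineInputFormat may be needed to actually spread it over several map tasks.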
