This article collects code examples of the Java method org.apache.spark.util.Utils.memoryStringToMb(), showing how Utils.memoryStringToMb() is used in practice. The examples were extracted from selected open-source projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of Utils.memoryStringToMb():
Package path: org.apache.spark.util.Utils
Class name: Utils
Method name: memoryStringToMb
Description: (none provided)
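Since the source provides no description, it helps to know that memoryStringToMb converts a memory size string such as "512m" or "2g" into a number of megabytes. The following is a minimal, self-contained reimplementation sketch to illustrate that behavior; it is NOT Spark's actual code, and treating a plain number as bytes is an assumption made here for the demo:

```java
// Minimal sketch of memoryStringToMb-style parsing.
// This is an illustration, NOT Spark's actual implementation.
public class MemoryStringDemo {
    // Converts strings like "1024k", "512m", "2g" into megabytes.
    static int memoryStringToMb(String str) {
        String lower = str.toLowerCase().trim();
        long bytes;
        if (lower.endsWith("k")) {
            bytes = Long.parseLong(lower.substring(0, lower.length() - 1)) * 1024L;
        } else if (lower.endsWith("m")) {
            bytes = Long.parseLong(lower.substring(0, lower.length() - 1)) * 1024L * 1024L;
        } else if (lower.endsWith("g")) {
            bytes = Long.parseLong(lower.substring(0, lower.length() - 1)) * 1024L * 1024L * 1024L;
        } else {
            // Assumption for this sketch: a bare number is a byte count.
            bytes = Long.parseLong(lower);
        }
        return (int) (bytes / (1024L * 1024L));
    }

    public static void main(String[] args) {
        System.out.println(memoryStringToMb("512m")); // 512
        System.out.println(memoryStringToMb("2g"));   // 2048
    }
}
```

This matches how the snippets below use the method, e.g. converting the "512m" default of spark.executor.memory into the integer 512.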
Code example source: apache/hive
return new ObjectPair<Long, Integer>(-1L, -1);
int executorMemoryInMB = Utils.memoryStringToMb(
    sparkConf.get("spark.executor.memory", "512m"));
double memoryFraction = 1.0 - sparkConf.getDouble("spark.storage.memoryFraction", 0.6);
Code example source: apache/drill
return new ObjectPair<Long, Integer>(-1L, -1);
int executorMemoryInMB = Utils.memoryStringToMb(
    sparkConf.get("spark.executor.memory", "512m"));
double memoryFraction = 1.0 - sparkConf.getDouble("spark.storage.memoryFraction", 0.6);
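The fragment above converts the executor memory setting to MB and computes the fraction of memory not reserved for Spark's storage region. A small self-contained sketch of how those two values might be combined into a usable-memory estimate (the method name and the combination are illustrative assumptions, not actual Hive code):

```java
// Illustrative sketch: combine executor memory with the non-storage
// memory fraction, as the Hive/Drill fragment above suggests.
public class ExecutorMemoryDemo {
    // Returns the estimated non-storage memory of one executor, in bytes.
    static long usableMemoryBytes(int executorMemoryInMB, double storageFraction) {
        double usableFraction = 1.0 - storageFraction;
        return (long) (executorMemoryInMB * usableFraction) * 1024L * 1024L;
    }

    public static void main(String[] args) {
        // Defaults seen in the fragment: 512 MB executor memory and
        // spark.storage.memoryFraction = 0.6, leaving ~204 MB usable.
        System.out.println(usableMemoryBytes(512, 0.6));
    }
}
```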
Code example source: com.uber.hoodie/hoodie-client
long executorMemoryInBytes = Utils.memoryStringToMb(SparkEnv.get().conf().get(SPARK_EXECUTOR_MEMORY_PROP,
    DEFAULT_SPARK_EXECUTOR_MEMORY_MB)) * 1024
Code example source: uber/hudi
long executorMemoryInBytes = Utils.memoryStringToMb(SparkEnv.get().conf().get(SPARK_EXECUTOR_MEMORY_PROP,
    DEFAULT_SPARK_EXECUTOR_MEMORY_MB)) * 1024
Code example source: org.wso2.carbon.analytics/org.wso2.carbon.analytics.spark.core
/**
 * Starts a worker with the given parameters. It reads the Spark defaults from
 * the given properties file and overrides parameters accordingly. It also adds
 * the port offset to all port configurations.
 */
public synchronized void startWorker() {
    if (!this.workerActive) {
        String workerHost = this.myHost;
        int workerPort = this.sparkConf.getInt(AnalyticsConstants.SPARK_WORKER_PORT, 10000 + this.portOffset);
        int workerUiPort = this.sparkConf.getInt(AnalyticsConstants.SPARK_WORKER_WEBUI_PORT, 10500 + this.portOffset);
        int workerCores = this.sparkConf.getInt(AnalyticsConstants.SPARK_WORKER_CORES, 1);
        String workerMemory = getStringFromSparkConf(AnalyticsConstants.SPARK_WORKER_MEMORY, "1g");
        String[] masters = this.getSparkMastersFromCluster();
        String workerDir = getStringFromSparkConf(AnalyticsConstants.SPARK_WORKER_DIR, "work");
        Worker.startRpcEnvAndEndpoint(workerHost, workerPort, workerUiPort, workerCores,
                Utils.memoryStringToMb(workerMemory), masters, workerDir,
                Option.empty(), this.sparkConf);
        log.info("[Spark init - worker] Started SPARK WORKER in " + workerHost + ":" + workerPort + " with webUI port "
                + workerUiPort + " with Masters " + Arrays.toString(masters));
        this.workerActive = true;
    } else {
        logDebug("Worker is already active in this node, therefore ignoring worker startup");
    }
}
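The worker startup above reads every port setting with a default that already folds in the node's port offset. A generic sketch of that lookup pattern, independent of Spark (the map-based config and helper name are assumptions, not WSO2's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "default + port offset" lookup pattern used in the
// WSO2 snippet. The config representation is illustrative only.
public class PortOffsetDemo {
    // Mirrors sparkConf.getInt(key, base + portOffset): an explicit
    // setting wins; otherwise the offset base port is used.
    static int getPort(Map<String, Integer> conf, String key, int basePort, int offset) {
        return conf.getOrDefault(key, basePort + offset);
    }

    public static void main(String[] args) {
        Map<String, Integer> conf = new HashMap<>();
        int offset = 2;
        System.out.println(getPort(conf, "spark.worker.port", 10000, offset)); // 10002
        conf.put("spark.worker.port", 12345);
        System.out.println(getPort(conf, "spark.worker.port", 10000, offset)); // 12345
    }
}
```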
Code example source: com.facebook.presto.hive/hive-apache
return new ObjectPair<Long, Integer>(-1L, -1);
int executorMemoryInMB = Utils.memoryStringToMb(
    sparkConf.get("spark.executor.memory", "512m"));
double memoryFraction = 1.0 - sparkConf.getDouble("spark.storage.memoryFraction", 0.6);
This content was collected from the internet; if it infringes on your rights, please contact the author for removal.