Usage of the org.apache.spark.sql.hive.HiveContext.load() Method, with Code Examples


This article collects Java code examples of the org.apache.spark.sql.hive.HiveContext.load() method and shows how HiveContext.load() is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of HiveContext.load() are as follows:

Package: org.apache.spark.sql.hive
Class: HiveContext
Method: load

About HiveContext.load

In Spark 1.x, HiveContext inherits load from SQLContext. The overload used in the examples below, load(String source, java.util.Map<String, String> options), loads data from the named external data source (for example "jdbc") into a DataFrame, configured through a map of source-specific options. As of Spark 1.4 it is deprecated in favor of read().format(source).options(options).load().
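
As a quick orientation, here is a minimal sketch of the call pattern the examples below follow. It assumes an existing HiveContext named sqlContext; the JDBC URL and table name are illustrative:

import java.util.HashMap;
import java.util.Map;
import org.apache.spark.sql.DataFrame;

Map<String, String> options = new HashMap<String, String>();
options.put("url", "jdbc:mysql://localhost:3306/demo"); // hypothetical connection URL
options.put("dbtable", "demo_table");                   // hypothetical table name
DataFrame df = sqlContext.load("jdbc", options);        // load the source into a DataFrame
df.registerTempTable("demo_table");                     // expose it to SQL queries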

Code Examples

Code example source: Impetus/Kundera

/**
 * Register table for csv.
 * 
 * @param tableName
 *            the table name
 * @param dataSourcePath
 *            the data source path
 * @param sqlContext
 *            the sql context
 */
private void registerTableForCsv(String tableName, String dataSourcePath, HiveContext sqlContext)
{
  HashMap<String, String> options = new HashMap<String, String>();
  options.put("header", "true");       // treat the first line of the file as column names
  options.put("path", dataSourcePath); // location of the CSV data
  // Load the CSV source into a DataFrame and register it as a temp table.
  sqlContext.load(SparkPropertiesConstants.SOURCE_CSV, options).registerTempTable(tableName);
}
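
Note that Spark 1.x had no built-in CSV data source, so SparkPropertiesConstants.SOURCE_CSV presumably names an external package such as spark-csv. Once registered, the temp table can be queried through the same context; a minimal usage sketch (the table name is hypothetical):

// Query the temp table registered by registerTableForCsv.
DataFrame rows = sqlContext.sql("SELECT COUNT(*) AS n FROM my_csv_table");
rows.show();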

Code example source: Impetus/Kundera

@Override
public void registerTable(EntityMetadata m, SparkClient sparkClient)
{
  String conn = getConnectionString(m);
  Map<String, String> options = new HashMap<String, String>();
  options.put("url", conn);                 // JDBC connection URL
  options.put("dbtable", m.getTableName()); // table to load
  // Load the JDBC table into a DataFrame and register it under the same name.
  sparkClient.sqlContext.load("jdbc", options).registerTempTable(m.getTableName());
}
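
For reference, on Spark 1.4 and later the same load can be written with the non-deprecated DataFrameReader API; a sketch reusing the options map built above:

// Equivalent call via DataFrameReader (Spark 1.4+).
DataFrame df = sparkClient.sqlContext.read().format("jdbc").options(options).load();
df.registerTempTable(m.getTableName());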

Code example source: ddf-project/DDF

@Override
public DDF loadFromJDBC(JDBCDataSourceDescriptor dataSource) throws DDFException {
  SparkDDFManager sparkDDFManager = (SparkDDFManager)mDDFManager;
  HiveContext sqlContext = sparkDDFManager.getHiveContext();
  JDBCDataSourceCredentials cred = (JDBCDataSourceCredentials)dataSource.getDataSourceCredentials();
  String fullURL = dataSource.getDataSourceUri().getUri().toString();
  // Append credentials to the JDBC URL if a username was supplied.
  if (cred.getUsername() != null && !cred.getUsername().equals("")) {
    fullURL += String.format("?user=%s&password=%s", cred.getUsername(), cred.getPassword());
  }
  Map<String, String> options = new HashMap<String, String>();
  options.put("url", fullURL);
  options.put("dbtable", dataSource.getDbTable());
  // Load the JDBC table into a DataFrame.
  DataFrame df = sqlContext.load("jdbc", options);
  DDF ddf = sparkDDFManager.newDDF(sparkDDFManager, df, new Class<?>[]{DataFrame.class},
    null, SparkUtils.schemaFromDataFrame(df));
  // TODO?
  ddf.getRepresentationHandler().get(RDD.class, Row.class);
  ddf.getMetaDataHandler().setDataSourceDescriptor(dataSource);
  return ddf;
}
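
Embedding credentials in the URL as above relies on the JDBC driver accepting user/password as URL query parameters (as MySQL's driver does). An alternative on Spark 1.4+ is to pass them as connection properties through DataFrameReader.jdbc; a minimal sketch, with URL, table, and credentials all illustrative:

import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", "alice");      // hypothetical credentials passed as
props.setProperty("password", "secret"); // connection properties, not in the URL
DataFrame df = sqlContext.read().jdbc("jdbc:mysql://localhost:3306/demo", "orders", props);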
