HBase upsert from Spark

1tu0hz3e  posted 2021-05-29 in Hadoop

I have a Spark Streaming job that does some aggregation, and now I want to insert the resulting records into HBase. But it is not a plain insert: I want an upsert, so that if the row key already exists, the column value becomes sum(newValue + oldValue) instead of being overwritten. Can someone share pseudo code in Java showing how I can achieve this?


dnph8jn41#

Something like this...

byte[] rowKey = null; // Provided
Table table = null; // Provided
long newValue = 1000; // Provided
byte[] FAMILY = new byte[]{0}; // Defined
byte[] QUALIFIER = new byte[]{1}; // Defined

try {
    // Read the current value for this row key, if any.
    Get get = new Get(rowKey);
    Result result = table.get(get);
    if (!result.isEmpty()) {
        // Row exists: add the stored value to the new one.
        Cell cell = result.getColumnLatestCell(FAMILY, QUALIFIER);
        newValue += Bytes.toLong(cell.getValueArray(), cell.getValueOffset());
    }
    // Write back the summed value (insert or overwrite).
    Put put = new Put(rowKey);
    put.addColumn(FAMILY, QUALIFIER, Bytes.toBytes(newValue));
    table.put(put);
} catch (Exception e) {
    // Handle Exceptions...
}
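
A note on atomicity: the get-then-put above is not atomic, so concurrent writers to the same row key can lose updates. If the column only ever holds an 8-byte long written with Bytes.toBytes(long), HBase can do the sum server-side with an atomic increment instead. A minimal sketch under that assumption (rowKey, table and the delta are provided as above; the family and qualifier names here are placeholders):

// Atomically adds 'delta' to the stored long for this row/column,
// creating the cell if the row does not exist yet (an upsert in one call).
byte[] rowKey = null;                    // Provided
Table table = null;                      // Provided
long delta = 1000;                       // Provided: the new value from the aggregation
byte[] FAMILY = Bytes.toBytes("cf1");    // Placeholder column family
byte[] QUALIFIER = Bytes.toBytes("cnt"); // Placeholder qualifier

try {
    long updated = table.incrementColumnValue(rowKey, FAMILY, QUALIFIER, delta);
    // 'updated' is the value stored after the server-side addition.
} catch (IOException e) {
    // Handle exceptions...
}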

We (Splice Machine [open source]) have some pretty cool tutorials on storing data in HBase with Spark Streaming.
Check them out; they might be of interest.


mwngjboj2#

I found the following way of doing it; pseudo code below:

===========For UPSERT(Update and Insert)===========

public void HbaseUpsert(JavaRDD<Row> javaRDD) throws IOException, ServiceException {

    JavaPairRDD<ImmutableBytesWritable, Put> hbasePuts1 = javaRDD.mapToPair(
        new PairFunction<Row, ImmutableBytesWritable, Put>() {

            private static final long serialVersionUID = 1L;

            public Tuple2<ImmutableBytesWritable, Put> call(Row row) throws Exception {
                if (HbaseConfigurationReader.getInstance() != null) {
                    HTable table = new HTable(HbaseConfigurationReader.getInstance().initializeHbaseConfiguration(), "TEST");
                    try {
                        String Column1 = row.getString(1);
                        long Column2 = row.getLong(2);
                        // Read the existing row for this key, if any.
                        Get get = new Get(Bytes.toBytes(row.getString(0)));
                        Result result = table.get(get);
                        if (!result.isEmpty()) {
                            // Row exists: add the stored value of Column2 to the new one.
                            Cell cell = result.getColumnLatestCell(Bytes.toBytes("cf1"), Bytes.toBytes("Column2"));
                            Column2 += Bytes.toLong(cell.getValueArray(), cell.getValueOffset());
                        }
                        // Write back both columns (insert or overwrite).
                        Put put = new Put(Bytes.toBytes(row.getString(0)));
                        put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("Column1"), Bytes.toBytes(Column1));
                        put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("Column2"), Bytes.toBytes(Column2));
                        return new Tuple2<ImmutableBytesWritable, Put>(new ImmutableBytesWritable(), put);
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        table.close();
                    }
                }
                return null;
            }
        });

    // Persist the Puts through the TableOutputFormat configured below.
    hbasePuts1.saveAsNewAPIHadoopDataset(HbaseConfigurationReader.initializeHbaseConfiguration());
}

==============For Configuration===============
public class HbaseConfigurationReader implements Serializable {

    static Job newAPIJobConfiguration1 = null;
    private static Configuration conf = null;
    private static HTable table = null;
    private static HbaseConfigurationReader instance = null;

    private static Log logger = LogFactory.getLog(HbaseConfigurationReader.class);

    HbaseConfigurationReader() throws MasterNotRunningException, ZooKeeperConnectionException, ServiceException, IOException {
        initializeHbaseConfiguration();
    }

    public static HbaseConfigurationReader getInstance() throws MasterNotRunningException, ZooKeeperConnectionException, ServiceException, IOException {
        if (instance == null) {
            instance = new HbaseConfigurationReader();
        }
        return instance;
    }

    public static Configuration initializeHbaseConfiguration() throws MasterNotRunningException, ZooKeeperConnectionException, ServiceException, IOException {
        if (conf == null) {
            conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "localhost");
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            HBaseAdmin.checkHBaseAvailable(conf);
            table = new HTable(conf, "TEST");
            conf.set(org.apache.hadoop.hbase.mapreduce.TableInputFormat.INPUT_TABLE, "TEST");
            try {
                // Build the job configuration used by saveAsNewAPIHadoopDataset.
                newAPIJobConfiguration1 = Job.getInstance(conf);
                newAPIJobConfiguration1.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "TEST");
                newAPIJobConfiguration1.setOutputFormatClass(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            logger.info("Configuration already initialized");
        }

        return newAPIJobConfiguration1.getConfiguration();
    }
}
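
For completeness, a hedged sketch of how HbaseUpsert might be wired into the Spark Streaming job itself; the names aggregatedStream and HbaseUpsertWriter are assumptions standing in for the questioner's aggregated DStream and for whatever class holds the HbaseUpsert method above, not part of the answer:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Row;
import org.apache.spark.streaming.api.java.JavaDStream;

// Hypothetical driver-side wiring: 'aggregatedStream' stands in for the
// aggregated DStream of Rows (rowkey, Column1, Column2), and 'HbaseUpsertWriter'
// for the class containing HbaseUpsert(JavaRDD<Row>) shown earlier.
public static void wireUpsert(JavaDStream<Row> aggregatedStream, HbaseUpsertWriter upsertWriter) {
    aggregatedStream.foreachRDD((JavaRDD<Row> rdd) -> {
        if (!rdd.isEmpty()) {              // skip empty micro-batches
            upsertWriter.HbaseUpsert(rdd); // read current value, sum, and put for each row
        }
    });
}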
