This article collects Java code examples of org.apache.hadoop.io.file.tfile.Utils.readVLong(), illustrating how Utils.readVLong() is used in practice. The examples are extracted from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Utils.readVLong() follow:
Package path: org.apache.hadoop.io.file.tfile.Utils
Class name: Utils
Method name: readVLong
Description: Decoding the variable-length integer. Suppose the value of the first byte is FB, and the following bytes are NB[*]. The upstream javadoc then spells out the decoding rules:
- if (FB >= -32), return (long)FB;
- if (FB in [-72, -33]), return (FB+52)<<8 + NB[0]&0xff;
- if (FB in [-104, -73]), return (FB+88)<<16 + (NB[0]&0xff)<<8 + NB[1]&0xff;
- if (FB in [-120, -105]), return (FB+112)<<24 + (NB[0]&0xff)<<16 + (NB[1]&0xff)<<8 + NB[2]&0xff;
- if (FB in [-128, -121]), interpret the following FB+129 bytes as a signed big-endian integer.
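Before the extracted examples, here is a minimal self-contained round-trip sketch (my own illustration, not taken from the projects below) pairing Utils.readVLong() with its write-side counterpart Utils.writeVLong(), and showing how the encoded size grows with the magnitude of the value:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.file.tfile.Utils;

public class VLongRoundTrip {
  public static void main(String[] args) throws IOException {
    long[] samples = { 0L, -32L, 127L, 1L << 20, Long.MAX_VALUE };
    for (long value : samples) {
      // Encode into an in-memory buffer.
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      try (DataOutputStream out = new DataOutputStream(bytes)) {
        Utils.writeVLong(out, value);
      }
      // Decode it back and report the encoded length.
      try (DataInputStream in =
          new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
        long decoded = Utils.readVLong(in);
        System.out.printf("value=%d -> %d byte(s), decoded=%d%n",
            value, bytes.size(), decoded);
      }
    }
  }
}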
Code example source: org.apache.hadoop/hadoop-common (identical copies ship in io.hops/hadoop-common, io.prestosql.hadoop/hadoop-apache, com.github.jiayuhan-it/hadoop-common, ch.cern.hadoop/hadoop-common, and org.apache.apex/malhar-library)
public BlockRegion(DataInput in) throws IOException {
  offset = Utils.readVLong(in);
  compressedSize = Utils.readVLong(in);
  rawSize = Utils.readVLong(in);
}
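The constructor above is the read half of a symmetric pair; a plausible write half (a sketch for illustration only, placed inside the same BlockRegion class, not copied from the project) would emit the three fields in the same order with Utils.writeVLong():

// Hypothetical write counterpart, mirroring the order the constructor
// reads the fields: offset, compressedSize, rawSize.
public void write(DataOutput out) throws IOException {
  Utils.writeVLong(out, offset);
  Utils.writeVLong(out, compressedSize);
  Utils.writeVLong(out, rawSize);
}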
Code example source: org.apache.hadoop/hadoop-common
/**
 * Decoding the variable-length integer. Synonymous to
 * <code>(int)Utils#readVLong(in)</code>.
 *
 * @param in
 *          input stream
 * @return the decoded integer
 * @throws IOException
 *
 * @see Utils#readVLong(DataInput)
 */
public static int readVInt(DataInput in) throws IOException {
  long ret = readVLong(in);
  if ((ret > Integer.MAX_VALUE) || (ret < Integer.MIN_VALUE)) {
    throw new RuntimeException(
        "Number too large to be represented as Integer");
  }
  return (int) ret;
}
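Because readVInt() merely narrows the result of readVLong(), any stored value outside the int range trips the guard above. A quick demonstration (my own sketch, reusing the java.io stream classes from the earlier round-trip example):

ByteArrayOutputStream bytes = new ByteArrayOutputStream();
try (DataOutputStream out = new DataOutputStream(bytes)) {
  Utils.writeVLong(out, 1L + Integer.MAX_VALUE); // does not fit in an int
}
try (DataInputStream in =
    new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
  Utils.readVInt(in); // throws RuntimeException("Number too large ...")
} catch (RuntimeException expected) {
  System.out.println(expected.getMessage());
}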
Code example source: org.apache.hadoop/hadoop-common (identical copies ship in org.apache.apex/malhar-library, ch.cern.hadoop/hadoop-common, io.hops/hadoop-common, com.facebook.hadoop/hadoop-core, io.prestosql.hadoop/hadoop-apache, and com.github.jiayuhan-it/hadoop-common)
public TFileIndexEntry(DataInput in) throws IOException {
  int len = Utils.readVInt(in);
  key = new byte[len];
  in.readFully(key, 0, len);
  kvEntries = Utils.readVLong(in);
}
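This constructor shows the usual length-prefixed layout: a VInt for the key length, the raw key bytes, then a VLong for the entry count. A hypothetical matching writer (illustration only, inside the same class) would be:

// Hypothetical write counterpart: VInt length prefix, raw key bytes,
// then the key-value entry count as a VLong.
public void write(DataOutput out) throws IOException {
  Utils.writeVInt(out, key.length);
  out.write(key, 0, key.length);
  Utils.writeVLong(out, kvEntries);
}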
Code example source: org.apache.hadoop/hadoop-common (identical copies ship in io.hops/hadoop-common, com.github.jiayuhan-it/hadoop-common, com.facebook.hadoop/hadoop-core, and ch.cern.hadoop/hadoop-common)
public TFileMeta(DataInput in) throws IOException {
  version = new Version(in);
  if (!version.compatibleWith(TFile.API_VERSION)) {
    throw new RuntimeException("Incompatible TFile fileVersion.");
  }
  recordCount = Utils.readVLong(in);
  strComparator = Utils.readString(in);
  comparator = makeComparator(strComparator);
}
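Here readVLong() appears alongside two sibling helpers, Utils.readString() and the Utils.Version header. Assuming Version exposes a write(DataOutput) method (as it does in the Hadoop source) and that Utils.writeString() is the counterpart of readString(), the write side could look like this sketch:

// Sketch of the write side: version header, record count as a VLong,
// then the comparator name as a VInt-length-prefixed UTF-8 string.
public void write(DataOutput out) throws IOException {
  version.write(out);
  Utils.writeVLong(out, recordCount);
  Utils.writeString(out, strComparator);
}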
Code example source: org.apache.apex/malhar-library (a variant of the same constructor; it validates against DTFile.API_VERSION rather than TFile.API_VERSION)
public TFileMeta(DataInput in) throws IOException {
  version = new Version(in);
  if (!version.compatibleWith(DTFile.API_VERSION)) {
    throw new RuntimeException("Incompatible TFile fileVersion.");
  }
  recordCount = Utils.readVLong(in);
  strComparator = Utils.readString(in);
  comparator = makeComparator(strComparator);
}