HBase shell: "OutOfOrderScannerNextException" error on scan & count calls

f4t66c6m · asked 2021-06-02 · in Hadoop
Follow (0) | Answers (3) | Views (395)

Whether I run a scan command or a count, this error pops up, and the error message makes no sense to me. What does it mean, and how do I fix it?
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 788 number_of_rows: 100 close_scanner: false next_call_seq: 0
Commands:

count 'table', 5000
scan 'table', {COLUMNS => ['cf:cq'], FILTER => "ValueFilter(=, 'binaryprefix:somevalue')"}
Edit:
I added the following settings to hbase-site.xml:

<property>
  <name>hbase.rpc.timeout</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>

No effect.
Edit 2: added a sleep between rows

Result[] results = scanner.next(100);

for (int i = 0; i < results.length; i++) {
    result = results[i];
    try {
        ...
        count++;
        ...
        Thread.sleep(10); // ADDED SLEEP
    } catch (Throwable exception) {
        System.out.println(exception.getMessage());
        System.out.println("sleeping");
    }
}

New error after edit 2:

org.apache.hadoop.hbase.client.ScannerTimeoutException: 101761ms passed since the last invocation, timeout is currently set to 60000
...

Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
    ...

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
    ...

FINALLY BLOCK: 9900
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hbase.client.ScannerTimeoutException: 101766ms passed since the last invocation, timeout is currently set to 60000
    ...

Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 101766ms passed since the last invocation, timeout is currently set to 60000
    ...

Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
    ...

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
    ...
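(Side note on the error above: the ScannerTimeoutException means that more than the configured scanner timeout, 60000 ms by default, passed between two scanner.next() calls, so the region server expired the scanner lease. With 100 rows fetched per call, any per-row processing plus the added sleep is multiplied by roughly 100 between calls. A minimal sketch, assuming the hbase.client.scanner.timeout.period property available in HBase 0.98+, of fetching smaller batches and raising the client-side timeout:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

public class ScannerLeaseTuning {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // Hypothetical value (5 minutes). The same property usually has to be
        // raised in the server-side hbase-site.xml too, because the region
        // server expires scanner leases on its own schedule.
        conf.setInt("hbase.client.scanner.timeout.period", 300000);

        Scan scan = new Scan();
        // Fewer rows per scanner RPC keeps the time spent processing one batch
        // (rows x per-row work, including the sleep) well below the lease.
        scan.setCaching(10);
    }
}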

i2byvkas1#

This can sometimes happen after you have done a lot of deletes; you then need to merge the empty regions and try to rebalance your regions.
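A minimal sketch of what that could look like with the 0.98-era HBaseAdmin API (the encoded region names here are hypothetical placeholders; take the real names of the empty, adjacent regions from the master UI):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class MergeEmptyRegions {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            // Placeholder encoded region names of two empty, adjacent regions
            admin.mergeRegions(Bytes.toBytes("encodedRegionA"),
                               Bytes.toBytes("encodedRegionB"),
                               false); // false = refuse to merge non-adjacent regions
            admin.balancer();          // ask the master to rebalance regions
        } finally {
            admin.close();
        }
    }
}

The same can be done from the HBase shell with merge_region and balancer.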


1szpjjfi2#

It can also be caused by a failing disk. In my case it was not bad enough for Ambari, HDFS, or our monitoring to notice, but bad enough that it could not serve one region.
After stopping the RegionServer that was using that disk, scans worked fine again.
I found the RegionServer by running the HBase shell in debug mode:

hbase shell -d

Some RegionServers then showed up in the output, and one of them stood out. I then ran dmesg on that host to find the failing disk.


iih3973s3#

Edit: I was able to solve this by using the same client version that ships with the downloaded HBase (not the 0.99 Maven artifact). The server version is 0.98.6.1, and the matching client JARs are in its ./lib folder.
Don't forget to include the ZooKeeper library as well.
The old workaround:
For now I did two things. First, I switched to the new table/connection API (0.99):

// New-style (0.99) connection API instead of the old HTable constructor
Configuration conf = HBaseConfiguration.create();
TableName name = TableName.valueOf("TABLENAME");
Connection conn = ConnectionFactory.createConnection(conf);
Table table = conn.getTable(name);

Then, when the error pops up, I recreate the connection:

// Tear the old scanner and connection down, then rebuild everything
scanner.close();
conn.close();
conf.clear();
conf = HBaseConfiguration.create();
conn = ConnectionFactory.createConnection(conf);
table = conn.getTable(name);
scanner = table.getScanner(scan);

This works, but it can get slow once the first error has occurred; scanning all the rows is very slow.
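A self-contained sketch of that reconnect-on-error pattern (the table name and the retry limit are hypothetical; it simply rebuilds the Connection, Table, and ResultScanner whenever a scan call fails, which is also why it starts over from the first row and ends up rescanning everything):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ReconnectingScan {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("TABLENAME"); // placeholder table name
        Scan scan = new Scan();

        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(name);
        ResultScanner scanner = table.getScanner(scan);

        long count = 0;
        int retries = 0;
        while (true) {
            try {
                Result[] results = scanner.next(100);
                if (results.length == 0) {
                    break; // scan exhausted
                }
                count += results.length;
            } catch (IOException e) {
                // Scanner died (e.g. OutOfOrderScannerNextException or
                // ScannerTimeoutException): tear everything down and reopen.
                if (++retries > 3) { // arbitrary retry limit
                    throw e;
                }
                scanner.close();
                table.close();
                conn.close();
                conn = ConnectionFactory.createConnection(conf);
                table = conn.getTable(name);
                scanner = table.getScanner(scan); // restarts from the first row
            }
        }
        scanner.close();
        table.close();
        conn.close();
        System.out.println("rows: " + count);
    }
}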
