Why does scanning large blob values crash my HBase cluster?

Problem description:

I have an HBase table with two column families, 'i:*' for info and 'f:b' for file:blob. I am storing images as the values, and some of the images are almost 12 MB. Why does scanning these large values crash my HBase cluster?

I can load/insert the files from Java with no problem, but as soon as I try to scan for the 'f:b' values (the blobs) to get them back out, my scanner just sits there until it times out, and the region servers on my cluster die one after another (it is a 20-node cluster). The only way I have found to stop my scan from somehow spreading this quasi-virus across my helpless nodes is to drop the table entirely (or so it seems).
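For context, the writes go through the normal Java client API; a minimal sketch of what that insert path presumably looks like (the actual load code is not shown here, and the row key, file name, and path below are made-up placeholders):

// Assumed layout: one row per image, the file name under 'i:name' and the
// raw bytes under 'f:b'. Row key and file path are hypothetical.
byte[] imageBytes = Files.readAllBytes(Paths.get("c:\\temp\\example.tif"));
Put put = new Put(Bytes.toBytes("example-row-key"));
put.add(Bytes.toBytes("i"), Bytes.toBytes("name"), Bytes.toBytes("example.tif"));
put.add(Bytes.toBytes("f"), Bytes.toBytes("b"), imageBytes); // a ~12 MB cell
table.put(put);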

I am using Cloudera EDH, HBase "0.98.6-cdh5.2.0".

Unfortunately my client just times out, so there is no useful exception on that side; all I can get out of the node logs is the following:

2014-10-27 21:47:36,106 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp
java.io.FileNotFoundException: File hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp does not exist.
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:658)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104)
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716)
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712)
    at org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath.getChildren(HFileArchiver.java:628)
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:346)
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:347)
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284)
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:137)
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:75)
    at org.apache.hadoop.hbase.master.CatalogJanitor.cleanParent(CatalogJanitor.java:333)
    at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:254)
    at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:101)
    at org.apache.hadoop.hbase.Chore.run(Chore.java:87)
    at java.lang.Thread.run(Thread.java:745)
2014-10-27 21:47:36,129 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to complete archive of: [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp]. Those files are still in the original location, and they may slow down reads.
2014-10-27 21:47:36,129 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
java.io.IOException: Received error when attempting to archive files ([class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/i, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits]), cannot delete region directory.
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:148)
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:75)
    at org.apache.hadoop.hbase.master.CatalogJanitor.cleanParent(CatalogJanitor.java:333)
    at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:254)
    at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:101)
    at org.apache.hadoop.hbase.Chore.run(Chore.java:87)
    at java.lang.Thread.run(Thread.java:745)
2014-10-27 21:47:36,146 INFO org.apache.hadoop.hbase.master.SplitLogManager: Done splitting /hbase/splitWAL/WALs%2Finsight-staging-slave019.spadac.com%2C60020%2C1414446135179-splitting%2Finsight-staging-slave019.spadac.com%252C60020%252C1414446135179.1414446317771

Here is the code I use to scan the table:

try {
    if (hBaseConfig == null) {
        hBaseConfig = HBaseConfiguration.create();
        hBaseConfig.setInt("hbase.client.scanner.timeout.period", 1200000);
        hBaseConfig.set("hbase.client.keyvalue.maxsize", "0");
        hBaseConfig.set("hbase.master", PROPS.get().getProperty("hbase.master"));
        hBaseConfig.set("hbase.zookeeper.quorum", PROPS.get().getProperty("zks"));
        hBaseConfig.set("zks.port", "2181");
        table = new HTable(hBaseConfig, "RASTER");
    }

    // Scan only the two columns I need: the blob and its name.
    Scan scan = new Scan();
    scan.addColumn("f".getBytes(), "b".getBytes());
    scan.addColumn("i".getBytes(), "name".getBytes());
    ResultScanner scanner = table.getScanner(scan);

    for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
        /* I NEVER EVEN GET HERE IF I SCAN FOR 'f:b' */
        CellScanner cs = rr.cellScanner();
        String name = "";
        byte[] fileBs = null;
        while (cs.advance()) {

            Cell current = cs.current();

            byte[] cloneValue = CellUtil.cloneValue(current);
            byte[] cloneFamily = CellUtil.cloneFamily(current);
            byte[] qualBytes = CellUtil.cloneQualifier(current);
            String fam = Bytes.toString(cloneFamily);
            String qual = Bytes.toString(qualBytes);
            if (fam.equals("i")) {
                if (qual.equals("name")) {
                    name = Bytes.toString(cloneValue);  // file name from 'i:name'
                }
            } else if (fam.equals("f") && qual.equals("b")) {
                fileBs = cloneValue;                    // blob bytes from 'f:b'
            }
        }

        // Write the first blob to disk, then stop.
        OutputStream bof = new FileOutputStream("c:\\temp\\" + name);
        bof.write(fileBs);
        bof.close();
        break;
    }
} catch (IOException ex) {
    //removed
}

Thanks. Does anyone know why this code could wipe out my cluster when scanning for large blobs? I'm sure it's something silly, I just can't figure it out.
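One thing that probably matters with cells this large is how much data each scanner RPC tries to pull back. A minimal sketch of tightening that on the 0.98 client, using the same two columns as above (the specific values are illustrative assumptions, not tested settings):

Scan scan = new Scan();
scan.addColumn("f".getBytes(), "b".getBytes());
scan.addColumn("i".getBytes(), "name".getBytes());
// Keep each scanner RPC small instead of letting the client prefetch many
// ~12 MB cells per call. These numbers are assumptions for illustration.
scan.setCaching(1);                        // rows fetched per RPC
scan.setBatch(1);                          // cells returned per Result
scan.setMaxResultSize(16 * 1024 * 1024L);  // rough per-RPC byte cap
ResultScanner scanner = table.getScanner(scan);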


By the way, I can scan the other columns with the code above with no problem. – markg 2014-10-27 23:27:31

It looks like this was the problem:

hBaseConfig.set("hbase.client.keyvalue.maxsize", "0"); 

I changed it to "50" and it works now.
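As a side note on that property, and assuming the stock semantics of hbase.client.keyvalue.maxsize (a client-side cap, in bytes, on the size of a single KeyValue; the shipped default is 10485760, i.e. 10 MB, and 0 disables the check entirely), another option is to raise the limit explicitly to something larger than the biggest blob instead of turning the check off. A sketch with an assumed 16 MB cap:

// Illustrative only: cap single-KeyValue size at 16 MB rather than disabling
// the check ("0") or relying on the 10 MB default.
hBaseConfig.setInt("hbase.client.keyvalue.maxsize", 16 * 1024 * 1024);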