This repository has been archived by the owner on Feb 12, 2022. It is now read-only.

Error: (state=08000,code=101) when creating an index on a big table #730

Open
wangxianbin1987 opened this issue Apr 26, 2014 · 19 comments

Comments

@wangxianbin1987

We used the performance.py script to generate a big table with 50,000,000 rows and then created an index on it with:

CREATE INDEX iPERFORMANCE_50000000 ON PERFORMANCE_50000000(core) INCLUDE(db, ACTIVE_VISITOR);

After about 10 minutes it ends with Error: (state=08000,code=101). Any suggestions? Thanks!

@charlesb

I had the same issue here.
You get this error because the default phoenix.query.timeoutMs property is set to 10 minutes (https://github.com/forcedotcom/phoenix/wiki/Tuning).
What you can do is edit (or create) your hbase-site.xml in /usr/lib/phoenix/bin/ and add the phoenix.query.timeoutMs parameter as follows (1 hour in my config):

<property>
  <name>phoenix.query.timeoutMs</name>
  <value>3600000</value>
</property>
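Note that phoenix.query.timeoutMs is a client-side property, so the hbase-site.xml that matters is the one the Phoenix client actually reads; for sqlline that is presumably the copy sitting next to the launcher script, which is why the /usr/lib/phoenix/bin/ path above is the one to edit.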

@wangxianbin1987
Author

OK, thanks, I will try it and get back to you.

@haridsv

haridsv commented Sep 11, 2014

Thanks @charlesb, your solution worked for me.

@manojoati

I am facing the same error. I have set phoenix.query.timeoutMs but it did not resolve the issue.
The error is below; please help:
Error: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Fri Jan 09 09:21:07 CST 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=62318: row '' on table 'PJM_DATASET' at region=PJM_DATASET,,1420633295836.4394a3aa2721f87f3e6216d20ebeec44., hostname=hadoopm1,60020,1420815633410, seqNum=34326 (state=08000,code=101)

@charlesb

charlesb commented Jan 9, 2015

Can you paste your hbase-site.xml?
Which version of Hadoop (and which stack: CDH, HDP, ...) are you using?
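As a side note, the callTimeout=60000 in that SocketTimeoutException matches the HBase client's default RPC timeout of 60 seconds, so the failure appears to come from the HBase layer rather than from phoenix.query.timeoutMs. A sketch of the client-side hbase-site.xml entries that would raise those HBase timeouts (the 600000 values are illustrative, not prescriptive):

<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>

<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>600000</value>
</property>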

@manojoati

Hi, I am getting the same error. Please help.

@manojoati

OK, below is my HBase configuration.

<configuration>

<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>

<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>10485760</value>
</property>

<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>

<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000000</value>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>

<property>
  <name>hbase.defaults.for.version.skip</name>
  <value>true</value>
</property>

<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value>
</property>

<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value>
</property>

<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value>
</property>

<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>536870912</value>
</property>

<property>
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>true</value>
</property>

<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>10</value>
</property>

<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value>
</property>

<property>
  <name>hbase.local.dir</name>
  <value>${hbase.tmp.dir}/local</value>
</property>

<property>
  <name>hbase.master.info.bindAddress</name>
  <value>0.0.0.0</value>
</property>

<property>
  <name>hbase.master.info.port</name>
  <value>60010</value>
</property>

<property>
  <name>hbase.master.port</name>
  <value>60000</value>
</property>

<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.38</value>
</property>

<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
</property>

<property>
  <name>hbase.regionserver.handler.count</name>
  <value>60</value>
</property>

<property>
  <name>hbase.regionserver.info.port</name>
  <value>60030</value>
</property>

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoopctrl.dev.oati.local:8020/apps/hbase/data</value>
</property>

<property>
  <name>hbase.rpc.timeout</name>
  <value>1500000</value>
</property>

<property>
  <name>hbase.security.authentication</name>
  <value>simple</value>
</property>

<property>
  <name>hbase.security.authorization</name>
  <value>false</value>
</property>

<property>
  <name>hbase.superuser</name>
  <value>hbase</value>
</property>

<property>
  <name>hbase.tmp.dir</name>
  <value>/hadoop/hbase</value>
</property>

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoopm1.dev.oati.local,hadoopm2.dev.oati.local,hadoopm3.dev.oati.local</value>
</property>

<property>
  <name>hbase.zookeeper.useMulti</name>
  <value>true</value>
</property>

<property>
  <name>hfile.block.cache.size</name>
  <value>0.40</value>
</property>

<property>
  <name>phoenix.client.maxMetaDataCacheSize</name>
  <value>10240000</value>
</property>

<property>
  <name>phoenix.clock.skew.interval</name>
  <value>2000</value>
</property>

<property>
  <name>phoenix.connection.autoCommit</name>
  <value>false</value>
</property>

<property>
  <name>phoenix.coprocessor.maxMetaDataCacheSize</name>
  <value>20480000</value>
</property>

<property>
  <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveM</name>
  <value>180000</value>
</property>

<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>30000</value>
</property>

<property>
  <name>phoenix.distinct.value.compress.threshold</name>
  <value>1024000</value>
</property>

<property>
  <name>phoenix.groupby.estimatedDistinctValues</name>
  <value>1000</value>
</property>

<property>
  <name>phoenix.groupby.maxCacheSize</name>
  <value>102400000</value>
</property>

<property>
  <name>phoenix.groupby.spillable</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.groupby.spillFiles</name>
  <value>2</value>
</property>

<property>
  <name>phoenix.index.failure.handling.rebuild</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.index.failure.handling.rebuild.interval</name>
  <value>10000</value>
</property>

<property>
  <name>phoenix.index.failure.handling.rebuild.overlap.time</name>
  <value>300000</value>
</property>

<property>
  <name>phoenix.index.maxDataFileSizePerc</name>
  <value>50</value>
</property>

<property>
  <name>phoenix.index.mutableBatchSizeThreshold</name>
  <value>5</value>
</property>

<property>
  <name>phoenix.mutate.batchSize</name>
  <value>1000</value>
</property>

<property>
  <name>phoenix.mutate.maxSize</name>
  <value>500000</value>
</property>

<property>
  <name>phoenix.query.dateFormat</name>
  <value>yyyy-MM-dd HH:mm:ss</value>
</property>

<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>15</value>
</property>

<property>
  <name>phoenix.query.maxGlobalMemorySize</name>
  <value>2147483648</value>
</property>

<property>
  <name>phoenix.query.maxGlobalMemoryWaitMs</name>
  <value>10000</value>
</property>

<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>104857600</value>
</property>

<property>
  <name>phoenix.query.maxSpoolToDiskBytes</name>
  <value>1024000000</value>
</property>

<property>
  <name>phoenix.query.maxTenantMemoryPercentage</name>
  <value>100</value>
</property>

<property>
  <name>phoenix.query.numberFormat</name>
  <value>#,##0.###</value>
</property>

<property>
  <name>phoenix.query.rowKeyOrderSaltedTable</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.query.spoolThresholdBytes</name>
  <value>20971520</value>
</property>

<property>
  <name>phoenix.query.useIndexes</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.schema.dropMetaData</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.sequence.cacheSize</name>
  <value>100</value>
</property>

<property>
  <name>phoenix.stats.guidepost.per.region</name>
  <value>None</value>
</property>

<property>
  <name>phoenix.stats.minUpdateFrequency</name>
  <value>450000</value>
</property>

<property>
  <name>phoenix.stats.updateFrequency</name>
  <value>900000</value>
</property>

<property>
  <name>phoenix.stats.useCurrentTime</name>
  <value>true</value>
</property>

<property>
  <name>zookeeper.session.timeout</name>
  <value>120000000</value>
</property>

<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>

</configuration>

I am using HDP 2.2 and have 4,000,000,000 rows in my Phoenix table.
Please help ASAP.
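Worth noting: the file above contains no phoenix.query.timeoutMs entry at all, so the client is presumably still running with the 10-minute default mentioned earlier. If that is the intended fix, the missing entry would look like (one hour shown, as in the earlier example):

<property>
  <name>phoenix.query.timeoutMs</name>
  <value>3600000</value>
</property>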

@manojoati

I have set the phoenix.query.timeoutMs property from the Ambari web UI, but it is not being reflected in the hbase-site.xml file.

@manojoati

<configuration>

<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>

<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>10485760</value>
</property>

<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>

<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000000</value>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>

<property>
  <name>hbase.defaults.for.version.skip</name>
  <value>true</value>
</property>

<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value>
</property>

<property>
  <name>hbase.hregion.majorcompaction.jitter</name>
  <value>0.50</value>
</property>

<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value>
</property>

<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value>
</property>

<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>536870912</value>
</property>

<property>
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>true</value>
</property>

<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>10</value>
</property>

<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value>
</property>

<property>
  <name>hbase.local.dir</name>
  <value>${hbase.tmp.dir}/local</value>
</property>

<property>
  <name>hbase.master.info.bindAddress</name>
  <value>0.0.0.0</value>
</property>

<property>
  <name>hbase.master.info.port</name>
  <value>60010</value>
</property>

<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>

<property>
  <name>hbase.master.port</name>
  <value>60000</value>
</property>

<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
</property>

<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.38</value>
</property>

<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
</property>

<property>
  <name>hbase.regionserver.handler.count</name>
  <value>60</value>
</property>

<property>
  <name>hbase.regionserver.info.port</name>
  <value>60030</value>
</property>

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoopctrl.dev.oati.local:8020/apps/hbase/data</value>
</property>

<property>
  <name>hbase.rpc.timeout</name>
  <value>1500000</value>
</property>

<property>
  <name>hbase.security.authentication</name>
  <value>simple</value>
</property>

<property>
  <name>hbase.security.authorization</name>
  <value>false</value>
</property>

<property>
  <name>hbase.superuser</name>
  <value>hbase</value>
</property>

<property>
  <name>hbase.tmp.dir</name>
  <value>/hadoop/hbase</value>
</property>

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoopm1.dev.oati.local,hadoopm2.dev.oati.local,hadoopm3.dev.oati.local</value>
</property>

<property>
  <name>hbase.zookeeper.useMulti</name>
  <value>true</value>
</property>

<property>
  <name>hfile.block.cache.size</name>
  <value>0.40</value>
</property>

<property>
  <name>phoenix.client.maxMetaDataCacheSize</name>
  <value>10240000</value>
</property>

<property>
  <name>phoenix.clock.skew.interval</name>
  <value>2000</value>
</property>

<property>
  <name>phoenix.connection.autoCommit</name>
  <value>false</value>
</property>

<property>
  <name>phoenix.coprocessor.maxMetaDataCacheSize</name>
  <value>20480000</value>
</property>

<property>
  <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveM</name>
  <value>180000</value>
</property>

<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>30000</value>
</property>

<property>
  <name>phoenix.distinct.value.compress.threshold</name>
  <value>1024000</value>
</property>

<property>
  <name>phoenix.groupby.estimatedDistinctValues</name>
  <value>1000</value>
</property>

<property>
  <name>phoenix.groupby.maxCacheSize</name>
  <value>102400000</value>
</property>

<property>
  <name>phoenix.groupby.spillable</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.groupby.spillFiles</name>
  <value>2</value>
</property>

<property>
  <name>phoenix.index.failure.handling.rebuild</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.index.failure.handling.rebuild.interval</name>
  <value>10000</value>
</property>

<property>
  <name>phoenix.index.failure.handling.rebuild.overlap.time</name>
  <value>300000</value>
</property>

<property>
  <name>phoenix.index.maxDataFileSizePerc</name>
  <value>50</value>
</property>

<property>
  <name>phoenix.index.mutableBatchSizeThreshold</name>
  <value>5</value>
</property>

<property>
  <name>phoenix.mutate.batchSize</name>
  <value>1000</value>
</property>

<property>
  <name>phoenix.mutate.maxSize</name>
  <value>500000</value>
</property>

<property>
  <name>phoenix.query.dateFormat</name>
  <value>yyyy-MM-dd HH:mm:ss</value>
</property>

<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>15</value>
</property>

<property>
  <name>phoenix.query.maxGlobalMemorySize</name>
  <value>2147483648</value>
</property>

<property>
  <name>phoenix.query.maxGlobalMemoryWaitMs</name>
  <value>10000</value>
</property>

<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>104857600</value>
</property>

<property>
  <name>phoenix.query.maxSpoolToDiskBytes</name>
  <value>1024000000</value>
</property>

<property>
  <name>phoenix.query.maxTenantMemoryPercentage</name>
  <value>100</value>
</property>

<property>
  <name>phoenix.query.numberFormat</name>
  <value>#,##0.###</value>
</property>

<property>
  <name>phoenix.query.rowKeyOrderSaltedTable</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.query.spoolThresholdBytes</name>
  <value>20971520</value>
</property>

<property>
  <name>phoenix.query.timeoutMs</name>
  <value>60000000</value>
</property>

<property>
  <name>phoenix.query.useIndexes</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.schema.dropMetaData</name>
  <value>true</value>
</property>

<property>
  <name>phoenix.sequence.cacheSize</name>
  <value>100</value>
</property>

<property>
  <name>phoenix.stats.guidepost.per.region</name>
  <value>None</value>
</property>

<property>
  <name>phoenix.stats.minUpdateFrequency</name>
  <value>450000</value>
</property>

<property>
  <name>phoenix.stats.updateFrequency</name>
  <value>900000</value>
</property>

<property>
  <name>phoenix.stats.useCurrentTime</name>
  <value>true</value>
</property>

<property>
  <name>zookeeper.session.timeout</name>
  <value>120000000</value>
</property>

<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>

</configuration>

OK, here is the final hbase-site.xml configuration.

@manojoati

Here is the query, and below is the exception:

jdbc:phoenix:hadoopm1> Select count(*) from PJM_DATASET;
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Sat Jan 10 05:06:02 CST 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=62320: row '' on table 'PJM_DATASET' at region=PJM_DATASET,,1420633295836.4394a3aa2721f87f3e6216d20ebeec44., hostname=hadoopm1,60020,1420887782278, seqNum=34350

    at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2440)
    at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
    at sqlline.SqlLine.print(SqlLine.java:1735)
    at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
    at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
    at sqlline.SqlLine.dispatch(SqlLine.java:821)
    at sqlline.SqlLine.begin(SqlLine.java:699)
    at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
    at sqlline.SqlLine.main(SqlLine.java:424)

0: jdbc:phoenix:hadoopm1>

@manojoati

Please help; what am I doing wrong?

@charlesb

Seems like something else is timing out. Have you tried to scan this table from your HBase client (hbase shell)?
Check this: http://hbase.apache.org/book/ch15s15.html (paragraph titled Connection Timeouts)
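In the same vein, note that the exception still reports callTimeout=60000, the HBase default, even though the hbase-site.xml you posted sets hbase.rpc.timeout to 1500000. That mismatch suggests the sqlline client is not reading the posted file at all, so it may be worth confirming which hbase-site.xml is actually on the client's classpath before tuning the values any further.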

@manojoati

I have one master and three slaves. I uninstalled and reinstalled HBase and Phoenix on the master and installed HBase on the other slave machines, but now I am not even able to start the HBase master from the Ambari web UI.

@manojoati

RegionServer
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2486)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:61)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2501)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2484)
... 5 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2076)
at org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:617)
... 10 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1982)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2074)
... 11 more

@manojoati

Above is the current error I got from the HBase region server log.

@manojoati

OK, one question: do we need to install Phoenix on the same machines where we have region servers?
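For what it's worth: yes. Classes such as org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory are referenced from hbase-site.xml and loaded by the region server process itself, so the Phoenix server-side jar has to be on the HBase classpath (typically the HBase lib directory) of every region server, and the ClassNotFoundException above is what it looks like when it is not. The exact jar name and directory depend on the Phoenix version and distribution, so treat that as a general pointer rather than an exact recipe; after placing the jar, the region servers need a restart.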

@jtaylor-sfdc
Contributor

Phoenix moved to Apache over a year ago, so this site is no longer active nor maintained. Please post your question on our Apache mailing list and you'll likely get more help: http://phoenix.apache.org/mailing_list.html

@manojoati

OK, thanks, I will.

@manojoati

2015-01-14 02:27:57,512 WARN [DataStreamer for file /apps/hbase/data/WALs/hadoopm2.dev.oati.local,60020,1421221188209/hadoopm2.dev.oati.local%2C60020%2C1421221188209.1421223957430 block BP-337983189-10.100.227.107-1397418605845:blk_1073948934_216462] hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.100.227.107:50010, 10.100.227.104:50010], original=[10.100.227.107:50010, 10.100.227.104:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1041)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1107)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1254)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1005)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:549)
Wed Jan 14 04:26:19 CST 2015 Terminating regionserver
2015-01-14 04:26:19,509 INFO [Thread-11] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@48a9bc2b
Wed Jan 14 04:30:46 CST 2015 Terminating regionserver
Wed Jan 14 04:34:13 CST 2015 Terminating regionserver
2015-01-14 04:34:32,959 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker: tasks arrived or departed
Wed Jan 14 04:58:39 CST 2015 Terminating regionserver
Wed Jan 14 05:34:36 CST 2015 Terminating regionserver

Now I am getting this error on the region server.
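That DataStreamer error is an HDFS write-pipeline failure rather than a Phoenix problem, and the message itself names the relevant client setting. On a cluster with only three datanodes, a commonly suggested workaround is to relax the datanode-replacement policy in the client's hdfs-site.xml, roughly:

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

NEVER is generally only advisable on clusters with three or fewer datanodes; on larger clusters it can mask real datanode failures.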
