diff --git a/pom.xml b/pom.xml
index 911b75643065..552e82d61666 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1753,8 +1753,6 @@
**/*.svg
**/src/main/resources/META-INF/LEGAL
-
- **/src/main/asciidoc/hbase.css
**/jquery.min.js
**/jquery.tablesorter.min.js
diff --git a/src/main/asciidoc/_chapters/amv2.adoc b/src/main/asciidoc/_chapters/amv2.adoc
deleted file mode 100644
index 49841ce32557..000000000000
--- a/src/main/asciidoc/_chapters/amv2.adoc
+++ /dev/null
@@ -1,173 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-[[amv2]]
-= AMv2 Description for Devs
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-The AssignmentManager (AM) in HBase Master manages assignment of Regions over a cluster of RegionServers.
-
-The AMv2 project is a redo of Assignment in an attempt at addressing the root cause of many of our operational issues in production, namely slow assignment and problematic accounting such that Regions are misplaced or stuck offline in the notorious _Regions-In-Transition (RIT)_ limbo state.
-
-Below are notes for devs on key aspects of AMv2 in no particular order.
-
-== Background
-
-Assignment in HBase 1.x has been problematic in operation. It is not hard to see why. Region state is kept at the other end of an RPC in ZooKeeper (terminal states -- i.e. OPEN or CLOSED -- are published to the _hbase:meta_ table). In HBase-1.x.x, state has multiple writers, with Master and RegionServers all able to make state edits concurrently (in the _hbase:meta_ table and out on ZooKeeper). If clocks are awry or watchers are missed, state changes can be skipped or overwritten. Locking of HBase Entities -- tables, regions -- is not comprehensive, so a table operation -- disable/enable -- could clash with a region-level operation such as a split or merge. Region state is distributed and hard to reason about and test. Assignment is slow in operation because each assign involves moving remote znodes through transitions. Cluster size tends to top out at a couple of hundred thousand regions; beyond this, cluster start/stop takes hours and is prone to corruption.
-
-AMv2 (AssignmentManager Version 2) is a refactor (https://issues.apache.org/jira/browse/HBASE-14350[HBASE-14350]) of the hbase-1.x AssignmentManager, putting it up on a https://issues.apache.org/jira/browse/HBASE-12439[ProcedureV2 (HBASE-12439)] basis. ProcedureV2 (Pv2) is an awkwardly named system that allows describing and running multi-step state machines. It is performant and persists all state to a Store which is recoverable post crash. See the companion chapter on <> to learn more about the ProcedureV2 system.
-
-In AMv2, all assignment, crash handling, splits and merges are recast as Procedures(v2). ZooKeeper is purged from the mix. As before, the final assignment state gets published to _hbase:meta_ for non-Master participants (all clients) to read, with intermediate state kept in the local Pv2 WAL-based ‘store’, but only the active Master, a single writer, evolves state. The Master’s in-memory cluster image is the authority, and if there is disagreement, RegionServers are forced to comply. Pv2 adds shared/exclusive locking of all core HBase Entities -- namespace, tables, and regions -- to ensure that only one actor at a time has access and to prevent operations contending over resources (move/split, disable/assign, etc.).
-
-This redo of the AM atop a purpose-built, performant state machine, with all operations taking on the common Procedure form and a single state writer, moves our AM to a new level of resilience and scale.
-
-== New System
-
-Each Assign or Unassign of a Region is now a Procedure. A Move (Region) Procedure is a compound of Procedures; it is the running of an Unassign Procedure followed by an Assign Procedure. The Move Procedure spawns the Unassign and Assign in series and then waits on their completions.
-
-And so on. A ServerCrashProcedure spawns WAL-splitting tasks and then the reassignment of all regions that were hosted on the crashed server, as subprocedures.
-
-AMv2 Procedures are run by the Master in a ProcedureExecutor instance. All Procedures make use of utilities provided by the Pv2 framework.
-
-For example, Procedures persist each state transition to the framework’s Procedure Store. The default implementation is a WAL kept on HDFS. On crash, we reopen the Store and rerun all WALs of Procedure transitions to put the Assignment State Machine back into the state it had just before the crash. We then continue Procedure execution.
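-
-As a loose illustration of the idea (persist every transition, replay on restart), here is a small, framework-free Java sketch. It is not the Pv2 API: the class, log format, and method names are invented for this example, and the real store is a WAL on HDFS with a much richer format and recovery path.
-
-[source,java]
-----
-import java.io.IOException;
-import java.nio.file.Files;
-import java.nio.file.Path;
-import java.nio.file.StandardOpenOption;
-import java.util.HashMap;
-import java.util.Map;
-
-public class ToyProcedureStore {
-  private final Path log;
-
-  public ToyProcedureStore(Path log) {
-    this.log = log;
-  }
-
-  /** Append one state transition, e.g. persist(1179, "REGION_TRANSITION_DISPATCH"). */
-  public void persist(long pid, String state) throws IOException {
-    Files.writeString(log, pid + " " + state + System.lineSeparator(),
-        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
-  }
-
-  /** On restart, replay the log; the last entry per pid is the state to resume from. */
-  public Map<Long, String> replay() throws IOException {
-    Map<Long, String> lastState = new HashMap<>();
-    if (Files.exists(log)) {
-      for (String line : Files.readAllLines(log)) {
-        String[] parts = line.split(" ", 2);
-        lastState.put(Long.parseLong(parts[0]), parts[1]);
-      }
-    }
-    return lastState;
-  }
-}
-----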
-
-In the new system, the Master is the Authority on all things Assign. Previously we were ambiguous; e.g. the RegionServer was in charge of Split operations. The Master keeps an in-memory image of Region states and servers. If there is disagreement, the Master always prevails; at an extreme it will kill the RegionServer that is in disagreement.
-
-A new RegionStateStore class takes care of publishing the terminal Region state, whether OPEN or CLOSED, out to the _hbase:meta_ table.
-
-RegionServers now report their run version on Connection. This version is available inside the AM for use when running rolling restarts that migrate across versions.
-
-== Procedures Detail
-
-=== Assign/Unassign
-
-Assign and Unassign subclass a common RegionTransitionProcedure. There can only be one RegionTransitionProcedure per region running at a time, since the RTP instance takes a lock on the region. The RTP base Procedure has three steps: a step that stores the procedure (REGION_TRANSITION_QUEUE); a dispatch of the procedure's open or close, followed by a suspend waiting on the remote RegionServer to report successful open or failure (REGION_TRANSITION_DISPATCH), or on notification that the server fielding the request crashed; and finally registration of the successful open/close in hbase:meta (REGION_TRANSITION_FINISH).
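-
-To make the three steps concrete, here is an illustrative-only Java state machine. The enum constant names mirror the states above; the class and method names are invented for this sketch and are not the actual RegionTransitionProcedure code.
-
-[source,java]
-----
-public class ToyRegionTransition {
-  enum State { REGION_TRANSITION_QUEUE, REGION_TRANSITION_DISPATCH, REGION_TRANSITION_FINISH }
-
-  private State state = State.REGION_TRANSITION_QUEUE;
-
-  /** Advance one step; returns false once the transition has finished. */
-  boolean execute() {
-    switch (state) {
-      case REGION_TRANSITION_QUEUE:
-        // persist the procedure to the store, then move on to dispatch
-        state = State.REGION_TRANSITION_DISPATCH;
-        return true;
-      case REGION_TRANSITION_DISPATCH:
-        // send the open/close RPC to the RegionServer, then suspend until it reports back
-        state = State.REGION_TRANSITION_FINISH;
-        return true;
-      default:
-        // REGION_TRANSITION_FINISH: record the final OPEN/CLOSED state in hbase:meta
-        return false;
-    }
-  }
-}
-----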
-
-Here is how the assign of region 56f985a727afe80a184dac75fbf6860c looks in the logs. The assign was provoked by a Server Crash (Process ID 1176, or pid=1176; when it is the parent of a procedure it is identified as ppid=1176). The assign is pid=1179, the second of the two regions being assigned by this Server Crash.
-
-[source]
-----
-2017-05-23 12:04:24,175 INFO [ProcExecWrkr-30] procedure2.ProcedureExecutor: Initialized subprocedures=[{pid=1178, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=bfd57f0b72fd3ca77e9d3c5e3ae48d76, target=ve0540.halxg.example.org,16020,1495525111232}, {pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232}]
-----
-
-Next we start the assign by queuing (‘registering’) the Procedure with the framework.
-
-[source]
-----
-2017-05-23 12:04:24,241 INFO [ProcExecWrkr-30] assignment.AssignProcedure: Start pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OFFLINE, location=ve0540.halxg.example.org,16020,1495525111232; forceNewPlan=false, retain=false
-----
-
-Track the running of Procedures in logs by tracing their process id -- here pid=1179.
-
-Next we move to the dispatch phase where we update the hbase:meta table, setting the region state to OPENING on server ve0540. We then dispatch an rpc to ve0540 asking it to open the region. Thereafter we suspend the Assign until we get a message back from ve0540 on whether it has opened the region successfully (or not).
-
-[source]
-----
-2017-05-23 12:04:24,494 INFO [ProcExecWrkr-38] assignment.RegionStateStore: pid=1179 updating hbase:meta row=IntegrationTestBigLinkedList,H\xE3@\x8D\x964\x9D\xDF\x8F@9\x0F\xC8\xCC\xC2,1495566261066.56f985a727afe80a184dac75fbf6860c., regionState=OPENING, regionLocation=ve0540.halxg.example.org,16020,1495525111232
-2017-05-23 12:04:24,498 INFO [ProcExecWrkr-38] assignment.RegionTransitionProcedure: Dispatch pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OPENING, location=ve0540.halxg.example.org,16020,1495525111232
-----
-
-Below we log the incoming report that the region opened successfully on ve0540. The Procedure is woken up (you can tell the procedure is running by the name of the thread; it's a ProcedureExecutor thread, ProcExecWrkr-9). The woken-up Procedure updates state in hbase:meta to denote the region as open on ve0540. It then reports finished and exits.
-
-[source]
-----
-2017-05-23 12:04:26,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] assignment.RegionTransitionProcedure: Received report OPENED seqId=11984985, pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OPENING, location=ve0540.halxg.example.org,16020,1495525111232 2017-05-23 12:04:26,643 INFO [ProcExecWrkr-9] assignment.RegionStateStore: pid=1179 updating hbase:meta row=IntegrationTestBigLinkedList,H\xE3@\x8D\x964\x9D\xDF\x8F@9\x0F\xC8\xCC\xC2,1495566261066.56f985a727afe80a184dac75fbf6860c., regionState=OPEN, openSeqNum=11984985, regionLocation=ve0540.halxg.example.org,16020,1495525111232
-2017-05-23 12:04:26,836 INFO [ProcExecWrkr-9] procedure2.ProcedureExecutor: Finish suprocedure pid=1179, ppid=1176, state=SUCCESS; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232
-----
-Unassign looks similar given it is based on the base RegionTransitionProcedure. It has the same state transitions and does basically the same steps, but with different state names (CLOSING, CLOSED).
-
-Most other procedures are subclasses of a Pv2 StateMachine implementation. We have both Table- and Region-focused StateMachine types.
-
-== UI
-
-Along the top-bar on the Master, you can now find a ‘Procedures&Locks’ tab which takes you to a page that is ugly but useful. It dumps currently running procedures and framework locks. Look at this when you can’t figure out what is stuck; it will at least identify problematic procedures (take the pid and grep the logs…). Look for ROLLEDBACK or pids that have been RUNNING for a long time.
-
-== Logging
-
-Procedures log their process ids as pid= and their parent ids (ppid=) everywhere. Work has been done so you can grep the pid and see the history of a procedure operation.
-
-== Implementation Notes
-
-In this section we note some idiosyncrasies of operation as an attempt at saving you some head-scratching.
-
-=== Region Transition RPC and RS Heartbeat can arrive at ~same time on Master
-
-Reporting a Region Transition on a RegionServer is now an RPC distinct from RS heartbeating (the ‘RegionServerServices’ Service). A heartbeat and a status update can arrive at the Master at about the same time. The Master will update its internal state for a Region, but this same state is checked when processing the heartbeat. We may find the unexpected; i.e. a Region just reported as CLOSED, so the heartbeat is surprised to find the region OPEN on the back of the RS report. In the new system, all RegionServers must defer to the Master’s understanding of cluster state; the Master will kill/close any misaligned entities.
-
-To address the above, we added a lastUpdate to the in-memory Master state. We let a region state acquire some vintage before we act on it (currently one second).
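-
-A minimal sketch of that check in Java, with invented field and method names (this is not the actual Master code): act on a conflicting report only once the in-memory state has aged past a threshold.
-
-[source,java]
-----
-public class ToyRegionStateNode {
-  private volatile long lastUpdate = System.currentTimeMillis();
-
-  /** Called whenever the Master updates its in-memory state for the region. */
-  void touch() {
-    lastUpdate = System.currentTimeMillis();
-  }
-
-  /** Only act on a conflicting report once the state has some vintage (e.g. 1000 ms). */
-  boolean oldEnoughToActOn(long minAgeMillis) {
-    return System.currentTimeMillis() - lastUpdate >= minAgeMillis;
-  }
-}
-----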
-
-=== Master as RegionServer or as RegionServer that just does system tables
-
-AMv2 enforces the current master-branch default of the HMaster carrying system tables only; i.e. the Master in an HBase cluster also acts as a RegionServer, but it is the exclusive host for core system tables such as _hbase:meta_ and _hbase:namespace_. This is causing a couple of test failures, as AMv1, though it is not supposed to, allows moving hbase:meta off the Master while AMv2 does not.
-
-== New Configs
-
-These configs all need doc on when you’d change them.
-
-=== hbase.procedure.remote.dispatcher.threadpool.size
-
-Default 128
-
-=== hbase.procedure.remote.dispatcher.delay.msec
-
-Default 150ms
-
-=== hbase.procedure.remote.dispatcher.max.queue.size
-
-Default 32
-
-=== hbase.regionserver.rpc.startup.waittime
-
-Default 60 seconds.
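-
-If you need to override the dispatcher settings programmatically, for example in a test, they are ordinary Configuration keys. The sketch below simply restates the defaults listed above; it is not a recommendation for particular values, and the helper class name is invented.
-
-[source,java]
-----
-import org.apache.hadoop.conf.Configuration;
-
-public class ProcedureDispatcherDefaults {
-  /** Restates the defaults documented above. */
-  public static Configuration apply(Configuration conf) {
-    conf.setInt("hbase.procedure.remote.dispatcher.threadpool.size", 128);
-    conf.setInt("hbase.procedure.remote.dispatcher.delay.msec", 150);
-    conf.setInt("hbase.procedure.remote.dispatcher.max.queue.size", 32);
-    return conf;
-  }
-}
-----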
-
-== Tools
-
-* link:https://issues.apache.org/jira/browse/HBASE-15592[HBASE-15592] Print Procedure WAL Content
-* Patch in https://issues.apache.org/jira/browse/HBASE-18152[HBASE-18152] [AMv2] Corrupt Procedure WAL file; procedure data stored out of order: https://issues.apache.org/jira/secure/attachment/12871066/reading_bad_wal.patch[reading_bad_wal.patch]
-
-=== MasterProcedureSchedulerPerformanceEvaluation
-
-Tool to test performance of locks and queues in the procedure scheduler independently from other framework components. Run this after any substantial changes in the proc system. It prints nice output:
-
-----
-******************************************
-Time - addBack : 5.0600sec
-Ops/sec - addBack : 1.9M
-Time - poll : 19.4590sec
-Ops/sec - poll : 501.9K
-Num Operations : 10000000
-
-Completed : 10000006
-Yield : 22025876
-
-Num Tables : 5
-Regions per table : 10
-Operations type : both
-Threads : 10
-******************************************
-Raw format for scripts
-
-RESULT [num_ops=10000000, ops_type=both, num_table=5, regions_per_table=10, threads=10, num_yield=22025876, time_addback_ms=5060, time_poll_ms=19459]
-----
diff --git a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
deleted file mode 100644
index cb17346d42c4..000000000000
--- a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
+++ /dev/null
@@ -1,181 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[appendix]
-[[appendix_acl_matrix]]
-== Access Control Matrix
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-:source-language: java
-
-The following matrix shows the permission set required to perform operations in HBase.
-Before using the table, read through the information about how to interpret it.
-
-.Interpreting the ACL Matrix Table
-The following conventions are used in the ACL Matrix table:
-
-=== Scopes
-Permissions are evaluated starting at the widest scope and working to the narrowest scope.
-
-A scope corresponds to a level of the data model. From broadest to narrowest, the scopes are as follows:
-
-.Scopes
-* Global
-* Namespace (NS)
-* Table
-* Column Family (CF)
-* Column Qualifier (CQ)
-* Cell
-
-For instance, a permission granted at table level dominates any grants done at the Column Family, Column Qualifier, or cell level. The user can do what that grant implies at any location in the table. A permission granted at global scope dominates all: the user is always allowed to take that action everywhere.
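-
-The following toy Java sketch (not the AccessController implementation; the names are invented) shows the widest-to-narrowest evaluation described above: a grant found at a wider scope authorizes the action before narrower scopes are consulted.
-
-[source,java]
-----
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-public class ToyAclCheck {
-  enum Scope { GLOBAL, NAMESPACE, TABLE, COLUMN_FAMILY, COLUMN_QUALIFIER, CELL }
-
-  /** Returns true as soon as the permission (A, C, W, R, or X) is found, widest scope first. */
-  static boolean isAuthorized(Map<Scope, Set<Character>> grants, char permission) {
-    for (Scope scope : List.of(Scope.GLOBAL, Scope.NAMESPACE, Scope.TABLE,
-        Scope.COLUMN_FAMILY, Scope.COLUMN_QUALIFIER, Scope.CELL)) {
-      Set<Character> granted = grants.get(scope);
-      if (granted != null && granted.contains(permission)) {
-        return true;
-      }
-    }
-    return false;
-  }
-}
-----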
-
-=== Permissions
-Possible permissions include the following:
-
-.Permissions
-* Superuser - a special user that belongs to group "supergroup" and has unlimited access
-* Admin (A)
-* Create \(C)
-* Write (W)
-* Read \(R)
-* Execute (X)
-
-For the most part, permissions work in an expected way, with the following caveats:
-
-Having Write permission does not imply Read permission.::
- It is possible and sometimes desirable for a user to be able to write data that same user cannot read. One such example is a log-writing process.
-The [systemitem]+hbase:meta+ table is readable by every user, regardless of the user's other grants or restrictions.::
- This is a requirement for HBase to function correctly.
-`CheckAndPut` and `CheckAndDelete` operations will fail if the user does not have both Write and Read permission.::
-`Increment` and `Append` operations do not require Read access.::
-The `superuser`, as the name suggests, has permissions to perform all possible operations.::
-For the operations marked with *, the checks are done in a post hook, and only the subset of results satisfying the access checks is returned to the user.::
-
-The following table is sorted by the interface that provides each operation.
-In case the table goes out of date, the unit tests which check for accuracy of permissions can be found in _hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java_, and the access controls themselves can be examined in _hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java_.
-
-.ACL Matrix
-[cols="1,1,1", frame="all", options="header"]
-|===
-| Interface | Operation | Permissions
-| Master | createTable | superuser\|global\(C)\|NS\(C)
-| | modifyTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | deleteTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | truncateTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | addColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | modifyColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)\|column(A)\|column\(C)
-| | deleteColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)\|column(A)\|column\(C)
-| | enableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | disableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | disableAclTable | Not allowed
-| | move | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
-| | assign | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
-| | unassign | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
-| | regionOffline | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
-| | balance | superuser\|global(A)
-| | balanceSwitch | superuser\|global(A)
-| | shutdown | superuser\|global(A)
-| | stopMaster | superuser\|global(A)
-| | snapshot | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
-| | listSnapshot | superuser\|global(A)\|SnapshotOwner
-| | cloneSnapshot | superuser\|global(A)\|(SnapshotOwner & TableName matches)
-| | restoreSnapshot | superuser\|global(A)\|SnapshotOwner & (NS(A)\|TableOwner\|table(A))
-| | deleteSnapshot | superuser\|global(A)\|SnapshotOwner
-| | createNamespace | superuser\|global(A)
-| | deleteNamespace | superuser\|global(A)
-| | modifyNamespace | superuser\|global(A)
-| | getNamespaceDescriptor | superuser\|global(A)\|NS(A)
-| | listNamespaceDescriptors* | superuser\|global(A)\|NS(A)
-| | flushTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | getTableDescriptors* | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
-| | getTableNames* | superuser\|TableOwner\|Any global or table perm
-| | setUserQuota(global level) | superuser\|global(A)
-| | setUserQuota(namespace level) | superuser\|global(A)
-| | setUserQuota(Table level) | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
-| | setTableQuota | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
-| | setNamespaceQuota | superuser\|global(A)
-| | addReplicationPeer | superuser\|global(A)
-| | removeReplicationPeer | superuser\|global(A)
-| | enableReplicationPeer | superuser\|global(A)
-| | disableReplicationPeer | superuser\|global(A)
-| | getReplicationPeerConfig | superuser\|global(A)
-| | updateReplicationPeerConfig | superuser\|global(A)
-| | listReplicationPeers | superuser\|global(A)
-| | getClusterStatus | any user
-| Region | openRegion | superuser\|global(A)
-| | closeRegion | superuser\|global(A)
-| | flush | superuser\|global(A)\|global\(C)\|TableOwner\|table(A)\|table\(C)
-| | split | superuser\|global(A)\|TableOwner\|table(A)
-| | compact | superuser\|global(A)\|global\(C)\|TableOwner\|table(A)\|table\(C)
-| | getClosestRowBefore | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
-| | getOp | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
-| | exists | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
-| | put | superuser\|global(W)\|NS(W)\|table(W)\|TableOwner\|CF(W)\|CQ(W)
-| | delete | superuser\|global(W)\|NS(W)\|table(W)\|TableOwner\|CF(W)\|CQ(W)
-| | batchMutate | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
-| | checkAndPut | superuser\|global(RW)\|NS(RW)\|TableOwner\|table(RW)\|CF(RW)\|CQ(RW)
-| | checkAndPutAfterRowLock | superuser\|global\(R)\|NS\(R)\|TableOwner\|Table\(R)\|CF\(R)\|CQ\(R)
-| | checkAndDelete | superuser\|global(RW)\|NS(RW)\|TableOwner\|table(RW)\|CF(RW)\|CQ(RW)
-| | checkAndDeleteAfterRowLock | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
-| | incrementColumnValue | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
-| | append | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
-| | appendAfterRowLock | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
-| | increment | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
-| | incrementAfterRowLock | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
-| | scannerOpen | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
-| | scannerNext | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
-| | scannerClose | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
-| | bulkLoadHFile | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
-| | prepareBulkLoad | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
-| | cleanupBulkLoad | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
-| Endpoint | invoke | superuser\|global(X)\|NS(X)\|TableOwner\|table(X)
-| AccessController | grant(global level) | global(A)
-| | grant(namespace level) | global(A)\|NS(A)
-| | grant(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
-| | revoke(global level) | global(A)
-| | revoke(namespace level) | global(A)\|NS(A)
-| | revoke(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
-| | getUserPermissions(global level) | global(A)
-| | getUserPermissions(namespace level) | global(A)\|NS(A)
-| | getUserPermissions(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
-| | hasPermission(table level) | global(A)\|SelfUserCheck
-| RegionServer | stopRegionServer | superuser\|global(A)
-| | mergeRegions | superuser\|global(A)
-| | rollWALWriterRequest | superuser\|global(A)
-| | replicateLogEntries | superuser\|global(W)
-|RSGroup |addRSGroup |superuser\|global(A)
-| |balanceRSGroup |superuser\|global(A)
-| |getRSGroupInfo |superuser\|global(A)
-| |getRSGroupInfoOfTable|superuser\|global(A)
-| |getRSGroupOfServer |superuser\|global(A)
-| |listRSGroups |superuser\|global(A)
-| |moveServers |superuser\|global(A)
-| |moveServersAndTables |superuser\|global(A)
-| |moveTables |superuser\|global(A)
-| |removeRSGroup |superuser\|global(A)
-| |removeServers |superuser\|global(A)
-|===
-
-:numbered:
diff --git a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
deleted file mode 100644
index a603c16f42b7..000000000000
--- a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
+++ /dev/null
@@ -1,441 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[appendix]
-[[appendix_contributing_to_documentation]]
-== Contributing to Documentation
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-:source-language: java
-
-The Apache HBase project welcomes contributions to all aspects of the project,
-including the documentation.
-
-In HBase, documentation includes the following areas, and probably some others:
-
-* The link:https://hbase.apache.org/book.html[HBase Reference
- Guide] (this book)
-* The link:https://hbase.apache.org/[HBase website]
-* API documentation
-* Command-line utility output and help text
-* Web UI strings, explicit help text, context-sensitive strings, and others
-* Log messages
-* Comments in source files, configuration files, and others
-* Localization of any of the above into target languages other than English
-
-No matter which area you want to help out with, the first step is almost always
-to download (typically by cloning the Git repository) and familiarize yourself
-with the HBase source code. For information on downloading and building the source,
-see <>.
-
-=== Contributing to Documentation or Other Strings
-
-If you spot an error in a string in a UI, utility, script, log message, or elsewhere,
-or you think something could be made more clear, or you think text needs to be added
-where it doesn't currently exist, the first step is to file a JIRA. Be sure to set
-the component to `Documentation` in addition to any other involved components. Most
-components have one or more default owners, who monitor new issues which come into
-those queues. Regardless of whether you feel able to fix the bug, you should still
-file bugs where you see them.
-
-If you want to try your hand at fixing your newly-filed bug, assign it to yourself.
-You will need to clone the HBase Git repository to your local system and work on
-the issue there. When you have developed a potential fix, submit it for review.
-If it addresses the issue and is seen as an improvement, one of the HBase committers
-will commit it to one or more branches, as appropriate.
-
-[[submit_doc_patch_procedure]]
-.Procedure: Suggested Workflow for Submitting Patches
-This procedure goes into more detail than Git pros will need, but is included
-in this appendix so that people unfamiliar with Git can feel confident contributing
-to HBase while they learn.
-
-. If you have not already done so, clone the Git repository locally.
- You only need to do this once.
-. Fairly often, pull remote changes into your local repository by using the
-`git pull` command, while your tracking branch is checked out.
-. For each issue you work on, create a new branch.
- One convention that works well for naming the branches is to name a given branch
- the same as the JIRA it relates to:
-+
-----
-$ git checkout -b HBASE-123456
-----
-
-. Make your suggested changes on your branch, committing your changes to your
-local repository often. If you need to switch to working on a different issue,
-remember to check out the appropriate branch.
-. When you are ready to submit your patch, first be sure that HBase builds cleanly
-and behaves as expected in your modified branch.
-. If you have made documentation changes, be sure the documentation and website
-build by running `mvn clean site`.
-. If it takes you several days or weeks to implement your fix, or you know that
-the area of the code you are working in has had a lot of changes lately, make
-sure you rebase your branch against the remote master and take care of any conflicts
-before submitting your patch.
-+
-----
-$ git checkout HBASE-123456
-$ git rebase origin/master
-----
-
-. Generate your patch against the remote master. Run the following command from
-the top level of your git repository (usually called `hbase`):
-+
-----
-$ git format-patch --stdout origin/master > HBASE-123456.patch
-----
-+
-The name of the patch should contain the JIRA ID.
-. Look over the patch file to be sure that you did not change any additional files
-by accident and that there are no other surprises.
-. When you are satisfied, attach the patch to the JIRA and click the
-btn:[Patch Available] button. A reviewer will review your patch.
-. If you need to submit a new version of the patch, leave the old one on the
-JIRA and add a version number to the name of the new patch.
-. After a change has been committed, there is no need to keep your local branch around.
-
-=== Editing the HBase Website
-
-The source for the HBase website is in the HBase source, in the _src/site/_ directory.
-Within this directory, source for the individual pages is in the _xdocs/_ directory,
-and images referenced in those pages are in the _resources/images/_ directory.
-This directory also stores images used in the HBase Reference Guide.
-
-The website's pages are written in an HTML-like XML dialect called xdoc, which
-has a reference guide at
-https://maven.apache.org/archives/maven-1.x/plugins/xdoc/reference/xdocs.html.
-You can edit these files in a plain-text editor, an IDE, or an XML editor such
-as XML Mind XML Editor (XXE) or Oxygen XML Author.
-
-To preview your changes, build the website using the `mvn clean site -DskipTests`
-command. The HTML output resides in the _target/site/_ directory.
-When you are satisfied with your changes, follow the procedure in
-<> to submit your patch.
-
-[[website_publish]]
-=== Publishing the HBase Website and Documentation
-
-HBase uses the ASF's `gitpubsub` mechanism. A Jenkins job runs the
-`dev-support/jenkins-scripts/generate-hbase-website.sh` script, which runs the
-`mvn clean site site:stage` against the `master` branch of the `hbase`
-repository and commits the built artifacts to the `asf-site` branch of the
-`hbase-site` repository. When the commit is pushed, the website is redeployed
-automatically. If the script encounters an error, an email is sent to the
-developer mailing list. You can run the script manually or examine it to see the
-steps involved.
-
-[[website_check_links]]
-=== Checking the HBase Website for Broken Links
-
-A Jenkins job runs periodically to check the HBase website for broken links, using
-the `dev-support/jenkins-scripts/check-website-links.sh` script. This script
-uses a tool called `linklint` to check for bad links and create a report. If
-broken links are found, an email is sent to the developer mailing list. You can
-run the script manually or examine it to see the steps involved.
-
-=== HBase Reference Guide Style Guide and Cheat Sheet
-
-The HBase Reference Guide is written in Asciidoc and built using link:http://asciidoctor.org[AsciiDoctor].
-The following cheat sheet is included for your reference. More nuanced and comprehensive documentation
-is available at http://asciidoctor.org/docs/user-manual/.
-
-.AsciiDoc Cheat Sheet
-[cols="1,1,a",options="header"]
-|===
-| Element Type | Desired Rendering | How to do it
-| A paragraph | a paragraph | Just type some text with a blank line at the top and bottom.
-| Add line breaks within a paragraph without adding blank lines | Manual line breaks | This will break + at the plus sign. Or prefix the whole paragraph with a line containing '[%hardbreaks]'
-| Give a title to anything | Colored italic bold differently-sized text | .MyTitle (no space between the period and the words) on the line before the thing to be titled
-| In-Line Code or commands | monospace | \`text`
-| In-line literal content (things to be typed exactly as shown) | bold mono | \*\`typethis`*
-| In-line replaceable content (things to substitute with your own values) | bold italic mono | \*\_typesomething_*
-| Code blocks with highlighting | monospace, highlighted, preserve space |
-........
-[source,java]
-----
- myAwesomeCode() {
-}
-----
-........
-| Code block included from a separate file | included just as though it were part of the main file |
-................
-[source,ruby]
-----
-include\::path/to/app.rb[]
-----
-................
-| Include only part of a separate file | Similar to Javadoc
-| See http://asciidoctor.org/docs/user-manual/#by-tagged-regions
-| Filenames, directory names, new terms | italic | \_hbase-default.xml_
-| External naked URLs | A link with the URL as link text |
-----
-link:http://www.google.com
-----
-
-| External URLs with text | A link with arbitrary link text |
-----
-link:http://www.google.com[Google]
-----
-
-| Create an internal anchor to cross-reference | not rendered |
-----
-[[anchor_name]]
-----
-| Cross-reference an existing anchor using its default title| an internal hyperlink using the element title if available, otherwise using the anchor name |
-----
-<>
-----
-| Cross-reference an existing anchor using custom text | an internal hyperlink using arbitrary text |
-----
-<>
-----
-| A block image | The image with alt text |
-----
-image::sunset.jpg[Alt Text]
-----
-(put the image in the src/site/resources/images directory)
-| An inline image | The image with alt text, as part of the text flow |
-----
-image:sunset.jpg [Alt Text]
-----
-(only one colon)
-| Link to a remote image | show an image hosted elsewhere |
-----
-image::http://inkscape.org/doc/examples/tux.svg[Tux,250,350]
-----
-(or `image:`)
-| Add dimensions or a URL to the image | depends | inside the brackets after the alt text, specify width, height and/or link="http://my_link.com"
-| A footnote | subscript link which takes you to the footnote |
-----
-Some text.footnote:[The footnote text.]
-----
-| A note or warning with no title | The admonition image followed by the admonition |
-----
-NOTE: My note here
-----
-
-----
-WARNING: My warning here
-----
-| A complex note | The note has a title and/or multiple paragraphs and/or code blocks or lists, etc |
-........
-.The Title
-[NOTE]
-====
-Here is the note text. Everything until the second set of four equals signs is part of the note.
-----
-some source code
-----
-====
-........
-| Bullet lists | bullet lists |
-----
-* list item 1
-----
-(see http://asciidoctor.org/docs/user-manual/#unordered-lists)
-| Numbered lists | numbered list |
-----
-. list item 2
-----
-(see http://asciidoctor.org/docs/user-manual/#ordered-lists)
-| Checklists | Checked or unchecked boxes |
-Checked:
-----
-- [*]
-----
-Unchecked:
-----
-- [ ]
-----
-| Multiple levels of lists | bulleted or numbered or combo |
-----
-. Numbered (1), at top level
-* Bullet (2), nested under 1
-* Bullet (3), nested under 1
-. Numbered (4), at top level
-* Bullet (5), nested under 4
-** Bullet (6), nested under 5
-- [x] Checked (7), at top level
-----
-| Labelled lists / variablelists | a list item title or summary followed by content |
-----
-Title:: content
-
-Title::
- content
-----
-| Sidebars, quotes, or other blocks of text
-| a block of text, formatted differently from the default
-| Delimited using different delimiters,
-see http://asciidoctor.org/docs/user-manual/#built-in-blocks-summary.
-Some of the examples above use delimiters like \...., ----, and ====.
-........
-[example]
-====
-This is an example block.
-====
-
-[source]
-----
-This is a source block.
-----
-
-[note]
-====
-This is a note block.
-====
-
-[quote]
-____
-This is a quote block.
-____
-........
-
-If you want to insert literal Asciidoc content that would otherwise keep being interpreted, when in doubt, use eight dots as the delimiter at the top and bottom.
-| Nested Sections | chapter, section, sub-section, etc |
-----
-= Book (or chapter if the chapter can be built alone, see the leveloffset info below)
-
-== Chapter (or section if the chapter is standalone)
-
-=== Section (or subsection, etc)
-
-==== Subsection
-----
-
-and so on up to 6 levels (think carefully about going deeper than 4 levels; maybe you can just use titled paragraphs or lists instead). Note that you can include a book inside another book by adding the `:leveloffset:+1` macro directive directly before your include, and resetting it to 0 directly after. See the _book.adoc_ source for examples, as this is how this guide handles chapters. *Don't do it for prefaces, glossaries, appendixes, or other special types of chapters.*
-
-| Include one file from another | Content is included as though it were inline |
-
-----
-include\::/path/to/file.adoc[]
-----
-
-For plenty of examples, see _book.adoc_.
-| A table | a table | See http://asciidoctor.org/docs/user-manual/#tables. Generally rows are separated by newlines and columns by pipes
-| Comment out a single line | A line is skipped during rendering |
-`+//+ This line won't show up`
-| Comment out a block | A section of the file is skipped during rendering |
-----
-////
-Nothing between the slashes will show up.
-////
-----
-| Highlight text for review | text shows up with yellow background |
-----
-Text between #hash marks# is highlighted yellow.
-----
-|===
-
-
-=== Auto-Generated Content
-
-Some parts of the HBase Reference Guide, most notably <>,
-are generated automatically, so that this area of the documentation stays in
-sync with the code. This is done by means of an XSLT transform, which you can examine
-in the source at _src/main/xslt/configuration_to_asciidoc_chapter.xsl_. This
-transforms the _hbase-common/src/main/resources/hbase-default.xml_ file into an
-Asciidoc output which can be included in the Reference Guide.
-
-Sometimes, it is necessary to add configuration parameters or modify their descriptions.
-Make the modifications to the source file, and they will be included in the
-Reference Guide when it is rebuilt.
-
-It is possible that other types of content can and will be automatically generated
-from HBase source files in the future.
-
-=== Images in the HBase Reference Guide
-
-You can include images in the HBase Reference Guide. It is important to include
-an image title if possible, and alternate text always. This allows screen readers
-to navigate to the image and also provides alternative text for the image.
-The following is an example of an image with a title and alternate text. Notice
-the double colon.
-
-[source,asciidoc]
-----
-.My Image Title
-image::sunset.jpg[Alt Text]
-----
-
-Here is an example of an inline image with alternate text. Notice the single colon.
-Inline images cannot have titles. They are generally small images like GUI buttons.
-
-[source,asciidoc]
-----
-image:sunset.jpg[Alt Text]
-----
-
-When doing a local build, save the image to the _src/site/resources/images/_ directory.
-When you link to the image, do not include the directory portion of the path.
-The image will be copied to the appropriate target location during the build of the output.
-
-When you submit a patch which includes adding an image to the HBase Reference Guide,
-attach the image to the JIRA. If the committer asks where the image should be
-committed, it should go into the above directory.
-
-=== Adding a New Chapter to the HBase Reference Guide
-
-If you want to add a new chapter to the HBase Reference Guide, the easiest way
-is to copy an existing chapter file, rename it, and change the ID (in double
-brackets) and title. Chapters are located in the _src/main/asciidoc/_chapters/_
-directory.
-
-Delete the existing content and create the new content. Then open the
-_src/main/asciidoc/book.adoc_ file, which is the main file for the HBase Reference
-Guide, and copy an existing `include` element to include your new chapter in the
-appropriate location. Be sure to add your new file to your Git repository before
-creating your patch.
-
-When in doubt, check to see how other files have been included.
-
-=== Common Documentation Issues
-
-The following documentation issues come up often. Some of these are preferences,
-but others can create mysterious build errors or other problems.
-
-[qanda]
-Isolate Changes for Easy Diff Review.::
- Be careful with pretty-printing or re-formatting an entire XML file, even if
- the formatting has degraded over time. If you need to reformat a file, do that
- in a separate JIRA where you do not change any content. Be careful because some
- XML editors do a bulk-reformat when you open a new file, especially if you use
- GUI mode in the editor.
-
-Syntax Highlighting::
- The HBase Reference Guide uses `coderay` for syntax highlighting. To enable
- syntax highlighting for a given code listing, use the following type of syntax:
-+
-........
-[source,xml]
-----
-<name>My Name</name>
-----
-........
-+
-Several syntax types are supported. The most interesting ones for the HBase
-Reference Guide are `java`, `xml`, `sql`, and `bash`.
-
diff --git a/src/main/asciidoc/_chapters/appendix_hbase_incompatibilities.adoc b/src/main/asciidoc/_chapters/appendix_hbase_incompatibilities.adoc
deleted file mode 100644
index dfdd1362803c..000000000000
--- a/src/main/asciidoc/_chapters/appendix_hbase_incompatibilities.adoc
+++ /dev/null
@@ -1,714 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[appendix]
-== Known Incompatibilities Among HBase Versions
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-:source-language: java
-
-== HBase 2.0 Incompatible Changes
-
-This appendix describes incompatible changes from earlier versions of HBase against HBase 2.0.
-This list is not meant to be wholly encompassing of all possible incompatibilities.
-Instead, this content is intended to give insight into some obvious incompatibilities which most
-users will face coming from HBase 1.x releases.
-
-=== List of Major Changes for HBase 2.0
-* HBASE-1912 - HBCK is an HBase database checking tool for capturing inconsistencies. As an HBase administrator, you should not use the HBase 1.0 hbck tool to check an HBase 2.0 database. Doing so will break the database and throw an exception.
-* HBASE-16189 and HBASE-18945 - You cannot open HBase 2.0 hfiles with HBase 1.0. If you are an admin or an HBase user who is using HBase version 1.x, you must first do a rolling upgrade to the latest version of HBase 1.x and then upgrade to HBase 2.0.
-* HBASE-18240 - Changed the ReplicationEndpoint Interface. It also introduces hbase-thirdparty 1.0, which packages all the third-party utilities that are expected to run in the HBase cluster.
-
-=== Coprocessor API changes
-
-* HBASE-16769 - Deprecated PB references from MasterObserver and RegionServerObserver.
-* HBASE-17312 - [JDK8] Use default method for Observer Coprocessors. The interface classes BaseMasterAndRegionObserver, BaseMasterObserver, BaseRegionObserver, BaseRegionServerObserver and BaseWALObserver use JDK8's 'default' keyword to provide empty, no-op implementations (see the sketch after this list).
-* Interface HTableInterface
-  HBase 2.0 introduces the following changes to the methods listed below:
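-
-Here is a minimal sketch of the JDK8 default-method pattern referred to above. The interface and method names are invented for illustration and are not the actual HBase Observer interfaces.
-
-[source,java]
-----
-// An observer-style interface whose callbacks have empty default bodies,
-// so an implementor only overrides the hooks it cares about.
-public interface ToyObserver {
-  default void preOperation(String table) {
-    // no-op by default
-  }
-
-  default void postOperation(String table) {
-    // no-op by default
-  }
-}
-
-// Overrides a single hook; still compiles if the interface later grows new default methods.
-class LoggingObserver implements ToyObserver {
-  @Override
-  public void postOperation(String table) {
-    System.out.println("operation completed on " + table);
-  }
-}
-----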
-
-==== [−] interface CoprocessorEnvironment changes (2)
-
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method getTable ( TableName ) has been removed. | A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getTable ( TableName, ExecutorService ) has been removed. | A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-* Public Audience
-
-The following tables describe the coprocessor changes.
-
-===== [−] class CoprocessorRpcChannel (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| This class has become an interface.| A client program may be interrupted by IncompatibleClassChangeError or InstantiationError exception depending on the usage of this class.
-|===
-
-===== Class CoprocessorHost
-Classes that were Audience Private but were removed.
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Type of field coprocessors has been changed from java.util.SortedSet to org.apache.hadoop.hbase.util.SortedList.| A client program may be interrupted by NoSuchFieldError exception.
-|===
-
-
-==== MasterObserver
-HBase 2.0 introduces the following changes to the MasterObserver interface.
-
-===== [−] interface MasterObserver (14)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method void postCloneSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postCreateTable ( ObserverContext, HTableDescriptor, HRegionInfo[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postDeleteSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postGetTableDescriptors ( ObserverContext, List ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postModifyTable ( ObserverContext, TableName, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postRestoreSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preCloneSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preCreateTable ( ObserverContext, HTableDescriptor, HRegionInfo[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preDeleteSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preGetTableDescriptors ( ObserverContext, List, List ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preModifyTable ( ObserverContext, TableName, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preRestoreSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preSnapshot ( ObserverContext, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== RegionObserver
-HBase 2.0 introduces the following changes to the RegionObserver interface.
-
-===== [−] interface RegionObserver (13)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method void postCloseRegionOperation ( ObserverContext, HRegion.Operation ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postCompactSelection ( ObserverContext, Store, ImmutableList ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postCompactSelection ( ObserverContext, Store, ImmutableList, CompactionRequest ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postGetClosestRowBefore ( ObserverContext, byte[ ], byte[ ], Result ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method DeleteTracker postInstantiateDeleteTracker ( ObserverContext, DeleteTracker ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postSplit ( ObserverContext, HRegion, HRegion ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postStartRegionOperation ( ObserverContext, HRegion.Operation ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method StoreFile.Reader postStoreFileReaderOpen ( ObserverContext, FileSystem, Path, FSDataInputStreamWrapper, long, CacheConfig, Reference, StoreFile.Reader ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void postWALRestore ( ObserverContext, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method InternalScanner preFlushScannerOpen ( ObserverContext, Store, KeyValueScanner, InternalScanner ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preGetClosestRowBefore ( ObserverContext, byte[ ], byte[ ], Result ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method StoreFile.Reader preStoreFileReaderOpen ( ObserverContext, FileSystem, Path, FSDataInputStreamWrapper, long, CacheConfig, Reference, StoreFile.Reader ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method void preWALRestore ( ObserverContext, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== WALObserver
-HBase 2.0 introduces the following changes to the WALObserver interface.
-
-===== [−] interface WALObserver
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method void postWALWrite ( ObserverContext, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method boolean preWALWrite ( ObserverContext, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== Miscellaneous
-HBase 2.0 introduces changes to the following classes:
-
-hbase-server-1.0.0.jar, OnlineRegions.class package org.apache.hadoop.hbase.regionserver
-===== [−] OnlineRegions.getFromOnlineRegions ( String p1 ) [abstract] : HRegion
-org/apache/hadoop/hbase/regionserver/OnlineRegions.getFromOnlineRegions:(Ljava/lang/String;)Lorg/apache/hadoop/hbase/regionserver/HRegion;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from HRegion to Region.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-hbase-server-1.0.0.jar, RegionCoprocessorEnvironment.class package org.apache.hadoop.hbase.coprocessor
-
-===== [−] RegionCoprocessorEnvironment.getRegion ( ) [abstract] : HRegion
-org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.getRegion:()Lorg/apache/hadoop/hbase/regionserver/HRegion;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.regionserver.HRegion to org.apache.hadoop.hbase.regionserver.Region.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-hbase-server-1.0.0.jar, RegionCoprocessorHost.class package org.apache.hadoop.hbase.regionserver
-
-===== [−] RegionCoprocessorHost.postAppend ( Append append, Result result ) : void
-org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.postAppend:(Lorg/apache/hadoop/hbase/client/Append;Lorg/apache/hadoop/hbase/client/Result;)V
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from void to org.apache.hadoop.hbase.client.Result.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−] RegionCoprocessorHost.preStoreFileReaderOpen ( FileSystem fs, Path p, FSDataInputStreamWrapper in, long size,CacheConfig cacheConf, Reference r ) : StoreFile.Reader
-org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.preStoreFileReaderOpen:(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/hbase/io/FSDataInputStreamWrapper;JLorg/apache/hadoop/hbase/io/hfile/CacheConfig;Lorg/apache/hadoop/hbase/io/Reference;)Lorg/apache/hadoop/hbase/regionserver/StoreFile$Reader;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from StoreFile.Reader to StoreFileReader.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== IPC
-==== Scheduler changes:
-1. The following methods became abstract:
-
-package org.apache.hadoop.hbase.ipc
-
-===== [−] class RpcScheduler (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method void dispatch ( CallRunner ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-hbase-server-1.0.0.jar, RpcScheduler.class package org.apache.hadoop.hbase.ipc
-
-===== [−] RpcScheduler.dispatch ( CallRunner p1 ) [abstract] : void
-org/apache/hadoop/hbase/ipc/RpcScheduler.dispatch:(Lorg/apache/hadoop/hbase/ipc/CallRunner;)V
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from void to boolean.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-1. The following abstract methods have been removed:
-
-===== [−] interface PriorityFunction (2)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method long getDeadline ( RPCProtos.RequestHeader, Message ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method int getPriority ( RPCProtos.RequestHeader, Message ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== Server API changes:
-
-===== [−] class RpcServer (12)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Type of field CurCall has been changed from java.lang.ThreadLocal to java.lang.ThreadLocal.| A client program may be interrupted by NoSuchFieldError exception.
-| This class became abstract.| A client program may be interrupted by InstantiationError exception.
-| Abstract method int getNumOpenConnections ( ) has been added to this class.| This class became abstract and a client program may be interrupted by InstantiationError exception.
-| Field callQueueSize of type org.apache.hadoop.hbase.util.Counter has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field connectionList of type java.util.List has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field maxIdleTime of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field numConnections of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field port of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field purgeTimeout of type long has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field responder of type RpcServer.Responder has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field socketSendBufferSize of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field thresholdIdleConnections of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-|===
-
-Following abstract method has been removed:
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method Pair call ( BlockingService, Descriptors.MethodDescriptor, Message, CellScanner, long, MonitoredRPCHandler ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== Replication and WAL changes:
-HBASE-18733: WALKey has been purged completely in HBase 2.0.
-Following are the changes to the WALKey:
-
-===== [−] class WALKey (8)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Access level of field clusterIds has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
-| Access level of field compressionContext has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
-| Access level of field encodedRegionName has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
-| Access level of field tablename has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
-| Access level of field writeTime has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
-|===
-
-Following fields have been removed:
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Field LOG of type org.apache.commons.logging.Log has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field VERSION of type WALKey.Version has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field logSeqNum of type long has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-|===
-
-Following are the changes to the WALEdit.class:
-hbase-server-1.0.0.jar, WALEdit.class package org.apache.hadoop.hbase.regionserver.wal
-
-===== WALEdit.getCompaction ( Cell kv ) [static] : WALProtos.CompactionDescriptor (1)
-org/apache/hadoop/hbase/regionserver/wal/WALEdit.getCompaction:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$CompactionDescriptor;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.CompactionDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== WALEdit.getFlushDescriptor ( Cell cell ) [static] : WALProtos.FlushDescriptor (1)
-org/apache/hadoop/hbase/regionserver/wal/WALEdit.getFlushDescriptor:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$FlushDescriptor;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.FlushDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== WALEdit.getRegionEventDescriptor ( Cell cell ) [static] : WALProtos.RegionEventDescriptor (1)
-org/apache/hadoop/hbase/regionserver/wal/WALEdit.getRegionEventDescriptor:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$RegionEventDescriptor;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.RegionEventDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-Following is the change to the WALKey.class:
-package org.apache.hadoop.hbase.wal
-
-===== WALKey.getBuilder ( WALCellCodec.ByteStringCompressor compressor ) : WALProtos.WALKey.Builder 1
-org/apache/hadoop/hbase/wal/WALKey.getBuilder:(Lorg/apache/hadoop/hbase/regionserver/wal/WALCellCodec$ByteStringCompressor;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$WALKey$Builder;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.WALKey.Builder to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.WALKey.Builder.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== Deprecated APIs or coprocessor:
-
-HBASE-16769 - PB references from MasterObserver and RegionServerObserver have been removed.
-
-==== Admin Interface API changes:
-You cannot administer an HBase 2.0 cluster with an HBase 1.0 client; this includes ReplicationAdmin, ACC, and the Thrift and REST usage of Admin operations. Methods that returned protobufs have been changed to return POJOs instead; protobuf is no longer used in these APIs. Async methods now return a Future instead of void, as the sketch below shows.
-HBASE-18106 - Admin.listProcedures and Admin.listLocks were renamed to getProcedures and getLocks.
-MapReduce makes use of Admin, calling admin.getClusterStatus() to calculate splits.
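-
-A minimal sketch of the new asynchronous return style (a hedged example, assuming an existing `Configuration` named `conf`; error handling is elided):
-
-[source,java]
-----
-try (Connection connection = ConnectionFactory.createConnection(conf);
-     Admin admin = connection.getAdmin()) {
-  Future<Void> disabled = admin.disableTableAsync(TableName.valueOf("myTable"));
-  disabled.get(); // block until the disable completes on the server side
-}
-----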
-
-* Thrift usage of Admin API:
-compact(ByteBuffer)
-createTable(ByteBuffer, List)
-deleteTable(ByteBuffer)
-disableTable(ByteBuffer)
-enableTable(ByteBuffer)
-getTableNames()
-majorCompact(ByteBuffer)
-
-* REST usage of Admin API:
-hbase-rest
-org.apache.hadoop.hbase.rest
-RootResource
-getTableList()
- TableName[] tableNames = servlet.getAdmin().listTableNames();
-SchemaResource
-delete(UriInfo)
- Admin admin = servlet.getAdmin();
-update(TableSchemaModel, boolean, UriInfo)
- Admin admin = servlet.getAdmin();
-StorageClusterStatusResource
-get(UriInfo)
- ClusterStatus status = servlet.getAdmin().getClusterStatus();
-StorageClusterVersionResource
-get(UriInfo)
- model.setVersion(servlet.getAdmin().getClusterStatus().getHBaseVersion());
-TableResource
-exists()
- return servlet.getAdmin().tableExists(TableName.valueOf(table));
-
-Following are the changes to the Admin interface:
-
-===== [−] interface Admin (9)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method createTableAsync ( HTableDescriptor, byte[ ][ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method disableTableAsync ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method enableTableAsync ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getCompactionState ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getCompactionStateForRegion ( byte[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method isSnapshotFinished ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method snapshot ( String, TableName, HBaseProtos.SnapshotDescription.Type ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method snapshot ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method takeSnapshotAsync ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-Following are the changes to the Admin.class:
-hbase-client-1.0.0.jar, Admin.class package org.apache.hadoop.hbase.client
-
-===== [−] Admin.createTableAsync ( HTableDescriptor p1, byte[ ][ ] p2 ) [abstract] : void 1
-org/apache/hadoop/hbase/client/Admin.createTableAsync:(Lorg/apache/hadoop/hbase/HTableDescriptor;[[B)V
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from void to java.util.concurrent.Future.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−] Admin.disableTableAsync ( TableName p1 ) [abstract] : void 1
-org/apache/hadoop/hbase/client/Admin.disableTableAsync:(Lorg/apache/hadoop/hbase/TableName;)V
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from void to java.util.concurrent.Future.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== Admin.enableTableAsync ( TableName p1 ) [abstract] : void 1
-org/apache/hadoop/hbase/client/Admin.enableTableAsync:(Lorg/apache/hadoop/hbase/TableName;)V
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from void to java.util.concurrent.Future.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−] Admin.getCompactionState ( TableName p1 ) [abstract] : AdminProtos.GetRegionInfoResponse.CompactionState 1
-org/apache/hadoop/hbase/client/Admin.getCompactionState:(Lorg/apache/hadoop/hbase/TableName;)Lorg/apache/hadoop/hbase/protobuf/generated/AdminProtos$GetRegionInfoResponse$CompactionState;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState to CompactionState.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−] Admin.getCompactionStateForRegion ( byte[ ] p1 ) [abstract] : AdminProtos.GetRegionInfoResponse.CompactionState 1
-org/apache/hadoop/hbase/client/Admin.getCompactionStateForRegion:([B)Lorg/apache/hadoop/hbase/protobuf/generated/AdminProtos$GetRegionInfoResponse$CompactionState;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState to CompactionState.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== HTableDescriptor and HColumnDescriptor changes
-HTableDescriptor and HColumnDescriptor have become interfaces that you create through builders (see the sketch below). HColumnDescriptor (HCD) has become ColumnFamilyDescriptor (CFD). They no longer implement the Writable interface.
-package org.apache.hadoop.hbase
-
-===== [−] class HColumnDescriptor (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Removed super-interface org.apache.hadoop.io.WritableComparable.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-HColumnDescriptor in 1.0.0
-[source,java]
-----
-@InterfaceAudience.Public
-@InterfaceStability.Evolving
-public class HColumnDescriptor implements WritableComparable {
-----
-
-HColumnDescriptor in 2.0
-[source,java]
-----
-@InterfaceAudience.Public
-@Deprecated // remove it in 3.0
-public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable {
-----
-
-The maker method for META_TABLEDESC had already been deprecated in HTableDescriptor (HTD) in 1.0.0. OWNER_KEY is still present in HTD.
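-
-A hedged sketch of the builder pattern that replaces direct construction (table and column family names are examples only, and `admin` is assumed to be an open Admin instance):
-
-[source,java]
-----
-TableDescriptor tableDescriptor =
-  TableDescriptorBuilder.newBuilder(TableName.valueOf("myTable"))
-    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
-    .build();
-admin.createTable(tableDescriptor);
-----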
-
-===== class HTableDescriptor (3)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Removed super-interface org.apache.hadoop.io.WritableComparable.| A client program may be interrupted by NoSuchMethodError exception.
-| Field META_TABLEDESC of type HTableDescriptor has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-|===
-
-hbase-client-1.0.0.jar, HTableDescriptor.class package org.apache.hadoop.hbase
-
-===== [−] HTableDescriptor.getColumnFamilies ( ) : HColumnDescriptor[ ] (1)
-org/apache/hadoop/hbase/HTableDescriptor.getColumnFamilies:()[Lorg/apache/hadoop/hbase/HColumnDescriptor;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from HColumnDescriptor[] to client.ColumnFamilyDescriptor[].| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−] HTableDescriptor.getCoprocessors ( ) : List (1)
-org/apache/hadoop/hbase/HTableDescriptor.getCoprocessors:()Ljava/util/List;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from java.util.List to java.util.Collection.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-* HBASE-12990 MetaScanner is removed and it is replaced by MetaTableAccessor.
-
-===== HTableWrapper changes:
-hbase-server-1.0.0.jar, HTableWrapper.class package org.apache.hadoop.hbase.client
-
-===== [−] HTableWrapper.createWrapper ( List openTables, TableName tableName, CoprocessorHost.Environment env, ExecutorService pool ) [static] : HTableInterface 1
-org/apache/hadoop/hbase/client/HTableWrapper.createWrapper:(Ljava/util/List;Lorg/apache/hadoop/hbase/TableName;Lorg/apache/hadoop/hbase/coprocessor/CoprocessorHost$Environment;Ljava/util/concurrent/ExecutorService;)Lorg/apache/hadoop/hbase/client/HTableInterface;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from HTableInterface to Table.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-* HBASE-12586: Delete all public HTable constructors and delete ConnectionManager#{delete,get}Connection.
-* HBASE-9117: Remove HTablePool and all HConnection pooling related APIs.
-* HBASE-13214: Remove deprecated and unused methods from HTable class
-Following are the changes to the Table interface:
-
-===== [−] interface Table (4)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method batch ( List> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method batchCallback ( List>, Batch.Callback ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getWriteBufferSize ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method setWriteBufferSize ( long ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== Buffer methods deprecated in Table (in 1.0.1) and removed in 2.0.0
-
-* HBASE-13298- Clarify if Table.{set|get}WriteBufferSize() is deprecated or not.
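-
-For code that relied on the removed write-buffer methods, BufferedMutator is the replacement; a minimal sketch (assuming an existing `Configuration` named `conf`; names and sizes are examples only):
-
-[source,java]
-----
-BufferedMutatorParams params =
-  new BufferedMutatorParams(TableName.valueOf("myTable")).writeBufferSize(4 * 1024 * 1024);
-try (Connection connection = ConnectionFactory.createConnection(conf);
-     BufferedMutator mutator = connection.getBufferedMutator(params)) {
-  mutator.mutate(new Put(Bytes.toBytes("row1"))
-      .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
-} // closing the mutator flushes any buffered mutations
-----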
-
-* LockTimeoutException and OperationConflictException classes have been removed.
-
-==== class OperationConflictException (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| This class has been removed.| A client program may be interrupted by NoClassDefFoundError exception.
-|===
-
-==== class LockTimeoutException (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| This class has been removed.| A client program may be interrupted by NoClassDefFoundError exception.
-|===
-
-==== Filter API changes:
-Following methods have been removed:
-package org.apache.hadoop.hbase.filter
-
-===== [−] class Filter (2)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method getNextKeyHint ( KeyValue ) has been removed from this class.|A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method transform ( KeyValue ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-* HBASE-12296 Filters should work with ByteBufferedCell.
-* HConnection is removed in HBase 2.0.
-* RegionLoad and ServerLoad internally moved to shaded PB.
-
-===== [−] class RegionLoad (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Type of field regionLoadPB has been changed from protobuf.generated.ClusterStatusProtos.RegionLoad to shaded.protobuf.generated.ClusterStatusProtos.RegionLoad.|A client program may be interrupted by NoSuchFieldError exception.
-|===
-
-* HBASE-15783: AccessControlConstants#OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST is not used anymore.
-package org.apache.hadoop.hbase.security.access
-
-===== [−] interface AccessControlConstants (3)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Field OP_ATTRIBUTE_ACL_STRATEGY of type java.lang.String has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
-| Field OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST of type byte[] has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
-| Field OP_ATTRIBUTE_ACL_STRATEGY_DEFAULT of type byte[] has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
-|===
-
-===== ServerLoad returns long instead of int 1
-hbase-client-1.0.0.jar, ServerLoad.class package org.apache.hadoop.hbase
-
-===== [−] ServerLoad.getNumberOfRequests ( ) : int 1
-org/apache/hadoop/hbase/ServerLoad.getNumberOfRequests:()I
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−] ServerLoad.getReadRequestsCount ( ) : int 1
-org/apache/hadoop/hbase/ServerLoad.getReadRequestsCount:()I
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−] ServerLoad.getTotalNumberOfRequests ( ) : int 1
-org/apache/hadoop/hbase/ServerLoad.getTotalNumberOfRequests:()I
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from int to long.|This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-===== [−]ServerLoad.getWriteRequestsCount ( ) : int 1
-org/apache/hadoop/hbase/ServerLoad.getWriteRequestsCount:()I
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-* HBASE-13636 Remove deprecation for HBASE-4072 (Reading of zoo.cfg)
-* Some HConstants fields have been removed, as listed below. HBASE-16040: Remove configuration "hbase.replication"
-
-===== [−] class HConstants (6)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Field DEFAULT_HBASE_CONFIG_READ_ZOOKEEPER_CONFIG of type boolean has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field HBASE_CONFIG_READ_ZOOKEEPER_CONFIG of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field REPLICATION_ENABLE_DEFAULT of type boolean has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field REPLICATION_ENABLE_KEY of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field ZOOKEEPER_CONFIG_NAME of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-| Field ZOOKEEPER_USEMULTI of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
-|===
-
-* HBASE-18732: [compat 1-2] HBASE-14047 removed Cell methods without deprecation cycle.
-
-===== [−] interface Cell (5)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method getFamily ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getMvccVersion ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getQualifier ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getRow ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-| Abstract method getValue ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
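-
-Code that used the removed copying accessors can switch to the CellUtil helpers; a minimal sketch (assuming a `Cell` named `cell` is in hand):
-
-[source,java]
-----
-byte[] row = CellUtil.cloneRow(cell);             // replaces cell.getRow()
-byte[] family = CellUtil.cloneFamily(cell);       // replaces cell.getFamily()
-byte[] qualifier = CellUtil.cloneQualifier(cell); // replaces cell.getQualifier()
-byte[] value = CellUtil.cloneValue(cell);         // replaces cell.getValue()
-----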
-
-* HBASE-18795: Expose KeyValue.getBuffer() for tests alone. KeyValue#getBuffer, which was deprecated previously, is now allowed in tests only.
-
-==== Region scanner changes:
-===== [−] interface RegionScanner (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Abstract method boolean nextRaw ( List, int ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== StoreFile changes:
-===== [−] class StoreFile (1)
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| This class became interface.| A client program may be interrupted by IncompatibleClassChangeError or InstantiationError exception dependent on the usage of this class.
-|===
-
-==== MapReduce changes:
-HFile*Format has been removed in HBase 2.0.
-
-==== ClusterStatus changes:
-HBASE-15843: Replace RegionState.getRegionInTransition() Map with a Set
-hbase-client-1.0.0.jar, ClusterStatus.class package org.apache.hadoop.hbase
-
-===== [−] ClusterStatus.getRegionsInTransition ( ) : Map 1
-org/apache/hadoop/hbase/ClusterStatus.getRegionsInTransition:()Ljava/util/Map;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-|Return value type has been changed from java.util.Map to java.util.List.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-Other changes in ClusterStatus include removal of convert methods that were no longer necessary after purge of PB from API.
-
-==== Purge of PBs from API
-PBs have been deprecated in APIs in HBase 2.0.
-
-===== [−] HBaseSnapshotException.getSnapshotDescription ( ) : HBaseProtos.SnapshotDescription 1
-org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.getSnapshotDescription:()Lorg/apache/hadoop/hbase/protobuf/generated/HBaseProtos$SnapshotDescription;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription to org.apache.hadoop.hbase.client.SnapshotDescription.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-* HBASE-15609: Remove PB references from Result, DoubleColumnInterpreter and any such public facing class for 2.0.
-hbase-client-1.0.0.jar, Result.class package org.apache.hadoop.hbase.client
-
-===== [−] Result.getStats ( ) : ClientProtos.RegionLoadStats 1
-org/apache/hadoop/hbase/client/Result.getStats:()Lorg/apache/hadoop/hbase/protobuf/generated/ClientProtos$RegionLoadStats;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats to RegionLoadStats.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== REST changes:
-hbase-rest-1.0.0.jar, Client.class package org.apache.hadoop.hbase.rest.client
-
-===== [−] Client.getHttpClient ( ) : HttpClient 1
-org/apache/hadoop/hbase/rest/client/Client.getHttpClient:()Lorg/apache/commons/httpclient/HttpClient
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.commons.httpclient.HttpClient to org.apache.http.client.HttpClient.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-hbase-rest-1.0.0.jar, Response.class package org.apache.hadoop.hbase.rest.client
-
-===== [−] Response.getHeaders ( ) : Header[ ] 1
-org/apache/hadoop/hbase/rest/client/Response.getHeaders:()[Lorg/apache/commons/httpclient/Header;
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from org.apache.commons.httpclient.Header[] to org.apache.http.Header[].| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== PrettyPrinter changes:
-hbase-server-1.0.0.jar, HFilePrettyPrinter.class package org.apache.hadoop.hbase.io.hfile
-
-===== [−]HFilePrettyPrinter.processFile ( Path file ) : void 1
-org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.processFile:(Lorg/apache/hadoop/fs/Path;)V
-[cols="1,1", frame="all"]
-|===
-| Change | Result
-| Return value type has been changed from void to int.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
-|===
-
-==== AccessControlClient changes:
-HBASE-13171 changed the AccessControlClient methods to accept a Connection object in order to reduce setup time. The following Configuration-based methods are deprecated as a result (a sketch of the Connection-based call style follows the list):
-
-* hbase-client-1.2.7-SNAPSHOT.jar, AccessControlClient.class
-package org.apache.hadoop.hbase.security.access
-AccessControlClient.getUserPermissions ( Configuration conf, String tableRegex ) [static] : List *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.getUserPermissions:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)Ljava/util/List;
-
-* AccessControlClient.grant ( Configuration conf, String namespace, String userName, Permission.Action... actions )[static] : void *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
-
-* AccessControlClient.grant ( Configuration conf, String userName, Permission.Action... actions ) [static] : void *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
-
-* AccessControlClient.grant ( Configuration conf, TableName tableName, String userName, byte[ ] family, byte[ ] qual,Permission.Action... actions ) [static] : void *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/hbase/TableName;Ljava/lang/String;[B[B[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
-
-* AccessControlClient.isAccessControllerRunning ( Configuration conf ) [static] : boolean *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.isAccessControllerRunning:(Lorg/apache/hadoop/conf/Configuration;)Z
-
-* AccessControlClient.revoke ( Configuration conf, String namespace, String userName, Permission.Action... actions )[static] : void *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
-
-* AccessControlClient.revoke ( Configuration conf, String userName, Permission.Action... actions ) [static] : void *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
-
-* AccessControlClient.revoke ( Configuration conf, TableName tableName, String username, byte[ ] family, byte[ ] qualifier,Permission.Action... actions ) [static] : void *DEPRECATED*
-org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/hbase/TableName;Ljava/lang/String;[B[B[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
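-
-A hedged sketch of the Connection-based call style referred to above (user, table, and permission are examples only; the AccessControlClient methods declare `Throwable`):
-
-[source,java]
-----
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  AccessControlClient.grant(connection, TableName.valueOf("myTable"),
-      "someUser", null, null, Permission.Action.READ);
-} catch (Throwable t) {
-  throw new IOException(t);
-}
-----
-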
-* HBASE-18731: [compat 1-2] Mark protected methods of QuotaSettings that touch Protobuf internals as IA.Private
diff --git a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc b/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
deleted file mode 100644
index 98659c26ccf2..000000000000
--- a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
+++ /dev/null
@@ -1,361 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[appendix]
-== HFile format
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-:source-language: java
-
-This appendix describes the evolution of the HFile format.
-
-[[hfilev1]]
-=== HBase File Format (version 1)
-
-As we will be discussing changes to the HFile format, it is useful to give a short overview of the original (HFile version 1) format.
-
-[[hfilev1.overview]]
-==== Overview of Version 1
-
-An HFile in version 1 format is structured as follows:
-
-.HFile V1 Format
-image::hfile.png[HFile Version 1]
-
-==== Block index format in version 1
-
-The block index in version 1 is very straightforward.
-For each entry, it contains:
-
-. Offset (long)
-. Uncompressed size (int)
-. Key (a serialized byte array written using Bytes.writeByteArray)
-.. Key length as a variable-length integer (VInt)
-.. Key bytes
-
-
-The number of entries in the block index is stored in the fixed file trailer, and has to be passed in to the method that reads the block index.
-One of the limitations of the block index in version 1 is that it does not provide the compressed size of a block, which turns out to be necessary for decompression.
-Therefore, the HFile reader has to infer this compressed size from the offset difference between blocks.
-We fix this limitation in version 2, where we store on-disk block size instead of uncompressed size, and get uncompressed size from the block header.
-
-[[hfilev2]]
-=== HBase file format with inline blocks (version 2)
-
-Note: this feature was introduced in HBase 0.92
-
-==== Motivation
-
-We found it necessary to revise the HFile format after encountering high memory usage and slow startup times caused by large Bloom filters and block indexes in the region server.
-Bloom filters can get as large as 100 MB per HFile, which adds up to 2 GB when aggregated over 20 regions.
-Block indexes can grow as large as 6 GB in aggregate size over the same set of regions.
-A region is not considered opened until all of its block index data is loaded.
-Large Bloom filters produce a different performance problem: the first get request that requires a Bloom filter lookup will incur the latency of loading the entire Bloom filter bit array.
-
-To speed up region server startup we break Bloom filters and block indexes into multiple blocks and write those blocks out as they fill up, which also reduces the HFile writer's memory footprint.
-In the Bloom filter case, "filling up a block" means accumulating enough keys to efficiently utilize a fixed-size bit array, and in the block index case we accumulate an "index block" of the desired size.
-Bloom filter blocks and index blocks (we call these "inline blocks") become interspersed with data blocks, and as a side effect we can no longer rely on the difference between block offsets to determine data block length, as it was done in version 1.
-
-HFile is a low-level file format by design, and it should not deal with application-specific details such as Bloom filters, which are handled at StoreFile level.
-Therefore, we call Bloom filter blocks in an HFile "inline" blocks.
-We also supply HFile with an interface to write those inline blocks.
-
-Another format modification aimed at reducing the region server startup time is to use a contiguous "load-on-open" section that has to be loaded in memory at the time an HFile is being opened.
-Currently, as an HFile opens, there are separate seek operations to read the trailer, data/meta indexes, and file info.
-To read the Bloom filter, there are two more seek operations for its "data" and "meta" portions.
-In version 2, we seek once to read the trailer and seek again to read everything else we need to open the file from a contiguous block.
-
-[[hfilev2.overview]]
-==== Overview of Version 2
-
-The version of HBase introducing the above features reads both version 1 and 2 HFiles, but only writes version 2 HFiles.
-A version 2 HFile is structured as follows:
-
-.HFile Version 2 Structure
-image::hfilev2.png[HFile Version 2]
-
-==== Unified version 2 block format
-
-In version 2, every block in the data section contains the following fields:
-
-. 8 bytes: Block type, a sequence of bytes equivalent to version 1's "magic records". Supported block types are:
-.. DATA – data blocks
-.. LEAF_INDEX – leaf-level index blocks in a multi-level-block-index
-.. BLOOM_CHUNK – Bloom filter chunks
-.. META – meta blocks (not used for Bloom filters in version 2 anymore)
-.. INTERMEDIATE_INDEX – intermediate-level index blocks in a multi-level blockindex
-.. ROOT_INDEX – root-level index blocks in a multi-level block index
-.. FILE_INFO – the ''file info'' block, a small key-value map of metadata
-.. BLOOM_META – a Bloom filter metadata block in the load-on-open section
-.. TRAILER – a fixed-size file trailer.
- As opposed to the above, this is not an HFile v2 block but a fixed-size (for each HFile version) data structure
-.. INDEX_V1 – this block type is only used for legacy HFile v1 block
-. Compressed size of the block's data, not including the header (int).
-+
-Can be used for skipping the current data block when scanning HFile data.
-. Uncompressed size of the block's data, not including the header (int)
-+
-This is equal to the compressed size if the compression algorithm is NONE
-. File offset of the previous block of the same type (long)
-+
-Can be used for seeking to the previous data/index block
-. Compressed data (or uncompressed data if the compression algorithm is NONE).
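-
-Read in order, those header fields might be consumed as in the following sketch (illustrative only; the variable names are not HBase's own, and `in` is assumed to be a `DataInputStream` positioned at the start of a block):
-
-[source,java]
-----
-byte[] blockType = new byte[8];
-in.readFully(blockType);                    // 8-byte block type ("magic")
-int onDiskSizeWithoutHeader = in.readInt(); // compressed size, header excluded
-int uncompressedSizeWithoutHeader = in.readInt();
-long prevBlockOffset = in.readLong();       // previous block of the same type
-// block data (compressed, or plain if the codec is NONE) follows
-----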
-
-The above format of blocks is used in the following HFile sections:
-
-Scanned block section::
- The section is named so because it contains all data blocks that need to be read when an HFile is scanned sequentially.
- Also contains Leaf index blocks and Bloom chunk blocks.
-Non-scanned block section::
- This section still contains unified-format v2 blocks but it does not have to be read when doing a sequential scan.
- This section contains "meta" blocks and intermediate-level index blocks.
-
-We are supporting "meta" blocks in version 2 the same way they were supported in version 1, even though we do not store Bloom filter data in these blocks anymore.
-
-==== Block index in version 2
-
-There are three types of block indexes in HFile version 2, stored in two different formats (root and non-root):
-
-. Data index -- version 2 multi-level block index, consisting of:
-.. Version 2 root index, stored in the data block index section of the file
-.. Optionally, version 2 intermediate levels, stored in the non-root format in the data index section of the file. Intermediate levels can only be present if leaf level blocks are present
-.. Optionally, version 2 leaf levels, stored in the non-root format inline with data blocks
-. Meta index -- version 2 root index format only, stored in the meta index section of the file
-. Bloom index -- version 2 root index format only, stored in the ''load-on-open'' section as part of Bloom filter metadata.
-
-==== Root block index format in version 2
-
-This format applies to:
-
-. Root level of the version 2 data index
-. Entire meta and Bloom indexes in version 2, which are always single-level.
-
-A version 2 root index block is a sequence of entries of the following format, similar to entries of a version 1 block index, but storing on-disk size instead of uncompressed size.
-
-. Offset (long)
-+
-This offset may point to a data block or to a deeper-level index block.
-
-. On-disk size (int)
-. Key (a serialized byte array stored using Bytes.writeByteArray)
-.. Key length as a variable-length integer (VInt)
-.. Key bytes
-
-
-A single-level version 2 block index consists of just a single root index block.
-To read a root index block of version 2, one needs to know the number of entries.
-For the data index and the meta index the number of entries is stored in the trailer, and for the Bloom index it is stored in the compound Bloom filter metadata.
-
-For a multi-level block index we also store the following fields in the root index block in the load-on-open section of the HFile, in addition to the data structure described above:
-
-. Middle leaf index block offset
-. Middle leaf block on-disk size (meaning the leaf index block containing the reference to the ''middle'' data block of the file)
-. The index of the mid-key (defined below) in the middle leaf-level block.
-
-
-
-These additional fields are used to efficiently retrieve the mid-key of the HFile used in HFile splits, which we define as the first key of the block with a zero-based index of (n – 1) / 2, if the total number of blocks in the HFile is n.
-This definition is consistent with how the mid-key was determined in HFile version 1, and is reasonable in general, because blocks are likely to be the same size on average, but we don't have any estimates on individual key/value pair sizes.
-
-
-
-When writing a version 2 HFile, the total number of data blocks pointed to by every leaf-level index block is kept track of.
-When we finish writing and the total number of leaf-level blocks is determined, it is clear which leaf-level block contains the mid-key, and the fields listed above are computed.
-When reading the HFile and the mid-key is requested, we retrieve the middle leaf index block (potentially from the block cache) and get the mid-key value from the appropriate position inside that leaf block.
-
-==== Non-root block index format in version 2
-
-This format applies to intermediate-level and leaf index blocks of a version 2 multi-level data block index.
-Every non-root index block is structured as follows.
-
-. numEntries: the number of entries (int).
-. entryOffsets: the "secondary index" of offsets of entries in the block, to facilitate
- a quick binary search on the key (`numEntries + 1` int values). The last value
- is the total length of all entries in this index block. For example, in a non-root
- index block with entry sizes 60, 80, 50 the "secondary index" will contain the
- following int array: `{0, 60, 140, 190}`.
-. Entries.
- Each entry contains:
-+
-.. Offset of the block referenced by this entry in the file (long)
-.. On-disk size of the referenced block (int)
-.. Key.
- The length can be calculated from entryOffsets.
-
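-As a worked example of "the length can be calculated from entryOffsets": each entry is an 8-byte offset plus a 4-byte on-disk size followed by the key, so the key length of entry `i` falls out of the secondary index (variable names below are assumptions, not HBase code):
-
-[source,java]
-----
-int entryStart = entryOffsets[i];
-int entryEnd = entryOffsets[i + 1]; // the last slot holds the total length of all entries
-int keyLength = (entryEnd - entryStart) - (Long.BYTES + Integer.BYTES);
-----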
-
-==== Bloom filters in version 2
-
-In contrast with version 1, in a version 2 HFile Bloom filter metadata is stored in the load-on-open section of the HFile for quick startup.
-
-. A compound Bloom filter.
-+
-. Bloom filter version = 3 (int). There used to be a DynamicByteBloomFilter class that had the Bloom filter version number 2.
-. The total byte size of all compound Bloom filter chunks (long)
-. Number of hash functions (int)
-. Type of hash functions (int)
-. The total key count inserted into the Bloom filter (long)
-. The maximum total number of keys in the Bloom filter (long)
-. The number of chunks (int)
-. Comparator class used for Bloom filter keys, a UTF-8 encoded string stored using Bytes.writeByteArray
-. Bloom block index in the version 2 root block index format
-
-
-==== File Info format in versions 1 and 2
-
-The file info block is a serialized map from byte arrays to byte arrays, with the following keys, among others.
-StoreFile-level logic adds more keys to this.
-
-[cols="1,1", frame="all"]
-|===
-|hfile.LASTKEY| The last key of the file (byte array)
-|hfile.AVG_KEY_LEN| The average key length in the file (int)
-|hfile.AVG_VALUE_LEN| The average value length in the file (int)
-|===
-
-In version 2, we did not change the file format, but we moved the file info to
-the final section of the file, which can be loaded as one block when the HFile
-is being opened.
-
-Also, we do not store the comparator in the version 2 file info anymore.
-Instead, we store it in the fixed file trailer.
-This is because we need to know the comparator at the time of parsing the load-on-open section of the HFile.
-
-==== Fixed file trailer format differences between versions 1 and 2
-
-The following table shows common and different fields between fixed file trailers in versions 1 and 2.
-Note that the size of the trailer is different depending on the version, so it is ''fixed'' only within one version.
-However, the version is always stored as the last four-byte integer in the file.
-
-.Differences between HFile Versions 1 and 2
-[cols="1,1", frame="all"]
-|===
-| Version 1 | Version 2
-| |File info offset (long)
-| Data index offset (long)
-| loadOnOpenOffset (long) /The offset of the section that we need to load when opening the file./
-| | Number of data index entries (int)
-| metaIndexOffset (long) /This field is not being used by the version 1 reader, so we removed it from version 2./ | uncompressedDataIndexSize (long) /The total uncompressed size of the whole data block index, including root-level, intermediate-level, and leaf-level blocks./
-| | Number of meta index entries (int)
-| | Total uncompressed bytes (long)
-| numEntries (int) | numEntries (long)
-| Compression codec: 0 = LZO, 1 = GZ, 2 = NONE (int) | Compression codec: 0 = LZO, 1 = GZ, 2 = NONE (int)
-| | The number of levels in the data block index (int)
-| | firstDataBlockOffset (long) /The offset of the first data block. Used when scanning./
-| | lastDataBlockEnd (long) /The offset of the first byte after the last key/value data block. We don't need to go beyond this offset when scanning./
-| Version: 1 (int) | Version: 2 (int)
-|===
-
-
-
-==== getShortMidpointKey (an optimization for data index block)
-
-Note: this optimization was introduced in HBase 0.95+
-
-HFiles contain many blocks that contain a range of sorted Cells.
-Each cell has a key.
-To save IO when reading Cells, the HFile also has an index that maps a Cell's start key to the offset of the beginning of a particular block.
-Prior to this optimization, HBase would use the key of the first cell in each data block as the index key.
-
-In HBASE-7845, we generate a new key that is lexicographically larger than the last key of the previous block and lexicographically equal to or smaller than the start key of the current block.
-While actual keys can potentially be very long, this "fake key" or "virtual key" can be much shorter.
-For example, if the stop key of the previous block is "the quick brown fox" and the start key of the current block is "the who", we could use "the r" as our virtual key in our HFile index.
-
-There are two benefits to this:
-
-* having shorter keys reduces the hfile index size (allowing us to keep more indexes in memory), and
-* using something closer to the end key of the previous block allows us to avoid a potential extra IO when the target key lives in between the "virtual key" and the key of the first element in the target block.
-
-This optimization (implemented by the getShortMidpointKey method) is inspired by LevelDB's ByteWiseComparatorImpl::FindShortestSeparator() and FindShortSuccessor().
-
-[[hfilev3]]
-=== HBase File Format with Security Enhancements (version 3)
-
-Note: this feature was introduced in HBase 0.98
-
-[[hfilev3.motivation]]
-==== Motivation
-
-Version 3 of HFile makes changes needed to ease management of encryption at rest and cell-level metadata (which in turn is needed for cell-level ACLs and cell-level visibility labels). For more information see <>, <>, <>, and <>.
-
-[[hfilev3.overview]]
-==== Overview
-
-The version of HBase introducing the above features reads HFiles in versions 1, 2, and 3 but only writes version 3 HFiles.
-Version 3 HFiles are structured the same as version 2 HFiles.
-For more information see <>.
-
-[[hvilev3.infoblock]]
-==== File Info Block in Version 3
-
-Version 3 added two additional pieces of information to the reserved keys in the file info block.
-
-[cols="1,1", frame="all"]
-|===
-| hfile.MAX_TAGS_LEN | The maximum number of bytes needed to store the serialized tags for any single cell in this hfile (int)
-| hfile.TAGS_COMPRESSED | Does the block encoder for this hfile compress tags? (boolean). Should only be present if hfile.MAX_TAGS_LEN is also present.
-|===
-
-When reading a Version 3 HFile the presence of `MAX_TAGS_LEN` is used to determine how to deserialize the cells within a data block.
-Therefore, consumers must read the file's info block prior to reading any data blocks.
-
-When writing a Version 3 HFile, HBase will always include `MAX_TAGS_LEN` when flushing the memstore to the underlying filesystem.
-
-When compacting extant files, the default writer will omit `MAX_TAGS_LEN` if all of the files selected do not themselves contain any cells with tags.
-
-See <> for details on the compaction file selection algorithm.
-
-[[hfilev3.datablock]]
-==== Data Blocks in Version 3
-
-Within an HFile, HBase cells are stored in data blocks as a sequence of KeyValues (see <>, or link:http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html[Lars George's
- excellent introduction to HBase Storage]). In version 3, these KeyValues will optionally include a set of 0 or more tags:
-
-[cols="1,1", frame="all"]
-|===
-| Version 1 & 2, Version 3 without MAX_TAGS_LEN | Version 3 with MAX_TAGS_LEN
-2+| Key Length (4 bytes)
-2+| Value Length (4 bytes)
-2+| Key bytes (variable)
-2+| Value bytes (variable)
-| | Tags Length (2 bytes)
-| | Tags bytes (variable)
-|===
-
-If the info block for a given HFile contains an entry for `MAX_TAGS_LEN`, each cell will have the length of that cell's tags included, even if that length is zero.
-The actual tags are stored as a sequence of tag length (2 bytes), tag type (1 byte), tag bytes (variable). The format of an individual tag's bytes depends on the tag type.
-
-Note that the dependence on the contents of the info block implies that prior to reading any data blocks you must first process a file's info block.
-It also implies that prior to writing a data block you must know if the file's info block will include `MAX_TAGS_LEN`.
-
-[[hfilev3.fixedtrailer]]
-==== Fixed File Trailer in Version 3
-
-The fixed file trailers written with HFile version 3 are always serialized with protocol buffers.
-Additionally, it adds an optional field to the version 2 protocol buffer named encryption_key.
-If HBase is configured to encrypt HFiles this field will store a data encryption key for this particular HFile, encrypted with the current cluster master key using AES.
-For more information see <>.
-
-:numbered:
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
deleted file mode 100644
index 48ecc9962815..000000000000
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ /dev/null
@@ -1,3200 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-= Architecture
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-:source-language: java
-
-[[arch.overview]]
-== Overview
-
-[[arch.overview.nosql]]
-=== NoSQL?
-
-HBase is a type of "NoSQL" database.
-"NoSQL" is a general term meaning that the database isn't an RDBMS which supports SQL as its primary access language, but there are many types of NoSQL databases: BerkeleyDB is an example of a local NoSQL database, whereas HBase is very much a distributed database.
-Technically speaking, HBase is really more a "Data Store" than "Data Base" because it lacks many of the features you find in an RDBMS, such as typed columns, secondary indexes, triggers, and advanced query languages, etc.
-
-However, HBase has many features which support both linear and modular scaling.
-HBase clusters expand by adding RegionServers that are hosted on commodity class servers.
-If a cluster expands from 10 to 20 RegionServers, for example, it doubles both in terms of storage and processing capacity.
-An RDBMS can scale well, but only up to a point - specifically, the size of a single database
-server - and for the best performance requires specialized hardware and storage devices.
-HBase features of note are:
-
-* Strongly consistent reads/writes: HBase is not an "eventually consistent" DataStore.
- This makes it very suitable for tasks such as high-speed counter aggregation.
-* Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are automatically split and re-distributed as your data grows.
-* Automatic RegionServer failover
-* Hadoop/HDFS Integration: HBase supports HDFS out of the box as its distributed file system.
-* MapReduce: HBase supports massively parallelized processing via MapReduce for using HBase as both source and sink.
-* Java Client API: HBase supports an easy to use Java API for programmatic access.
-* Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends.
-* Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high volume query optimization.
-* Operational Management: HBase provides built-in web pages for operational insight as well as JMX metrics.
-
-[[arch.overview.when]]
-=== When Should I Use HBase?
-
-HBase isn't suitable for every problem.
-
-First, make sure you have enough data.
-If you have hundreds of millions or billions of rows, then HBase is a good candidate.
-If you only have a few thousand/million rows, then using a traditional RDBMS might be a better choice, because all of your data might wind up on a single node (or two) while the rest of the cluster sits idle.
-
-Second, make sure you can live without all the extra features that an RDBMS provides (e.g., typed columns, secondary indexes, transactions, advanced query languages, etc.). An application built against an RDBMS cannot be "ported" to HBase by simply changing a JDBC driver, for example.
-Consider moving from an RDBMS to HBase as a complete redesign as opposed to a port.
-
-Third, make sure you have enough hardware.
-Even HDFS doesn't do well with anything less than 5 DataNodes (due to things such as HDFS block replication which has a default of 3), plus a NameNode.
-
-HBase can run quite well stand-alone on a laptop - but this should be considered a development configuration only.
-
-[[arch.overview.hbasehdfs]]
-=== What Is The Difference Between HBase and Hadoop/HDFS?
-
-link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html[HDFS] is a distributed file system that is well suited for the storage of large files.
-Its documentation states that it is not, however, a general purpose file system, and does not provide fast individual record lookups in files.
-HBase, on the other hand, is built on top of HDFS and provides fast record lookups (and updates) for large tables.
-This can sometimes be a point of conceptual confusion.
-HBase internally puts your data in indexed "StoreFiles" that exist on HDFS for high-speed lookups.
-See the <> and the rest of this chapter for more information on how HBase achieves its goals.
-
-[[arch.catalog]]
-== Catalog Tables
-
-The catalog table `hbase:meta` exists as an HBase table and is filtered out of the HBase shell's `list` command, but is in fact a table just like any other.
-
-[[arch.catalog.meta]]
-=== hbase:meta
-
-The `hbase:meta` table (previously called `.META.`) keeps a list of all regions in the system, and the location of `hbase:meta` is stored in ZooKeeper.
-
-The `hbase:meta` table structure is as follows:
-
-.Key
-
-* Region key of the format (`[table],[region start key],[region id]`)
-
-.Values
-
-* `info:regioninfo` (serialized link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HRegionInfo.html[HRegionInfo] instance for this region)
-* `info:server` (server:port of the RegionServer containing this region)
-* `info:serverstartcode` (start-time of the RegionServer process containing this region)
-
-When a table is in the process of splitting, two other columns will be created, called `info:splitA` and `info:splitB`.
-These columns represent the two daughter regions.
-The values for these columns are also serialized HRegionInfo instances.
-After the region has been split, eventually this row will be deleted.
-
-.Note on HRegionInfo
-[NOTE]
-====
-The empty key is used to denote table start and table end.
-A region with an empty start key is the first region in a table.
-If a region has both an empty start and an empty end key, it is the only region in the table.
-====
-
-In the (hopefully unlikely) event that programmatic processing of catalog metadata
-is required, see the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/RegionInfo.html#parseFrom-byte:A-[RegionInfo.parseFrom] utility.
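-
-As a hedged sketch only (not one of this guide's official examples), the following shows how the serialized `info:regioninfo` value could be read and parsed; scanning all of `hbase:meta` here is purely for illustration.
-
-[source,java]
-----
-// Illustrative only: parse the serialized RegionInfo from hbase:meta rows.
-try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
-     Table meta = connection.getTable(TableName.META_TABLE_NAME);
-     ResultScanner scanner = meta.getScanner(new Scan())) {
-  for (Result r : scanner) {
-    byte[] value = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("regioninfo"));
-    if (value != null) {
-      RegionInfo regionInfo = RegionInfo.parseFrom(value); // throws DeserializationException
-      // use regionInfo.getTable(), regionInfo.getStartKey(), etc.
-    }
-  }
-}
-----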
-
-[[arch.catalog.startup]]
-=== Startup Sequencing
-
-First, the location of `hbase:meta` is looked up in ZooKeeper.
-Next, `hbase:meta` is updated with server and startcode values.
-
-For information on region-RegionServer assignment, see <>.
-
-[[architecture.client]]
-== Client
-
-The HBase client finds the RegionServers that are serving the particular row range of interest.
-It does this by querying the `hbase:meta` table.
-See <> for details.
-After locating the required region(s), the client contacts the RegionServer serving that region, rather than going through the master, and issues the read or write request.
-This information is cached in the client so that subsequent requests need not go through the lookup process.
-Should a region be reassigned either by the master load balancer or because a RegionServer has died, the client will requery the catalog tables to determine the new location of the user region.
-
-See <> for more information about the impact of the Master on HBase Client communication.
-
-Administrative functions are done via an instance of link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html[Admin].
-
-[[client.connections]]
-=== Cluster Connections
-
-The API changed in HBase 1.0. For connection configuration information, see <>.
-
-==== API as of HBase 1.0.0
-
-The API has been cleaned up so that users are returned Interfaces to work against rather than particular types.
-In HBase 1.0, obtain a `Connection` object from `ConnectionFactory` and thereafter, get from it instances of `Table`, `Admin`, and `RegionLocator` on an as-need basis.
-When done, close the obtained instances.
-Finally, be sure to clean up your `Connection` instance before exiting.
-`Connections` are heavyweight objects but thread-safe so you can create one for your application and keep the instance around.
-`Table`, `Admin` and `RegionLocator` instances are lightweight.
-Create as you go and then let go as soon as you are done by closing them.
-See the link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/package-summary.html[Client Package Javadoc Description] for example usage of the new HBase 1.0 API.
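-
-The following is a minimal sketch of that pattern; the table name is illustrative.
-
-[source,java]
-----
-// Minimal sketch of the HBase 1.0+ client API; "myTable" is an illustrative name.
-Configuration conf = HBaseConfiguration.create();
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf("myTable"));
-       Admin admin = connection.getAdmin();
-       RegionLocator locator = connection.getRegionLocator(TableName.valueOf("myTable"))) {
-    // use table, admin, and locator as needed; they are lightweight and closed when done
-  }
-  // keep the Connection around for the life of the application
-}
-----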
-
-==== API before HBase 1.0.0
-
-Instances of `HTable` are the way to interact with an HBase cluster earlier than 1.0.0. _link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table] instances are not thread-safe_. Only one thread can use an instance of Table at any given time.
-When creating Table instances, it is advisable to use the same link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HBaseConfiguration[HBaseConfiguration] instance.
-This will ensure sharing of ZooKeeper and socket instances to the RegionServers which is usually what you want.
-For example, this is preferred:
-
-[source,java]
-----
-HBaseConfiguration conf = HBaseConfiguration.create();
-HTable table1 = new HTable(conf, "myTable");
-HTable table2 = new HTable(conf, "myTable");
-----
-
-as opposed to this:
-
-[source,java]
-----
-HBaseConfiguration conf1 = HBaseConfiguration.create();
-HTable table1 = new HTable(conf1, "myTable");
-HBaseConfiguration conf2 = HBaseConfiguration.create();
-HTable table2 = new HTable(conf2, "myTable");
-----
-
-For more information about how connections are handled in the HBase client, see link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html[ConnectionFactory].
-
-[[client.connection.pooling]]
-===== Connection Pooling
-
-For applications which require high-end multithreaded access (e.g., web-servers or application servers that may serve many application threads in a single JVM), you can pre-create a `Connection`, as shown in the following example:
-
-.Pre-Creating a `Connection`
-====
-[source,java]
-----
-// Create a connection to the cluster.
-Configuration conf = HBaseConfiguration.create();
-try (Connection connection = ConnectionFactory.createConnection(conf);
- Table table = connection.getTable(TableName.valueOf(tablename))) {
- // use table as needed, the table returned is lightweight
-}
-----
-====
-
-.`HTablePool` is Deprecated
-[WARNING]
-====
-Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1 by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6580], and `HConnection`, which was deprecated in HBase 1.0 in favor of `Connection`.
-Please use link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html[Connection] instead.
-====
-
-[[client.writebuffer]]
-=== WriteBuffer and Batch Methods
-
-In HBase 1.0 and later, link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTable.html[HTable] is deprecated in favor of link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table]. `Table` does not use autoflush. To do buffered writes, use the BufferedMutator class.
-
-In HBase 2.0 and later, link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTable.html[HTable] does not use BufferedMutator to execute the ``Put`` operation. Refer to link:https://issues.apache.org/jira/browse/HBASE-18500[HBASE-18500] for more information.
-
-For additional information on write durability, review the link:/acid-semantics.html[ACID semantics] page.
-
-For fine-grained control of batching of ``Put``s or ``Delete``s, see the link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch-java.util.List-java.lang.Object:A-[batch] methods on Table.
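-
-A hedged sketch of buffered writes with `BufferedMutator` follows; the table name, column family, and values are illustrative.
-
-[source,java]
-----
-// Illustrative sketch: buffered writes via BufferedMutator instead of the old HTable autoflush.
-Configuration conf = HBaseConfiguration.create();
-try (Connection connection = ConnectionFactory.createConnection(conf);
-     BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("myTable"))) {
-  Put put = new Put(Bytes.toBytes("row1"));
-  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
-  mutator.mutate(put); // buffered locally
-  mutator.flush();     // force the buffered mutations out; close() also flushes
-}
-----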
-
-[[async.client]]
-=== Asynchronous Client ===
-
-The asynchronous client is a new API, introduced in HBase 2.0, which provides the ability to access HBase asynchronously.
-
-You can obtain an `AsyncConnection` from `ConnectionFactory`, and then get an asynchronous table instance from it to access HBase. When done, close the `AsyncConnection` instance (usually when your program exits).
-
-For the asynchronous table, most methods have the same meaning as in the old `Table` interface, except that the return value is usually wrapped in a CompletableFuture. There is no write buffer here, so there is no close method for the asynchronous table and you do not need to close it. It is also thread-safe.
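-
-For illustration, a hedged sketch of the pattern described above; the table and row names are made up.
-
-[source,java]
-----
-// Illustrative sketch of the asynchronous client introduced in HBase 2.0.
-CompletableFuture<AsyncConnection> connFuture =
-    ConnectionFactory.createAsyncConnection(HBaseConfiguration.create());
-connFuture.thenAccept(conn -> {
-  // No extra thread pool: callbacks run on framework threads, so keep them quick.
-  AsyncTable<AdvancedScanResultConsumer> table = conn.getTable(TableName.valueOf("myTable"));
-  table.get(new Get(Bytes.toBytes("row1")))
-       .thenAccept(result -> System.out.println(Bytes.toString(result.value())));
-  // remember to close the AsyncConnection when the application exits
-});
-----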
-
-There are several differences for scan:
-
-* There is still a `getScanner` method which returns a `ResultScanner`. You can use it in the old way and it works like the old `ClientAsyncPrefetchScanner`.
-* There is a `scanAll` method which will return all the results at once. It aims to provide a simpler way for small scans where you usually want all the results at once.
-* The Observer Pattern. There is a scan method which accepts a `ScanResultConsumer` as a parameter. It will pass the results to the consumer.
-
-Notice that the `AsyncTable` interface is parameterized. The type parameter specifies the type of `ScanResultConsumerBase` used by scans, which means the observer-style scan APIs are different. The two types of scan consumers are `ScanResultConsumer` and `AdvancedScanResultConsumer`.
-
-`ScanResultConsumer` needs a separate thread pool which is used to execute the callbacks registered to the returned CompletableFuture. Because the use of a separate thread pool frees up RPC threads, callbacks are free to do anything. Use this if the callbacks are not quick, or when in doubt.
-
-`AdvancedScanResultConsumer` executes callbacks inside the framework thread. You are not allowed to do time-consuming work in the callbacks; otherwise you will likely block the framework threads and cause a very bad performance impact. As its name suggests, it is designed for advanced users who want to write high-performance code. See `org.apache.hadoop.hbase.client.example.HttpProxyExample` for how to write fully asynchronous code with it.
-
-[[async.admin]]
-=== Asynchronous Admin ===
-
-You can obtain an `AsyncConnection` from `ConnectionFactory`, and then get an `AsyncAdmin` instance from it to access HBase. Notice that there are two `getAdmin` methods to get an `AsyncAdmin` instance. One method has an extra thread pool parameter which is used to execute callbacks. It is designed for normal users. The other method doesn't need a thread pool and all the callbacks are executed inside the framework thread, so it is not allowed to do time-consuming work in the callbacks. It is designed for advanced users.
-
-The default `getAdmin` methods will return an `AsyncAdmin` instance which uses the default configs. If you want to customize some configs, you can use the `getAdminBuilder` methods to get an `AsyncAdminBuilder` for creating an `AsyncAdmin` instance. Users are free to set only the configs they care about to create a new `AsyncAdmin` instance.
-
-For the `AsyncAdmin` interface, most methods have the same meaning as in the old `Admin` interface, except that the return value is usually wrapped in a CompletableFuture.
-
-For most admin operations, when the returned CompletableFuture is done, it means the admin operation has also been done. But for the compact operation, it only means the compact request was sent to HBase; the compaction itself may need some time to finish. For the `rollWALWriter` method, it only means the rollWALWriter request was sent to the region server; the `rollWALWriter` operation may need some time to finish.
-
-For region names, we only accept `byte[]` as the parameter type, and it may be a full region name or an encoded region name. For server names, we only accept `ServerName` as the parameter type. For table names, we only accept `TableName` as the parameter type. For `list*` operations, we only accept `Pattern` as the parameter type if you want to do regex matching.
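-
-A hedged sketch of obtaining and using an `AsyncAdmin`; the thread pool size is an assumption for illustration.
-
-[source,java]
-----
-// Illustrative sketch: AsyncAdmin with a caller-supplied thread pool for callbacks.
-ExecutorService pool = Executors.newFixedThreadPool(4); // assumed pool size, illustrative
-ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).thenAccept(conn -> {
-  AsyncAdmin admin = conn.getAdmin(pool); // callbacks run in 'pool', safe for slower work
-  admin.listTableNames()
-       .thenAccept(tables -> tables.forEach(t -> System.out.println(t.getNameAsString())));
-});
-----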
-
-[[client.external]]
-=== External Clients
-
-Information on non-Java clients and custom protocols is covered in <>.
-
-[[client.masterregistry]]
-=== Master Registry (new as of 2.3.0)
-
-The client internally works with a _connection registry_ to fetch the metadata needed by connections.
-This connection registry implementation is responsible for fetching the following metadata:
-
-* Active master address
-* Current meta region(s) locations
-* Cluster ID (unique to this cluster)
-
-This information is needed as a part of various client operations like connection set up, scans,
-gets, etc. Traditionally, the connection registry implementation has been based on ZooKeeper as the
-source of truth and clients fetched the metadata directly from the ZooKeeper quorum. HBase 2.3.0
-introduces a new connection registry implementation based on direct communication with the Masters.
-With this implementation, clients now fetch required metadata via master RPC end points instead of
-maintaining connections to ZooKeeper. This change was done for the following reasons.
-
-* Reduce load on ZooKeeper since that is critical for cluster operation.
-* Holistic client timeout and retry configurations, since the new registry brings all the client
-operations under the HBase RPC framework.
-* Remove the ZooKeeper client dependency on HBase client library.
-
-This means:
-
-* At least a single active or standby master is needed for cluster connection setup. Refer to
-<> for more details.
-* The Master can be in the critical path of read/write operations, especially if the client metadata cache
-is empty or stale.
-* There is a higher connection load on the masters than before, since the clients talk directly to
-the HMasters instead of the ZooKeeper ensemble.
-
-To reduce hot-spotting on a single master, all the masters (active and standby) expose the needed
-service to fetch the connection metadata. This lets the client connect to any master (not just the active one).
-Both ZooKeeper- and Master-based connection registry implementations are available in 2.3+. For
-2.3 and earlier, the ZooKeeper-based implementation remains the default configuration.
-The Master-based implementation becomes the default in 3.0.0.
-
-Change the connection registry implementation by updating the value configured for
-`hbase.client.registry.impl`. To explicitly enable the ZooKeeper-based registry, use
-
-[source, xml]
-----
-<property>
-  <name>hbase.client.registry.impl</name>
-  <value>org.apache.hadoop.hbase.client.ZKConnectionRegistry</value>
-</property>
-----
-
-To explicitly enable the Master-based registry, use
-
-[source, xml]
-----
-<property>
-  <name>hbase.client.registry.impl</name>
-  <value>org.apache.hadoop.hbase.client.MasterRegistry</value>
-</property>
-----
-
-==== MasterRegistry RPC hedging
-
-MasterRegistry implements hedging of connection registry RPCs across the active and standby masters.
-This lets the client make the same request to multiple servers, and whichever responds first is
-returned to the client immediately. This improves performance, especially when a subset of
-servers is under load. The hedging fan-out size, meaning the number of requests
-that are hedged in a single attempt, is configurable using the configuration key
-_hbase.client.master_registry.hedged.fanout_ in the client configuration. It defaults to 2. With
-this default, the RPCs are tried in batches of 2. The hedging policy is still primitive and does not
-adapt to any sort of live RPC performance metrics.
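-
-For example, the fan-out could be raised in the client-side _hbase-site.xml_; the value 4 here is only illustrative.
-
-[source,xml]
-----
-<property>
-  <name>hbase.client.master_registry.hedged.fanout</name>
-  <value>4</value>
-</property>
-----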
-
-==== Additional Notes
-
-* Clients hedge the requests in a randomized order to avoid hot-spotting a single master.
-* Cluster internal connections (masters <-> regionservers) still use the ZooKeeper-based connection
-registry.
-* Cluster internal state is still tracked in ZooKeeper, hence the ZooKeeper availability requirements are the same
-as before.
-* Inter-cluster replication still uses the ZooKeeper-based connection registry to simplify configuration
-management.
-
-For more implementation details, please refer to the https://github.com/apache/hbase/tree/master/dev-support/design-docs[design doc] and
-https://issues.apache.org/jira/browse/HBASE-18095[HBASE-18095].
-
-'''
-NOTE: (Advanced) In case of any issues with the master-based registry, use the following
-configuration to fall back to the ZooKeeper-based connection registry implementation.
-[source, xml]
-----
-<property>
-  <name>hbase.client.registry.impl</name>
-  <value>org.apache.hadoop.hbase.client.ZKConnectionRegistry</value>
-</property>
-----
-
-[[client.filter]]
-== Client Request Filters
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html[Get] and link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan] instances can be optionally configured with link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html[filters] which are applied on the RegionServer.
-
-Filters can be confusing because there are many different types, and it is best to approach them by understanding the groups of Filter functionality.
-
-[[client.filter.structural]]
-=== Structural
-
-Structural Filters contain other Filters.
-
-[[client.filter.structural.fl]]
-==== FilterList
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FilterList.html[FilterList] represents a list of Filters with a relationship of `FilterList.Operator.MUST_PASS_ALL` or `FilterList.Operator.MUST_PASS_ONE` between the Filters.
-The following example shows an 'or' between two Filters (checking for either 'my value' or 'my other value' on the same attribute).
-
-[source,java]
-----
-FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ONE);
-SingleColumnValueFilter filter1 = new SingleColumnValueFilter(
- cf,
- column,
- CompareOperator.EQUAL,
- Bytes.toBytes("my value")
- );
-list.add(filter1);
-SingleColumnValueFilter filter2 = new SingleColumnValueFilter(
- cf,
- column,
- CompareOperator.EQUAL,
- Bytes.toBytes("my other value")
- );
-list.add(filter2);
-scan.setFilter(list);
-----
-
-[[client.filter.cv]]
-=== Column Value
-
-[[client.filter.cv.scvf]]
-==== SingleColumnValueFilter
-
-A link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.html[SingleColumnValueFilter]
-can be used to test column values for equivalence (`CompareOperator.EQUAL`),
-inequality (`CompareOperator.NOT_EQUAL`), or ranges (e.g., `CompareOperator.GREATER`). The following is an
-example of testing equivalence of a column to the String value "my value"...
-
-[source,java]
-----
-SingleColumnValueFilter filter = new SingleColumnValueFilter(
- cf,
- column,
- CompareOperator.EQUAL,
- Bytes.toBytes("my value")
- );
-scan.setFilter(filter);
-----
-
-[[client.filter.cv.cvf]]
-==== ColumnValueFilter
-
-Introduced in HBase 2.0.0 as a complement to SingleColumnValueFilter, ColumnValueFilter
-returns only the matched cell, while SingleColumnValueFilter returns the entire row
-(with its other columns and values) to which the matched cell belongs. The constructor parameters of
-ColumnValueFilter are the same as those of SingleColumnValueFilter.
-[source,java]
-----
-ColumnValueFilter filter = new ColumnValueFilter(
- cf,
- column,
- CompareOperator.EQUAL,
- Bytes.toBytes("my value")
- );
-scan.setFilter(filter);
-----
-
-Note: For a simple query like "equals to a family:qualifier:value", we highly recommend using the
-following approach instead of SingleColumnValueFilter or ColumnValueFilter:
-[source,java]
-----
-Scan scan = new Scan();
-scan.addColumn(Bytes.toBytes("family"), Bytes.toBytes("qualifier"));
-ValueFilter vf = new ValueFilter(CompareOperator.EQUAL,
- new BinaryComparator(Bytes.toBytes("value")));
-scan.setFilter(vf);
-...
-----
-This scan is restricted to the specified column 'family:qualifier', avoiding scans of unrelated
-families and columns, which gives better performance; the `ValueFilter` is the condition used to do
-the value filtering.
-
-For queries more complicated than this, please make your choice case by case.
-
-[[client.filter.cvp]]
-=== Column Value Comparators
-
-There are several Comparator classes in the Filter package that deserve special mention.
-These Comparators are used in concert with other Filters, such as <>.
-
-[[client.filter.cvp.rcs]]
-==== RegexStringComparator
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/RegexStringComparator.html[RegexStringComparator] supports regular expressions for value comparisons.
-
-[source,java]
-----
-RegexStringComparator comp = new RegexStringComparator("my."); // any value that starts with 'my'
-SingleColumnValueFilter filter = new SingleColumnValueFilter(
- cf,
- column,
- CompareOperator.EQUAL,
- comp
- );
-scan.setFilter(filter);
-----
-
-See the Oracle JavaDoc for link:http://download.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html[supported RegEx patterns in Java].
-
-[[client.filter.cvp.substringcomparator]]
-==== SubstringComparator
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SubstringComparator.html[SubstringComparator] can be used to determine if a given substring exists in a value.
-The comparison is case-insensitive.
-
-[source,java]
-----
-
-SubstringComparator comp = new SubstringComparator("y val"); // looking for 'my value'
-SingleColumnValueFilter filter = new SingleColumnValueFilter(
- cf,
- column,
- CompareOperator.EQUAL,
- comp
- );
-scan.setFilter(filter);
-----
-
-[[client.filter.cvp.bfp]]
-==== BinaryPrefixComparator
-
-See link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.html[BinaryPrefixComparator].
-
-[[client.filter.cvp.bc]]
-==== BinaryComparator
-
-See link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/BinaryComparator.html[BinaryComparator].
-
-[[client.filter.cvp.bcc]]
-==== BinaryComponentComparator
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/BinaryComponentComparator.html[BinaryComponentComparator] can be used to compare a specific value at a specific location within the cell value. The comparison can be done for both ASCII and binary data.
-
-[source,java]
-----
-byte[] partialValue = Bytes.toBytes("partial_value");
-int partialValueOffset = 3; // illustrative offset of the component within the cell value
-Filter partialValueFilter = new ValueFilter(CompareFilter.CompareOp.GREATER,
-  new BinaryComponentComparator(partialValue, partialValueOffset));
-----
-See link:https://issues.apache.org/jira/browse/HBASE-22969[HBASE-22969] for other use cases and details.
-
-[[client.filter.kvm]]
-=== KeyValue Metadata
-
-As HBase stores data internally as KeyValue pairs, KeyValue Metadata Filters evaluate the existence of keys (i.e., ColumnFamily:Column qualifiers) for a row, as opposed to values (covered in the previous section).
-
-[[client.filter.kvm.ff]]
-==== FamilyFilter
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FamilyFilter.html[FamilyFilter] can be used to filter on the ColumnFamily.
-It is generally a better idea to select ColumnFamilies in the Scan than to do it with a Filter.
-
-[[client.filter.kvm.qf]]
-==== QualifierFilter
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/QualifierFilter.html[QualifierFilter] can be used to filter based on Column (aka Qualifier) name.
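-
-A hedged example; the qualifier value used here is illustrative.
-
-[source,java]
-----
-// Illustrative: return only cells whose qualifier equals "myQualifier".
-QualifierFilter qf = new QualifierFilter(CompareOperator.EQUAL,
-    new BinaryComparator(Bytes.toBytes("myQualifier")));
-Scan scan = new Scan();
-scan.setFilter(qf);
-----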
-
-[[client.filter.kvm.cpf]]
-==== ColumnPrefixFilter
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.html[ColumnPrefixFilter] can be used to filter based on the lead portion of Column (aka Qualifier) names.
-
-A ColumnPrefixFilter seeks ahead to the first column matching the prefix in each row and for each involved column family.
-It can be used to efficiently get a subset of the columns in very wide rows.
-
-Note: The same column qualifier can be used in different column families.
-This filter returns all matching columns.
-
-Example: Find all columns in a row and family that start with "abc"
-
-[source,java]
-----
-Table t = ...;
-byte[] row = ...;
-byte[] family = ...;
-byte[] prefix = Bytes.toBytes("abc");
-Scan scan = new Scan(row, row); // (optional) limit to one row
-scan.addFamily(family); // (optional) limit to one family
-Filter f = new ColumnPrefixFilter(prefix);
-scan.setFilter(f);
-scan.setBatch(10); // set this if there could be many columns returned
-ResultScanner rs = t.getScanner(scan);
-for (Result r = rs.next(); r != null; r = rs.next()) {
-  for (Cell cell : r.listCells()) {
- // each cell represents a column
- }
-}
-rs.close();
-----
-
-[[client.filter.kvm.mcpf]]
-==== MultipleColumnPrefixFilter
-
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.html[MultipleColumnPrefixFilter] behaves like ColumnPrefixFilter but allows specifying multiple prefixes.
-
-Like ColumnPrefixFilter, MultipleColumnPrefixFilter efficiently seeks ahead to the first column matching the lowest prefix and also seeks past ranges of columns between prefixes.
-It can be used to efficiently get discontinuous sets of columns from very wide rows.
-
-Example: Find all columns in a row and family that start with "abc" or "xyz"
-
-[source,java]
-----
-Table t = ...;
-byte[] row = ...;
-byte[] family = ...;
-byte[][] prefixes = new byte[][] {Bytes.toBytes("abc"), Bytes.toBytes("xyz")};
-Scan scan = new Scan(row, row); // (optional) limit to one row
-scan.addFamily(family); // (optional) limit to one family
-Filter f = new MultipleColumnPrefixFilter(prefixes);
-scan.setFilter(f);
-scan.setBatch(10); // set this if there could be many columns returned
-ResultScanner rs = t.getScanner(scan);
-for (Result r = rs.next(); r != null; r = rs.next()) {
-  for (Cell cell : r.listCells()) {
- // each cell represents a column
- }
-}
-rs.close();
-----
-
-[[client.filter.kvm.crf]]
-==== ColumnRangeFilter
-
-A link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/ColumnRangeFilter.html[ColumnRangeFilter] allows efficient intra row scanning.
-
-A ColumnRangeFilter can seek ahead to the first matching column for each involved column family.
-It can be used to efficiently get a 'slice' of the columns of a very wide row.
-For example, you have a million columns in a row but you only want to look at columns bbbb-bbdd.
-
-Note: The same column qualifier can be used in different column families.
-This filter returns all matching columns.
-
-Example: Find all columns in a row and family between "bbbb" (inclusive) and "bbdd" (inclusive)
-
-[source,java]
-----
-Table t = ...;
-byte[] row = ...;
-byte[] family = ...;
-byte[] startColumn = Bytes.toBytes("bbbb");
-byte[] endColumn = Bytes.toBytes("bbdd");
-Scan scan = new Scan(row, row); // (optional) limit to one row
-scan.addFamily(family); // (optional) limit to one family
-Filter f = new ColumnRangeFilter(startColumn, true, endColumn, true);
-scan.setFilter(f);
-scan.setBatch(10); // set this if there could be many columns returned
-ResultScanner rs = t.getScanner(scan);
-for (Result r = rs.next(); r != null; r = rs.next()) {
-  for (Cell cell : r.listCells()) {
- // each cell represents a column
- }
-}
-rs.close();
-----
-
-Note: Introduced in HBase 0.92
-
-[[client.filter.row]]
-=== RowKey
-
-[[client.filter.row.rf]]
-==== RowFilter
-
-It is generally a better idea to use the startRow/stopRow methods on Scan for row selection; however, link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/RowFilter.html[RowFilter] can also be used.
-
-You can supplement a scan (both bounded and unbounded) with RowFilter constructed from link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/BinaryComponentComparator.html[BinaryComponentComparator] for further filtering out or filtering in rows. See link:https://issues.apache.org/jira/browse/HBASE-22969[HBASE-22969] for use cases and other details.
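-
-For illustration, a hedged sketch pairing a bounded scan with a RowFilter; the row keys and the regex are made up, and a RegexStringComparator is used here purely as an example comparator.
-
-[source,java]
-----
-// Illustrative: bound the scan with start/stop rows, then filter rows further with a RowFilter.
-Scan scan = new Scan()
-    .withStartRow(Bytes.toBytes("row-0100"))
-    .withStopRow(Bytes.toBytes("row-0200"))
-    .setFilter(new RowFilter(CompareOperator.EQUAL,
-        new RegexStringComparator("row-01[0-4].")));
-----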
-
-[[client.filter.utility]]
-=== Utility
-
-[[client.filter.utility.fkof]]
-==== FirstKeyOnlyFilter
-
-This is primarily used for rowcount jobs.
-See link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html[FirstKeyOnlyFilter].
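-
-A minimal sketch of the rowcount-style usage:
-
-[source,java]
-----
-// Illustrative: only the first KeyValue of each row is returned, enough to count rows cheaply.
-Scan scan = new Scan();
-scan.setFilter(new FirstKeyOnlyFilter());
-scan.setCacheBlocks(false); // counting jobs usually should not pollute the block cache
-----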
-
-[[architecture.master]]
-== Master
-
-`HMaster` is the implementation of the Master Server.
-The Master server is responsible for monitoring all RegionServer instances in the cluster, and is the interface for all metadata changes.
-In a distributed cluster, the Master typically runs on the <>.
-J Mohamed Zahoor goes into some more detail on the Master Architecture in this blog posting, link:http://blog.zahoor.in/2012/08/hbase-hmaster-architecture/[HBase HMaster Architecture ].
-
-[[master.startup]]
-=== Startup Behavior
-
-If run in a multi-Master environment, all Masters compete to run the cluster.
-If the active Master loses its lease in ZooKeeper (or the Master shuts down), then the remaining Masters jostle to take over the Master role.
-
-[[master.runtime]]
-=== Runtime Impact
-
-A common dist-list question involves what happens to an HBase cluster when the Master goes down. This information has changed starting with 3.0.0.
-
-==== Up until releases 2.x.y
-Because the HBase client talks directly to the RegionServers, the cluster can still function in a "steady state". Additionally, per <>, `hbase:meta` exists as an HBase table and is not resident in the Master.
-However, the Master controls critical functions such as RegionServer failover and completing region splits.
-So while the cluster can still run for a short time without the Master, the Master should be restarted as soon as possible.
-
-==== Starting with release 3.0.0
-As mentioned in section <>, the default connection registry for clients is now based on Master RPC end points. Hence the requirements for
-masters' uptime are even tighter starting with this release.
-
-- At least one active or standby master is needed for connection setup, unlike before, when all the clients needed was a ZooKeeper ensemble.
-- The Master is now in the critical path for read/write operations. For example, if the meta region moves to a different region server, clients
-need the Master to fetch the new locations. Earlier this was done by fetching the information directly from ZooKeeper.
-- Masters will now have a higher connection load than before, so the server-side configuration might need adjustment depending on the load.
-
-Overall, when this feature is enabled, the master uptime requirements are even higher for client operations to go through.
-
-[[master.api]]
-=== Interface
-
-The methods exposed by `HMasterInterface` are primarily metadata-oriented methods:
-
-* Table (createTable, modifyTable, removeTable, enable, disable)
-* ColumnFamily (addColumn, modifyColumn, removeColumn)
-* Region (move, assign, unassign)
-
-For example, when the `Admin` method `disableTable` is invoked, it is serviced by the Master server.
-
-[[master.processes]]
-=== Processes
-
-The Master runs several background threads:
-
-[[master.processes.loadbalancer]]
-==== LoadBalancer
-
-Periodically, and when there are no regions in transition, a load balancer will run and move regions around to balance the cluster's load.
-See <> for configuring this property.
-
-See <> for more information on region assignment.
-
-[[master.processes.catalog]]
-==== CatalogJanitor
-
-Periodically checks and cleans up the `hbase:meta` table.
-See <> for more information on the meta table.
-
-[[master.wal]]
-=== MasterProcWAL
-
-_MasterProcWAL is replaced in hbase-2.3.0 by an alternate Procedure Store implementation; see
-<>. This section pertains to hbase-2.0.0 through hbase-2.2.x_
-
-HMaster records administrative operations and their running states, such as the handling of a crashed server,
-table creation, and other DDLs, into a Procedure Store. The Procedure Store WALs are stored under the
-MasterProcWALs directory. The Master WALs are not like RegionServer WALs. Keeping up the Master WAL allows
-us to run a state machine that is resilient across Master failures. For example, if an HMaster that was in the
-middle of creating a table encounters an issue and fails, the next active HMaster can take up where
-the previous left off and carry the operation to completion. Since hbase-2.0.0, a
-new AssignmentManager (a.k.a. AMv2) was introduced and the HMaster handles region assignment
-operations, server crash processing, balancing, etc., all via AMv2 persisting all state and
-transitions into MasterProcWALs rather than up into ZooKeeper, as we do in hbase-1.x.
-
-See <> (and <> for its basis) if you would like to learn more about the new
-AssignmentManager.
-
-[[master.wal.conf]]
-==== Configurations for MasterProcWAL
-Here is the list of configurations that affect MasterProcWAL operation (an example override follows the list).
-You should not have to change your defaults.
-
-[[hbase.procedure.store.wal.periodic.roll.msec]]
-*`hbase.procedure.store.wal.periodic.roll.msec`*::
-+
-.Description
-Frequency of generating a new WAL
-+
-.Default
-`1h (3600000 in msec)`
-
-[[hbase.procedure.store.wal.roll.threshold]]
-*`hbase.procedure.store.wal.roll.threshold`*::
-+
-.Description
-Size threshold before the WAL rolls. Every time the WAL reaches this size, or the above period (1 hour by default) passes since the last log roll, the HMaster will generate a new WAL.
-+
-.Default
-`32MB (33554432 in byte)`
-
-[[hbase.procedure.store.wal.warn.threshold]]
-*`hbase.procedure.store.wal.warn.threshold`*::
-+
-.Description
-If the number of WALs goes beyond this threshold, the following message should appear in the HMaster log with WARN level when rolling.
-
- procedure WALs count=xx above the warning threshold 64. check running procedures to see if something is stuck.
-
-+
-.Default
-`64`
-
-[[hbase.procedure.store.wal.max.retries.before.roll]]
-*`hbase.procedure.store.wal.max.retries.before.roll`*::
-+
-.Description
-Maximum number of retries when syncing slots (records) to the underlying storage, such as HDFS. On every attempt, the following message should appear in the HMaster log.
-
- unable to sync slots, retry=xx
-
-+
-.Default
-`3`
-
-[[hbase.procedure.store.wal.sync.failure.roll.max]]
-*`hbase.procedure.store.wal.sync.failure.roll.max`*::
-+
-.Description
-After the above 3 retries, the log is rolled and the retry count is reset to 0, whereupon a new set of retries starts. This configuration controls the maximum number of log-rolling attempts upon sync failure. That is, the HMaster is allowed to fail to sync 9 times in total. Once that number is exceeded, the following log should appear in the HMaster log.
-
- Sync slots after log roll failed, abort.
-+
-.Default
-`3`
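-
-Should you nonetheless need to override one of these, it goes in the Master's _hbase-site.xml_. The value below is only an illustration, not a recommendation.
-
-[source,xml]
-----
-<property>
-  <name>hbase.procedure.store.wal.periodic.roll.msec</name>
-  <value>1800000</value>
-</property>
-----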
-
-[[regionserver.arch]]
-== RegionServer
-
-`HRegionServer` is the RegionServer implementation.
-It is responsible for serving and managing regions.
-In a distributed cluster, a RegionServer runs on a <>.
-
-[[regionserver.arch.api]]
-=== Interface
-
-The methods exposed by `HRegionInterface` contain both data-oriented and region-maintenance methods:
-
-* Data (get, put, delete, next, etc.)
-* Region (splitRegion, compactRegion, etc.)
-
-For example, when the `Admin` method `majorCompact` is invoked on a table, the client is actually iterating through all regions for the specified table and requesting a major compaction directly to each region.
-
-[[regionserver.arch.processes]]
-=== Processes
-
-The RegionServer runs a variety of background threads:
-
-[[regionserver.arch.processes.compactsplit]]
-==== CompactSplitThread
-
-Checks for splits and handles minor compactions.
-
-[[regionserver.arch.processes.majorcompact]]
-==== MajorCompactionChecker
-
-Checks for major compactions.
-
-[[regionserver.arch.processes.memstore]]
-==== MemStoreFlusher
-
-Periodically flushes in-memory writes in the MemStore to StoreFiles.
-
-[[regionserver.arch.processes.log]]
-==== LogRoller
-
-Periodically checks the RegionServer's WAL.
-
-=== Coprocessors
-
-Coprocessors were added in 0.92.
-There is a thorough link:https://blogs.apache.org/hbase/entry/coprocessor_introduction[Blog Overview of CoProcessors] posted.
-Documentation will eventually move to this reference guide, but the blog is the most current information available at this time.
-
-[[block.cache]]
-=== Block Cache
-
-HBase provides two different BlockCache implementations to cache data read from HDFS:
-the default on-heap `LruBlockCache` and the `BucketCache`, which is (usually) off-heap.
-This section discusses benefits and drawbacks of each implementation, how to choose the
-appropriate option, and configuration options for each.
-
-.Block Cache Reporting: UI
-[NOTE]
-====
-See the RegionServer UI for details on the cache deployment.
-It shows configuration, sizing, current usage, time-in-the-cache, and even details on block counts and types.
-====
-
-==== Cache Choices
-
-`LruBlockCache` is the original implementation, and is entirely within the Java heap.
-`BucketCache` is optional and mainly intended for keeping block cache data off-heap, although `BucketCache` can also be a file-backed cache.
- When file-backed, it can be used in either file mode or mmapped mode.
- There is also a pmem mode, where the bucket cache resides on a persistent memory device.
-
-When you enable BucketCache, you are enabling a two tier caching system. We used to describe the
-tiers as "L1" and "L2" but have deprecated this terminology as of hbase-2.0.0. The "L1" cache referred to an
-instance of LruBlockCache and "L2" to an off-heap BucketCache. Instead, when BucketCache is enabled,
-all DATA blocks are kept in the BucketCache tier and meta blocks -- INDEX and BLOOM blocks -- are on-heap in the `LruBlockCache`.
-Management of these two tiers and the policy that dictates how blocks move between them is done by `CombinedBlockCache`.
-
-[[cache.configurations]]
-==== General Cache Configurations
-
-Apart from the cache implementation itself, you can set some general configuration options to control how the cache performs.
-See link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].
-After setting any of these options, restart or rolling restart your cluster for the configuration to take effect.
-Check logs for errors or unexpected behavior.
-
-See also <>, which discusses a new option introduced in link:https://issues.apache.org/jira/browse/HBASE-9857[HBASE-9857].
-
-[[block.cache.design]]
-==== LruBlockCache Design
-
-The LruBlockCache is an LRU cache that contains three levels of block priority to allow for scan-resistance and in-memory ColumnFamilies:
-
-* Single access priority: The first time a block is loaded from HDFS it normally has this priority and it will be part of the first group to be considered during evictions.
- The advantage is that scanned blocks are more likely to get evicted than blocks that are getting more usage.
-* Multi access priority: If a block in the previous priority group is accessed again, it upgrades to this priority.
- It is thus part of the second group considered during evictions.
-* In-memory access priority: If the block's family was configured to be "in-memory", it will be part of this priority disregarding the number of times it was accessed.
- Catalog tables are configured like this.
- This group is the last one considered during evictions.
-+
-To mark a column family as in-memory, call
-
-[source,java]
-----
-HColumnDescriptor.setInMemory(true);
-----
-
-if creating a table from Java, or set `IN_MEMORY => true` when creating or altering a table in the shell: e.g.
-
-[source]
-----
-hbase(main):003:0> create 't', {NAME => 'f', IN_MEMORY => 'true'}
-----
-
-For more information, see the LruBlockCache source.
-
-[[block.cache.usage]]
-==== LruBlockCache Usage
-
-Block caching is enabled by default for all the user tables which means that any read operation will load the LRU cache.
-This might be good for a large number of use cases, but further tunings are usually required in order to achieve better performance.
-An important concept is the link:http://en.wikipedia.org/wiki/Working_set_size[working set size], or WSS, which is: "the amount of memory needed to compute the answer to a problem". For a website, this would be the data that's needed to answer the queries over a short amount of time.
-
-The way to calculate how much memory is available in HBase for caching is:
-
-[source]
-----
-number of region servers * heap size * hfile.block.cache.size * 0.99
-----
-
-The default value for the block cache is 0.4 which represents 40% of the available heap.
-The last value (99%) is the default acceptable loading factor in the LRU cache after which eviction is started.
-The reason it is included in this equation is that it would be unrealistic to say that it is possible to use 100% of the available memory since this would cause the process to block from the point where it loads new blocks.
-Here are some examples; the arithmetic behind the first one is worked out in the sketch after this list:
-
-* One region server with the heap size set to 1 GB and the default block cache size will have 405 MB of block cache available.
-* 20 region servers with the heap size set to 8 GB and a default block cache size will have 63.3 GB of block cache.
-* 100 region servers with the heap size set to 24 GB and a block cache size of 0.5 will have about 1.16 TB of block cache.
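-
-The following is a small, purely illustrative calculation of the first example above.
-
-[source,java]
-----
-// Illustrative arithmetic for the first example above (1 GB heap, default settings).
-long heapBytes = 1L * 1024 * 1024 * 1024;   // heap size per region server
-double hfileBlockCacheSize = 0.4;           // default hfile.block.cache.size
-double acceptableLoadFactor = 0.99;         // default LRU acceptable loading factor
-int regionServers = 1;
-double blockCacheBytes = regionServers * heapBytes * hfileBlockCacheSize * acceptableLoadFactor;
-System.out.printf("block cache: %.0f MB%n", blockCacheBytes / (1024 * 1024)); // ~405 MB
-----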
-
-Your data is not the only resident of the block cache.
-Here are others that you may have to take into account:
-
-Catalog Tables::
- The `hbase:meta` table is forced into the block cache and has the in-memory priority, which means that it is harder to evict.
-
-NOTE: The hbase:meta table can occupy a few MBs depending on the number of regions.
-
-HFiles Indexes::
- An _HFile_ is the file format that HBase uses to store data in HDFS.
- It contains a multi-layered index which allows HBase to seek to the data without having to read the whole file.
- The size of those indexes is a function of the block size (64 KB by default), the size of your keys, and the amount of data you are storing.
- For big data sets it's not unusual to see numbers around 1GB per region server, although not all of it will be in cache because the LRU will evict indexes that aren't used.
-
-Keys::
- The values that are stored are only half the picture, since each value is stored along with its keys (row key, family qualifier, and timestamp). See <>.
-
-Bloom Filters::
- Just like the HFile indexes, those data structures (when enabled) are stored in the LRU.
-
-Currently the recommended way to measure HFile index and bloom filter sizes is to look at the region server web UI and check the relevant metrics.
-For keys, sampling can be done by using the HFile command line tool and looking for the average key size metric.
-Since HBase 0.98.3, you can view details on BlockCache stats and metrics in a special Block Cache section in the UI.
-
-It's generally bad to use block caching when the WSS doesn't fit in memory.
-This is the case when you have for example 40GB available across all your region servers' block caches but you need to process 1TB of data.
-One of the reasons is that the churn generated by the evictions will trigger more garbage collections unnecessarily.
-Here are two use cases:
-
-* Fully random reading pattern: This is a case where you almost never access the same row twice within a short amount of time such that the chance of hitting a cached block is close to 0.
- Setting block caching on such a table is a waste of memory and CPU cycles, more so because it will generate more garbage for the JVM to pick up.
- For more information on monitoring GC, see <>.
-* Mapping a table: In a typical MapReduce job that takes a table as input, every row will be read only once, so there's no need to put the rows into the block cache.
- The Scan object has the option of turning this off via the setCacheBlocks method (set it to false). You can still keep block caching turned on for this table if you need fast random read access.
- An example would be counting the number of rows in a table that serves live traffic: caching every block of that table would create massive churn and would surely evict data that's currently in use.
-
-[[data.blocks.in.fscache]]
-===== Caching META blocks only (DATA blocks in fscache)
-
-An interesting setup is one where we cache META blocks only and we read DATA blocks in on each access.
-If the DATA blocks fit inside fscache, this alternative may make sense when access is completely random across a very large dataset.
-To enable this setup, alter your table and for each column family set `BLOCKCACHE => 'false'`.
-You are 'disabling' the BlockCache for this column family only. You can never disable the caching of META blocks.
-Since link:https://issues.apache.org/jira/browse/HBASE-4683[HBASE-4683 Always cache index and bloom blocks], we will cache META blocks even if the BlockCache is disabled.
-
-[[offheap.blockcache]]
-==== Off-heap Block Cache
-
-[[enable.bucketcache]]
-===== How to Enable BucketCache
-
-The usual deploy of BucketCache is via a managing class that sets up two caching tiers:
-an on-heap cache implemented by LruBlockCache and a second cache implemented with BucketCache.
-The managing class is link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.html[CombinedBlockCache] by default.
-The previous link describes the caching 'policy' implemented by CombinedBlockCache.
-In short, it works by keeping meta blocks -- INDEX and BLOOM in the on-heap LruBlockCache tier -- and DATA blocks are kept in the BucketCache tier.
-
-====
-Pre-hbase-2.0.0 versions::
-Fetching will always be slower when fetching from BucketCache in pre-hbase-2.0.0,
-as compared to the native on-heap LruBlockCache. However, latencies tend to be less
-erratic across time, because there is less garbage collection when you use BucketCache since it is managing BlockCache allocations, not the GC.
-If the BucketCache is deployed in off-heap mode, this memory is not managed by the GC at all.
-This is why you'd use BucketCache in pre-2.0.0, so your latencies are less erratic,
-to mitigate GCs and heap fragmentation, and so you can safely use more memory.
-See Nick Dimiduk's link:http://www.n10k.com/blog/blockcache-101/[BlockCache 101] for comparisons running on-heap vs off-heap tests.
-Also see link:https://people.apache.org/~stack/bc/[Comparing BlockCache Deploys] which finds that if your dataset fits inside your LruBlockCache deploy, use it otherwise if you are experiencing cache churn (or you want your cache to exist beyond the vagaries of java GC), use BucketCache.
-+
-In pre-2.0.0,
-one can configure the BucketCache so it receives the `victim` of an LruBlockCache eviction.
-All DATA and INDEX blocks are cached in L1 first. When eviction happens from L1, the blocks (or `victims`) will get moved to L2.
-Set `cacheDataInL1` via `HColumnDescriptor.setCacheDataInL1(true)` or, in the shell, when creating or amending column families, set `CACHE_DATA_IN_L1` to true, e.g.:
-[source]
-----
-hbase(main):003:0> create 't', {NAME => 't', CONFIGURATION => {CACHE_DATA_IN_L1 => 'true'}}
-----
-
-hbase-2.0.0+ versions::
-HBASE-11425 changed the HBase read path so it can hold the read data off-heap, avoiding copying of cached data onto the Java heap.
-See <>. In hbase-2.0.0, off-heap latencies approach those of on-heap cache latencies with the added
-benefit of NOT provoking GC.
-+
-From HBase 2.0.0 onwards, the notions of L1 and L2 have been deprecated. When BucketCache is turned on, the DATA blocks will always go to BucketCache and INDEX/BLOOM blocks go to the on-heap LruBlockCache. `cacheDataInL1` support has been removed.
-====
-
-[[bc.deloy.modes]]
-====== BucketCache Deploy Modes
-The BucketCache Block Cache can be deployed in _offheap_, _file_, or _mmapped_ file mode.
-
-You set which via the `hbase.bucketcache.ioengine` setting.
-Setting it to `offheap` will have BucketCache make its allocations off-heap, and an ioengine setting of `file:PATH_TO_FILE` will direct BucketCache to use file caching (useful in particular if you have some fast I/O attached to the box, such as SSDs). From 2.0.0, it is possible to have more than one file backing the BucketCache. This is very useful, especially when the cache size requirement is high. For multiple backing files, configure the ioengine as `files:PATH_TO_FILE1,PATH_TO_FILE2,PATH_TO_FILE3`. BucketCache can also be configured to use an mmapped file. Configure the ioengine as `mmap:PATH_TO_FILE` for this.
-
-It is possible to deploy a tiered setup where we bypass the CombinedBlockCache policy and have BucketCache working as a strict L2 cache to the L1 LruBlockCache.
-For such a setup, set `hbase.bucketcache.combinedcache.enabled` to `false`.
-In this mode, on eviction from L1, blocks go to L2.
-When a block is cached, it is cached first in L1.
-When we go to look for a cached block, we look first in L1 and if none found, then search L2.
-Let us call this deploy format, _Raw L1+L2_.
-NOTE: This L1+L2 mode is removed from 2.0.0. When BucketCache is used, it will be strictly the DATA cache and the LruBlockCache will cache INDEX/META blocks.
-
-Other BucketCache configs include: specifying a location to persist cache to across restarts, how many threads to use writing the cache, etc.
-See the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig.html] class for configuration options and descriptions.
-
-To check that it is enabled, look for the log line describing the cache setup; it will detail how BucketCache has been deployed.
-Also see the UI, which details the cache tiering and its configuration.
-
-[[bc.example]]
-====== BucketCache Example Configuration
-This sample provides a configuration for a 4 GB off-heap BucketCache with a 1 GB on-heap cache.
-
-Configuration is performed on the RegionServer.
-
-Setting `hbase.bucketcache.ioengine` and `hbase.bucketcache.size` > 0 enables `CombinedBlockCache`.
-Let us presume that the RegionServer has been set to run with a 5G heap: i.e. `HBASE_HEAPSIZE=5g`.
-
-
-. First, edit the RegionServer's _hbase-env.sh_ and set `HBASE_OFFHEAPSIZE` to a value greater than the off-heap size wanted, in this case, 4 GB (expressed as 4G). Let's set it to 5G.
- That'll be 4G for our off-heap cache and 1G for any other uses of off-heap memory (there are other users of off-heap memory other than BlockCache; e.g.
- DFSClient in RegionServer can make use of off-heap memory). See <>.
-+
-[source]
-----
-HBASE_OFFHEAPSIZE=5G
-----
-
-. Next, add the following configuration to the RegionServer's _hbase-site.xml_.
-+
-[source,xml]
-----
-<property>
-  <name>hbase.bucketcache.ioengine</name>
-  <value>offheap</value>
-</property>
-<property>
-  <name>hfile.block.cache.size</name>
-  <value>0.2</value>
-</property>
-<property>
-  <name>hbase.bucketcache.size</name>
-  <value>4096</value>
-</property>
-----
-
-. Restart or rolling restart your cluster, and check the logs for any issues.
-
-
-In the above, we set the BucketCache to be 4G.
-We configured the on-heap LruBlockCache to have 20% (0.2) of the RegionServer's heap size (0.2 * 5G = 1G). In other words, you configure the L1 LruBlockCache as you would normally (as if there were no L2 cache present).
-
-link:https://issues.apache.org/jira/browse/HBASE-10641[HBASE-10641] introduced the ability to configure multiple sizes for the buckets of the BucketCache, in HBase 0.98 and newer.
-To configure multiple bucket sizes, configure the new property `hbase.bucketcache.bucket.sizes` to a comma-separated list of block sizes, ordered from smallest to largest, with no spaces.
-The goal is to optimize the bucket sizes based on your data access patterns.
-The following example configures buckets of size 4096 and 8192.
-
-[source,xml]
-----
-<property>
-  <name>hbase.bucketcache.bucket.sizes</name>
-  <value>4096,8192</value>
-</property>
-----
-
-[[direct.memory]]
-.Direct Memory Usage In HBase
-[NOTE]
-====
-The default maximum direct memory varies by JVM.
-Traditionally it is 64M, some relation to the allocated heap size (-Xmx), or no limit at all (JDK7, apparently). HBase servers use direct memory; in particular, with short-circuit reading (see <>), the hosted DFSClient will allocate direct memory buffers. How much the DFSClient uses is not easy to quantify; it is the number of open HFiles * `hbase.dfs.client.read.shortcircuit.buffer.size`, where `hbase.dfs.client.read.shortcircuit.buffer.size` is set to 128k in HBase -- see _hbase-default.xml_ default configurations.
-If you do off-heap block caching, you'll be making use of direct memory.
-The RPCServer uses a ByteBuffer pool. From 2.0.0, these buffers are off-heap ByteBuffers.
-When starting your JVM, make sure the `-XX:MaxDirectMemorySize` setting in _conf/hbase-env.sh_ considers the off-heap BlockCache (`hbase.bucketcache.size`), DFSClient usage, and the RPC-side ByteBufferPool max size. This has to be a bit higher than the sum of the off-heap BlockCache size and the max ByteBufferPool size. Allocating an extra 1-2 GB for the max direct memory size has worked in tests. Direct memory, which is part of the Java process memory, is separate from the object heap allocated by -Xmx.
-The value allocated by `MaxDirectMemorySize` must not exceed physical RAM, and is likely to be less than the total available RAM due to other memory requirements and system constraints.
-
-You can see how much memory -- on-heap and off-heap/direct -- a RegionServer is configured to use and how much it is using at any one time by looking at the _Server Metrics: Memory_ tab in the UI.
-It can also be gotten via JMX.
-In particular the direct memory currently used by the server can be found on the `java.nio.type=BufferPool,name=direct` bean.
-Terracotta has a link:http://terracotta.org/documentation/4.0/bigmemorygo/configuration/storage-options[good write up] on using off-heap memory in Java.
-It is for their product BigMemory but a lot of the issues noted apply in general to any attempt at going off-heap. Check it out.
-====
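-
-As an illustration only (the numbers assume the 4G BucketCache example above plus roughly 2G of headroom for DFSClient and RPC buffers), the direct memory limit might be raised in _conf/hbase-env.sh_ like so:
-
-[source]
-----
-# Illustrative: 4G BucketCache + ~2G headroom for DFSClient and RPC ByteBuffer pools.
-HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=6g"
-----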
-
-.hbase.bucketcache.percentage.in.combinedcache
-[NOTE]
-====
-This is a pre-HBase 1.0 configuration removed because it was confusing.
-It was a float that you would set to some value between 0.0 and 1.0.
-Its default was 0.9.
-If the deploy was using CombinedBlockCache, then the LruBlockCache L1 size was calculated to be `(1 - hbase.bucketcache.percentage.in.combinedcache) * size-of-bucketcache` and the BucketCache size was `hbase.bucketcache.percentage.in.combinedcache * size-of-bucket-cache`.
-where size-of-bucket-cache itself is EITHER the value of the configuration `hbase.bucketcache.size` IF it was specified as Megabytes OR `hbase.bucketcache.size` * `-XX:MaxDirectMemorySize` if `hbase.bucketcache.size` is between 0 and 1.0.
-
-In 1.0, it is more straightforward.
-The on-heap LruBlockCache size is set as a fraction of the Java heap using the `hfile.block.cache.size` setting (not the best name), and BucketCache is set as above in absolute megabytes.
-====
-
-==== Compressed BlockCache
-
-link:https://issues.apache.org/jira/browse/HBASE-11331[HBASE-11331] introduced lazy BlockCache decompression, more simply referred to as compressed BlockCache.
-When compressed BlockCache is enabled, data and encoded data blocks are cached in the BlockCache in their on-disk format, rather than being decompressed and decrypted before caching.
-
-For a RegionServer hosting more data than can fit into cache, enabling this feature with SNAPPY compression has been shown to result in a 50% increase in throughput and a 30% improvement in mean latency, while increasing garbage collection by 80% and overall CPU load by 2%. See HBASE-11331 for more details about how performance was measured and achieved.
-For a RegionServer hosting data that can comfortably fit into cache, or if your workload is sensitive to extra CPU or garbage-collection load, you may receive less benefit.
-
-The compressed BlockCache is disabled by default. To enable it, set `hbase.block.data.cachecompressed` to `true` in _hbase-site.xml_ on all RegionServers.
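-
-For example, in _hbase-site.xml_ on each RegionServer:
-
-[source,xml]
-----
-<property>
-  <name>hbase.block.data.cachecompressed</name>
-  <value>true</value>
-</property>
-----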
-
-[[regionserver_splitting_implementation]]
-=== RegionServer Splitting Implementation
-
-As write requests are handled by the region server, they accumulate in an in-memory storage system called the _memstore_. Once the memstore fills, its contents are written to disk as additional store files. This event is called a _memstore flush_. As store files accumulate, the RegionServer will <> them into fewer, larger files. After each flush or compaction finishes, the amount of data stored in the region has changed. The RegionServer consults the region split policy to determine if the region has grown too large or should be split for another policy-specific reason. A region split request is enqueued if the policy recommends it.
-
-Logically, the process of splitting a region is simple. We find a suitable point in the keyspace of the region where we should divide the region in half, then split the region's data into two new regions at that point. The details of the process however are not simple. When a split happens, the newly created _daughter regions_ do not rewrite all the data into new files immediately. Instead, they create small files similar to symbolic link files, named link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/Reference.html[Reference files], which point to either the top or bottom part of the parent store file according to the split point. The reference file is used just like a regular data file, but only half of the records are considered. The region can only be split if there are no more references to the immutable data files of the parent region. Those reference files are cleaned gradually by compactions, so that the region will stop referring to its parents files, and can be split further.
-
-Although splitting the region is a local decision made by the RegionServer, the split process itself must coordinate with many actors. The RegionServer notifies the Master before and after the split, updates the `.META.` table so that clients can discover the new daughter regions, and rearranges the directory structure and data files in HDFS. Splitting is a multi-task process. To enable rollback in case of an error, the RegionServer keeps an in-memory journal about the execution state. The steps taken by the RegionServer to execute the split are illustrated in <>. Each step is labeled with its step number. Actions from RegionServers or Master are shown in red, while actions from the clients are shown in green.
-
-[[regionserver_split_process_image]]
-.RegionServer Split Process
-image::region_split_process.png[Region Split Process]
-
-. The RegionServer decides locally to split the region, and prepares the split. *THE SPLIT TRANSACTION IS STARTED.* As a first step, the RegionServer acquires a shared read lock on the table to prevent schema modifications during the splitting process. Then it creates a znode in zookeeper under `/hbase/region-in-transition/region-name`, and sets the znode's state to `SPLITTING`.
-. The Master learns about this znode, since it has a watcher for the parent `region-in-transition` znode.
-. The RegionServer creates a sub-directory named `.splits` under the parent’s `region` directory in HDFS.
-. The RegionServer closes the parent region and marks the region as offline in its local data structures. *THE SPLITTING REGION IS NOW OFFLINE.* At this point, client requests coming to the parent region will throw `NotServingRegionException`. The client will retry with some backoff. The closing region is flushed.
-. The RegionServer creates region directories under the `.splits` directory, for daughter
-regions A and B, and creates necessary data structures. Then it splits the store files,
-in the sense that it creates two Reference files per store file in the parent region.
-Those reference files will point to the parent region's files.
-. The RegionServer creates the actual region directory in HDFS, and moves the reference files for each daughter.
-. The RegionServer sends a `Put` request to the `.META.` table, to set the parent as offline in the `.META.` table and add information about daughter regions. At this point, there won’t be individual entries in `.META.` for the daughters. Clients will see that the parent region is split if they scan `.META.`, but won’t know about the daughters until they appear in `.META.`. Also, if this `Put` to `.META.` succeeds, the parent will be effectively split. If the RegionServer fails before this RPC succeeds, the Master and the next RegionServer opening the region will clean up the dirty state left by the region split. After the `.META.` update, though, the region split will be rolled forward by the Master.
-. The RegionServer opens daughters A and B in parallel.
-. The RegionServer adds the daughters A and B to `.META.`, together with information that it hosts the regions. *THE SPLIT REGIONS (DAUGHTERS WITH REFERENCES TO PARENT) ARE NOW ONLINE.* After this point, clients can discover the new regions and issue requests to them. Clients cache the `.META.` entries locally, but when they make requests to the RegionServer or `.META.`, their caches will be invalidated, and they will learn about the new regions from `.META.`.
-. The RegionServer updates znode `/hbase/region-in-transition/region-name` in ZooKeeper to state `SPLIT`, so that the master can learn about it. The balancer can freely re-assign the daughter regions to other region servers if necessary. *THE SPLIT TRANSACTION IS NOW FINISHED.*
-. After the split, `.META.` and HDFS will still contain references to the parent region. Those references will be removed when compactions in daughter regions rewrite the data files. Garbage collection tasks in the master periodically check whether the daughter regions still refer to the parent region's files. If not, the parent region will be removed.
-
-[[wal]]
-=== Write Ahead Log (WAL)
-
-[[purpose.wal]]
-==== Purpose
-
-The _Write Ahead Log (WAL)_ records all changes to data in HBase, to file-based storage.
-Under normal operations, the WAL is not needed because data changes move from the MemStore to StoreFiles.
-However, if a RegionServer crashes or becomes unavailable before the MemStore is flushed, the WAL ensures that the changes to the data can be replayed.
-If writing to the WAL fails, the entire operation to modify the data fails.
-
-HBase uses an implementation of the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/wal/WAL.html[WAL] interface.
-Usually, there is only one instance of a WAL per RegionServer. An exception
-is the RegionServer that is carrying _hbase:meta_; the _meta_ table gets its
-own dedicated WAL.
-The RegionServer records Puts and Deletes to its WAL before recording these
-Mutations to the <> for the affected <>.
-
-.The HLog
-[NOTE]
-====
-Prior to 2.0, the interface for WALs in HBase was named `HLog`.
-In 0.94, HLog was the name of the implementation of the WAL.
-You will likely find references to the HLog in documentation tailored to these older versions.
-====
-
-The WAL resides in HDFS in the _/hbase/WALs/_ directory, with one subdirectory per RegionServer.
-
-For more general information about the concept of write ahead logs, see the Wikipedia
-link:http://en.wikipedia.org/wiki/Write-ahead_logging[Write-Ahead Log] article.
-
-
-[[wal.providers]]
-==== WAL Providers
-In HBase, there are a number of WAL implementations (or 'Providers'). Each is known
-by a short name label (that unfortunately is not always descriptive). You set the provider in
-_hbase-site.xml_, passing the WAL provider short-name as the value of the
-_hbase.wal.provider_ property (set the provider for _hbase:meta_ using the
-_hbase.wal.meta_provider_ property; otherwise it uses the same provider configured
-by _hbase.wal.provider_).
-
- * _asyncfs_: The *default*. New since hbase-2.0.0 (HBASE-15536, HBASE-14790). This _AsyncFSWAL_ provider, as it identifies itself in RegionServer logs, is built on a new non-blocking dfsclient implementation. It currently resides in the hbase codebase but the intent is to move it back up into HDFS itself. WAL edits are written concurrently ("fan-out" style) to each of the WAL-block replicas on each DataNode, rather than in a chained pipeline as the default client does. Latencies should be better. See link:https://www.slideshare.net/HBaseCon/apache-hbase-improvements-and-practices-at-xiaomi[Apache HBase Improvements and Practices at Xiaomi] at slide 14 onward for more detail on the implementation.
- * _filesystem_: This was the default in hbase-1.x releases. It is built on the blocking _DFSClient_ and writes to replicas in classic _DFSClient_ pipeline mode. In logs it identifies as _FSHLog_ or _FSHLogProvider_.
- * _multiwal_: This provider is made of multiple instances of _asyncfs_ or _filesystem_. See the next section for more on _multiwal_.
-
-Look for lines like the one below in the RegionServer log to see which provider is in place (the example below shows the default AsyncFSWALProvider):
-
-----
-2018-04-02 13:22:37,983 INFO [regionserver/ve0528:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
-----
-
-NOTE: Because _AsyncFSWAL_ hooks into the internals of the DFSClient implementation, it can easily be broken by an upgrade of the hadoop dependencies, even a simple patch release. If you do not specify the WAL provider explicitly, we first try to use _asyncfs_ and, if that fails, fall back to _filesystem_. Note that this fallback may not always work, so if you still have problems starting HBase due to a failure to start _AsyncFSWAL_, please specify _filesystem_ explicitly in the config file.
-
-NOTE: EC (Erasure Coding) support has been added to hadoop-3.x, but it is incompatible with the WAL because the EC output stream does not support hflush/hsync. In order to create a non-EC file in an EC directory, we need to use the new builder-based create API for _FileSystem_, but it was only introduced in hadoop-2.9+, and HBase still needs to support hadoop-2.7.x. So please do not enable EC for the WAL directory until we find a way to deal with this.
-
-==== MultiWAL
-With a single WAL per RegionServer, the RegionServer must write to the WAL serially, because HDFS files must be sequential. This causes the WAL to be a performance bottleneck.
-
-HBase 1.0 introduced MultiWAL support in link:https://issues.apache.org/jira/browse/HBASE-5699[HBASE-5699]. MultiWAL allows a RegionServer to write multiple WAL streams in parallel, by using multiple pipelines in the underlying HDFS instance, which increases total throughput during writes. This parallelization is done by partitioning incoming edits by their Region. Thus, the current implementation will not help with increasing the throughput to a single Region.
-
-RegionServers using the original WAL implementation and those using the MultiWAL implementation can each handle recovery of either set of WALs, so a zero-downtime configuration update is possible through a rolling restart.
-
-.Configure MultiWAL
-To configure MultiWAL for a RegionServer, set the value of the property `hbase.wal.provider` to `multiwal` by pasting in the following XML:
-
-[source,xml]
-----
-<property>
-  <name>hbase.wal.provider</name>
-  <value>multiwal</value>
-</property>
-----
-
-Restart the RegionServer for the changes to take effect.
-
-To disable MultiWAL for a RegionServer, unset the property and restart the RegionServer.
-
-
-[[wal_flush]]
-==== WAL Flushing
-
-TODO (describe).
-
-==== WAL Splitting
-
-A RegionServer serves many regions.
-All of the regions in a region server share the same active WAL file.
-Each edit in the WAL file includes information about which region it belongs to.
-When a region is opened, the edits in the WAL file which belong to that region need to be replayed.
-Therefore, edits in the WAL file must be grouped by region so that particular sets can be replayed to regenerate the data in a particular region.
-The process of grouping the WAL edits by region is called _log splitting_.
-It is a critical process for recovering data if a region server fails.
-
-Log splitting is done by the HMaster during cluster start-up or by the ServerShutdownHandler as a region server shuts down.
-So that consistency is guaranteed, affected regions are unavailable until data is restored.
-All WAL edits need to be recovered and replayed before a given region can become available again.
-As a result, regions affected by log splitting are unavailable until the process completes.
-
-.Procedure: Log Splitting, Step by Step
-. The _/hbase/WALs/<host>,<port>,<startcode>_ directory is renamed.
-+
-Renaming the directory is important because a RegionServer may still be up and accepting requests even if the HMaster thinks it is down.
-If the RegionServer does not respond immediately and does not heartbeat its ZooKeeper session, the HMaster may interpret this as a RegionServer failure.
-Renaming the logs directory ensures that existing, valid WAL files which are still in use by an active but busy RegionServer are not written to by accident.
-+
-The new directory is named according to the following pattern:
-+
-----
-/hbase/WALs/<host>,<port>,<startcode>-splitting
-----
-+
-An example of such a renamed directory might look like the following:
-+
-----
-/hbase/WALs/srv.example.com,60020,1254173957298-splitting
-----
-
-. Each log file is split, one at a time.
-+
-The log splitter reads the log file one edit entry at a time and puts each edit entry into the buffer corresponding to the edit's region.
-At the same time, the splitter starts several writer threads.
-Writer threads pick up a corresponding buffer and write the edit entries in the buffer to a temporary recovered edit file.
-The temporary edit file is stored to disk with the following naming pattern:
-+
-----
-/hbase/<table name>/<region id>/recovered.edits/.temp
-----
-+
-This file is used to store all the edits in the WAL log for this region.
-After log splitting completes, the _.temp_ file is renamed to the sequence ID of the first log written to the file.
-+
-To determine whether all edits have been written, the sequence ID is compared to the sequence of the last edit that was written to the HFile.
-If the sequence of the last edit is greater than or equal to the sequence ID included in the file name, it is clear that all writes from the edit file have been completed.
-
-. After log splitting is complete, each affected region is assigned to a RegionServer.
-+
-When the region is opened, the _recovered.edits_ folder is checked for recovered edits files.
-If any such files are present, they are replayed by reading the edits and saving them to the MemStore.
-After all edit files are replayed, the contents of the MemStore are written to disk (HFile) and the edit files are deleted.
-
-
-===== Handling of Errors During Log Splitting
-
-If you set the `hbase.hlog.split.skip.errors` option to `true`, errors are treated as follows:
-
-* Any error encountered during splitting will be logged.
-* The problematic WAL log will be moved into the _.corrupt_ directory under the hbase `rootdir`.
-* Processing of the WAL will continue.
-
-If the `hbase.hlog.split.skip.errors` option is set to `false`, the default, the exception will be propagated and the split will be logged as failed.
-See link:https://issues.apache.org/jira/browse/HBASE-2958[HBASE-2958 When
-hbase.hlog.split.skip.errors is set to false, we fail the split but that's it].
-More needs to be done than simply failing the split when this flag is set.
-
-====== How EOFExceptions are treated when splitting a crashed RegionServer's WALs
-
-If an EOFException occurs while splitting logs, the split proceeds even when `hbase.hlog.split.skip.errors` is set to `false`.
-An EOFException while reading the last log in the set of files to split is expected, because the RegionServer was likely in the process of writing a record at the time of the crash.
-For background, see link:https://issues.apache.org/jira/browse/HBASE-2643[HBASE-2643 Figure how to deal with eof splitting logs].
-
-===== Performance Improvements during Log Splitting
-
-WAL log splitting and recovery can be resource intensive and take a long time, depending on the number of RegionServers involved in the crash and the size of the regions. <> was developed to improve performance during log splitting.
-
-[[distributed.log.splitting]]
-.Enabling or Disabling Distributed Log Splitting
-
-Distributed log processing has been enabled by default since HBase 0.92.
-The setting is controlled by the `hbase.master.distributed.log.splitting` property, which can be set to `true` or `false` and defaults to `true`.
-
-==== WAL splitting based on procedureV2
-HBASE-20610 introduces a new way of coordinating WAL splitting, based on the procedureV2 framework. This simplifies the WAL splitting process and removes the need to connect to ZooKeeper.
-
-[[background]]
-.Background
-Previously, WAL splitting was coordinated by ZooKeeper: each RegionServer tried to grab split tasks from ZooKeeper, and that burden grows heavier as the number of RegionServers increases.
-
-[[implementation.on.master.side]]
-.Implementation on Master side
-During the ServerCrashProcedure, the SplitWALManager creates one SplitWALProcedure for each WAL file that needs to be split. Each SplitWALProcedure then spawns a SplitWALRemoteProcedure to send the request to a RegionServer.
-SplitWALProcedure is a StateMachineProcedure; its state transition diagram is shown below.
-
-.WAL_splitting_coordination
-image::WAL_splitting.png[]
-
-[[implementation.on.region.server.side]]
-.Implementation on Region Server side
-The RegionServer receives a SplitWALCallable and executes it, which is much more straightforward than before. It returns null on success and throws an exception if there is any error.
-
-[[preformance]]
-.Performance
-According to tests on a cluster with 5 RegionServers and 1 Master,
-procedureV2-coordinated WAL splitting performs better than ZooKeeper-coordinated WAL splitting, both when restarting the whole cluster and when a single RegionServer crashes.
-
-[[enable.this.feature]]
-.Enable this feature
-To enable this feature, first ensure that your HBase distribution already contains this code. If not, upgrade the HBase cluster, without any configuration change, first.
-Then set the configuration property 'hbase.split.wal.zk.coordinated' to false and rolling-upgrade the Master with the new configuration. WAL splitting is now handled by the new implementation.
-RegionServers, however, will still try to grab tasks from ZooKeeper; rolling-upgrade the RegionServers with the new configuration to stop that.
-
-* The steps are as follows:
-** Upgrade the whole cluster to get the new implementation.
-** Upgrade the Master with the new configuration 'hbase.split.wal.zk.coordinated'=false.
-** Upgrade the RegionServers so that they stop grabbing tasks from ZooKeeper.
-
-[[wal.compression]]
-==== WAL Compression ====
-
-The content of the WAL can be compressed using LRU Dictionary compression.
-This can be used to speed up WAL replication to different datanodes.
-The dictionary can store up to 2^15^ elements; eviction starts after this number is exceeded.
-
-To enable WAL compression, set the `hbase.regionserver.wal.enablecompression` property to `true`.
-The default value for this property is `false`.
-By default, WAL tag compression is turned on when WAL compression is enabled.
-You can turn off WAL tag compression by setting the `hbase.regionserver.wal.tags.enablecompression` property to `false`.
-
-A possible downside to WAL compression is that we lose more data from the last block in the WAL if it is ill-terminated
-mid-write. If entries in this last block were added with new dictionary entries but we failed to persist the amended
-dictionary because of an abrupt termination, a read of this last block may not be able to resolve the last-written entries.
-
-[[wal.durability]]
-==== Durability
-It is possible to set _durability_ on each Mutation or on a Table basis. Options include:
-
- * _SKIP_WAL_: Do not write Mutations to the WAL (See the next section, <>).
- * _ASYNC_WAL_: Write the WAL asynchronously; do not hold up clients waiting on the sync of their write to the filesystem but return immediately. The edit becomes visible. Meanwhile, in the background, the Mutation will be flushed to the WAL at some later time. This option currently may lose data. See HBASE-16689.
- * _SYNC_WAL_: The *default*. Each edit is sync'd to HDFS before we return success to the client.
- * _FSYNC_WAL_: Each edit is fsync'd to HDFS and the filesystem before we return success to the client.
-
-Do not confuse the _ASYNC_WAL_ option on a Mutation or Table with the _AsyncFSWAL_ writer; they are distinct
-options that are, unfortunately, closely named.
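-
-Durability is requested through the client API. The sketch below is illustrative only; it assumes a hypothetical table named `test` with a column family `cf`, and shows a single `Put` asking for `ASYNC_WAL` while other writes keep the default durability.
-
-[source,java]
-----
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.util.Bytes;
-
-public class DurabilityExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Table table = connection.getTable(TableName.valueOf("test"))) {
-      Put put = new Put(Bytes.toBytes("row1"));
-      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), Bytes.toBytes("value1"));
-      // Ask for an asynchronous WAL sync for this single Mutation only; other
-      // writes to the table keep the table or cluster default durability.
-      put.setDurability(Durability.ASYNC_WAL);
-      table.put(put);
-    }
-  }
-}
-----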
-
-[[arch.custom.wal.dir]]
-==== Custom WAL Directory
-HBASE-17437 added support for specifying a WAL directory outside the HBase root directory, or even in a different FileSystem, since 1.3.3/2.0+. Some FileSystems (such as Amazon S3) don’t support append or consistent writes; in such scenarios the WAL directory needs to be configured in a different FileSystem to avoid loss of writes.
-
-The following configurations were added to accomplish this:
-
-. `hbase.wal.dir`
-+
-This defines where the root WAL directory is located, which could be on a different FileSystem than the root directory. The WAL directory cannot be set to a subdirectory of the root directory. The default value is the root directory if unset.
-
-. `hbase.rootdir.perms`
-+
-Configures FileSystem permissions to set on the root directory. This is '700' by default.
-
-. `hbase.wal.dir.perms`
-+
-Configures FileSystem permissions to set on the WAL directory FileSystem. This is '700' by default.
-
-NOTE: While migrating to a custom WAL directory (outside the HBase root directory or on a different FileSystem), existing WAL files must be copied manually to the new WAL directory; otherwise data loss or inconsistency may result, as the HMaster has no information about the previous WAL directory.
-
-[[wal.disable]]
-==== Disabling the WAL
-
-It is possible to disable the WAL, to improve performance in certain specific situations.
-However, disabling the WAL puts your data at risk.
-The only situation where this is recommended is during a bulk load.
-This is because, in the event of a problem, the bulk load can be re-run with no risk of data loss.
-
-The WAL can be disabled for a given Mutation by setting its durability to `SKIP_WAL`.
-Use the `Mutation.setDurability(Durability.SKIP_WAL)` and `Mutation.getDurability()` methods to set and get this value.
-As noted in the Durability section above, durability (including `SKIP_WAL`) can also be set on a per-table basis.
-
-WARNING: If you disable the WAL for anything other than bulk loads, your data is at risk.
-
-
-[[regions.arch]]
-== Regions
-
-Regions are the basic element of availability and distribution for tables, and consist of a Store per Column Family.
-The hierarchy of objects is as follows:
-
-----
-Table (HBase table)
- Region (Regions for the table)
- Store (Store per ColumnFamily for each Region for the table)
- MemStore (MemStore for each Store for each Region for the table)
- StoreFile (StoreFiles for each Store for each Region for the table)
- Block (Blocks within a StoreFile within a Store for each Region for the table)
-----
-
-For a description of what HBase files look like when written to HDFS, see <>.
-
-[[arch.regions.size]]
-=== Considerations for Number of Regions
-
-In general, HBase is designed to run with a small (20-200) number of relatively large (5-20Gb) regions per server.
-The considerations for this are as follows:
-
-[[too_many_regions]]
-==== Why should I keep my Region count low?
-
-Typically you want to keep your region count low on HBase for numerous reasons.
-Usually right around 100 regions per RegionServer has yielded the best results.
-Here are some of the reasons below for keeping region count low:
-
-. MSLAB (MemStore-local allocation buffer) requires 2MB per MemStore (that's 2MB per family per region). 1000 regions that have 2 families each is 3.9GB of heap used, and it's not even storing data yet.
- NB: the 2MB value is configurable.
-. If you fill all the regions at roughly the same rate, the global memory usage forces tiny flushes when you have too many regions, which in turn generates compactions.
- Rewriting the same data tens of times is the last thing you want.
- As an example, consider filling 1000 regions (with one family) evenly, with a lower bound for global MemStore usage of 5GB (the region server would have a big heap). Once usage reaches 5GB, the biggest region is force-flushed; at that point almost all the regions hold about 5MB of data, so that is the amount flushed.
- With 5MB more inserted, another region crosses a bit over 5MB and is flushed, and so on.
- This is currently the main limiting factor for the number of regions; see <> for a detailed formula.
-. The master, as is, is allergic to tons of regions, and will take a lot of time assigning them and moving them around in batches.
- The reason is that it is heavy on ZooKeeper usage, and it is not very asynchronous at the moment (this has been improved a good deal in HBase 0.96).
-. In older versions of HBase (pre-HFile v2, 0.90 and previous), tons of regions on a few RegionServers can cause the store file index to rise, increasing heap usage and potentially creating memory pressure or OOME on the RegionServers.
-
-Another issue is the effect of the number of regions on MapReduce jobs; it is typical to have one mapper per HBase region.
-Thus, hosting only 5 regions per RegionServer may not be enough to get a sufficient number of tasks for a MapReduce job, while 1000 regions will generate far too many tasks.
-
-See <> for configuration guidelines.
-
-[[regions.arch.assignment]]
-=== Region-RegionServer Assignment
-
-This section describes how Regions are assigned to RegionServers.
-
-[[regions.arch.assignment.startup]]
-==== Startup
-
-When HBase starts, regions are assigned as follows (short version):
-
-. The Master invokes the `AssignmentManager` upon startup.
-. The `AssignmentManager` looks at the existing region assignments in `hbase:meta`.
-. If the region assignment is still valid (i.e., if the RegionServer is still online) then the assignment is kept.
-. If the assignment is invalid, then the `LoadBalancerFactory` is invoked to assign the region.
- The load balancer (`StochasticLoadBalancer` by default in HBase 1.0) assigns the region to a RegionServer.
-. `hbase:meta` is updated with the RegionServer assignment (if needed) and the RegionServer start codes (start time of the RegionServer process) upon region opening by the RegionServer.
-
-[[regions.arch.assignment.failover]]
-==== Failover
-
-When a RegionServer fails:
-
-. The regions immediately become unavailable because the RegionServer is down.
-. The Master will detect that the RegionServer has failed.
-. The region assignments will be considered invalid and will be re-assigned just like the startup sequence.
-. In-flight queries are re-tried, and not lost.
-. Operations are switched to a new RegionServer within the following amount of time:
-+
-[source]
-----
-ZooKeeper session timeout + split time + assignment/replay time
-----
-
-
-[[regions.arch.balancer]]
-==== Region Load Balancing
-
-Regions can be periodically moved by the <>.
-
-[[regions.arch.states]]
-==== Region State Transition
-
-HBase maintains a state for each region and persists the state in `hbase:meta`.
-The state of the `hbase:meta` region itself is persisted in ZooKeeper.
-You can see the states of regions in transition in the Master web UI.
-Following is the list of possible region states.
-
-.Possible Region States
-* `OFFLINE`: the region is offline and not opening
-* `OPENING`: the region is in the process of being opened
-* `OPEN`: the region is open and the RegionServer has notified the master
-* `FAILED_OPEN`: the RegionServer failed to open the region
-* `CLOSING`: the region is in the process of being closed
-* `CLOSED`: the RegionServer has closed the region and notified the master
-* `FAILED_CLOSE`: the RegionServer failed to close the region
-* `SPLITTING`: the RegionServer notified the master that the region is splitting
-* `SPLIT`: the RegionServer notified the master that the region has finished splitting
-* `SPLITTING_NEW`: this region is being created by a split which is in progress
-* `MERGING`: the RegionServer notified the master that this region is being merged with another region
-* `MERGED`: the RegionServer notified the master that this region has been merged
-* `MERGING_NEW`: this region is being created by a merge of two regions
-
-.Region State Transitions
-image::region_states.png[]
-
-.Graph Legend
-* Brown: Offline state, a special state that can be transient (after closed before opening), terminal (regions of disabled tables), or initial (regions of newly created tables)
-* Palegreen: Online state that regions can serve requests
-* Lightblue: Transient states
-* Red: Failure states that need OPS attention
-* Gold: Terminal states of regions split/merged
-* Grey: Initial states of regions created through split/merge
-
-.Transition State Descriptions
-. The master moves a region from `OFFLINE` to `OPENING` state and tries to assign the region to a RegionServer.
- The RegionServer may or may not have received the open region request.
- The master retries sending the open region request to the RegionServer until the RPC goes through or the master runs out of retries.
- After the RegionServer receives the open region request, the RegionServer begins opening the region.
-. If the master runs out of retries, the master prevents the RegionServer from opening the region by moving the region to `CLOSING` state and trying to close it, even if the RegionServer is starting to open the region.
-. After the RegionServer opens the region, it continues to try to notify the master until the master moves the region to `OPEN` state and notifies the RegionServer.
- The region is now open.
-. If the RegionServer cannot open the region, it notifies the master.
- The master moves the region to `CLOSED` state and tries to open the region on a different RegionServer.
-. If the master cannot open the region on any of a certain number of RegionServers, it moves the region to `FAILED_OPEN` state, and takes no further action until an operator intervenes from the HBase shell, or the server is dead.
-. The master moves a region from `OPEN` to `CLOSING` state.
- The RegionServer holding the region may or may not have received the close region request.
- The master retries sending the close request to the server until the RPC goes through or the master runs out of retries.
-. If the RegionServer is not online, or throws `NotServingRegionException`, the master moves the region to `OFFLINE` state and re-assigns it to a different RegionServer.
-. If the RegionServer is online, but not reachable after the master runs out of retries, the master moves the region to `FAILED_CLOSE` state and takes no further action until an operator intervenes from the HBase shell, or the server is dead.
-. If the RegionServer gets the close region request, it closes the region and notifies the master.
- The master moves the region to `CLOSED` state and re-assigns it to a different RegionServer.
-. Before assigning a region, the master moves the region to `OFFLINE` state automatically if it is in `CLOSED` state.
-. When a RegionServer is about to split a region, it notifies the master.
- The master moves the region to be split from `OPEN` to `SPLITTING` state and adds the two new regions to be created to the RegionServer.
- These two regions are in `SPLITTING_NEW` state initially.
-. After notifying the master, the RegionServer starts to split the region.
- Once past the point of no return, the RegionServer notifies the master again so the master can update the `hbase:meta` table.
- However, the master does not update the region states until it is notified by the server that the split is done.
- If the split is successful, the splitting region is moved from `SPLITTING` to `SPLIT` state and the two new regions are moved from `SPLITTING_NEW` to `OPEN` state.
-. If the split fails, the splitting region is moved from `SPLITTING` back to `OPEN` state, and the two new regions which were created are moved from `SPLITTING_NEW` to `OFFLINE` state.
-. When a RegionServer is about to merge two regions, it notifies the master first.
- The master moves the two regions to be merged from `OPEN` to `MERGING` state, and adds the new region which will hold the contents of the merged regions to the RegionServer.
- The new region is in `MERGING_NEW` state initially.
-. After notifying the master, the RegionServer starts to merge the two regions.
- Once past the point of no return, the RegionServer notifies the master again so the master can update the META.
- However, the master does not update the region states until it is notified by the RegionServer that the merge has completed.
- If the merge is successful, the two merging regions are moved from `MERGING` to `MERGED` state and the new region is moved from `MERGING_NEW` to `OPEN` state.
-. If the merge fails, the two merging regions are moved from `MERGING` back to `OPEN` state, and the new region which was created to hold the contents of the merged regions is moved from `MERGING_NEW` to `OFFLINE` state.
-. For regions in `FAILED_OPEN` or `FAILED_CLOSE` states, the master tries to close them again when they are reassigned by an operator via HBase Shell.
-
-[[regions.arch.locality]]
-=== Region-RegionServer Locality
-
-Over time, Region-RegionServer locality is achieved via HDFS block replication.
-The HDFS client does the following by default when choosing locations to write replicas:
-
-. First replica is written to local node
-. Second replica is written to a random node on another rack
-. Third replica is written on the same rack as the second, but on a different node chosen randomly
-. Subsequent replicas are written on random nodes on the cluster.
- See _Replica Placement: The First Baby Steps_ on this page: link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html[HDFS Architecture]
-
-Thus, HBase eventually achieves locality for a region after a flush or a compaction.
-In a RegionServer failover situation a RegionServer may be assigned regions with non-local StoreFiles (because none of the replicas are local); however, as new data is written in the region, or the table is compacted and StoreFiles are re-written, they will become "local" to the RegionServer.
-
-For more information, see _Replica Placement: The First Baby Steps_ on this page: link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html[HDFS Architecture] and also Lars George's blog on link:http://www.larsgeorge.com/2010/05/hbase-file-locality-in-hdfs.html[HBase and HDFS locality].
-
-[[arch.region.splits]]
-=== Region Splits
-
-Regions split when they reach a configured threshold.
-Below we treat the topic in short.
-For a longer exposition, see link:http://hortonworks.com/blog/apache-hbase-region-splitting-and-merging/[Apache HBase Region Splitting and Merging] by our Enis Soztutar.
-
-Splits run unaided on the RegionServer; i.e. the Master does not participate.
-The RegionServer splits a region, offlines the split region and then adds the daughter regions to `hbase:meta`, opens daughters on the parent's hosting RegionServer and then reports the split to the Master.
-See <> for how to manually manage splits (and for why you might do this).
-
-==== Custom Split Policies
-You can override the default split policy using a custom
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy](HBase 0.94+).
-Typically a custom split policy should extend HBase's default split policy:
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.html[IncreasingToUpperBoundRegionSplitPolicy].
-
-The policy can be set globally through the HBase configuration or on a per-table
-basis.
-
-.Configuring the Split Policy Globally in _hbase-site.xml_
-[source,xml]
-----
-<property>
-  <name>hbase.regionserver.region.split.policy</name>
-  <value>org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy</value>
-</property>
-----
-
-.Configuring a Split Policy On a Table Using the Java API
-[source,java]
-----
-HTableDescriptor tableDesc = new HTableDescriptor("test");
-tableDesc.setValue(HTableDescriptor.SPLIT_POLICY, ConstantSizeRegionSplitPolicy.class.getName());
-tableDesc.addFamily(new HColumnDescriptor(Bytes.toBytes("cf1")));
-admin.createTable(tableDesc);
-----
-
-.Configuring the Split Policy On a Table Using HBase Shell
-[source]
-----
-hbase> create 'test', {METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},{NAME => 'cf1'}
-----
-
-The policy can be set globally through the `HBaseConfiguration` used, or on a per-table basis:
-[source,java]
-----
-HTableDescriptor myHtd = ...;
-myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName());
-----
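-
-With the newer HBase 2.x client API, where `HTableDescriptor` is deprecated, the same per-table setting can be expressed through `TableDescriptorBuilder`. This is a sketch only; it assumes an existing `Admin` instance named `admin` and the same hypothetical `MyCustomSplitPolicy` class used above.
-
-[source,java]
-----
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
-import org.apache.hadoop.hbase.client.TableDescriptor;
-import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-
-TableDescriptor tableDesc = TableDescriptorBuilder.newBuilder(TableName.valueOf("test"))
-    // Fully-qualified class name of the split policy to use for this table.
-    .setRegionSplitPolicyClassName(MyCustomSplitPolicy.class.getName())
-    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
-    .build();
-admin.createTable(tableDesc);
-----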
-
-NOTE: The `DisabledRegionSplitPolicy` policy blocks manual region splitting.
-
-[[manual_region_splitting_decisions]]
-=== Manual Region Splitting
-
-It is possible to manually split your table, either at table creation (pre-splitting), or at a later time as an administrative action.
-You might choose to split your region for one or more of the following reasons.
-There may be other valid reasons, but the need to manually split your table might also point to problems with your schema design.
-
-.Reasons to Manually Split Your Table
-* Your data is sorted by timeseries or another similar algorithm that sorts new data at the end of the table.
- This means that the Region Server holding the last region is always under load, and the other Region Servers are idle, or mostly idle.
- See also <>.
-* You have developed an unexpected hotspot in one region of your table.
- For instance, an application which tracks web searches might be inundated by a lot of searches for a celebrity in the event of news about that celebrity.
- See <> for more discussion about this particular scenario.
-* After a big increase in the number of RegionServers in your cluster, to get the load spread out quickly.
-* Before a bulk-load which is likely to cause unusual and uneven load across regions.
-
-See <> for a discussion about the dangers and possible benefits of managing splitting completely manually.
-
-NOTE: The `DisabledRegionSplitPolicy` policy blocks manual region splitting.
-
-==== Determining Split Points
-
-The goal of splitting your table manually is to improve the chances of balancing the load across the cluster in situations where good rowkey design alone won't get you there.
-Keeping that in mind, the way you split your regions is very dependent upon the characteristics of your data.
-It may be that you already know the best way to split your table.
-If not, the way you split your table depends on what your keys are like.
-
-Alphanumeric Rowkeys::
- If your rowkeys start with a letter or number, you can split your table at letter or number boundaries.
- For instance, you could create a table with regions that split at each vowel, so that the first region covers A-D, the second E-H, the third I-N, the fourth O-T, and the fifth U-Z (see the Java sketch after this list for one way to pre-split a table at these points).
-
-Using a Custom Algorithm::
- The RegionSplitter tool is provided with HBase, and uses a _SplitAlgorithm_ to determine split points for you.
- As parameters, you give it the algorithm, desired number of regions, and column families.
- It includes three split algorithms.
- The first is the
- `link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html[HexStringSplit]`
- algorithm, which assumes the row keys are hexadecimal strings.
- The second is the
- `link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.DecimalStringSplit.html[DecimalStringSplit]`
- algorithm, which assumes the row keys are decimal strings in the range 00000000 to 99999999.
- The third,
- `link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html[UniformSplit]`,
- assumes the row keys are random byte arrays.
- You will probably need to develop your own
- `link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.SplitAlgorithm.html[SplitAlgorithm]`,
- using the provided ones as models.
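-
-A hedged Java equivalent of pre-splitting at the vowel boundaries described above (table and column family names are hypothetical) could look like the following; `Admin.createTable` with explicit split keys creates the table already divided into five regions.
-
-[source,java]
-----
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.TableDescriptor;
-import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-import org.apache.hadoop.hbase.util.Bytes;
-
-public class PreSplitExample {
-  public static void main(String[] args) throws Exception {
-    // Split keys are the first rowkey of every region except the first:
-    // regions become [start,E), [E,I), [I,O), [O,U), [U,end).
-    byte[][] splitKeys = {
-        Bytes.toBytes("E"), Bytes.toBytes("I"), Bytes.toBytes("O"), Bytes.toBytes("U")
-    };
-    TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("test"))
-        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
-        .build();
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Admin admin = conn.getAdmin()) {
-      admin.createTable(desc, splitKeys);
-    }
-  }
-}
-----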
-
-=== Online Region Merges
-
-Both the Master and the RegionServer participate in online region merges.
-The client sends a merge RPC to the master, and the master moves the regions to be merged to the RegionServer hosting the more heavily loaded of the two. Finally, the master sends the merge request to this RegionServer, which then runs the merge.
-Similar to the region splitting process, region merges run as a local transaction on the RegionServer. It offlines the regions, merges the two regions on the file system, atomically deletes the merging regions from `hbase:meta` and adds the merged region to `hbase:meta`, opens the merged region on the RegionServer and reports the merge to the Master.
-
-An example of region merging in the HBase shell:
-[source,bourne]
-----
-$ hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME'
-$ hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME', true
-----
-This is an asynchronous operation; the call returns immediately, without waiting for the merge to complete.
-Passing `true` as the optional third parameter will force a merge. Normally only adjacent regions can be merged.
-The `force` parameter overrides this behaviour and is for expert use only.
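-
-The same operation can be requested from the Java client. The sketch below assumes the `Admin.mergeRegionsAsync(byte[][], boolean)` method available in HBase 2.x clients; the encoded region names are placeholders.
-
-[source,java]
-----
-import java.util.concurrent.Future;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.util.Bytes;
-
-public class MergeExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Admin admin = conn.getAdmin()) {
-      byte[][] regionsToMerge = {
-          Bytes.toBytes("ENCODED_REGIONNAME_A"),  // placeholder encoded region names
-          Bytes.toBytes("ENCODED_REGIONNAME_B")
-      };
-      // 'false' mirrors the shell default: only adjacent regions may be merged.
-      Future<Void> merge = admin.mergeRegionsAsync(regionsToMerge, false);
-      merge.get();  // block until the merge request has been processed
-    }
-  }
-}
-----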
-
-[[store]]
-=== Store
-
-A Store hosts a MemStore and 0 or more StoreFiles (HFiles). A Store corresponds to a column family for a table for a given region.
-
-[[store.memstore]]
-==== MemStore
-
-The MemStore holds in-memory modifications to the Store.
-Modifications are Cells/KeyValues.
-When a flush is requested, the current MemStore is moved to a snapshot and is cleared.
-HBase continues to serve edits from the new MemStore and backing snapshot until the flusher reports that the flush succeeded.
-At this point, the snapshot is discarded.
-Note that when the flush happens, MemStores that belong to the same region will all be flushed.
-
-==== MemStore Flush
-
-A MemStore flush can be triggered under any of the conditions listed below.
-The minimum flush unit is per region, not at individual MemStore level.
-
-. When a MemStore reaches the size specified by `hbase.hregion.memstore.flush.size`,
- all MemStores that belong to its region will be flushed out to disk.
-
-. When the overall MemStore usage reaches the value specified by
- `hbase.regionserver.global.memstore.upperLimit`, MemStores from various regions
- will be flushed out to disk to reduce overall MemStore usage in a RegionServer.
-+
-The flush order is based on the descending order of a region's MemStore usage.
-+
-Regions will have their MemStores flushed until the overall MemStore usage drops
-to or slightly below `hbase.regionserver.global.memstore.lowerLimit`.
-
-. When the number of WAL files for a given region server reaches the
- value specified in `hbase.regionserver.maxlogs`, MemStores from various regions
- will be flushed out to disk to reduce the number of logs in the WAL.
-+
-The flush order is based on time.
-+
-Regions with the oldest MemStores are flushed first until the WAL file count drops below
-`hbase.regionserver.maxlogs`.
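-
-In addition to these automatic triggers, a flush can be requested explicitly through the Admin API. A minimal sketch, using a hypothetical table name:
-
-[source,java]
-----
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-
-public class FlushExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Admin admin = conn.getAdmin()) {
-      // Flushes the MemStores of every region of the table to new StoreFiles.
-      admin.flush(TableName.valueOf("test"));
-    }
-  }
-}
-----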
-
-[[hregion.scans]]
-==== Scans
-
-* When a client issues a scan against a table, HBase generates `RegionScanner` objects, one per region, to serve the scan request.
-* The `RegionScanner` object contains a list of `StoreScanner` objects, one per column family.
-* Each `StoreScanner` object further contains a list of `StoreFileScanner` objects, corresponding to each StoreFile and HFile of the corresponding column family, and a list of `KeyValueScanner` objects for the MemStore.
-* The two lists are merged into one, which is sorted in ascending order with the scan object for the MemStore at the end of the list.
-* When a `StoreFileScanner` object is constructed, it is associated with a `MultiVersionConcurrencyControl` read point, which is the current `memstoreTS`, filtering out any new updates beyond the read point.
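-
-For context, this server-side machinery is set up in response to an ordinary client scan. A minimal client-side sketch, with hypothetical table and column family names:
-
-[source,java]
-----
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.util.Bytes;
-
-public class ScanExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Table table = conn.getTable(TableName.valueOf("test"))) {
-      Scan scan = new Scan();
-      scan.addFamily(Bytes.toBytes("cf"));  // one StoreScanner per requested column family
-      try (ResultScanner scanner = table.getScanner(scan)) {
-        for (Result result : scanner) {
-          // Each Result merges Cells gathered from the MemStore and StoreFile scanners for one row.
-          System.out.println(result);
-        }
-      }
-    }
-  }
-}
-----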
-
-[[hfile]]
-==== StoreFile (HFile)
-
-StoreFiles are where your data lives.
-
-===== HFile Format
-
-The _HFile_ file format is based on the SSTable file described in the link:http://research.google.com/archive/bigtable.html[BigTable [2006]] paper and on Hadoop's link:https://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/file/tfile/TFile.html[TFile] (The unit test suite and the compression harness were taken directly from TFile). Schubert Zhang's blog post on link:http://cloudepr.blogspot.com/2009/09/hfile-block-indexed-file-format-to.html[HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs] makes for a thorough introduction to HBase's HFile.
-Matteo Bertozzi has also put up a helpful description, link:http://th30z.blogspot.com/2011/02/hbase-io-hfile.html?spref=tw[HBase I/O: HFile].
-
-For more information, see the HFile source code.
-Also see <> for information about the HFile v2 format that was included in 0.92.
-
-[[hfile_tool]]
-===== HFile Tool
-
-To view a textualized version of HFile content, you can use the `hbase hfile` tool.
-Type the following to see usage:
-
-[source,bash]
-----
-$ ${HBASE_HOME}/bin/hbase hfile
-----
-For example, to view the content of the file _hdfs://10.81.47.41:8020/hbase/default/TEST/1418428042/DSMP/4759508618286845475_, type the following:
-[source,bash]
-----
- $ ${HBASE_HOME}/bin/hbase hfile -v -f hdfs://10.81.47.41:8020/hbase/default/TEST/1418428042/DSMP/4759508618286845475
-----
-Leave off the `-v` option to see just a summary of the HFile.
-See usage for other things to do with the `hfile` tool.
-
-NOTE: In the output of this tool, you might see 'seqid=0' for certain keys in places such as 'Mid-key'/'firstKey'/'lastKey'. These are
- 'KeyOnlyKeyValue' type instances, meaning their seqid is irrelevant and we just need the keys of these Key-Value instances.
-
-[[store.file.dir]]
-===== StoreFile Directory Structure on HDFS
-
-For more information of what StoreFiles look like on HDFS with respect to the directory structure, see <>.
-
-[[hfile.blocks]]
-==== Blocks
-
-StoreFiles are composed of blocks.
-The blocksize is configured on a per-ColumnFamily basis.
-
-Compression happens at the block level within StoreFiles.
-For more information on compression, see <>.
-
-For more information on blocks, see the HFileBlock source code.
-
-[[keyvalue]]
-==== KeyValue
-
-The KeyValue class is the heart of data storage in HBase.
-KeyValue wraps a byte array and takes offsets and lengths into the passed array which specify where to start interpreting the content as KeyValue.
-
-The KeyValue format inside a byte array is:
-
-* keylength
-* valuelength
-* key
-* value
-
-The Key is further decomposed as:
-
-* rowlength
-* row (i.e., the rowkey)
-* columnfamilylength
-* columnfamily
-* columnqualifier
-* timestamp
-* keytype (e.g., Put, Delete, DeleteColumn, DeleteFamily)
-
-KeyValue instances are _not_ split across blocks.
-For example, if there is an 8 MB KeyValue, even if the block-size is 64kb this KeyValue will be read in as a coherent block.
-For more information, see the KeyValue source code.
-
-[[keyvalue.example]]
-===== Example
-
-To emphasize the points above, examine what happens with two Puts for two different columns for the same row:
-
-* Put #1: `rowkey=row1, cf:attr1=value1`
-* Put #2: `rowkey=row1, cf:attr2=value2`
-
-Even though these are for the same row, a KeyValue is created for each column:
-
-Key portion for Put #1:
-
-* `rowlength ------------> 4`
-* `row ------------------> row1`
-* `columnfamilylength ---> 2`
-* `columnfamily ---------> cf`
-* `columnqualifier ------> attr1`
-* `timestamp ------------> server time of Put`
-* `keytype --------------> Put`
-
-Key portion for Put #2:
-
-* `rowlength ------------> 4`
-* `row ------------------> row1`
-* `columnfamilylength ---> 2`
-* `columnfamily ---------> cf`
-* `columnqualifier ------> attr2`
-* `timestamp ------------> server time of Put`
-* `keytype --------------> Put`
-
-It is critical to understand that the rowkey, ColumnFamily, and column (aka columnqualifier) are embedded within the KeyValue instance.
-The longer these identifiers are, the bigger the KeyValue is.
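-
-In client code, the two Puts above look like the following sketch (the table name is hypothetical); each `addColumn` call becomes a separate KeyValue/Cell in the store:
-
-[source,java]
-----
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.util.Bytes;
-
-public class KeyValueExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Table table = conn.getTable(TableName.valueOf("mytable"))) {
-      // Put #1: rowkey=row1, cf:attr1=value1
-      Put put1 = new Put(Bytes.toBytes("row1"));
-      put1.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), Bytes.toBytes("value1"));
-      // Put #2: rowkey=row1, cf:attr2=value2
-      Put put2 = new Put(Bytes.toBytes("row1"));
-      put2.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr2"), Bytes.toBytes("value2"));
-      table.put(put1);
-      table.put(put2);
-      // Although both Puts share rowkey "row1", the Store ends up with two distinct
-      // KeyValues that differ only in their column qualifier (attr1 vs. attr2).
-    }
-  }
-}
-----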
-
-[[compaction]]
-==== Compaction
-
-.Ambiguous Terminology
-* A _StoreFile_ is a facade of HFile.
- In terms of compaction, use of StoreFile seems to have prevailed in the past.
-* A _Store_ is the same thing as a ColumnFamily.
- StoreFiles are related to a Store, or ColumnFamily.
-* If you want to read more about StoreFiles versus HFiles and Stores versus ColumnFamilies, see link:https://issues.apache.org/jira/browse/HBASE-11316[HBASE-11316].
-
-When the MemStore reaches a given size (`hbase.hregion.memstore.flush.size`), it flushes its contents to a StoreFile.
-The number of StoreFiles in a Store increases over time. _Compaction_ is an operation which reduces the number of StoreFiles in a Store, by merging them together, in order to increase performance on read operations.
-Compactions can be resource-intensive to perform, and can either help or hinder performance depending on many factors.
-
-Compactions fall into two categories: minor and major.
-Minor and major compactions differ in the following ways.
-
-_Minor compactions_ usually select a small number of small, adjacent StoreFiles and rewrite them as a single StoreFile.
-Minor compactions do not drop (filter out) deletes or expired versions, because of potential side effects.
-See <> and <> for information on how deletes and versions are handled in relation to compactions.
-The end result of a minor compaction is fewer, larger StoreFiles for a given Store.
-
-The end result of a _major compaction_ is a single StoreFile per Store.
-Major compactions also process delete markers and max versions.
-See <> and <> for information on how deletes and versions are handled in relation to compactions.
-
-[[compaction.and.deletes]]
-.Compaction and Deletions
-When an explicit deletion occurs in HBase, the data is not actually deleted.
-Instead, a _tombstone_ marker is written.
-The tombstone marker prevents the data from being returned with queries.
-During a major compaction, the data is actually deleted, and the tombstone marker is removed from the StoreFile.
-If the deletion happens because of an expired TTL, no tombstone is created.
-Instead, the expired data is filtered out and is not written back to the compacted StoreFile.
-
-[[compaction.and.versions]]
-.Compaction and Versions
-When you create a Column Family, you can specify the maximum number of versions to keep, by specifying `ColumnFamilyDescriptorBuilder.setMaxVersions(int versions)`.
-The default value is `1`.
-If more versions than the specified maximum exist, the excess versions are filtered out and not written back to the compacted StoreFile.
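-
-A sketch of setting the maximum number of versions when creating a table (table and column family names are hypothetical); versions beyond this bound are filtered out at the next major compaction:
-
-[source,java]
-----
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-import org.apache.hadoop.hbase.util.Bytes;
-
-public class MaxVersionsExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Admin admin = conn.getAdmin()) {
-      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("versioned"))
-          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
-              // Keep at most three versions of each cell; older versions are
-              // dropped when the Store is major-compacted.
-              .setMaxVersions(3)
-              .build())
-          .build());
-    }
-  }
-}
-----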
-
-.Major Compactions Can Impact Query Results
-[NOTE]
-====
-In some situations, older versions can be inadvertently resurrected if a newer version is explicitly deleted.
-See <> for a more in-depth explanation.
-This situation is only possible before the compaction finishes.
-====
-
-In theory, major compactions improve performance.
-However, on a highly loaded system, major compactions can require a large amount of resources and adversely affect performance.
-In a default configuration, major compactions are scheduled automatically to run once per 7-day period.
-This is sometimes inappropriate for systems in production.
-You can manage major compactions manually.
-See <>.
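-
-A manual major compaction can be requested from the Java client (or with the shell's `major_compact` command). A minimal sketch, with a hypothetical table name; the request is queued and carried out asynchronously on the server side:
-
-[source,java]
-----
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-
-public class MajorCompactExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Admin admin = conn.getAdmin()) {
-      // Requests a major compaction of every Store of every region of the table.
-      admin.majorCompact(TableName.valueOf("test"));
-    }
-  }
-}
-----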
-
-Compactions do not perform region merges.
-See <> for more information on region merging.
-
-.Compaction Switch
-Compactions can be switched on and off at the RegionServers. Switching off compactions will also
-interrupt any currently ongoing compactions. This can be done dynamically using the `compaction_switch`
-command from the HBase shell. If done from the command line, the setting is lost when the
-server restarts. To persist the change across RegionServers, set
-`hbase.regionserver.compaction.enabled` in _hbase-site.xml_ and restart HBase.
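-
-The same switch is exposed through the Java Admin API. The sketch below assumes the `Admin.compactionSwitch` method available in recent HBase releases, and assumes that passing an empty server list targets all RegionServers.
-
-[source,java]
-----
-import java.util.Collections;
-import java.util.Map;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.ServerName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-
-public class CompactionSwitchExample {
-  public static void main(String[] args) throws Exception {
-    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-         Admin admin = conn.getAdmin()) {
-      // Turn compactions off; the returned map holds the previous state per RegionServer.
-      // An empty server list is assumed here to mean "all RegionServers".
-      Map<ServerName, Boolean> previous = admin.compactionSwitch(false, Collections.emptyList());
-      previous.forEach((server, wasEnabled) ->
-          System.out.println(server + " compactions previously enabled: " + wasEnabled));
-    }
-  }
-}
-----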
-
-
-[[compaction.file.selection]]
-===== Compaction Policy - HBase 0.96.x and newer
-
-Compacting large StoreFiles, or too many StoreFiles at once, can cause more IO load than your cluster is able to handle without causing performance problems.
-The method by which HBase selects which StoreFiles to include in a compaction (and whether the compaction is a minor or major compaction) is called the _compaction policy_.
-
-Prior to HBase 0.96.x, there was only one compaction policy.
-That original compaction policy is still available as `RatioBasedCompactionPolicy`. The new compaction default policy, called `ExploringCompactionPolicy`, was subsequently backported to HBase 0.94 and HBase 0.95, and is the default in HBase 0.96 and newer.
-It was implemented in link:https://issues.apache.org/jira/browse/HBASE-7842[HBASE-7842].
-In short, `ExploringCompactionPolicy` attempts to select the best possible set of StoreFiles to compact with the least amount of work, while the `RatioBasedCompactionPolicy` selects the first set that meets the criteria.
-
-Regardless of the compaction policy used, file selection is controlled by several configurable parameters and happens in a multi-step approach.
-These parameters will be explained in context, and then will be given in a table which shows their descriptions, defaults, and implications of changing them.
-
-[[compaction.being.stuck]]
-====== Being Stuck
-
-When the MemStore gets too large, it needs to flush its contents to a StoreFile.
-However, Stores are configured with a bound on the number of StoreFiles,
-`hbase.hstore.blockingStoreFiles`, and if this bound is exceeded, the MemStore flush must wait
-until the StoreFile count is reduced by one or more compactions. If the MemStore
-is too large and the number of StoreFiles is also too high, the algorithm is said
-to be "stuck". By default we'll wait on compactions up to
-`hbase.hstore.blockingWaitTime` milliseconds. If this period expires, we'll flush
-anyway even though we are in excess of the
-`hbase.hstore.blockingStoreFiles` count.
-
-Upping the `hbase.hstore.blockingStoreFiles` count will allow flushes to happen,
-but a Store with many StoreFiles in it will likely have higher read latencies. Try to
-figure out why compactions are not keeping up. Is a write spurt bringing
-about this situation, or is it a regular occurrence and the cluster is under-provisioned
-for the volume of writes?
-
-[[exploringcompaction.policy]]
-====== The ExploringCompactionPolicy Algorithm
-
-The ExploringCompactionPolicy algorithm considers each possible set of adjacent StoreFiles before choosing the set where compaction will have the most benefit.
-
-One situation where the ExploringCompactionPolicy works especially well is when you are bulk-loading data and the bulk loads create larger StoreFiles than the StoreFiles which are holding data older than the bulk-loaded data.
-This can "trick" HBase into choosing to perform a major compaction each time a compaction is needed, and cause a lot of extra overhead.
-With the ExploringCompactionPolicy, major compactions happen much less frequently because minor compactions are more efficient.
-
-In general, ExploringCompactionPolicy is the right choice for most situations, and thus is the default compaction policy.
-You can also use ExploringCompactionPolicy along with <>.
-
-The logic of this policy can be examined in hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java.
-The following is a walk-through of the logic of the ExploringCompactionPolicy.
-
-
-. Make a list of all existing StoreFiles in the Store.
- The rest of the algorithm filters this list to come up with the subset of HFiles which will be chosen for compaction.
-. If this was a user-requested compaction, attempt to perform the requested compaction type, regardless of what would normally be chosen.
- Note that even if the user requests a major compaction, it may not be possible to perform a major compaction.
- This may be because not all StoreFiles in the Column Family are available to compact or because there are too many Stores in the Column Family.
-. Some StoreFiles are automatically excluded from consideration.
- These include:
-+
-* StoreFiles that are larger than `hbase.hstore.compaction.max.size`
-* StoreFiles that were created by a bulk-load operation which explicitly excluded compaction.
- You may decide to exclude StoreFiles resulting from bulk loads, from compaction.
- To do this, specify the `hbase.mapreduce.hfileoutputformat.compaction.exclude` parameter during the bulk load operation.
-
-. Iterate through the list from step 1, and make a list of all potential sets of StoreFiles to compact together.
- A potential set is a grouping of `hbase.hstore.compaction.min` contiguous StoreFiles in the list.
- For each set, perform some sanity-checking and figure out whether this is the best compaction that could be done:
-+
-* If the number of StoreFiles in this set (not the size of the StoreFiles) is fewer than `hbase.hstore.compaction.min` or more than `hbase.hstore.compaction.max`, take it out of consideration.
-* Compare the size of this set of StoreFiles with the size of the smallest possible compaction that has been found in the list so far.
- If the size of this set of StoreFiles represents the smallest compaction that could be done, store it to be used as a fall-back if the algorithm is "stuck" and no StoreFiles would otherwise be chosen.
- See <>.
-* Do size-based sanity checks against each StoreFile in this set of StoreFiles.
-** If the size of this StoreFile is larger than `hbase.hstore.compaction.max.size`, take it out of consideration.
-** If the size is greater than or equal to `hbase.hstore.compaction.min.size`, sanity-check it against the file-based ratio to see whether it is too large to be considered.
-+
-The sanity-checking is successful if:
-** There is only one StoreFile in this set, or
-** For each StoreFile, its size multiplied by `hbase.hstore.compaction.ratio` (or `hbase.hstore.compaction.ratio.offpeak` if off-peak hours are configured and it is during off-peak hours) is less than the sum of the sizes of the other HFiles in the set.
-
-. If this set of StoreFiles is still in consideration, compare it to the previously-selected best compaction.
- If it is better, replace the previously-selected best compaction with this one.
-. When the entire list of potential compactions has been processed, perform the best compaction that was found.
- If no StoreFiles were selected for compaction, but there are multiple StoreFiles, assume the algorithm is stuck (see <>) and if so, perform the smallest compaction that was found in step 3.
-
-[[compaction.ratiobasedcompactionpolicy.algorithm]]
-====== RatioBasedCompactionPolicy Algorithm
-
-The RatioBasedCompactionPolicy was the only compaction policy prior to HBase 0.96, though ExploringCompactionPolicy has now been backported to HBase 0.94 and 0.95.
-To use the RatioBasedCompactionPolicy rather than the ExploringCompactionPolicy, set `hbase.hstore.defaultengine.compactionpolicy.class` to `RatioBasedCompactionPolicy` in the _hbase-site.xml_ file.
-To switch back to the ExploringCompactionPolicy, remove the setting from the _hbase-site.xml_.
-
-The following section walks you through the algorithm used to select StoreFiles for compaction in the RatioBasedCompactionPolicy.
-
-
-. The first phase is to create a list of all candidates for compaction.
- A list is created of all StoreFiles not already in the compaction queue, and all StoreFiles newer than the newest file that is currently being compacted.
- This list of StoreFiles is ordered by the sequence ID.
- The sequence ID is generated when a Put is appended to the write-ahead log (WAL), and is stored in the metadata of the HFile.
-. Check to see if the algorithm is stuck (see <>), and if so, a major compaction is forced.
- This is a key area where <> is often a better choice than the RatioBasedCompactionPolicy.
-. If the compaction was user-requested, try to perform the type of compaction that was requested.
- Note that a major compaction may not be possible if all HFiles are not available for compaction or if too many StoreFiles exist (more than `hbase.hstore.compaction.max`).
-. Some StoreFiles are automatically excluded from consideration.
- These include:
-+
-* StoreFiles that are larger than `hbase.hstore.compaction.max.size`
-* StoreFiles that were created by a bulk-load operation which explicitly excluded compaction.
- You may decide to exclude StoreFiles resulting from bulk loads from compaction.
- To do this, specify the `hbase.mapreduce.hfileoutputformat.compaction.exclude` parameter during the bulk load operation.
-
-. The maximum number of StoreFiles allowed in a major compaction is controlled by the `hbase.hstore.compaction.max` parameter.
- If the list contains more than this number of StoreFiles, a minor compaction is performed even if a major compaction would otherwise have been done.
- However, a user-requested major compaction still occurs even if there are more than `hbase.hstore.compaction.max` StoreFiles to compact.
-. If the list contains fewer than `hbase.hstore.compaction.min` StoreFiles to compact, a minor compaction is aborted.
- Note that a major compaction can be performed on a single HFile.
- Its function is to remove deletes and expired versions, and reset locality on the StoreFile.
-. The value of the `hbase.hstore.compaction.ratio` parameter is multiplied by the sum of StoreFiles smaller than a given file, to determine whether that StoreFile is selected for compaction during a minor compaction.
- For instance, if `hbase.hstore.compaction.ratio` is 1.2, FileX is 5MB, FileY is 2MB, and FileZ is 3MB:
-+
-----
-5 <= 1.2 x (2 + 3) or 5 <= 6
-----
-+
-In this scenario, FileX is eligible for minor compaction.
-If FileX were 7MB, it would not be eligible for minor compaction.
-This ratio favors smaller StoreFiles.
-You can configure a different ratio for use in off-peak hours, using the parameter `hbase.hstore.compaction.ratio.offpeak`, if you also configure `hbase.offpeak.start.hour` and `hbase.offpeak.end.hour`. A small Java sketch of this ratio check follows the list.
-
-. If the last major compaction was too long ago and there is more than one StoreFile to be compacted, a major compaction is run, even if it would otherwise have been minor.
- By default, the maximum time between major compactions is 7 days, plus or minus a 4.8 hour period, and determined randomly within those parameters.
- Prior to HBase 0.96, the major compaction period was 24 hours.
- See `hbase.hregion.majorcompaction` in the table below to tune or disable time-based major compactions.
-
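-The ratio check referenced above can be illustrated in a few lines of Java.
-This is a sketch only, not RatioBasedCompactionPolicy itself; the file sizes reuse the worked example from the list, and the surrounding bookkeeping (exclusions, the off-peak ratio, the stuck fall-back) is omitted.
-
-[source,java]
-----
-// Sketch of the size-ratio test used during minor compaction selection.
-double ratio = 1.2;
-long mb = 1024L * 1024L;
-long fileX = 5 * mb, fileY = 2 * mb, fileZ = 3 * mb;
-// A file is eligible when it is no larger than ratio * sum(the smaller files
-// considered with it), as in the worked example above.
-boolean fileXSelected = fileX <= ratio * (fileY + fileZ);   // true: 5 <= 1.2 x (2 + 3)
-boolean sevenMbSelected = 7 * mb <= ratio * (fileY + fileZ); // false: 7 > 6
-----
-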
-[[compaction.parameters]]
-====== Parameters Used by Compaction Algorithm
-
-This table contains the main configuration parameters for compaction.
-This list is not exhaustive.
-To tune these parameters from the defaults, set them in the _hbase-site.xml_ file (do not edit _hbase-default.xml_ directly); most of them can also be overridden per table or per column family, as shown in the sketch after this list.
-For a full list of all configuration parameters available, see <>.
-
-`hbase.hstore.compaction.min`::
- The minimum number of StoreFiles which must be eligible for compaction before compaction can run.
- The goal of tuning `hbase.hstore.compaction.min` is to avoid ending up with too many tiny StoreFiles
- to compact. Setting this value to 2 would cause a minor compaction each time you have two StoreFiles
- in a Store, and this is probably not appropriate. If you set this value too high, all the other
- values will need to be adjusted accordingly. For most cases, the default value is appropriate.
- In previous versions of HBase, the parameter `hbase.hstore.compaction.min` was called
- `hbase.hstore.compactionThreshold`.
-+
-*Default*: 3
-
-`hbase.hstore.compaction.max`::
- The maximum number of StoreFiles which will be selected for a single minor compaction,
- regardless of the number of eligible StoreFiles. Effectively, the value of
- `hbase.hstore.compaction.max` controls the length of time it takes a single
- compaction to complete. Setting it larger means that more StoreFiles are included
- in a compaction. For most cases, the default value is appropriate.
-+
-*Default*: 10
-
-`hbase.hstore.compaction.min.size`::
- A StoreFile smaller than this size will always be eligible for minor compaction.
- StoreFiles this size or larger are evaluated by `hbase.hstore.compaction.ratio`
- to determine if they are eligible. Because this limit represents the "automatic
- include" limit for all StoreFiles smaller than this value, this value may need
- to be reduced in write-heavy environments where many files in the 1-2 MB range
- are being flushed, because every StoreFile will be targeted for compaction and
- the resulting StoreFiles may still be under the minimum size and require further
- compaction. If this parameter is lowered, the ratio check is triggered more quickly.
- This addressed some issues seen in earlier versions of HBase but changing this
- parameter is no longer necessary in most situations.
-+
-*Default*: 128 MB
-
-`hbase.hstore.compaction.max.size`::
- A StoreFile larger than this size will be excluded from compaction. The effect of
- raising `hbase.hstore.compaction.max.size` is fewer, larger StoreFiles that do not
- get compacted often. If you feel that compaction is happening too often without
- much benefit, you can try raising this value.
-+
-*Default*: `Long.MAX_VALUE`
-
-`hbase.hstore.compaction.ratio`::
- For minor compaction, this ratio is used to determine whether a given StoreFile
- which is larger than `hbase.hstore.compaction.min.size` is eligible for compaction.
- Its effect is to limit compaction of large StoreFiles. The value of
- `hbase.hstore.compaction.ratio` is expressed as a floating-point decimal.
-+
-* A large ratio, such as 10, will produce a single giant StoreFile. Conversely,
- a value of .25 will produce behavior similar to the BigTable compaction algorithm,
- producing four StoreFiles.
-* A moderate value of between 1.0 and 1.4 is recommended. When tuning this value,
- you are balancing write costs with read costs. Raising the value (to something like
- 1.4) will have more write costs, because you will compact larger StoreFiles.
- However, during reads, HBase will need to seek through fewer StoreFiles to
- accomplish the read. Consider this approach if you cannot take advantage of <>.
-* Alternatively, you can lower this value to something like 1.0 to reduce the
- background cost of writes, and use Bloom filters to limit the number of StoreFiles touched
- during reads. For most cases, the default value is appropriate.
-+
-*Default*: `1.2F`
-
-`hbase.hstore.compaction.ratio.offpeak`::
- The compaction ratio used during off-peak compactions, if off-peak hours are
- also configured (see below). Expressed as a floating-point decimal. This allows
- for more aggressive (or less aggressive, if you set it lower than
- `hbase.hstore.compaction.ratio`) compaction during a set time period. Ignored
- if off-peak is disabled (default). This works the same as
- `hbase.hstore.compaction.ratio`.
-+
-*Default*: `5.0F`
-
-`hbase.offpeak.start.hour`::
- The start of off-peak hours, expressed as an integer between 0 and 23, inclusive.
- Set to -1 to disable off-peak.
-+
-*Default*: `-1` (disabled)
-
-`hbase.offpeak.end.hour`::
- The end of off-peak hours, expressed as an integer between 0 and 23, inclusive.
- Set to -1 to disable off-peak.
-+
-*Default*: `-1` (disabled)
-
-`hbase.regionserver.thread.compaction.throttle`::
- There are two different thread pools for compactions, one for large compactions
- and the other for small compactions. This helps to keep compaction of lean tables
- (such as `hbase:meta`) fast. If a compaction is larger than this threshold,
- it goes into the large compaction pool. In most cases, the default value is
- appropriate.
-+
-*Default*: `2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size`
-(which defaults to `128` MB)
-
-`hbase.hregion.majorcompaction`::
- Time between major compactions, expressed in milliseconds. Set to 0 to disable
- time-based automatic major compactions. User-requested and size-based major
- compactions will still run. This value is multiplied by
- `hbase.hregion.majorcompaction.jitter` to cause compaction to start at a
- somewhat-random time during a given window of time.
-+
-*Default*: 7 days (`604800000` milliseconds)
-
-`hbase.hregion.majorcompaction.jitter`::
- A multiplier applied to hbase.hregion.majorcompaction to cause compaction to
- occur a given amount of time either side of `hbase.hregion.majorcompaction`.
- The smaller the number, the closer the compactions will happen to the
- `hbase.hregion.majorcompaction` interval. Expressed as a floating-point decimal.
-+
-*Default*: `.50F`
-
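-As noted above, most of these compaction parameters can also be set per table or per column family rather than cluster-wide.
-The following is a rough sketch using the HBase 2.x builder API; the table name, column family name, and parameter values are examples only, not recommendations.
-
-[source,java]
-----
-// Sketch: override compaction tuning for one column family only (HBase 2.x API).
-try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-     Admin admin = conn.getAdmin()) {
-  TableName tn = TableName.valueOf("orders_table");
-  ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
-      .newBuilder(Bytes.toBytes("blobs_cf"))
-      .setConfiguration("hbase.hstore.compaction.min", "4")
-      .setConfiguration("hbase.hstore.compaction.ratio", "1.0")
-      .build();
-  // Regions of the table are reopened to pick up the new per-family settings.
-  admin.modifyColumnFamily(tn, cf);
-}
-----
-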
-[[compaction.file.selection.old]]
-===== Compaction File Selection
-
-.Legacy Information
-[NOTE]
-====
-This section has been preserved for historical reasons and refers to the way compaction worked prior to HBase 0.96.x.
-You can still use this behavior if you enable <>. For information on the way that compactions work in HBase 0.96.x and later, see <>.
-====
-
-To understand the core algorithm for StoreFile selection, there is some ASCII-art in the Store source code that serves as a useful reference.
-
-It has been copied below:
-
-[source]
-----
-/* normal skew:
- *
- * older ----> newer
- * _
- * | | _
- * | | | | _
- * --|-|- |-|- |-|---_-------_------- minCompactSize
- * | | | | | | | | _ | |
- * | | | | | | | | | | | |
- * | | | | | | | | | | | |
- */
-----
-.Important knobs:
-* `hbase.hstore.compaction.ratio` Ratio used in compaction file selection algorithm (default 1.2f).
-* `hbase.hstore.compaction.min` (in HBase v 0.90 this is called `hbase.hstore.compactionThreshold`) (files) Minimum number of StoreFiles per Store to be selected for a compaction to occur (default 2).
-* `hbase.hstore.compaction.max` (files) Maximum number of StoreFiles to compact per minor compaction (default 10).
-* `hbase.hstore.compaction.min.size` (bytes) Any StoreFile smaller than this setting will automatically be a candidate for compaction.
- Defaults to `hbase.hregion.memstore.flush.size` (128 MB).
-* `hbase.hstore.compaction.max.size` (.92) (bytes) Any StoreFile larger than this setting will automatically be excluded from compaction (default `Long.MAX_VALUE`).
-
-The minor compaction StoreFile selection logic is size-based, and selects a file for compaction when `file <= sum(smaller_files) * hbase.hstore.compaction.ratio`.
-
-[[compaction.file.selection.example1]]
-====== Minor Compaction File Selection - Example #1 (Basic Example)
-
-This example mirrors an example from the unit test `TestCompactSelection`.
-
-* `hbase.hstore.compaction.ratio` = 1.0f
-* `hbase.hstore.compaction.min` = 3 (files)
-* `hbase.hstore.compaction.max` = 5 (files)
-* `hbase.hstore.compaction.min.size` = 10 (bytes)
-* `hbase.hstore.compaction.max.size` = 1000 (bytes)
-
-The following StoreFiles exist: 100, 50, 23, 12, and 12 bytes apiece (oldest to newest). With the above parameters, the files that would be selected for minor compaction are 23, 12, and 12.
-
-Why?
-
-* 100 -> No, because sum(50, 23, 12, 12) * 1.0 = 97.
-* 50 -> No, because sum(23, 12, 12) * 1.0 = 47.
-* 23 -> Yes, because sum(12, 12) * 1.0 = 24.
-* 12 -> Yes, because the previous file has been included, and because this does not exceed the max-file limit of 5.
-* 12 -> Yes, because the previous file has been included, and because this does not exceed the max-file limit of 5.
-
-[[compaction.file.selection.example2]]
-====== Minor Compaction File Selection - Example #2 (Not Enough Files To Compact)
-
-This example mirrors an example from the unit test `TestCompactSelection`.
-
-* `hbase.hstore.compaction.ratio` = 1.0f
-* `hbase.hstore.compaction.min` = 3 (files)
-* `hbase.hstore.compaction.max` = 5 (files)
-* `hbase.hstore.compaction.min.size` = 10 (bytes)
-* `hbase.hstore.compaction.max.size` = 1000 (bytes)
-
-The following StoreFiles exist: 100, 25, 12, and 12 bytes apiece (oldest to newest). With the above parameters, no compaction will be started.
-
-Why?
-
-* 100 -> No, because sum(25, 12, 12) * 1.0 = 47
-* 25 -> No, because sum(12, 12) * 1.0 = 24
-* 12 -> No. Candidate because sum(12) * 1.0 = 12, but there are only 2 files to compact and that is less than the threshold of 3.
-* 12 -> No. Candidate because the previous StoreFile was, but there are not enough files to compact.
-
-[[compaction.file.selection.example3]]
-====== Minor Compaction File Selection - Example #3 (Limiting Files To Compact)
-
-This example mirrors an example from the unit test `TestCompactSelection`.
-
-* `hbase.hstore.compaction.ratio` = 1.0f
-* `hbase.hstore.compaction.min` = 3 (files)
-* `hbase.hstore.compaction.max` = 5 (files)
-* `hbase.hstore.compaction.min.size` = 10 (bytes)
-* `hbase.hstore.compaction.max.size` = 1000 (bytes)
-
-The following StoreFiles exist: 7, 6, 5, 4, 3, 2, and 1 bytes apiece (oldest to newest). With the above parameters, the files that would be selected for minor compaction are 7, 6, 5, 4, 3.
-
-Why?
-
-* 7 -> Yes, because sum(6, 5, 4, 3, 2, 1) * 1.0 = 21.
- Also, 7 is less than the min-size
-* 6 -> Yes, because sum(5, 4, 3, 2, 1) * 1.0 = 15.
- Also, 6 is less than the min-size.
-* 5 -> Yes, because sum(4, 3, 2, 1) * 1.0 = 10.
- Also, 5 is less than the min-size.
-* 4 -> Yes, because sum(3, 2, 1) * 1.0 = 6.
- Also, 4 is less than the min-size.
-* 3 -> Yes, because sum(2, 1) * 1.0 = 3.
- Also, 3 is less than the min-size.
-* 2 -> No.
- Candidate because previous file was selected and 2 is less than the min-size, but the max-number of files to compact has been reached.
-* 1 -> No.
- Candidate because previous file was selected and 1 is less than the min-size, but max-number of files to compact has been reached.
-
-[[compaction.config.impact]]
-.Impact of Key Configuration Options
-
-NOTE: This information is now included in the configuration parameter table in <>.
-
-[[ops.date.tiered]]
-===== Date Tiered Compaction
-
-Date tiered compaction is a date-aware store file compaction strategy that is beneficial for time-range scans for time-series data.
-
-[[ops.date.tiered.when]]
-===== When To Use Date Tiered Compactions
-
-Consider using Date Tiered Compaction for reads of limited time ranges, especially scans of recent data.
-
-Don't use it for:
-
-* random gets without a limited time range
-* frequent deletes and updates
-* frequent out-of-order data writes creating long tails, especially writes with future timestamps
-* frequent bulk loads with heavily overlapping time ranges
-
-.Performance Improvements
-Performance testing has shown that the performance of time-range scans improves greatly for limited time ranges, especially scans of recent data.
-
-[[ops.date.tiered.enable]]
-====== Enabling Date Tiered Compaction
-
-You can enable Date Tiered compaction for a table or a column family, by setting its `hbase.hstore.engine.class` to `org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine`.
-
-You also need to set `hbase.hstore.blockingStoreFiles` to a high number, such as 60, if using all default settings, rather than the default value of 12. If you change the parameters, use 1.5~2 x the projected file count, where projected file count = windows per tier x tier count + incoming window min + files older than max age.
-
-You also need to set `hbase.hstore.compaction.max` to the same value as `hbase.hstore.blockingStoreFiles` to unblock major compaction.
-
-.Procedure: Enable Date Tiered Compaction
-. Run one of following commands in the HBase shell.
- Replace the table name `orders_table` with the name of your table.
-+
-[source,sql]
-----
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 'hbase.hstore.compaction.max'=>'60'}
-alter 'orders_table', {NAME => 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 'hbase.hstore.compaction.max'=>'60'}}
-create 'orders_table', 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 'hbase.hstore.compaction.max'=>'60'}
-----
-
-. Configure other options if needed.
- See <> for more information.
-
-.Procedure: Disable Date Tiered Compaction
-. Set the `hbase.hstore.engine.class` option to either nil or `org.apache.hadoop.hbase.regionserver.DefaultStoreEngine`.
- Either option has the same effect.
- Make sure you set the other options you changed to the original settings too.
-+
-[source,sql]
-----
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.DefaultStoreEngine', 'hbase.hstore.blockingStoreFiles' => '12', 'hbase.hstore.compaction.min'=>'6', 'hbase.hstore.compaction.max'=>'12'}
-----
-
-When you change the store engine either way, a major compaction will likely be performed on most regions.
-This is not necessary on new tables.
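-
-Either change can also be made through the Java Admin API instead of the shell. The following is a rough sketch under the HBase 2.x API; it reuses the hypothetical `orders_table` from above, the values mirror the shell example, and error handling is omitted.
-
-[source,java]
-----
-// Sketch: switch a table's store engine to date-tiered compaction programmatically.
-try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
-     Admin admin = conn.getAdmin()) {
-  TableName tn = TableName.valueOf("orders_table");
-  TableDescriptor td = TableDescriptorBuilder.newBuilder(admin.getDescriptor(tn))
-      .setValue("hbase.hstore.engine.class",
-          "org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine")
-      .setValue("hbase.hstore.blockingStoreFiles", "60")
-      .setValue("hbase.hstore.compaction.min", "2")
-      .setValue("hbase.hstore.compaction.max", "60")
-      .build();
-  // The table's regions are reopened to pick up the new settings.
-  admin.modifyTable(td);
-}
-----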
-
-[[ops.date.tiered.config]]
-====== Configuring Date Tiered Compaction
-
-Each of the settings for date tiered compaction should be configured at the table or column family level.
-If you use HBase shell, the general command pattern is as follows:
-
-[source,sql]
-----
-alter 'orders_table', CONFIGURATION => {'key' => 'value', ..., 'key' => 'value'}
-----
-
-[[ops.date.tiered.config.parameters]]
-.Tier Parameters
-
-You can configure your date tiers by changing the settings for the following parameters:
-
-.Date Tier Parameters
-[cols="1,1a", frame="all", options="header"]
-|===
-| Setting
-| Notes
-
-|`hbase.hstore.compaction.date.tiered.max.storefile.age.millis`
-|Files with max-timestamp smaller than this will no longer be compacted. Default at `Long.MAX_VALUE`.
-
-| `hbase.hstore.compaction.date.tiered.base.window.millis`
-| Base window size in milliseconds. Default at 6 hours.
-
-| `hbase.hstore.compaction.date.tiered.windows.per.tier`
-| Number of windows per tier. Default at 4.
-
-| `hbase.hstore.compaction.date.tiered.incoming.window.min`
-| Minimal number of files to compact in the incoming window. Set it to expected number of files in the window to avoid wasteful compaction. Default at 6.
-
-| `hbase.hstore.compaction.date.tiered.window.policy.class`
-| The policy used to select store files within the same time window. It does not apply to the incoming window. Defaults to exploring compaction, which avoids wasteful compaction.
-|===
-
-[[ops.date.tiered.config.compaction.throttler]]
-.Compaction Throttler
-
-With tiered compaction, all servers in the cluster will promote windows to a higher tier at the same time, so using a compaction throttle is recommended:
-Set `hbase.regionserver.throughput.controller` to `org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController`.
-
-NOTE: For more information about date tiered compaction, please refer to the design specification at https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8
-
-[[ops.stripe]]
-===== Experimental: Stripe Compactions
-
-Stripe compactions is an experimental feature added in HBase 0.98 which aims to improve compactions for large regions or non-uniformly distributed row keys.
-In order to achieve smaller and/or more granular compactions, the StoreFiles within a region are maintained separately for several row-key sub-ranges, or "stripes", of the region.
-The stripes are transparent to the rest of HBase, so other operations on the HFiles or data work without modification.
-
-Stripe compactions change the HFile layout, creating sub-regions within regions.
-These sub-regions are easier to compact, and should result in fewer major compactions.
-This approach alleviates some of the challenges of larger regions.
-
-Stripe compaction is fully compatible with <> and works in conjunction with either the ExploringCompactionPolicy or RatioBasedCompactionPolicy.
-It can be enabled for existing tables, and the table will continue to operate normally if it is disabled later.
-
-[[ops.stripe.when]]
-===== When To Use Stripe Compactions
-
-Consider using stripe compaction if you have either of the following:
-
-* Large regions.
- You can get the positive effects of smaller regions without the additional MemStore and region management overhead.
-* Non-uniform keys, such as time dimension in a key.
- Only the stripes receiving the new keys will need to compact.
- Old data will not compact as often, if at all.
-
-.Performance Improvements
-Performance testing has shown that the performance of reads improves somewhat, and variability of performance of reads and writes is greatly reduced.
-An overall long-term performance improvement is seen on large non-uniform-row key regions, such as a hash-prefixed timestamp key.
-These performance gains are the most dramatic on a table which is already large.
-It is possible that the performance improvement might extend to region splits.
-
-[[ops.stripe.enable]]
-====== Enabling Stripe Compaction
-
-You can enable stripe compaction for a table or a column family, by setting its `hbase.hstore.engine.class` to `org.apache.hadoop.hbase.regionserver.StripeStoreEngine`.
-You also need to set the `hbase.hstore.blockingStoreFiles` to a high number, such as 100 (rather than the default value of 10).
-
-.Procedure: Enable Stripe Compaction
-. Run one of following commands in the HBase shell.
- Replace the table name `orders_table` with the name of your table.
-+
-[source,sql]
-----
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}
-alter 'orders_table', {NAME => 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}}
-create 'orders_table', 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}
-----
-
-. Configure other options if needed.
- See <> for more information.
-. Enable the table.
-
-.Procedure: Disable Stripe Compaction
-. Set the `hbase.hstore.engine.class` option to either nil or `org.apache.hadoop.hbase.regionserver.DefaultStoreEngine`.
- Either option has the same effect.
-+
-[source,sql]
-----
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.DefaultStoreEngine'}
-----
-
-. Enable the table.
-
-When you enable a large table after changing the store engine either way, a major compaction will likely be performed on most regions.
-This is not necessary on new tables.
-
-[[ops.stripe.config]]
-====== Configuring Stripe Compaction
-
-Each of the settings for stripe compaction should be configured at the table or column family level.
-If you use HBase shell, the general command pattern is as follows:
-
-[source,sql]
-----
-alter 'orders_table', CONFIGURATION => {'key' => 'value', ..., 'key' => 'value'}
-----
-
-[[ops.stripe.config.sizing]]
-.Region and stripe sizing
-
-You can configure your stripe sizing based upon your region sizing.
-By default, your new regions will start with one stripe.
-On the next compaction after a stripe has grown too large (16 x the MemStore flush size), it is split into two stripes.
-Stripe splitting continues as the region grows, until the region is large enough to split.
-
-You can improve this pattern for your own data.
-A good rule is to aim for a stripe size of at least 1 GB, and about 8-12 stripes for uniform row keys.
-For example, if your regions are 30 GB, 12 x 2.5 GB stripes might be a good starting point.
-
-.Stripe Sizing Settings
-[cols="1,1a", frame="all", options="header"]
-|===
-| Setting
-| Notes
-
-|`hbase.store.stripe.initialStripeCount`
-|The number of stripes to create when stripe compaction is enabled. You can use it as follows:
-
-* For relatively uniform row keys, if you know the approximate
- target number of stripes from the above, you can avoid some
- splitting overhead by starting with several stripes (2, 5, 10...).
- If the early data is not representative of overall row key
- distribution, this will not be as efficient.
-
-* For existing tables with a large amount of data, this setting
- will effectively pre-split your stripes.
-
-* For keys such as hash-prefixed sequential keys, with more than
- one hash prefix per region, pre-splitting may make sense.
-
-
-| `hbase.store.stripe.sizeToSplit`
-| The maximum size a stripe grows before splitting. Use this in
-conjunction with `hbase.store.stripe.splitPartCount` to
-control the target stripe size (`sizeToSplit = splitPartCount * target
-stripe size`), according to the above sizing considerations.
-
-| `hbase.store.stripe.splitPartCount`
-| The number of new stripes to create when splitting a stripe. The default is 2, which is appropriate for most cases. For non-uniform row keys, you can experiment with increasing the number to 3 or 4, to isolate the arriving updates into a narrower slice of the region without additional splits being required.
-|===
-
-[[ops.stripe.config.memstore]]
-.MemStore Size Settings
-
-By default, the flush creates several files from one MemStore, according to existing stripe boundaries and row keys to flush.
-This approach minimizes write amplification, but can be undesirable if the MemStore is small and there are many stripes, because the files will be too small.
-
-In this type of situation, you can set `hbase.store.stripe.compaction.flushToL0` to `true`.
-This will cause a MemStore flush to create a single file instead.
-When at least `hbase.store.stripe.compaction.minFilesL0` such files (by default, 4) accumulate, they will be compacted into striped files.
-
-[[ops.stripe.config.compact]]
-.Normal Compaction Configuration and Stripe Compaction
-
-All the settings that apply to normal compactions (see <>) apply to stripe compactions.
-The exceptions are the minimum and maximum number of files, which are set to higher values by default because the files in stripes are smaller.
-To control these for stripe compactions, use `hbase.store.stripe.compaction.minFiles` and `hbase.store.stripe.compaction.maxFiles`, rather than `hbase.hstore.compaction.min` and `hbase.hstore.compaction.max`.
-
-[[ops.fifo]]
-===== FIFO Compaction
-
-The FIFO compaction policy selects only files in which all cells have expired. The column family *MUST* have a non-default TTL.
-Essentially, the FIFO compactor only collects expired store files.
-
-Because no real compaction is performed, no CPU or IO (disk and network) is used, and hot data is not evicted from the block cache.
-As a result, both read/write throughput and latency can be improved.
-
-[[ops.fifo.when]]
-===== When To Use FIFO Compaction
-
-Consider using FIFO Compaction when your use case is
-
-* Very high volume raw data which has a low TTL and which is the source of other data (after additional processing).
-* Data which can be kept entirely in the block cache (RAM/SSD), so there is no need to compact the raw data at all.
-
-Do not use FIFO compaction when
-
-* Table/ColumnFamily has MIN_VERSION > 0
-* Table/ColumnFamily has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)
-
-[[ops.fifo.enable]]
-====== Enabling FIFO Compaction
-
-For Table:
-
-[source,java]
-----
-HTableDescriptor desc = new HTableDescriptor(tableName);
- desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
- FIFOCompactionPolicy.class.getName());
-----
-
-For Column Family:
-
-[source,java]
-----
-HColumnDescriptor desc = new HColumnDescriptor(family);
- desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
- FIFOCompactionPolicy.class.getName());
-----
-
-From HBase Shell:
-
-[source,bash]
-----
-create 'x',{NAME=>'y', TTL=>'30'}, {CONFIGURATION => {'hbase.hstore.defaultengine.compactionpolicy.class' => 'org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy', 'hbase.hstore.blockingStoreFiles' => 1000}}
-----
-
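-On HBase 2.x, where `HTableDescriptor` and `HColumnDescriptor` are deprecated, roughly the same configuration can be expressed with the builder API. The following sketch uses the same hypothetical table `x` and family `y` as the shell example above, and assumes an already-opened `Admin`.
-
-[source,java]
-----
-// Sketch: create a table whose single family uses FIFO compaction (HBase 2.x API).
-ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("y"))
-    .setTimeToLive(30)   // FIFO compaction requires a non-default TTL
-    .build();
-TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("x"))
-    .setColumnFamily(cf)
-    .setValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
-        FIFOCompactionPolicy.class.getName())
-    .setValue("hbase.hstore.blockingStoreFiles", "1000")
-    .build();
-admin.createTable(td);   // `admin` is an org.apache.hadoop.hbase.client.Admin
-----
-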
-Although region splitting is still supported, for optimal performance it should be disabled, either by explicitly setting `DisabledRegionSplitPolicy` or by setting `ConstantSizeRegionSplitPolicy` with a very large maximum region size.
-You will also have to set the store's blocking file count (`hbase.hstore.blockingStoreFiles`) to a very large number.
-A sanity check on the table/column family configuration enforces a minimum value of 1000 for the blocking file count when FIFO compaction is used.
-
-[[arch.bulk.load]]
-== Bulk Loading
-
-[[arch.bulk.load.overview]]
-=== Overview
-
-HBase includes several methods of loading data into tables.
-The most straightforward method is to either use the `TableOutputFormat` class from a MapReduce job, or use the normal client APIs; however, these are not always the most efficient methods.
-
-The bulk load feature uses a MapReduce job to output table data in HBase's internal data format, and then directly load the generated StoreFiles into a running cluster.
-Using bulk load will use less CPU and network resources than loading via the HBase API.
-
-[[arch.bulk.load.arch]]
-=== Bulk Load Architecture
-
-The HBase bulk load process consists of two main steps.
-
-[[arch.bulk.load.prep]]
-==== Preparing data via a MapReduce job
-
-The first step of a bulk load is to generate HBase data files (StoreFiles) from a MapReduce job using `HFileOutputFormat2`.
-This output format writes out data in HBase's internal storage format so that it can later be loaded efficiently into the cluster.
-
-In order to function efficiently, `HFileOutputFormat2` must be configured such that each output HFile fits within a single region.
-In order to do this, jobs whose output will be bulk loaded into HBase use Hadoop's `TotalOrderPartitioner` class to partition the map output into disjoint ranges of the key space, corresponding to the key ranges of the regions in the table.
-
-`HFileOutputFormat2` includes a convenience function, `configureIncrementalLoad()`, which automatically sets up a `TotalOrderPartitioner` based on the current region boundaries of a table.
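-
-The following driver sketch shows how a preparation job is typically wired up with `configureIncrementalLoad()`. The mapper class is a placeholder (any mapper emitting `(ImmutableBytesWritable, Put)` pairs), and the paths and table name simply reuse the examples from this section.
-
-[source,java]
-----
-// Sketch: a MapReduce driver that writes HFiles for later bulk loading.
-// MyBulkLoadMapper is a hypothetical mapper emitting (ImmutableBytesWritable, Put) pairs.
-Configuration conf = HBaseConfiguration.create();
-Job job = Job.getInstance(conf, "prepare-bulkload");
-job.setJarByClass(MyBulkLoadMapper.class);
-job.setMapperClass(MyBulkLoadMapper.class);
-job.setMapOutputKeyClass(ImmutableBytesWritable.class);
-job.setMapOutputValueClass(Put.class);
-FileInputFormat.addInputPath(job, new Path("/user/todd/input"));
-FileOutputFormat.setOutputPath(job, new Path("/user/todd/myoutput"));
-try (Connection conn = ConnectionFactory.createConnection(conf);
-     Table table = conn.getTable(TableName.valueOf("mytable"));
-     RegionLocator locator = conn.getRegionLocator(TableName.valueOf("mytable"))) {
-  // Configures the reducer, TotalOrderPartitioner and output format based on the
-  // table's current region boundaries.
-  HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
-  job.waitForCompletion(true);
-}
-----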
-
-[[arch.bulk.load.complete]]
-==== Completing the data load
-
-After a data import has been prepared, either by using the `importtsv` tool with the "`importtsv.bulk.output`" option or by some other MapReduce job using the `HFileOutputFormat`, the `completebulkload` tool is used to import the data into the running cluster.
-This command line tool iterates through the prepared data files, and for each one determines the region the file belongs to.
-It then contacts the appropriate RegionServer which adopts the HFile, moving it into its storage directory and making the data available to clients.
-
-If the region boundaries have changed during the course of bulk load preparation, or between the preparation and completion steps, the `completebulkload` utility will automatically split the data files into pieces corresponding to the new boundaries.
-This process is not optimally efficient, so users should take care to minimize the delay between preparing a bulk load and importing it into the cluster, especially if other clients are simultaneously loading data through other means.
-
-[[arch.bulk.load.complete.help]]
-[source,bash]
-----
-$ hadoop jar hbase-mapreduce-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable
-----
-
-The `-c config-file` option can be used to specify a file containing the appropriate hbase parameters (e.g., hbase-site.xml) if not already supplied on the CLASSPATH. (In addition, the CLASSPATH must contain the directory that has the ZooKeeper configuration file if ZooKeeper is NOT managed by HBase.)
-
-[[arch.bulk.load.also]]
-=== See Also
-
-For more information about the referenced utilities, see <> and <>.
-
-See link:http://blog.cloudera.com/blog/2013/09/how-to-use-hbase-bulk-loading-and-why/[How-to: Use HBase Bulk Loading, and Why] for an old blog post on loading.
-
-[[arch.bulk.load.adv]]
-=== Advanced Usage
-
-Although the `importtsv` tool is useful in many cases, advanced users may want to generate data programmatically, or import data from other formats.
-To get started doing so, dig into `ImportTsv.java` and check the JavaDoc for HFileOutputFormat.
-
-The import step of the bulk load can also be done programmatically.
-See the `LoadIncrementalHFiles` class for more information.
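-
-A minimal programmatic sketch is shown below, assuming an HBase 2.x client (package and method signatures vary slightly across versions) and reusing the hypothetical output directory and table name from above.
-
-[source,java]
-----
-// Sketch: complete a bulk load from Java rather than the command line.
-Configuration conf = HBaseConfiguration.create();
-try (Connection conn = ConnectionFactory.createConnection(conf);
-     Admin admin = conn.getAdmin();
-     Table table = conn.getTable(TableName.valueOf("mytable"));
-     RegionLocator locator = conn.getRegionLocator(TableName.valueOf("mytable"))) {
-  LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
-  // /user/todd/myoutput is the directory written by HFileOutputFormat2 above.
-  loader.doBulkLoad(new Path("/user/todd/myoutput"), admin, table, locator);
-}
-----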
-
-[[arch.bulk.load.complete.strays]]
-==== 'Adopting' Stray Data
-Should an HBase cluster lose account of regions or files during an outage or error, you can use
-the `completebulkload` tool to add back the dropped data. HBase operator tooling such as
-link:https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2[HBCK2] or
-the reporting added to the Master's UI under the `HBCK Report` (Since HBase 2.0.6/2.1.6/2.2.1)
-can identify such 'orphan' directories.
-
-Before you begin the 'adoption', ensure the `hbase:meta` table is in a healthy state.
-Run the `CatalogJanitor` by executing the `catalogjanitor_run` command on the HBase shell.
-When finished, check the `HBCK Report` page on the Master UI. Work on fixing any
-inconsistencies, holes, or overlaps found before proceeding. The `hbase:meta` table
-is the authority on where all data is to be found and must be consistent for
-the `completebulkload` tool to work properly.
-
-The `completebulkload` tool takes a directory and a `tablename`.
-The directory has subdirectories named for column families of the targeted `tablename`.
-In these subdirectories are `hfiles` to load. Given this structure, you can pass
-errant region directories (and the table name to which the region directory belongs)
-and the tool will bring the data files back into the fold by moving them under the
-appropriate serving directory. If you have stray files, you will need to mock up this
-structure before invoking the `completebulkload` tool; you may have to look at the
-file content using the <> to see which column family to use.
-When the tool completes its run, you will notice that the
-source errant directory has had its storefiles moved/removed. It is now desiccated
-since its data has been drained, and the pointed-to directory can be safely
-removed. It may still have `.regioninfo` files and other
-subdirectories but they are of no relevance now (There may be content still
-under the _recovered_edits_ directory; a TODO is tooling to replay the
-content of _recovered_edits_ if needed; see
-link:https://issues.apache.org/jira/browse/HBASE-22976[Add RecoveredEditsPlayer]).
-If you pass `completebulkload` a directory without store files, it will run and
-note the directory is storefile-free. Just remove such 'empty' directories.
-
-For example, presuming a directory at the top level in HDFS named
-`eb3352fb5c9c9a05feeb2caba101e1cc` has data we need to re-add to the
-HBase `TestTable`:
-
-[source,bash]
-----
-$ ${HBASE_HOME}/bin/hbase --config ~/hbase-conf completebulkload hdfs://server.example.org:9000/eb3352fb5c9c9a05feeb2caba101e1cc TestTable
-----
-
-After it successfully completes, any files that were in `eb3352fb5c9c9a05feeb2caba101e1cc` have been moved
-under hbase and the `eb3352fb5c9c9a05feeb2caba101e1cc` directory can be deleted (check content
-before and after by running `hdfs dfs -ls -R` on the HDFS directory).
-
-[[arch.bulk.load.replication]]
-=== Bulk Loading Replication
-HBASE-13153 adds replication support for bulk loaded HFiles, available since HBase 1.3/2.0. This feature is enabled by setting `hbase.replication.bulkload.enabled` to `true` (default is `false`).
-You also need to copy the source cluster configuration files to the destination cluster.
-
-Additional configurations are required too:
-
-. `hbase.replication.source.fs.conf.provider`
-+
-This defines the class which loads the source cluster file system client configuration in the destination cluster. This should be configured for all the RS in the destination cluster. Default is `org.apache.hadoop.hbase.replication.regionserver.DefaultSourceFSConfigurationProvider`.
-+
-. `hbase.replication.conf.dir`
-+
-This represents the base directory where the file system client configurations of the source cluster are copied to the destination cluster. This should be configured for all the RS in the destination cluster. Default is `$HBASE_CONF_DIR`.
-+
-. `hbase.replication.cluster.id`
-+
-This configuration is required in the cluster where replication for bulk loaded data is enabled. A source cluster is uniquely identified by the destination cluster using this id. This should be configured in the configuration file of all the RegionServers in the source cluster.
-
-For example: If source cluster FS client configurations are copied to the destination cluster under directory `/home/user/dc1/`, then `hbase.replication.cluster.id` should be configured as `dc1` and `hbase.replication.conf.dir` as `/home/user`.
-
-NOTE: `DefaultSourceFSConfigurationProvider` supports only `xml` type files. It loads the source cluster FS client configuration only once, so if the source cluster FS client configuration files are updated, every RegionServer in each peer cluster must be restarted to reload the configuration.
-
-[[arch.hdfs]]
-== HDFS
-
-As HBase runs on HDFS (and each StoreFile is written as a file on HDFS), it is important to have an understanding of the HDFS Architecture especially in terms of how it stores files, handles failovers, and replicates blocks.
-
-See the Hadoop documentation on link:https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html[HDFS Architecture] for more information.
-
-[[arch.hdfs.nn]]
-=== NameNode
-
-The NameNode is responsible for maintaining the filesystem metadata.
-See the above HDFS Architecture link for more information.
-
-[[arch.hdfs.dn]]
-=== DataNode
-
-The DataNodes are responsible for storing HDFS blocks.
-See the above HDFS Architecture link for more information.
-
-[[arch.timelineconsistent.reads]]
-== Timeline-consistent High Available Reads
-
-[[casestudies.timelineconsistent.intro]]
-=== Introduction
-
-Architecturally, HBase has always had a strong consistency guarantee.
-All reads and writes for a region are routed through a single RegionServer, which guarantees that all writes happen in order, and that all reads see the most recently committed data.
-
-
-However, because reads are homed to a single location, if that server becomes unavailable, the regions of the table that were hosted on it become unavailable for some time.
-There are three phases in the region recovery process - detection, assignment, and recovery.
-Of these, the detection is usually the longest and is presently in the order of 20-30 seconds depending on the ZooKeeper session timeout.
-During this time and before the recovery is complete, the clients will not be able to read the region data.
-
-However, for some use cases, either the data may be read-only, or doing reads against some stale data is acceptable.
-With timeline-consistent highly available reads, HBase can be used for these kinds of latency-sensitive use cases, where the application can expect a time bound on read completion.
-
-
-For achieving high availability for reads, HBase provides a feature called _region replication_. In this model, for each region of a table, there will be multiple replicas that are opened in different RegionServers.
-By default, the region replication is set to 1, so only a single region replica is deployed and there will not be any changes from the original model.
-If region replication is set to 2 or more, then the master will assign replicas of the regions of the table.
-The Load Balancer ensures that the region replicas are not co-hosted in the same region server, nor in the same rack (if possible).
-
-All of the replicas for a single region will have a unique replica_id, starting from 0.
-The region replica having replica_id==0 is called the _primary region_, and the others _secondary regions_, or secondaries.
-Only the primary can accept writes from the client, and the primary will always contain the latest changes.
-Since all writes still have to go through the primary region, the writes are not highly-available (meaning they might block for some time if the region becomes unavailable).
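-
-Region replication is configured per table. As an illustration (not prescriptive), a table with three replicas per region could be created from the shell with `create 't1', 'cf1', {REGION_REPLICATION => 3}`, or from Java with the 2.x builder API as in the sketch below; the table and family names are hypothetical and an open `Admin` is assumed.
-
-[source,java]
-----
-// Sketch: create a table whose regions each have three replicas (one primary, two secondaries).
-TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
-    .setRegionReplication(3)
-    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
-    .build();
-admin.createTable(td);
-----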
-
-
-=== Timeline Consistency
-
-With this feature, HBase introduces a Consistency definition, which can be provided per read operation (get or scan).
-[source,java]
-----
-public enum Consistency {
- STRONG,
- TIMELINE
-}
-----
-`Consistency.STRONG` is the default consistency model provided by HBase.
-If the table has region replication = 1, or if the table has region replicas but the reads are done with this consistency, the read is always performed by the primary regions. Nothing changes from the previous behaviour, and the client always observes the latest data.
-
-
-In case a read is performed with `Consistency.TIMELINE`, then the read RPC will be sent to the primary region server first.
-After a short interval (`hbase.client.primaryCallTimeout.get`, 10ms by default), parallel RPCs to the secondary region replicas are also sent if the primary has not responded.
-After this, the result is returned from whichever RPC finished first.
-If the response came back from the primary region replica, the data is known to be the latest.
-The `Result.isStale()` API has been added to inspect the staleness:
-if the result is from a secondary region, `Result.isStale()` is set to `true`.
-The user can then inspect this field to reason about the data.
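-
-The client-side usage looks roughly like the sketch below (hypothetical row key, and an already-opened `Table` named `table`).
-
-[source,java]
-----
-// Sketch: request TIMELINE consistency on a Get and check whether the result is stale.
-Get get = new Get(Bytes.toBytes("row1"));
-get.setConsistency(Consistency.TIMELINE);
-Result result = table.get(get);
-if (result.isStale()) {
-  // Served by a secondary replica; the value may lag behind the primary.
-}
-
-// Scans can request the same consistency level.
-Scan scan = new Scan();
-scan.setConsistency(Consistency.TIMELINE);
-----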
-
-
-In terms of semantics, TIMELINE consistency as implemented by HBase differs from pure eventual consistency in these respects:
-
-* Single homed and ordered updates: Region replication or not, on the write side, there is still only 1 defined replica (primary) which can accept writes.
- This replica is responsible for ordering the edits and preventing conflicts.
- This guarantees that two different writes are not committed at the same time by different replicas, causing the data to diverge.
- With this, there is no need to do read-repair or last-timestamp-wins kind of conflict resolution.
-* The secondaries also apply the edits in the order that the primary committed them.
- This way the secondaries will contain a snapshot of the primary's data at any point in time.
- This is similar to RDBMS replications and even HBase's own multi-datacenter replication, however in a single cluster.
-* On the read side, the client can detect whether the read is coming from up-to-date data or is stale data.
- Also, the client can issue reads with different consistency requirements on a per-operation basis to ensure its own semantic guarantees.
-* The client can still observe edits out-of-order, and can go back in time, if it observes reads from one secondary replica first, then another secondary replica.
- There is no stickiness to region replicas or a transaction-id based guarantee.
- If required, this can be implemented later though.
-
-.Timeline Consistency
-image::timeline_consistency.png[Timeline Consistency]
-
-To better understand the TIMELINE semantics, let's look at the above diagram.
-Let's say that there are two clients, and the first one writes x=1 at first, then x=2 and x=3 later.
-As above, all writes are handled by the primary region replica.
-The writes are saved in the write ahead log (WAL), and replicated to the other replicas asynchronously.
-In the above diagram, notice that replica_id=1 received 2 updates, and its data shows that x=2, while the replica_id=2 only received a single update, and its data shows that x=1.
-
-
-If client1 reads with STRONG consistency, it will only talk with the replica_id=0, and thus is guaranteed to observe the latest value of x=3.
-In case of a client issuing TIMELINE consistency reads, the RPC will go to all replicas (after primary timeout) and the result from the first response will be returned back.
-Thus the client can see either 1, 2 or 3 as the value of x.
-Let's say that the primary region has failed and log replication cannot continue for some time.
-If the client does multiple reads with TIMELINE consistency, she can observe x=2 first, then x=1, and so on.
-
-
-=== Tradeoffs
-
-Having secondary regions hosted for read availability comes with some tradeoffs which should be carefully evaluated per use case.
-Following are advantages and disadvantages.
-
-.Advantages
-* High availability for read-only tables
-* High availability for stale reads
-* Ability to do very low latency reads, even at very high percentiles (99.9%+), for stale reads
-
-.Disadvantages
-* Double / Triple MemStore usage (depending on region replication count) for tables with region replication > 1
-* Increased block cache usage
-* Extra network traffic for log replication
-* Extra backup RPCs for replicas
-
-To serve the region data from multiple replicas, HBase opens the regions in secondary mode in the region servers.
-The regions opened in secondary mode share the same data files with the primary region replica; however, each secondary region replica has its own MemStore to keep the unflushed data (only the primary region can flush). Also, to serve reads from secondary regions, the blocks of the data files may also be cached in the block caches for the secondary regions.
-
-=== Where is the code
-This feature is delivered in two phases, Phase 1 and Phase 2. The first phase was done in time for the HBase-1.0.0 release, meaning that with HBase-1.0.x you can use all the features that are marked for Phase 1. Phase 2 was committed in HBase-1.1.0, meaning all HBase versions after 1.1.0 should contain Phase 2 items.
-
-=== Propagating writes to region replicas
-As discussed above, writes only go to the primary region replica. There are two different mechanisms for propagating the writes from the primary region replica to the secondaries. For read-only tables, you do not need to use either of them; disabling and then enabling the table should make the data available in all region replicas. For mutable tables, you have to use *only* one of the following mechanisms: storefile refresher, or async WAL replication. The latter is recommended.
-
-==== StoreFile Refresher
-The first mechanism is the store file refresher, which was introduced in HBase-1.0+. The store file refresher is a thread per region server which runs periodically and does a refresh operation for the store files of the primary region on behalf of the secondary region replicas. If enabled, the refresher ensures that the secondary region replicas see new flushed, compacted, or bulk loaded files from the primary region in a timely manner. However, this means that only flushed data can be read back from the secondary region replicas, and only after the refresher has run, which makes the secondaries lag behind the primary for a longer time.
-
-To turn this feature on, configure `hbase.regionserver.storefile.refresh.period` to a non-zero value. See the Configuration section below.
-
-[[async.wal.replication]]
-==== Async WAL replication
-The second mechanism for propagation of writes to secondaries is done via the
-“Async WAL Replication” feature. It is only available in HBase-1.1+. This works
-similarly to HBase’s multi-datacenter replication, but instead the data from a
-region is replicated to the secondary regions. Each secondary replica always
-receives and observes the writes in the same order that the primary region
-committed them. In some sense, this design can be thought of as “in-cluster
-replication”, where instead of replicating to a different datacenter, the data
-goes to secondary regions to keep the secondary regions' in-memory state up to date.
-The data files are shared between the primary region and the other replicas, so
-that there is no extra storage overhead. However, the secondary regions will
-have recent non-flushed data in their memstores, which increases the memory
-overhead. The primary region writes flush, compaction, and bulk load events
-to its WAL as well, which are also replicated through wal replication to
-secondaries. When they observe the flush/compaction or bulk load event, the
-secondary regions replay the event to pick up the new files and drop the old
-ones.
-
-Committing writes in the same order as the primary ensures that the secondaries won't diverge from the primary region's data, but since the log replication is asynchronous, the data might still be stale in secondary regions. Since this feature works as a replication endpoint, the performance and latency characteristics are expected to be similar to inter-cluster replication.
-
-Async WAL Replication is *disabled* by default. You can enable this feature by
-setting `hbase.region.replica.replication.enabled` to `true`. The Async WAL
-Replication feature will add a new replication peer named
-`region_replica_replication` as a replication peer when you create a table with
-region replication > 1 for the first time. Once enabled, if you want to disable
-this feature, you need to do two actions in the following order:
-
-* Set configuration property `hbase.region.replica.replication.enabled` to false in `hbase-site.xml` (see Configuration section below)
-* Disable the replication peer named `region_replica_replication` in the cluster using hbase shell or `Admin` class:
-[source,bourne]
-----
- hbase> disable_peer 'region_replica_replication'
-----
-
-Async WAL Replication and the `hbase:meta` table is a little more involved and gets its own section below; see <>
-
-=== Store File TTL
-In both of the write-propagation approaches mentioned above, store files of the primary are opened in secondaries independently of the primary region. So for files that the primary compacted away, the secondaries might still refer to these files for reading. Both features use HFileLinks to refer to files, but there is no protection (yet) to guarantee that a file will not be deleted prematurely. Thus, as a guard, you should set the configuration property `hbase.master.hfilecleaner.ttl` to a larger value, such as 1 hour, to guarantee that you will not receive IOExceptions for requests going to replicas.
-
-[[async.wal.replication.meta]]
-=== Region replication for META table’s region
-Async WAL Replication does not work for the META table’s WAL.
-The meta table’s secondary replicas refresh themselves from the persistent store
-files every `hbase.regionserver.meta.storefile.refresh.period` milliseconds (which must be a non-zero value).
-Note how the META replication period is distinct from the user-space
-`hbase.regionserver.storefile.refresh.period` value.
-
-==== Async WAL Replication for META table as of hbase-2.4.0+ ====
-Async WAL replication for META is added as a new feature in 2.4.0. It is still under
-active development. Use with caution. Set
-`hbase.region.replica.replication.catalog.enabled` to enable async WAL Replication
-for META region replicas. It is off by default.
-
-Regarding META replicas count, up to hbase-2.4.0, you would set the special
-property 'hbase.meta.replica.count'. Now you can alter the META table as you
-would a user-space table (if `hbase.meta.replica.count` is set, it will take
-precedence over what is set for replica count in the META table, updating the META
-replica count to match).
-
-===== Load Balancing META table load =====
-
-hbase-2.4.0 also adds a *new* client-side `LoadBalance` mode. When enabled
-client-side, clients will try to read META replicas first before falling back on
-the primary. Before this, the replica lookup mode -- now named `HedgedRead` in
-hbase-2.4.0 -- had clients read the primary and if no response after a
-configurable amount of time had elapsed, it would start up reads against the
-replicas.
-
-The new 'LoadBalance' mode helps alleviate hotspotting on the META
-table by distributing the META read load.
-
-To enable the meta replica locator's load balance mode, set the following
-configuration on the *client-side* (only): set 'hbase.locator.meta.replicas.mode'
-to "LoadBalance". Valid options for this configuration are `None`, `HedgedRead`, and
-`LoadBalance`. Option parsing is case-insensitive. The default mode is `None` (which falls
-through to `HedgedRead`, the current default). Do NOT put this configuration in any
-HBase server-side configuration, Master or RegionServer (the Master could make decisions
-based off stale state -- to be avoided).
-
-`LoadBalance` is also a new feature. Use with caution.
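-
-For example, a client could set the mode programmatically before creating a connection; a minimal sketch (the configuration key is the one named above):
-
-[source,java]
-----
-// Client-side only: choose how the meta-replica locator spreads META reads.
-Configuration conf = HBaseConfiguration.create();
-conf.set("hbase.locator.meta.replicas.mode", "LoadBalance");
-try (Connection conn = ConnectionFactory.createConnection(conf)) {
-  // Region lookups made through this connection spread reads across META replicas.
-}
-----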
-
-=== Memory accounting
-The secondary region replicas refer to the data files of the primary region replica, but they have their own memstores (in HBase-1.1+) and use the block cache as well. However, one distinction is that the secondary region replicas cannot flush data when there is memory pressure on their memstores. They can only free up memstore memory when the primary region does a flush and this flush is replicated to the secondary. Since a region server may host primary replicas for some regions and secondaries for others, the secondaries might cause extra flushes to the primary regions on the same host. In extreme situations, there can be no memory left for adding new writes coming from the primary via WAL replication. To unblock this situation (and since the secondary cannot flush by itself), the secondary is allowed to do a “store file refresh” by doing a file system list operation to pick up new files from the primary, and possibly dropping its memstore. This refresh will only be performed if the memstore size of the biggest secondary region replica is at least `hbase.region.replica.storefile.refresh.memstore.multiplier` (default 4) times bigger than the biggest memstore of a primary replica. One caveat is that if this is performed, the secondary can observe partial row updates across column families (since column families are flushed independently). The default should be good enough to avoid doing this operation frequently. You can set this value to a large number to disable this feature if desired, but be warned that it might cause the replication to block forever.
-
-=== Secondary replica failover
-When a secondary region replica first comes online, or fails over, it may have served some edits from its memstore. Since recovery is handled differently for secondary replicas, the secondary has to ensure that it does not go back in time before it starts serving requests after assignment. To do that, the secondary waits until it observes a full flush cycle (start flush, commit flush) or a “region open event” replicated from the primary. Until this happens, the secondary region replica rejects all read requests by throwing an IOException with the message “The region's reads are disabled”. However, the other replicas will probably still be available to read, thus not causing any impact on RPCs with TIMELINE consistency. To facilitate faster recovery, the secondary region triggers a flush request from the primary when it is opened. The configuration property `hbase.region.replica.wait.for.primary.flush` (enabled by default) can be used to disable this feature if needed.
-
-
-
-
-=== Configuration properties
-
-To use highly available reads, you should set the following properties in `hbase-site.xml` file.
-There is no specific configuration to enable or disable region replicas.
-Instead you can change the number of region replicas per table to increase or decrease at the table creation or with alter table. The following configuration is for using async wal replication and using meta replicas of 3.
-
-
-==== Server side properties
-
-[source,xml]
-----
-<property>
-  <name>hbase.regionserver.storefile.refresh.period</name>
-  <value>0</value>
-  <description>
-    The period (in milliseconds) for refreshing the store files for the secondary regions. 0 means this feature is disabled. Secondary regions see new files (from flushes and compactions) from the primary once the secondary region refreshes the list of files in the region (there is no notification mechanism). But too-frequent refreshes might cause extra Namenode pressure. If the files cannot be refreshed for longer than the HFile TTL (hbase.master.hfilecleaner.ttl) the requests are rejected. Configuring the HFile TTL to a larger value is also recommended with this setting.
-  </description>
-</property>
-
-<property>
-  <name>hbase.regionserver.meta.storefile.refresh.period</name>
-  <value>300000</value>
-  <description>
-    The period (in milliseconds) for refreshing the store files for the hbase:meta table's secondary regions. 0 means this feature is disabled. Secondary regions see new files (from flushes and compactions) from the primary once the secondary region refreshes the list of files in the region (there is no notification mechanism). But too-frequent refreshes might cause extra Namenode pressure. If the files cannot be refreshed for longer than the HFile TTL (hbase.master.hfilecleaner.ttl) the requests are rejected. Configuring the HFile TTL to a larger value is also recommended with this setting. This should be a non-zero number if meta replicas are enabled.
-  </description>
-</property>
-
-<property>
-  <name>hbase.region.replica.replication.enabled</name>
-  <value>true</value>
-  <description>
-    Whether asynchronous WAL replication to the secondary region replicas is enabled or not. If this is enabled, a replication peer named "region_replica_replication" will be created which will tail the logs and replicate the mutations to region replicas for tables that have region replication > 1. If this is enabled once, disabling this replication also requires disabling the replication peer using shell or the Admin java class. Replication to secondary region replicas works over standard inter-cluster replication.
-  </description>
-</property>
-
-<property>
-  <name>hbase.region.replica.replication.memstore.enabled</name>
-  <value>true</value>
-  <description>
-    If you set this to false, replicas do not receive memstore updates from the primary RegionServer. If you set this to true, you can still disable memstore replication on a per-table basis, by setting the table's REGION_MEMSTORE_REPLICATION configuration property to false. If memstore replication is disabled, the secondaries will only receive updates for events like flushes and bulkloads, and will not have access to data which the primary has not yet flushed. This preserves the guarantee of row-level consistency, even when the read requests Consistency.TIMELINE.
-  </description>
-</property>
-
-<property>
-  <name>hbase.master.hfilecleaner.ttl</name>
-  <value>3600000</value>
-  <description>
-    The period (in milliseconds) to keep store files in the archive folder before deleting them from the file system.
-  </description>
-</property>
-
-<property>
-  <name>hbase.region.replica.storefile.refresh.memstore.multiplier</name>
-  <value>4</value>
-  <description>
-    The multiplier for a "store file refresh" operation for the secondary region replica. If a region server has memory pressure, the secondary region will refresh its store files if the memstore size of the biggest secondary replica is bigger than this many times the memstore size of the biggest primary replica. Set this to a very big value to disable this feature (not recommended).
-  </description>
-</property>
-
-<property>
-  <name>hbase.region.replica.wait.for.primary.flush</name>
-  <value>true</value>
-  <description>
-    Whether to wait for observing a full flush cycle from the primary before starting to serve data in a secondary. Disabling this might cause the secondary region replicas to go back in time for reads between region movements.
-  </description>
-</property>
-----
-
-One thing to keep in mind is that the region replica placement policy is only enforced by the `StochasticLoadBalancer`, which is the default balancer.
-If you are using a custom load balancer (the `hbase.master.loadbalancer.class` property in hbase-site.xml), replicas of a region might end up being hosted on the same server.
-
-==== Client side properties
-
-Ensure that the following are set for all clients (and servers) that will use region replicas.
-
-[source,xml]
-----
-  <property>
-    <name>hbase.ipc.client.specificThreadForWriting</name>
-    <value>true</value>
-    <description>
-      Whether to enable interruption of RPC threads at the client side. This is required for region replicas with fallback RPCs to secondary regions.
-    </description>
-  </property>
-  <property>
-    <name>hbase.client.primaryCallTimeout.get</name>
-    <value>10000</value>
-    <description>
-      The timeout (in microseconds), before secondary fallback RPCs are submitted for get requests with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
-    </description>
-  </property>
-  <property>
-    <name>hbase.client.primaryCallTimeout.multiget</name>
-    <value>10000</value>
-    <description>
-      The timeout (in microseconds), before secondary fallback RPCs are submitted for multi-get requests (Table.get(List)) with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
-    </description>
-  </property>
-  <property>
-    <name>hbase.client.replicaCallTimeout.scan</name>
-    <value>1000000</value>
-    <description>
-      The timeout (in microseconds), before secondary fallback RPCs are submitted for scan requests with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 1 sec. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
-    </description>
-  </property>
-  <property>
-    <name>hbase.meta.replicas.use</name>
-    <value>true</value>
-    <description>
-      Whether to use meta table replicas or not. Default is false.
-    </description>
-  </property>
-----
-
-Note that HBase-1.0.x users should use `hbase.ipc.client.allowsInterrupt` rather than `hbase.ipc.client.specificThreadForWriting`.
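-
-If you build the client `Configuration` programmatically (for example in tests or embedded tooling) rather than through an `hbase-site.xml` on the classpath, the same keys can be set on the `Configuration` object before the connection is created. The following is a minimal, hedged sketch; the class and method names are illustrative, and the values simply mirror the example above rather than being the only sensible choices.
-
-[source,java]
-----
-import java.io.IOException;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-
-public class ReplicaClientConfig {
-  public static Connection createConnection() throws IOException {
-    Configuration conf = HBaseConfiguration.create();
-    // Same keys as the hbase-site.xml example above.
-    conf.setBoolean("hbase.ipc.client.specificThreadForWriting", true);
-    conf.setInt("hbase.client.primaryCallTimeout.get", 10000);      // 10ms, in microseconds
-    conf.setInt("hbase.client.primaryCallTimeout.multiget", 10000); // 10ms, in microseconds
-    conf.setInt("hbase.client.replicaCallTimeout.scan", 1000000);   // 1s, in microseconds
-    conf.setBoolean("hbase.meta.replicas.use", true);
-    return ConnectionFactory.createConnection(conf);
-  }
-}
-----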
-
-=== User Interface
-
-In the Master's user interface, the region replicas of a table are shown together with the primary regions.
-Notice that the replicas of a region share the same start and end keys and the same region name prefix.
-The only differences are the appended replica_id (encoded as hex) and the region's encoded name.
-The replica ids are also shown explicitly in the UI.
-
-=== Creating a table with region replication
-
-Region replication is a per-table property.
-All tables have `REGION_REPLICATION = 1` by default, which means that there is only one replica per region.
-You can set and change the number of replicas per region of a table by supplying the `REGION_REPLICATION` property in the table descriptor.
-
-
-==== Shell
-
-[source]
-----
-create 't1', 'f1', {REGION_REPLICATION => 2}
-
-describe 't1'
-for i in 1..100
-put 't1', "r#{i}", 'f1:c1', i
-end
-flush 't1'
-----
-
-==== Java
-
-[source,java]
-----
-HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test_table"));
-htd.setRegionReplication(2);
-...
-admin.createTable(htd);
-----
-
-You can also use `setRegionReplication()` together with an alter table operation to increase or decrease the region replication for a table, as shown in the sketch below.
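-
-A minimal, hedged sketch of altering the replica count through the Java Admin API, using the same old-style `HTableDescriptor` API as the example above. The table name and the open `Connection` named `connection` are assumptions for illustration.
-
-[source,java]
-----
-Admin admin = connection.getAdmin();
-TableName tn = TableName.valueOf("test_table");
-admin.disableTable(tn);
-HTableDescriptor htd = admin.getTableDescriptor(tn);
-htd.setRegionReplication(3);   // raise (or lower) the replica count
-admin.modifyTable(tn, htd);
-admin.enableTable(tn);
-----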
-
-
-=== Read API and Usage
-
-==== Shell
-
-You can do reads in the shell with Consistency.TIMELINE semantics as follows:
-
-[source]
-----
-hbase(main):001:0> get 't1','r6', {CONSISTENCY => "TIMELINE"}
-----
-
-You can simulate a region server pausing or becoming unavailable and do a read from the secondary replica:
-
-[source,bash]
-----
-$ kill -STOP <region server pid>
-
-hbase(main):001:0> get 't1','r6', {CONSISTENCY => "TIMELINE"}
-----
-
-Using scans is similar:
-
-[source]
-----
-hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
-----
-
-==== Java
-
-You can set the consistency for Gets and Scans and do requests as follows.
-
-[source,java]
-----
-Get get = new Get(row);
-get.setConsistency(Consistency.TIMELINE);
-...
-Result result = table.get(get);
-----
-
-You can also pass multiple gets:
-
-[source,java]
-----
-Get get1 = new Get(row);
-get1.setConsistency(Consistency.TIMELINE);
-...
-ArrayList<Get> gets = new ArrayList<>();
-gets.add(get1);
-...
-Result[] results = table.get(gets);
-----
-
-And Scans:
-
-[source,java]
-----
-Scan scan = new Scan();
-scan.setConsistency(Consistency.TIMELINE);
-...
-ResultScanner scanner = table.getScanner(scan);
-----
-
-You can inspect whether the result came from the primary region or not by calling the `Result.isStale()` method:
-
-[source,java]
-----
-Result result = table.get(get);
-if (result.isStale()) {
- ...
-}
-----
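-
-A stale result means the read was served by a secondary replica and may lag the primary. One possible pattern -- a hedged sketch, not the only way to react -- is to re-issue the read with `Consistency.STRONG` when the application needs the freshest value, trading latency for freshness:
-
-[source,java]
-----
-Get get = new Get(row);
-get.setConsistency(Consistency.TIMELINE);
-Result result = table.get(get);
-if (result.isStale()) {
-  // Served by a secondary replica; optionally re-read from the primary
-  // when the application requires the latest data.
-  Get strongGet = new Get(row);
-  strongGet.setConsistency(Consistency.STRONG);
-  result = table.get(strongGet);
-}
-----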
-
-=== Resources
-
-. More information about the design and implementation can be found at the jira issue: link:https://issues.apache.org/jira/browse/HBASE-10070[HBASE-10070]
-. HBaseCon 2014 talk: link:https://hbase.apache.org/www.hbasecon.com/#2014-PresentationsRecordings[HBase Read High Availability Using Timeline-Consistent Region Replicas] also contains some details and link:http://www.slideshare.net/enissoz/hbase-high-availability-for-reads-with-time[slides].
-
-ifdef::backend-docbook[]
-[index]
-== Index
-// Generated automatically by the DocBook toolchain.
-endif::backend-docbook[]
diff --git a/src/main/asciidoc/_chapters/asf.adoc b/src/main/asciidoc/_chapters/asf.adoc
deleted file mode 100644
index 18cf95a9696d..000000000000
--- a/src/main/asciidoc/_chapters/asf.adoc
+++ /dev/null
@@ -1,47 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[appendix]
-[[asf]]
-== HBase and the Apache Software Foundation
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-:toc: left
-:source-language: java
-
-HBase is a project in the Apache Software Foundation and as such there are responsibilities to the ASF to ensure a healthy project.
-
-[[asf.devprocess]]
-=== ASF Development Process
-
-See the link:https://www.apache.org/dev/#committers[Apache Development Process page] for all sorts of information on how the ASF is structured (e.g., PMC, committers, contributors), to tips on contributing and getting involved, and how open-source works at ASF.
-
-[[asf.reporting]]
-=== ASF Board Reporting
-
-Once a quarter, each project in the ASF portfolio submits a report to the ASF board.
-This is done by the HBase project lead and the committers.
-See link:https://www.apache.org/foundation/board/reporting[ASF board reporting] for more information.
-
-:numbered:
diff --git a/src/main/asciidoc/_chapters/case_studies.adoc b/src/main/asciidoc/_chapters/case_studies.adoc
deleted file mode 100644
index b021aa204bf7..000000000000
--- a/src/main/asciidoc/_chapters/case_studies.adoc
+++ /dev/null
@@ -1,170 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[[casestudies]]
-= Apache HBase Case Studies
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-[[casestudies.overview]]
-== Overview
-
-This chapter describes a variety of performance and troubleshooting case studies that can provide a useful blueprint for diagnosing Apache HBase cluster issues.
-
-For more information on Performance and Troubleshooting, see <> and <>.
-
-[[casestudies.schema]]
-== Schema Design
-
-See the schema design case studies here: <>
-
-[[casestudies.perftroub]]
-== Performance/Troubleshooting
-
-[[casestudies.slownode]]
-=== Case Study #1 (Performance Issue On A Single Node)
-
-==== Scenario
-
-Following a scheduled reboot, one data node began exhibiting unusual behavior.
-Routine MapReduce jobs run against HBase tables which regularly completed in five or six minutes began taking 30 or 40 minutes to finish.
-These jobs were consistently found to be waiting on map and reduce tasks assigned to the troubled data node (e.g., the slow map tasks all had the same Input Split). The situation came to a head during a distributed copy, when the copy was severely prolonged by the lagging node.
-
-==== Hardware
-
-.Datanodes:
-* Two 12-core processors
-* Six Enterprise SATA disks
-* 24GB of RAM
-* Two bonded gigabit NICs
-
-.Network:
-* 10 Gigabit top-of-rack switches
-* 20 Gigabit bonded interconnects between racks.
-
-==== Hypotheses
-
-===== HBase "Hot Spot" Region
-
-We hypothesized that we were experiencing a familiar point of pain: a "hot spot" region in an HBase table, where uneven key-space distribution can funnel a huge number of requests to a single HBase region, bombarding the RegionServer process and causing slow response times.
-Examination of the HBase Master status page showed that the number of HBase requests to the troubled node was almost zero.
-Further, examination of the HBase logs showed that there were no region splits, compactions, or other region transitions in progress.
-This effectively ruled out a "hot spot" as the root cause of the observed slowness.
-
-===== HBase Region With Non-Local Data
-
-Our next hypothesis was that one of the MapReduce tasks was requesting data from HBase that was not local to the DataNode, thus forcing HDFS to request data blocks from other servers over the network.
-Examination of the DataNode logs showed that there were very few blocks being requested over the network, indicating that the HBase region was correctly assigned, and that the majority of the necessary data was located on the node.
-This ruled out the possibility of non-local data causing a slowdown.
-
-===== Excessive I/O Wait Due To Swapping Or An Over-Worked Or Failing Hard Disk
-
-After concluding that Hadoop and HBase were not likely to be the culprits, we moved on to troubleshooting the DataNode's hardware.
-Java, by design, will periodically scan its entire memory space to do garbage collection.
-If system memory is heavily overcommitted, the Linux kernel may enter a vicious cycle, using up all of its resources swapping Java heap back and forth from disk to RAM as Java tries to run garbage collection.
-Further, a failing hard disk will often retry reads and/or writes many times before giving up and returning an error.
-This can manifest as high iowait, as running processes wait for reads and writes to complete.
-Finally, a disk nearing the upper edge of its performance envelope will begin to cause iowait as it informs the kernel that it cannot accept any more data, and the kernel queues incoming data into the dirty write pool in memory.
-However, using `vmstat(1)` and `free(1)`, we could see that no swap was being used, and the amount of disk IO was only a few kilobytes per second.
-
-===== Slowness Due To High Processor Usage
-
-Next, we checked to see whether the system was performing slowly simply due to very high computational load. `top(1)` showed that the system load was higher than normal, but `vmstat(1)` and `mpstat(1)` showed that the amount of processor being used for actual computation was low.
-
-===== Network Saturation (The Winner)
-
-Since neither the disks nor the processors were being utilized heavily, we moved on to the performance of the network interfaces.
-The DataNode had two gigabit ethernet adapters, bonded to form an active-standby interface. `ifconfig(8)` showed some anomalies, namely interface errors, overruns, and framing errors.
-While not unheard of, these kinds of errors are exceedingly rare on modern hardware which is operating as it should:
-
-----
-
-$ /sbin/ifconfig bond0
-bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
-inet addr:10.x.x.x Bcast:10.x.x.255 Mask:255.255.255.0
-UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
-RX packets:2990700159 errors:12 dropped:0 overruns:1 frame:6 <--- Look Here! Errors!
-TX packets:3443518196 errors:0 dropped:0 overruns:0 carrier:0
-collisions:0 txqueuelen:0
-RX bytes:2416328868676 (2.4 TB) TX bytes:3464991094001 (3.4 TB)
-----
-
-These errors immediately led us to suspect that one or more of the ethernet interfaces might have negotiated the wrong line speed.
-This was confirmed both by running an ICMP ping from an external host and observing round-trip times in excess of 700ms, and by running `ethtool(8)` on the members of the bond interface and discovering that the active interface was operating at 100Mb/s, full duplex.
-
-----
-
-$ sudo ethtool eth0
-Settings for eth0:
-Supported ports: [ TP ]
-Supported link modes: 10baseT/Half 10baseT/Full
- 100baseT/Half 100baseT/Full
- 1000baseT/Full
-Supports auto-negotiation: Yes
-Advertised link modes: 10baseT/Half 10baseT/Full
- 100baseT/Half 100baseT/Full
- 1000baseT/Full
-Advertised pause frame use: No
-Advertised auto-negotiation: Yes
-Link partner advertised link modes: Not reported
-Link partner advertised pause frame use: No
-Link partner advertised auto-negotiation: No
-Speed: 100Mb/s <--- Look Here! Should say 1000Mb/s!
-Duplex: Full
-Port: Twisted Pair
-PHYAD: 1
-Transceiver: internal
-Auto-negotiation: on
-MDI-X: Unknown
-Supports Wake-on: umbg
-Wake-on: g
-Current message level: 0x00000003 (3)
-Link detected: yes
-----
-
-In normal operation, the ICMP ping round-trip time should be around 20ms, and the interface speed and duplex should read "1000Mb/s" and "Full", respectively.
-
-==== Resolution
-
-After determining that the active ethernet adapter was running at the incorrect speed, we used the `ifenslave(8)` command to make the standby interface the active interface, which yielded an immediate improvement in MapReduce performance and a 10x improvement in network throughput.
-
-On the next trip to the datacenter, we determined that the line speed issue was ultimately caused by a bad network cable, which was replaced.
-
-[[casestudies.perf.1]]
-=== Case Study #2 (Performance Research 2012)
-
-Investigation results of a self-described "we're not sure what's wrong, but it seems slow" problem. http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html
-
-[[casestudies.perf.2]]
-=== Case Study #3 (Performance Research 2010)
-
-Investigation results of general cluster performance from 2010.
-Although this research is on an older version of the codebase, this writeup is still very useful in terms of approach. http://hstack.org/hbase-performance-testing/
-
-[[casestudies.max.transfer.threads]]
-=== Case Study #4 (max.transfer.threads Config)
-
-Case study of configuring `max.transfer.threads` (previously known as `xcievers`) and diagnosing errors from misconfigurations. http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html
-
-See also <>.
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
deleted file mode 100644
index 91a596d0addd..000000000000
--- a/src/main/asciidoc/_chapters/community.adoc
+++ /dev/null
@@ -1,107 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[[community]]
-= Community
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-== Decisions
-
-.Feature Branches
-
-Feature Branches are easy to make.
-You do not have to be a committer to make one.
-Just request on the developer's mailing list that the name of your branch be added to JIRA, and a committer will add it for you.
-Thereafter you can file issues against your feature branch in Apache HBase JIRA.
-You keep your code elsewhere -- it should be public so it can be observed -- and you can update the dev mailing list on progress.
-When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134eeee009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
- about large feature dev branches]
-
-[[hbase.fix.version.in.jira]]
-.How to set fix version in JIRA on issue resolve
-
-Here is how we agreed to set versions in JIRA when we resolve an issue.
-If master is going to be 2.0.0, and branch-1 1.4.0 then:
-
-* Commit only to master: Mark with 2.0.0
-* Commit to branch-1 and master: Mark with 2.0.0, and 1.4.0
-* Commit to branch-1.3, branch-1, and master: Mark with 2.0.0, 1.4.0, and 1.3.x
-* Commit site fixes: no version
-
-[[hbase.when.to.close.jira]]
-.Policy on when to set a RESOLVED JIRA as CLOSED
-
-We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
-
-[[no.permanent.state.in.zk]]
-.Only transient state in ZooKeeper!
-
-You should be able to kill the data in ZooKeeper and HBase should ride over it, recreating the zk content as it goes.
-This is an old adage around these parts.
-We just made note of it now.
-We also are currently in violation of this basic tenet -- replication at least keeps permanent state in zk -- but we are working to undo this breaking of a golden rule.
-
-[[community.roles]]
-== Community Roles
-
-=== Release Managers
-
-Each maintained release branch has a release manager, who volunteers to coordinate the backporting of new features and bug fixes to that release.
-The release managers are link:https://hbase.apache.org/team-list.html[committers].
-If you would like your feature or bug fix to be included in a given release, communicate with that release manager.
-If this list goes out of date or you can't reach the listed person, reach out to someone else on the list.
-
-NOTE: End-of-life releases are not included in this list.
-
-.Release Managers
-[cols="1,1", options="header"]
-|===
-| Release
-| Release Manager
-
-| 1.3
-| Mikhail Antonov
-
-| 1.4
-| Andrew Purtell
-
-| 2.2
-| Guanghao Zhang
-
-| 2.3
-| Nick Dimiduk
-
-|===
-
-[[hbase.commit.msg.format]]
-== Commit Message format
-
-We agreed to the following Git commit message format:
-[source]
-----
-HBASE-xxxxx <title>. (<contributor>)
-----
-If the person making the commit is the contributor, leave off the '(<contributor>)' element.
diff --git a/src/main/asciidoc/_chapters/compression.adoc b/src/main/asciidoc/_chapters/compression.adoc
deleted file mode 100644
index 5a0259e502de..000000000000
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ /dev/null
@@ -1,650 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[appendix]
-[[compression]]
-== Compression and Data Block Encoding In HBase(((Compression,Data BlockEncoding)))
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-NOTE: Codecs mentioned in this section are for encoding and decoding data blocks or row keys.
-For information about replication codecs, see <>.
-
-HBase supports several different compression algorithms which can be enabled on a ColumnFamily.
-Data block encoding attempts to limit duplication of information in keys, taking advantage of some of the fundamental designs and patterns of HBase, such as sorted row keys and the schema of a given table.
-Compressors reduce the size of large, opaque byte arrays in cells, and can significantly reduce the storage space needed to store uncompressed data.
-
-Compressors and data block encoding can be used together on the same ColumnFamily.
-
-.Changes Take Effect Upon Compaction
-If you change compression or encoding for a ColumnFamily, the changes take effect during compaction.
-
-Some codecs take advantage of capabilities built into Java, such as GZip compression.
-Others rely on native libraries. Native libraries may be available via codec dependencies installed into
-HBase's library directory, or, if you are utilizing Hadoop codecs, as part of Hadoop. Hadoop codecs
-typically have a native code component so follow instructions for installing Hadoop native binary
-support at <>.
-
-This section discusses common codecs that are used and tested with HBase.
-
-No matter what codec you use, be sure to test that it is installed correctly and is available on all nodes in your cluster.
-Extra operational steps may be necessary to be sure that codecs are available on newly-deployed nodes.
-You can use the <> utility to check that a given codec is correctly installed.
-
-To configure HBase to use a compressor, see <>.
-To enable a compressor for a ColumnFamily, see <>.
-To enable data block encoding for a ColumnFamily, see <>.
-
-.Block Compressors
-* NONE
-+
-This compression type constant selects no compression, and is the default.
-* BROTLI
-+
-https://en.wikipedia.org/wiki/Brotli[Brotli] is a generic-purpose lossless compression algorithm
-that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman
-coding, and 2nd order context modeling, with a compression ratio comparable to the best currently
-available general-purpose compression methods. It is similar in speed to GZ but offers denser
-compression.
-* BZIP2
-+
-https://en.wikipedia.org/wiki/Bzip2[Bzip2] compresses files using the Burrows-Wheeler block
-sorting text compression algorithm and Huffman coding. Compression is generally considerably
-better than that achieved by the dictionary- (LZ-) based compressors, but both compression and
-decompression can be slow in comparison to other options.
-* GZ
-+
-gzip is based on the https://en.wikipedia.org/wiki/Deflate[DEFLATE] algorithm, which is a
-combination of LZ77 and Huffman coding. It is universally available in the Java Runtime
-Environment so is a good lowest common denominator option. However in comparison to more modern
-algorithms like Zstandard it is quite slow.
-* LZ4
-+
-https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)[LZ4] is a lossless data compression
-algorithm that is focused on compression and decompression speed. It belongs to the LZ77 family
-of compression algorithms, like Brotli, DEFLATE, Zstandard, and others. In our microbenchmarks
-LZ4 is the fastest option for both compression and decompression in that family, and is our
-universally recommended option.
-* LZMA
-+
-https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Markov_chain_algorithm[LZMA] is a
-dictionary compression scheme somewhat similar to the LZ77 algorithm that achieves very high
-compression ratios with a computationally expensive predictive model and variable size
-compression dictionary, while still maintaining decompression speed similar to other commonly used
-compression algorithms. LZMA is superior to all other options in general compression ratio but as
-a compressor it can be extremely slow, especially when configured to operate at higher levels of
-compression.
-* LZO
-+
-https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Oberhumer[LZO] is another LZ-variant
-data compression algorithm, with an implementation focused on decompression speed. It is almost
-but not quite as fast as LZ4.
-* SNAPPY
-+
-https://en.wikipedia.org/wiki/Snappy_(compression)[Snappy] is based on ideas from LZ77 but is
-optimized for very high compression speed, achieving only a "reasonable" compression in trade.
-It is as fast as LZ4 but does not compress quite as well. We offer a pure Java Snappy codec
-that can be used instead of GZ as the universally available option for any Java runtime on any
-hardware architecture.
-* ZSTD
-+
-https://en.wikipedia.org/wiki/Zstd[Zstandard] combines a dictionary-matching stage (LZ77) with
-a large search window and a fast entropy coding stage, using both Finite State Entropy and
-Huffman coding. Compression speed can vary by a factor of 20 or more between the fastest and
-slowest levels, while decompression is uniformly fast, varying by less than 20% between the
-fastest and slowest levels.
-+
-ZStandard is the most flexible of the available compression codec options, offering a compression
-ratio similar to LZ4 at level 1 (but with slightly less performance), compression ratios
-comparable to DEFLATE at mid levels (but with better performance), and LZMA-alike dense
-compression (and LZMA-alike compression speeds) at high levels; while providing universally fast
-decompression.
-
-.Data Block Encoding Types
-Prefix::
- Often, keys are very similar. Specifically, keys often share a common prefix and only differ near the end. For instance, one key might be `RowKey:Family:Qualifier0` and the next key might be `RowKey:Family:Qualifier1`.
- +
-In Prefix encoding, an extra column is added which holds the length of the prefix shared between the current key and the previous key.
-Assuming the first key here is totally different from the key before, its prefix length is 0.
-+
-The second key's prefix length is `23`, since they have the first 23 characters in common.
-+
-Obviously if the keys tend to have nothing in common, Prefix will not provide much benefit.
-+
-The following image shows a hypothetical ColumnFamily with no data block encoding.
-+
-.ColumnFamily with No Encoding
-image::data_block_no_encoding.png[]
-+
-Here is the same data with prefix data encoding.
-+
-.ColumnFamily with Prefix Encoding
-image::data_block_prefix_encoding.png[]
-
-Diff::
- Diff encoding expands upon Prefix encoding.
- Instead of considering the key sequentially as a monolithic series of bytes, each key field is split so that each part of the key can be compressed more efficiently.
-+
-Two new fields are added: timestamp and type.
-+
-If the ColumnFamily is the same as the previous row, it is omitted from the current row.
-+
-If the key length, value length or type are the same as the previous row, the field is omitted.
-+
-In addition, for increased compression, the timestamp is stored as a Diff from the previous row's timestamp, rather than being stored in full.
-Given the two row keys in the Prefix example, and given an exact match on timestamp and the same type, neither the value length, or type needs to be stored for the second row, and the timestamp value for the second row is just 0, rather than a full timestamp.
-+
-Diff encoding is disabled by default because writing and scanning are slower but more data is cached.
-+
-This image shows the same ColumnFamily from the previous images, with Diff encoding.
-+
-.ColumnFamily with Diff Encoding
-image::data_block_diff_encoding.png[]
-
-Fast Diff::
- Fast Diff works similarly to Diff, but uses a faster implementation. It also adds another field which stores a single bit to track whether the data itself is the same as the previous row. If it is, the data is not stored again.
-+
-Fast Diff is the recommended codec to use if you have long keys or many columns.
-+
-The data format is nearly identical to Diff encoding, so there is not an image to illustrate it.
-
-
-Prefix Tree::
- Prefix tree encoding was introduced as an experimental feature in HBase 0.96.
- It provides similar memory savings to the Prefix, Diff, and Fast Diff encoder, but provides faster random access at a cost of slower encoding speed.
- It was removed in hbase-2.0.0. It was a good idea but little uptake. If interested in reviving this effort, write the hbase dev list.
-
-[[data.block.encoding.types]]
-=== Which Compressor or Data Block Encoder To Use
-
-The compression or codec type to use depends on the characteristics of your data. Choosing the wrong type could cause your data to take more space rather than less, and can have performance implications.
-
-In general, you need to weigh your options between smaller size and faster compression/decompression. Following are some general guidelines, expanded from a discussion at link:https://lists.apache.org/thread.html/481e67a61163efaaf4345510447a9244871a8d428244868345a155ff%401378926618%40%3Cdev.hbase.apache.org%3E[Documenting Guidance on compression and codecs].
-
-* In most cases, enabling LZ4 or Snappy by default is a good choice, because they have a low
- performance overhead and provide reasonable space savings. A fast compression algorithm almost
- always improves overall system performance by trading some increased CPU usage for better I/O
- efficiency.
-* If the values are large (and not pre-compressed, such as images), use a data block compressor.
-* For [firstterm]_cold data_, which is accessed infrequently, depending on your use case, it might
- make sense to opt for Zstandard at its higher compression levels, or LZMA, especially for high
- entropy binary data, or Brotli for data similar in characteristics to web data. Bzip2 might also
- be a reasonable option but Zstandard is very likely to offer superior decompression speed.
-* For [firstterm]_hot data_, which is accessed frequently, you almost certainly want only LZ4,
- Snappy, LZO, or Zstandard at a low compression level. These options will not provide as high of
- a compression ratio but will in trade not unduly impact system performance.
-* If you have long keys (compared to the values) or many columns, use a prefix encoder.
-  FAST_DIFF is recommended; see the sketch after this list.
-* If enabling WAL value compression, consider LZ4 or SNAPPY compression, or Zstandard at
- level 1. Reading and writing the WAL is performance critical. That said, the I/O
- savings of these compression options can improve overall system performance.
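-
-As a concrete illustration of combining these recommendations, the following is a hedged sketch that creates a table whose column family uses LZ4 block compression together with FAST_DIFF data block encoding via the Java Admin API. The table and family names, and the open `Connection` named `connection`, are placeholders; the same result can be achieved in the HBase shell with the `COMPRESSION` and `DATA_BLOCK_ENCODING` attributes shown later in this chapter.
-
-[source,java]
-----
-Admin admin = connection.getAdmin();
-HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("example_table"));
-HColumnDescriptor hcd = new HColumnDescriptor("cf");
-hcd.setCompressionType(Compression.Algorithm.LZ4);     // block compressor
-hcd.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // key encoding
-htd.addFamily(hcd);
-admin.createTable(htd);
-----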
-
-[[hadoop.native.lib]]
-=== Making use of Hadoop Native Libraries in HBase
-
-The Hadoop shared library has a bunch of facilities, including compression libraries and fast crc'ing -- hardware crc'ing if your chipset supports it.
-To make this facility available to HBase, do the following. HBase/Hadoop will fall back to using alternatives if it cannot find the native library
-versions -- or fail outright if you are asking for an explicit compressor and there is no alternative available.
-
-First, check your Hadoop installation. Fix this message if you see it when starting Hadoop processes:
-----
-16/02/09 22:40:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-----
-It means Hadoop is not properly pointing at its native libraries, or the native libs were compiled for another platform.
-Fix this first.
-
-Then if you see the following in your HBase logs, you know that HBase was unable to locate the Hadoop native libraries:
-[source]
-----
-2014-08-07 09:26:20,139 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-----
-If the libraries loaded successfully, the WARN message does not show. Usually this means you are good to go but read on.
-
-Let's presume your Hadoop shipped with a native library that suits the platform you are running HBase on.
-To check if the Hadoop native library is available to HBase, run the following tool (available in Hadoop 2.1 and greater):
-[source]
-----
-$ ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
-2014-08-26 13:15:38,717 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-Native library checking:
-hadoop: false
-zlib: false
-snappy: false
-lz4: false
-bzip2: false
-2014-08-26 13:15:38,863 INFO [main] util.ExitUtil: Exiting with status 1
-----
-The above shows that the native hadoop library is not available in the HBase context.
-
-The above NativeLibraryChecker tool may come back saying all is hunky-dory
--- i.e. all libs show 'true', that they are available -- but follow the below
-prescription anyway to ensure the native libs are available in the HBase context
-when it goes to use them.
-
-To fix the above, either copy the Hadoop native libraries locally or symlink to them if the Hadoop and HBase installs are adjacent in the filesystem.
-You could also point at their location by setting the `LD_LIBRARY_PATH` environment variable in your hbase-env.sh.
-
-Where the JVM looks to find native libraries is "system dependent" (see `java.lang.System#loadLibrary(name)`). On Linux, by default, it is going to look in _lib/native/PLATFORM_ where `PLATFORM` is the label for the platform your HBase is installed on.
-On a local linux machine, it seems to be the concatenation of the java properties `os.name` and `os.arch` followed by whether 32 or 64 bit.
-HBase on startup prints out all of the java system properties so find the os.name and os.arch in the log.
-For example:
-[source]
-----
-...
-2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
-2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
-...
-----
-So in this case, the PLATFORM string is `Linux-amd64-64`.
-Copying the Hadoop native libraries or symlinking at _lib/native/Linux-amd64-64_ will ensure they are found.
-Do a rolling restart after you have made this change.
-
-Here is an example of how you would set up the symlinks.
-Let the hadoop and hbase installs be in your home directory. Assume your hadoop native libs
-are at ~/hadoop/lib/native. Assume you are on a Linux-amd64-64 platform. In this case,
-you would do the following to link the hadoop native lib so hbase could find them.
-----
-...
-$ mkdir -p ~/hbase/lib/native
-$ cd ~/hbase/lib/native/
-$ ln -s ~/hadoop/lib/native Linux-amd64-64
-$ ls -la
-# Linux-amd64-64 -> /home/USER/hadoop/lib/native
-...
-----
-
-If you see PureJavaCrc32C in a stack trace or if you see something like the below in a perf trace, then native is not working; you are using the java CRC functions rather than native:
-----
- 5.02% perf-53601.map [.] Lorg/apache/hadoop/util/PureJavaCrc32C;.update
-----
-See link:https://issues.apache.org/jira/browse/HBASE-11927[HBASE-11927 Use Native Hadoop Library for HFile checksum (And flip default from CRC32 to CRC32C)],
-for more on native checksumming support. See in particular the release note for how to check your hardware to see if your processor has support for hardware CRCs.
-Or check out the Apache link:https://blogs.apache.org/hbase/entry/saving_cpu_using_native_hadoop[Checksums in HBase] blog post.
-
-Here is an example of how to point at the Hadoop libs with the `LD_LIBRARY_PATH` environment variable:
-[source]
-----
-$ LD_LIBRARY_PATH=~/hadoop-2.5.0-SNAPSHOT/lib/native ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
-2014-08-26 13:42:49,332 INFO [main] bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
-2014-08-26 13:42:49,337 INFO [main] zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
-Native library checking:
-hadoop: true /home/stack/hadoop-2.5.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
-zlib: true /lib64/libz.so.1
-snappy: true /usr/lib64/libsnappy.so.1
-lz4: true revision:99
-bzip2: true /lib64/libbz2.so.1
-----
-Set the `LD_LIBRARY_PATH` environment variable in _hbase-env.sh_ when starting your HBase.
-
-=== Compressor Configuration, Installation, and Use
-
-[[compressor.install]]
-==== Configure HBase For Compressors
-
-Compression codecs are provided either by HBase compressor modules or by Hadoop's native compression
-support. As described above you choose a compression type in table or column family schema or in
-site configuration using its short label, e.g. _snappy_ for Snappy, or _zstd_ for ZStandard. Which
-codec implementation is dynamically loaded to support what label is configurable by way of site
-configuration.
-
-[options="header"]
-|===
-|Algorithm label|Codec implementation configuration key|Default value
-//----------------------
-|BROTLI|hbase.io.compress.brotli.codec|org.apache.hadoop.hbase.io.compress.brotli.BrotliCodec
-|BZIP2|hbase.io.compress.bzip2.codec|org.apache.hadoop.io.compress.BZip2Codec
-|GZ|hbase.io.compress.gz.codec|org.apache.hadoop.hbase.io.compress.ReusableStreamGzipCodec
-|LZ4|hbase.io.compress.lz4.codec|org.apache.hadoop.io.compress.Lz4Codec
-|LZMA|hbase.io.compress.lzma.codec|org.apache.hadoop.hbase.io.compress.xz.LzmaCodec
-|LZO|hbase.io.compress.lzo.codec|com.hadoop.compression.lzo.LzoCodec
-|SNAPPY|hbase.io.compress.snappy.codec|org.apache.hadoop.io.compress.SnappyCodec
-|ZSTD|hbase.io.compress.zstd.codec|org.apache.hadoop.io.compress.ZStandardCodec
-|===
-
-The available codec implementation options are:
-
-[options="header"]
-|===
-|Label|Codec implementation class|Notes
-//----------------------
-|BROTLI|org.apache.hadoop.hbase.io.compress.brotli.BrotliCodec|
- Implemented with https://github.com/hyperxpro/Brotli4j[Brotli4j]
-|BZIP2|org.apache.hadoop.io.compress.BZip2Codec|Hadoop native codec
-|GZ|org.apache.hadoop.hbase.io.compress.ReusableStreamGzipCodec|
- Requires the Hadoop native GZ codec
-|LZ4|org.apache.hadoop.io.compress.Lz4Codec|Hadoop native codec
-|LZ4|org.apache.hadoop.hbase.io.compress.aircompressor.Lz4Codec|
- Pure Java implementation
-|LZ4|org.apache.hadoop.hbase.io.compress.lz4.Lz4Codec|
- Implemented with https://github.com/lz4/lz4-java[lz4-java]
-|LZMA|org.apache.hadoop.hbase.io.compress.xz.LzmaCodec|
- Implemented with https://tukaani.org/xz/java.html[XZ For Java]
-|LZO|com.hadoop.compression.lzo.LzoCodec|Hadoop native codec,
- requires GPL licensed native dependencies
-|LZO|org.apache.hadoop.io.compress.LzoCodec|Hadoop native codec,
- requires GPL licensed native dependencies
-|LZO|org.apache.hadoop.hbase.io.compress.aircompressor.LzoCodec|
- Pure Java implementation
-|SNAPPY|org.apache.hadoop.io.compress.SnappyCodec|Hadoop native codec
-|SNAPPY|org.apache.hadoop.hbase.io.compress.aircompressor.SnappyCodec|
- Pure Java implementation
-|SNAPPY|org.apache.hadoop.hbase.io.compress.xerial.SnappyCodec|
- Implemented with https://github.com/xerial/snappy-java[snappy-java]
-|ZSTD|org.apache.hadoop.io.compress.ZStandardCodec|Hadoop native codec
-|ZSTD|org.apache.hadoop.hbase.io.compress.aircompressor.ZStdCodec|
- Pure Java implementation, limited to a fixed compression level,
- not data compatible with the Hadoop zstd codec
-|ZSTD|org.apache.hadoop.hbase.io.compress.zstd.ZStdCodec|
- Implemented with https://github.com/luben/zstd-jni[zstd-jni],
- supports all compression levels, supports custom dictionaries
-|===
-
-Specify which codec implementation option you prefer for a given compression algorithm
-in site configuration, like so:
-[source]
-----
-...
-
- hbase.io.compress.lz4.codec
- org.apache.hadoop.hbase.io.compress.lz4.Lz4Codec
-
-...
-----
-
-.Compressor Microbenchmarks
-
-See https://github.com/apurtell/jmh-compression-tests
-
-256MB (258,126,022 bytes exactly) of block data was extracted from two HFiles containing Common
-Crawl data ingested using IntegrationLoadTestCommonCrawl, 2,680 blocks in total. This data was
-processed by each new codec implementation as if the block data were being compressed again for
-write into an HFile, but without writing any data, comparing only the CPU time and resource demand
-of the codec itself. Absolute performance numbers will vary depending on hardware and software
-particulars of your deployment. The relative differences are what are interesting. Measured time
-is the average time in milliseconds required to compress all blocks of the 256MB file. This is
-how long it would take to write the HFile containing these contents, minus the I/O overhead of
-block encoding and actual persistence.
-
-These are the results:
-
-[options="header"]
-|===
-|Codec|Level|Time (milliseconds)|Result (bytes)|Improvement
-//----------------------
-|AirCompressor LZ4|-|349.989 ± 2.835|76,999,408|70.17%
-|AirCompressor LZO|-|334.554 ± 3.243|79,369,805|69.25%
-|AirCompressor Snappy|-|364.153 ± 19.718|80,201,763|68.93%
-|AirCompressor Zstandard|3 (effective)|1108.267 ± 8.969|55,129,189|78.64%
-|Brotli|1|593.107 ± 2.376|58,672,319|77.27%
-|Brotli|3|1345.195 ± 27.327|53,917,438|79.11%
-|Brotli|6|2812.411 ± 25.372|48,696,441|81.13%
-|Brotli|10|74615.936 ± 224.854|44,970,710|82.58%
-|LZ4 (lz4-java)|-|303.045 ± 0.783|76,974,364|70.18%
-|LZMA|1|6410.428 ± 115.065|49,948,535|80.65%
-|LZMA|3|8144.620 ± 152.119|49,109,363|80.97%
-|LZMA|6|43802.576 ± 382.025|46,951,810|81.81%
-|LZMA|9|49821.979 ± 580.110|46,951,810|81.81%
-|Snappy (xerial)|-|360.225 ± 2.324|80,749,937|68.72%
-|Zstd (zstd-jni)|1|654.699 ± 16.839|56,719,994|78.03%
-|Zstd (zstd-jni)|3|839.160 ± 24.906|54,573,095|78.86%
-|Zstd (zstd-jni)|5|1594.373 ± 22.384|52,025,485|79.84%
-|Zstd (zstd-jni)|7|2308.705 ± 24.744|50,651,554|80.38%
-|Zstd (zstd-jni)|9|3659.677 ± 58.018|50,208,425|80.55%
-|Zstd (zstd-jni)|12|8705.294 ± 58.080|49,841,446|80.69%
-|Zstd (zstd-jni)|15|19785.646 ± 278.080|48,499,508|81.21%
-|Zstd (zstd-jni)|18|47702.097 ± 442.670|48,319,879|81.28%
-|Zstd (zstd-jni)|22|97799.695 ± 1106.571|48,212,220|81.32%
-|===
-
-.Compressor Support On the Master
-
-A new configuration setting was introduced in HBase 0.95, to check the Master to determine which data block encoders are installed and configured on it, and assume that the entire cluster is configured the same.
-This option, `hbase.master.check.compression`, defaults to `true`.
-This prevents the situation described in link:https://issues.apache.org/jira/browse/HBASE-6370[HBASE-6370], where a table is created or modified to support a codec that a region server does not support, leading to failures that take a long time to occur and are difficult to debug.
-
-If `hbase.master.check.compression` is enabled, libraries for all desired compressors need to be installed and configured on the Master, even if the Master does not run a region server.
-
-.Install GZ Support Via Native Libraries
-
-HBase uses Java's built-in GZip support unless the native Hadoop libraries are available on the CLASSPATH.
-The recommended way to add libraries to the CLASSPATH is to set the environment variable `HBASE_LIBRARY_PATH` for the user running HBase.
-If native libraries are not available and Java's GZIP is used, `Got brand-new compressor` reports will be present in the logs.
-See <>.
-
-[[lzo.compression]]
-.Install Hadoop Native LZO Support
-
-HBase cannot ship with the Hadoop native LZO codec because of incompatibility between HBase, which uses an Apache Software License (ASL), and LZO, which uses a GPL license.
-See the link:https://github.com/twitter/hadoop-lzo/blob/master/README.md[Hadoop-LZO at Twitter] for information on configuring LZO support for HBase.
-
-If you depend upon LZO compression, consider using the pure Java and ASL licensed
-AirCompressor LZO codec option instead of the Hadoop native default, or configure your
-RegionServers to fail to start if native LZO support is not available.
-See <>.
-
-[[lz4.compression]]
-.Configure Hadoop Native LZ4 Support
-
-LZ4 support is bundled with Hadoop and is the default LZ4 codec implementation.
-It is not required that you make use of the Hadoop LZ4 codec. Our LZ4 codec implemented
-with lz4-java offers superior performance, and the AirCompressor LZ4 codec offers a
-pure Java option for use where native support is not available.
-
-That said, if you prefer the Hadoop option, make sure the hadoop shared library
-(libhadoop.so) is accessible when you start HBase.
-After configuring your platform (see <>), you can
-make a symbolic link from HBase to the native Hadoop libraries. This assumes the two
-software installs are colocated. For example, if my 'platform' is Linux-amd64-64:
-[source,bourne]
-----
-$ cd $HBASE_HOME
-$ mkdir lib/native
-$ ln -s $HADOOP_HOME/lib/native lib/native/Linux-amd64-64
-----
-Use the compression tool to check that LZ4 is installed on all nodes.
-Start up (or restart) HBase.
-Afterward, you can create and alter tables to enable LZ4 as a compression codec:
-----
-hbase(main):003:0> alter 'TestTable', {NAME => 'info', COMPRESSION => 'LZ4'}
-----
-
-[[snappy.compression.installation]]
-.Install Hadoop native Snappy Support
-
-Snappy support is bundled with Hadoop and is the default Snappy codec implementation.
-It is not required that you make use of the Hadoop Snappy codec. Our Snappy codec
-implemented with Xerial Snappy offers superior performance, and the AirCompressor
-Snappy codec offers a pure Java option for use where native support is not available.
-
-That said, if you prefer the Hadoop codec option, you can install Snappy binaries (for
-instance, by using +yum install snappy+ on CentOS) or build Snappy from source.
-After installing Snappy, search for the shared library, which will be called _libsnappy.so.X_ where X is a number.
-If you built from source, copy the shared library to a known location on your system, such as _/opt/snappy/lib/_.
-
-In addition to the Snappy library, HBase also needs access to the Hadoop shared library, which will be called something like _libhadoop.so.X.Y_, where X and Y are both numbers.
-Make note of the location of the Hadoop library, or copy it to the same location as the Snappy library.
-
-[NOTE]
-====
-The Snappy and Hadoop libraries need to be available on each node of your cluster.
-See <> to find out how to test that this is the case.
-
-See <> to configure your RegionServers to fail to start if a given compressor is not available.
-====
-
-Each of these library locations need to be added to the environment variable `HBASE_LIBRARY_PATH` for the operating system user that runs HBase.
-You need to restart the RegionServer for the changes to take effect.
-
-[[compression.test]]
-.CompressionTest
-
-You can use the CompressionTest tool to verify that your compressor is available to HBase:
-
-----
-
- $ hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://host/path/to/hbase snappy
-----
-
-[[hbase.regionserver.codecs]]
-.Enforce Compression Settings On a RegionServer
-
-You can configure a RegionServer so that it will fail to restart if compression is configured incorrectly, by adding the option hbase.regionserver.codecs to the _hbase-site.xml_, and setting its value to a comma-separated list of codecs that need to be available.
-For example, if you set this property to `lzo,gz`, the RegionServer would fail to start unless both compressors were available.
-This would prevent a new server from being added to the cluster without having codecs configured properly.
-
-[[changing.compression]]
-==== Enable Compression On a ColumnFamily
-
-To enable compression for a ColumnFamily, use an `alter` command.
-You do not need to re-create the table or copy data.
-If you are changing codecs, be sure the old codec is still available until all the old StoreFiles have been compacted.
-
-.Enabling Compression on a ColumnFamily of an Existing Table using HBaseShell
-----
-hbase> alter 'test', {NAME => 'cf', COMPRESSION => 'GZ'}
-----
-
-.Creating a New Table with Compression On a ColumnFamily
-----
-hbase> create 'test2', { NAME => 'cf2', COMPRESSION => 'SNAPPY' }
-----
-
-.Verifying a ColumnFamily's Compression Settings
-----
-
-hbase> describe 'test'
-DESCRIPTION ENABLED
- 'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE false
- ', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
- VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERSIONS
- => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'fa
- lse', BLOCKSIZE => '65536', IN_MEMORY => 'false', B
- LOCKCACHE => 'true'}
-1 row(s) in 0.1070 seconds
-----
-
-==== Testing Compression Performance
-
-HBase includes a tool called LoadTestTool which provides mechanisms to test your compression performance.
-You must specify either `-write` or `-update-read` as your first parameter, and if you do not specify another parameter, usage advice is printed for each option.
-
-.+LoadTestTool+ Usage
-----
-$ bin/hbase org.apache.hadoop.hbase.util.LoadTestTool -h
-usage: bin/hbase org.apache.hadoop.hbase.util.LoadTestTool
-Options:
- -batchupdate Whether to use batch as opposed to separate
- updates for every column in a row
- -bloom Bloom filter type, one of [NONE, ROW, ROWCOL]
- -compression Compression type, one of [LZO, GZ, NONE, SNAPPY,
- LZ4]
- -data_block_encoding Encoding algorithm (e.g. prefix compression) to
- use for data blocks in the test column family, one
- of [NONE, PREFIX, DIFF, FAST_DIFF, ROW_INDEX_V1].
- -encryption Enables transparent encryption on the test table,
- one of [AES]
- -generator The class which generates load for the tool. Any
- args for this class can be passed as colon
- separated after class name
- -h,--help Show usage
- -in_memory Tries to keep the HFiles of the CF inmemory as far
- as possible. Not guaranteed that reads are always
- served from inmemory
- -init_only Initialize the test table only, don't do any
- loading
- -key_window The 'key window' to maintain between reads and
- writes for concurrent write/read workload. The
- default is 0.
- -max_read_errors The maximum number of read errors to tolerate
- before terminating all reader threads. The default
- is 10.
- -multiput Whether to use multi-puts as opposed to separate
- puts for every column in a row
- -num_keys The number of keys to read/write
- -num_tables A positive integer number. When a number n is
- speicfied, load test tool will load n table
- parallely. -tn parameter value becomes table name
- prefix. Each table name is in format
- _1..._n
- -read [:<#threads=20>]
- -regions_per_server A positive integer number. When a number n is
- specified, load test tool will create the test
- table with n regions per server
- -skip_init Skip the initialization; assume test table already
- exists
- -start_key The first key to read/write (a 0-based index). The
- default value is 0.
- -tn The name of the table to read or write
- -update [:<#threads=20>][:<#whether to
- ignore nonce collisions=0>]
- -write :[:<#threads=20>]
- -zk ZK quorum as comma-separated host names without
- port numbers
- -zk_root name of parent znode in zookeeper
-----
-
-.Example Usage of LoadTestTool
-----
-$ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 1:10:100 -num_keys 1000000
- -read 100:30 -num_tables 1 -data_block_encoding NONE -tn load_test_tool_NONE
-----
-
-[[data.block.encoding.enable]]
-=== Enable Data Block Encoding
-
-Codecs are built into HBase so no extra configuration is needed.
-Codecs are enabled on a table by setting the `DATA_BLOCK_ENCODING` property.
-Disable the table before altering its DATA_BLOCK_ENCODING setting.
-Following is an example using HBase Shell:
-
-.Enable Data Block Encoding On a Table
-----
-hbase> alter 'test', { NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
-Updating all regions with the new schema...
-0/1 regions updated.
-1/1 regions updated.
-Done.
-0 row(s) in 2.2820 seconds
-----
-
-.Verifying a ColumnFamily's Data Block Encoding
-----
-hbase> describe 'test'
-DESCRIPTION ENABLED
- 'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST true
- _DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE =>
- '0', VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERS
- IONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =
- > 'false', BLOCKSIZE => '65536', IN_MEMORY => 'fals
- e', BLOCKCACHE => 'true'}
-1 row(s) in 0.0650 seconds
-----
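-
-The same change can also be made through the Java Admin API. The following is a hedged sketch using the older `HColumnDescriptor` API and the disable/alter/enable pattern described above; the table and family names, and the open `Connection` named `connection`, are placeholders.
-
-[source,java]
-----
-Admin admin = connection.getAdmin();
-TableName tn = TableName.valueOf("test");
-admin.disableTable(tn);
-HTableDescriptor htd = admin.getTableDescriptor(tn);
-HColumnDescriptor hcd = htd.getFamily(Bytes.toBytes("cf"));
-hcd.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
-admin.modifyColumn(tn, hcd);  // update the column family schema
-admin.enableTable(tn);
-----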
-
-:numbered:
-
-ifdef::backend-docbook[]
-[index]
-== Index
-// Generated automatically by the DocBook toolchain.
-endif::backend-docbook[]
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
deleted file mode 100644
index 353c062d4f47..000000000000
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ /dev/null
@@ -1,1397 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[[configuration]]
-= Apache HBase Configuration
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-This chapter expands upon the <> chapter to further explain configuration of
-Apache HBase. Please read this chapter carefully, especially the
-<> to ensure that your HBase testing and deployment goes
-smoothly. Familiarize yourself with <> as well.
-
-== Configuration Files
-Apache HBase uses the same configuration system as Apache Hadoop. All configuration files are
-located in the _conf/_ directory, which needs to be kept in sync for each node on your cluster.
-
-.HBase Configuration File Descriptions
-_backup-masters_::
- Not present by default. A plain-text file which lists hosts on which the Master should start a
- backup Master process, one host per line.
-
-_hadoop-metrics2-hbase.properties_::
- Used to connect HBase to Hadoop's Metrics2 framework.
- See the link:https://cwiki.apache.org/confluence/display/HADOOP2/HADOOP-6728-MetricsV2[Hadoop Wiki entry]
- for more information on Metrics2. Contains only commented-out examples by default.
-
-_hbase-env.cmd_ and _hbase-env.sh_::
- Script for Windows and Linux / Unix environments to set up the working environment for HBase,
- including the location of Java, Java options, and other environment variables. The file contains
- many commented-out examples to provide guidance.
-
-_hbase-policy.xml_::
- The default policy configuration file used by RPC servers to make authorization decisions on
- client requests. Only used if HBase <> is enabled.
-
-_hbase-site.xml_::
- The main HBase configuration file.
- This file specifies configuration options which override HBase's default configuration.
- You can view (but do not edit) the default configuration file at _docs/hbase-default.xml_.
- You can also view the entire effective configuration for your cluster (defaults and overrides) in
- the [label]#HBase Configuration# tab of the HBase Web UI.
-
-_log4j.properties_::
- Configuration file for HBase logging via `log4j`.
-
-_regionservers_::
- A plain-text file containing a list of hosts which should run a RegionServer in your HBase cluster.
- By default, this file contains the single entry `localhost`.
- It should contain a list of hostnames or IP addresses, one per line, and should only contain
- `localhost` if each node in your cluster will run a RegionServer on its `localhost` interface.
-
-.Checking XML Validity
-[TIP]
-====
-When you edit XML, it is a good idea to use an XML-aware editor to be sure that your syntax is
-correct and your XML is well-formed. You can also use the `xmllint` utility to check that your XML
-is well-formed. By default, `xmllint` re-flows and prints the XML to standard output. To check for
-well-formedness and only print output if errors exist, use the command `xmllint -noout filename.xml`.
-====
-.Keep Configuration In Sync Across the Cluster
-[WARNING]
-====
-When running in distributed mode, after you make an edit to an HBase configuration, make sure you
-copy the contents of the _conf/_ directory to all nodes of the cluster. HBase will not do this for
-you. Use a configuration management tool for managing and copying the configuration files to your
-nodes. For most configurations, a restart is needed for servers to pick up changes. Dynamic
-configuration is an exception to this; it is described later in this chapter.
-====
-
-[[basic.prerequisites]]
-== Basic Prerequisites
-
-This section lists required services and some required system configuration.
-
-[[java]]
-.Java
-
-HBase runs on the Java Virtual Machine, thus all HBase deployments require a JVM runtime.
-
-The following table summarizes the recommendations of the HBase community with respect to running
-on various Java versions. The icon:check-circle[role="green"] symbol indicates a base level of
-testing and willingness to help diagnose and address issues you might run into; these are the
-expected deployment combinations. An entry of icon:exclamation-circle[role="yellow"]
-means that there may be challenges with this combination, and you should look for more information
-before deciding to pursue this as your deployment strategy. The icon:times-circle[role="red"] means
-this combination does not work; either an older Java version is considered deprecated by the HBase
-community, or this combination is known to not work. For combinations of newer JDK with older HBase
-releases, it's likely there are known compatibility issues that cannot be addressed under our
-compatibility guarantees, making the combination impossible. In some cases, specific guidance on
-limitations (e.g. whether compiling / unit tests work, specific operational issues, etc) are also
-noted. Assume any combination not listed here is considered icon:times-circle[role="red"].
-
-.Long-Term Support JDKs are Recommended
-[WARNING]
-====
-HBase recommends downstream users rely only on JDK releases that are marked as Long-Term Supported
-(LTS), either from the OpenJDK project or vendors. At the time of this writing, the following JDK
-releases are NOT LTS releases and are NOT tested or advocated for use by the Apache HBase
-community: JDK9, JDK10, JDK12, JDK13, and JDK14. Community discussion around this decision is
-recorded on link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264].
-====
-
-.HotSpot vs. OpenJ9
-[TIP]
-====
-At this time, all testing performed by the Apache HBase project runs on the HotSpot variant of the
-JVM. When selecting your JDK distribution, please take this into consideration.
-====
-
-.Java support by release line
-[cols="4*^.^", options="header"]
-|===
-|Java Version
-|HBase 1.3+
-|HBase 2.1+
-|HBase 2.3+
-
-|JDK7
-|icon:check-circle[role="green"]
-|icon:times-circle[role="red"]
-|icon:times-circle[role="red"]
-
-|JDK8
-|icon:check-circle[role="green"]
-|icon:check-circle[role="green"]
-|icon:check-circle[role="green"]
-
-|JDK11
-|icon:times-circle[role="red"]
-|icon:times-circle[role="red"]
-|icon:exclamation-circle[role="yellow"]*
-
-|===
-
-.A Note on JDK11 icon:exclamation-circle[role="yellow"]*
-[WARNING]
-====
-Preliminary support for JDK11 is introduced with HBase 2.3.0. This support is limited to
-compilation and running the full test suite. There are open questions regarding the runtime
-compatibility of JDK11 with Apache ZooKeeper and Apache Hadoop
-(link:https://issues.apache.org/jira/browse/HADOOP-15338[HADOOP-15338]). Significantly, neither
-project has yet released a version with explicit runtime support for JDK11. The remaining known
-issues in HBase are catalogued in
-link:https://issues.apache.org/jira/browse/HBASE-22972[HBASE-22972].
-====
-
-NOTE: You must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy
-mechanism to do this.
-
-[[os]]
-.Operating System Utilities
-ssh::
- HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between
-cluster nodes. Each server in the cluster must be running `ssh` so that the Hadoop and HBase
-daemons can be managed. You must be able to connect to all nodes via SSH, including the local
-node, from the Master as well as any backup Master, using a shared key rather than a password.
-You can see the basic methodology for such a set-up in Linux or Unix systems at
-"<>". If your cluster nodes use OS X, see the section,
-link:https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120730246#RunningHadoopOnOSX10.564-bit(Single-NodeCluster)-SSH:SettingupRemoteDesktopandEnablingSelf-Login[SSH: Setting up Remote Desktop and Enabling Self-Login]
-on the Hadoop wiki.
-
-DNS::
- HBase uses the local hostname to self-report its IP address.
-
-NTP::
- The clocks on cluster nodes should be synchronized. A small amount of variation is acceptable,
-but larger amounts of skew can cause erratic and unexpected behavior. Time synchronization is one
-of the first things to check if you see unexplained problems in your cluster. It is recommended
-that you run a Network Time Protocol (NTP) service, or another time-synchronization mechanism on
-your cluster and that all nodes look to the same service for time synchronization. See the
-link:http://www.tldp.org/LDP/sag/html/basic-ntp-config.html[Basic NTP Configuration] at
-[citetitle]_The Linux Documentation Project (TLDP)_ to set up NTP.
-
-[[ulimit]]
-Limits on Number of Files and Processes (ulimit)::
- Apache HBase is a database. It requires the ability to open a large number of files at once. Many
-Linux distributions limit the number of files a single user is allowed to open to `1024` (or `256`
-on older versions of OS X). You can check this limit on your servers by running the command
-`ulimit -n` when logged in as the user which runs HBase. See
-<> for some of the problems you may
-experience if the limit is too low. You may also notice errors such as the following:
-+
-----
-2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
-2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
-----
-+
-It is recommended to raise the ulimit to at least 10,000, but more likely 10,240, because the value
-is usually expressed in multiples of 1024. Each ColumnFamily has at least one StoreFile, and
-possibly more than six StoreFiles if the region is under load. The number of open files required
-depends upon the number of ColumnFamilies and the number of regions. The following is a rough
-formula for calculating the potential number of open files on a RegionServer.
-+
-.Calculate the Potential Number of Open Files
-----
-(StoreFiles per ColumnFamily) x (ColumnFamilies per region) x (regions per RegionServer)
-----
-+
-For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles
-per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open `3 * 3 * 100 = 900`
-file descriptors, not counting open JAR files, configuration files, and others. Opening a file does
-not take many resources, and the risk of allowing a user to open too many files is minimal.
-+
-Another related setting is the number of processes a user is allowed to run at once. In Linux and
-Unix, the number of processes is set using the `ulimit -u` command. This should not be confused
-with the `nproc` command, which reports the number of processing units available. Under load,
-a `ulimit -u` that is too low can cause OutOfMemoryError exceptions.
-+
-Configuring the maximum number of file descriptors and processes for the user who is running the
-HBase process is an operating system configuration, rather than an HBase configuration. It is also
-important to be sure that the settings are changed for the user that actually runs HBase. To see
-which user started HBase, and that user's ulimit configuration, look at the first line of the
-HBase log for that instance.
-+
-.`ulimit` Settings on Ubuntu
-====
-To configure ulimit settings on Ubuntu, edit _/etc/security/limits.conf_, which is a
-space-delimited file with four columns. Refer to the man page for _limits.conf_ for details about
-the format of this file. In the following example, the first line sets both soft and hard limits
-for the number of open files (nofile) to 32768 for the operating system user with the username
-hadoop. The second line sets the number of processes to 32000 for the same user.
-----
-hadoop - nofile 32768
-hadoop - nproc 32000
-----
-The settings are only applied if the Pluggable Authentication Module (PAM) environment is directed
-to use them. To configure PAM to use these limits, be sure that the _/etc/pam.d/common-session_
-file contains the following line:
-----
-session required pam_limits.so
-----
-====
-
-Linux Shell::
- All of the shell scripts that come with HBase rely on the
-link:http://www.gnu.org/software/bash[GNU Bash] shell.
-
-Windows::
- Running production systems on Windows machines is not recommended.
-
-[[hadoop]]
-=== link:https://hadoop.apache.org[Hadoop](((Hadoop)))
-
-The following table summarizes the versions of Hadoop supported with each version of HBase. Older
-versions not appearing in this table are considered unsupported and likely missing necessary
-features, while newer versions are untested but may be suitable.
-
-Based on the version of HBase, you should select the most appropriate version of Hadoop. You can
-use Apache Hadoop, or a vendor's distribution of Hadoop. No distinction is made here. See
-link:https://cwiki.apache.org/confluence/display/HADOOP2/Distributions+and+Commercial+Support[the Hadoop wiki]
-for information about vendors of Hadoop.
-
-.Hadoop 2.x is recommended.
-[TIP]
-====
-Hadoop 2.x is faster and includes features, such as short-circuit reads (see
-<>), which will help improve your HBase random read profile. Hadoop
-2.x also includes important bug fixes that will improve your overall HBase experience. HBase does
-not support running with earlier versions of Hadoop. See the table below for requirements specific
-to different HBase versions.
-
-Hadoop 3.x is still in early access releases and has not yet been sufficiently tested by the HBase community for production use cases.
-====
-
-Use the following legend to interpret this table:
-
-.Hadoop version support matrix
-
-* icon:check-circle[role="green"] = Tested to be fully-functional
-* icon:times-circle[role="red"] = Known to not be fully-functional, or there are
-link:https://hadoop.apache.org/cve_list.html[CVEs] so we drop the support in newer minor releases
-* icon:exclamation-circle[role="yellow"] = Not tested, may/may-not function
-
-[cols="1,5*^.^", options="header"]
-|===
-| | HBase-1.4.x | HBase-1.6.x | HBase-1.7.x | HBase-2.2.x | HBase-2.3.x
-|Hadoop-2.7.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
-|Hadoop-2.7.1+ | icon:check-circle[role="green"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
-|Hadoop-2.8.[0-2] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
-|Hadoop-2.8.[3-4] | icon:exclamation-circle[role="yellow"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
-|Hadoop-2.8.5+ | icon:exclamation-circle[role="yellow"] | icon:check-circle[role="green"] | icon:times-circle[role="red"] | icon:check-circle[role="green"] | icon:times-circle[role="red"]
-|Hadoop-2.9.[0-1] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
-|Hadoop-2.9.2+ | icon:exclamation-circle[role="yellow"] | icon:check-circle[role="green"] | icon:times-circle[role="red"] | icon:check-circle[role="green"] | icon:times-circle[role="red"]
-|Hadoop-2.10.x | icon:exclamation-circle[role="yellow"] | icon:check-circle[role="green"] | icon:check-circle[role="green"] | icon:exclamation-circle[role="yellow"] | icon:check-circle[role="green"]
-|Hadoop-3.1.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
-|Hadoop-3.1.1+ | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:check-circle[role="green"] | icon:check-circle[role="green"]
-|Hadoop-3.2.x | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:check-circle[role="green"] | icon:check-circle[role="green"]
-|===
-
-.Hadoop Pre-2.6.1 and JDK 1.8 Kerberos
-[TIP]
-====
-When using pre-2.6.1 Hadoop versions and JDK 1.8 in a Kerberos environment, the HBase server can fail
-and abort due to a Kerberos keytab relogin error. Late versions of JDK 1.7 (e.g. 1.7.0_80) have the problem
-too. Refer to link:https://issues.apache.org/jira/browse/HADOOP-10786[HADOOP-10786] for additional
-details. Consider upgrading to Hadoop 2.6.1+ in this case.
-====
-
-.Hadoop 2.6.x
-[TIP]
-====
-Hadoop distributions based on the 2.6.x line *must* have
-link:https://issues.apache.org/jira/browse/HADOOP-11710[HADOOP-11710] applied if you plan to run
-HBase on top of an HDFS Encryption Zone. Failure to do so will result in cluster failure and
-data loss. This patch is present in Apache Hadoop releases 2.6.1+.
-====
-
-.Hadoop 2.y.0 Releases
-[TIP]
-====
-Starting around the time of Hadoop version 2.7.0, the Hadoop PMC got into the habit of calling out
-new minor releases on their major version 2 release line as not stable / production ready. As such,
-HBase expressly advises downstream users to avoid running on top of these releases. Note that
-additionally the 2.8.1 release was given the same caveat by the Hadoop PMC. For reference, see the
-release announcements for link:https://s.apache.org/hadoop-2.7.0-announcement[Apache Hadoop 2.7.0],
-link:https://s.apache.org/hadoop-2.8.0-announcement[Apache Hadoop 2.8.0],
-link:https://s.apache.org/hadoop-2.8.1-announcement[Apache Hadoop 2.8.1], and
-link:https://s.apache.org/hadoop-2.9.0-announcement[Apache Hadoop 2.9.0].
-====
-
-.Hadoop 3.0.x Releases
-[TIP]
-====
-Hadoop distributions that include the Application Timeline Service feature may cause unexpected
-versions of HBase classes to be present in the application classpath. Users planning on running
-MapReduce applications with HBase should make sure that
-link:https://issues.apache.org/jira/browse/YARN-7190[YARN-7190] is present in their YARN service
-(currently fixed in 2.9.1+ and 3.1.0+).
-====
-
-.Hadoop 3.1.0 Release
-[TIP]
-====
-The Hadoop PMC called out the 3.1.0 release as not stable / production ready. As such, HBase
-expressly advises downstream users to avoid running on top of this release. For reference, see
-the link:https://s.apache.org/hadoop-3.1.0-announcement[release announcement for Hadoop 3.1.0].
-====
-
-.Replace the Hadoop Bundled With HBase!
-[NOTE]
-====
-Because HBase depends on Hadoop, it bundles Hadoop jars under its _lib_ directory. The bundled jars
-are ONLY for use in stand-alone mode. In distributed mode, it is _critical_ that the version of
-Hadoop that is out on your cluster match what is under HBase. Replace the hadoop jars found in the
-HBase lib directory with the equivalent hadoop jars from the version you are running on your
-cluster to avoid version mismatch issues. Make sure you replace the jars under HBase across your
-whole cluster. Hadoop version mismatch issues have various manifestations. Check for mismatch if
-HBase appears hung.
-====
-
-[[dfs.datanode.max.transfer.threads]]
-==== `dfs.datanode.max.transfer.threads` (((dfs.datanode.max.transfer.threads)))
-
-An HDFS DataNode has an upper bound on the number of files that it will serve at any one time.
-Before doing any loading, make sure you have configured Hadoop's _conf/hdfs-site.xml_, setting the
-`dfs.datanode.max.transfer.threads` value to at least the following:
-
-[source,xml]
-----
-<configuration>
-  <property>
-    <name>dfs.datanode.max.transfer.threads</name>
-    <value>4096</value>
-  </property>
-</configuration>
-----
-
-Be sure to restart your HDFS after making the above configuration.
-
-Not having this configuration in place makes for strange-looking failures.
-One manifestation is a complaint about missing blocks.
-For example:
-
-----
-10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
- blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes
- contain current block. Will get new block locations from namenode and retry...
-----
-
-See also <> and note that this
-property was previously known as `dfs.datanode.max.xcievers` (e.g.
-link:http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html[Hadoop HDFS: Deceived by Xciever]).
-
-[[zookeeper.requirements]]
-=== ZooKeeper Requirements
-
-An Apache ZooKeeper quorum is required. The exact version depends on your version of HBase, though
-the minimum ZooKeeper version is 3.4.x due to the `useMulti` feature made default in 1.0.0
-(see https://issues.apache.org/jira/browse/HBASE-16598[HBASE-16598]).
-
-[[standalone_dist]]
-== HBase run modes: Standalone and Distributed
-
-HBase has two run modes: <> and <>.
-Out of the box, HBase runs in standalone mode.
-Whatever your mode, you will need to configure HBase by editing files in the HBase _conf_ directory.
-At a minimum, you must edit [code]+conf/hbase-env.sh+ to tell HBase which +java+ to use.
-In this file you set HBase environment variables such as the heapsize and other options for the
-`JVM`, the preferred location for log files, etc. Set [var]+JAVA_HOME+ to point at the root of
-your +java+ install.
-
-[[standalone]]
-=== Standalone HBase
-
-This is the default mode.
-Standalone mode is what is described in the <> section.
-In standalone mode, HBase does not use HDFS -- it uses the local filesystem instead -- and it runs
-all HBase daemons and a local ZooKeeper all up in the same JVM. ZooKeeper binds to a well-known
-port so clients may talk to HBase.
-
-[[standalone.over.hdfs]]
-==== Standalone HBase over HDFS
-A sometimes useful variation on standalone HBase has all daemons running inside the
-one JVM but rather than persist to the local filesystem, instead
-they persist to an HDFS instance.
-
-You might consider this profile when you want a simple
-deploy profile, the load is light, but the data must
-persist across node comings and goings. Writing to
-HDFS, where data is replicated, ensures the latter.
-
-To configure this standalone variant, edit your _hbase-site.xml_
-setting _hbase.rootdir_ to point at a directory in your
-HDFS instance but then set _hbase.cluster.distributed_
-to _false_. For example:
-
-[source,xml]
-----
-<configuration>
-  <property>
-    <name>hbase.rootdir</name>
-    <value>hdfs://namenode.example.org:8020/hbase</value>
-  </property>
-  <property>
-    <name>hbase.cluster.distributed</name>
-    <value>false</value>
-  </property>
-</configuration>
-----
-
-[[distributed]]
-=== Distributed
-
-Distributed mode can be subdivided into distributed but all daemons run on a single node -- a.k.a.
-_pseudo-distributed_ -- and _fully-distributed_ where the daemons are spread across all nodes in
-the cluster. The _pseudo-distributed_ vs. _fully-distributed_ nomenclature comes from Hadoop.
-
-Pseudo-distributed mode can run against the local filesystem or it can run against an instance of
-the _Hadoop Distributed File System_ (HDFS). Fully-distributed mode can ONLY run on HDFS.
-See the Hadoop link:https://hadoop.apache.org/docs/current/[documentation] for how to set up HDFS.
-A good walk-through for setting up HDFS on Hadoop 2 can be found at
-http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide.
-
-[[pseudo]]
-==== Pseudo-distributed
-
-.Pseudo-Distributed Quickstart
-[NOTE]
-====
-A quickstart has been added to the <> chapter.
-See <>.
-Some of the information that was originally in this section has been moved there.
-====
-
-A pseudo-distributed mode is simply a fully-distributed mode run on a single host.
-Use this HBase configuration for testing and prototyping purposes only.
-Do not use this configuration for production or for performance evaluation.
-
-[[fully_dist]]
-=== Fully-distributed
-
-By default, HBase runs in stand-alone mode. Both stand-alone mode and pseudo-distributed mode are
-provided for the purposes of small-scale testing. For a production environment, distributed mode
-is advised. In distributed mode, multiple instances of HBase daemons run on multiple servers in the
-cluster.
-
-Just as in pseudo-distributed mode, a fully distributed configuration requires that you set the
-`hbase.cluster.distributed` property to `true`. Typically, the `hbase.rootdir` is configured to
-point to a highly-available HDFS filesystem.
-
-In addition, the cluster is configured so that multiple cluster nodes enlist as RegionServers,
-ZooKeeper QuorumPeers, and backup HMaster servers. These configuration basics are all demonstrated
-in <>.
-
-.Distributed RegionServers
-Typically, your cluster will contain multiple RegionServers all running on different servers, as
-well as primary and backup Master and ZooKeeper daemons. The _conf/regionservers_ file on the
-master server contains a list of hosts whose RegionServers are associated with this cluster.
-Each host is on a separate line. All hosts listed in this file will have their RegionServer
-processes started and stopped when the
-master server starts or stops.
-
-.ZooKeeper and HBase
-See the <> section for ZooKeeper setup instructions for HBase.
-
-.Example Distributed HBase Cluster
-====
-This is a bare-bones _conf/hbase-site.xml_ for a distributed HBase cluster.
-A cluster that is used for real-world work would contain more custom configuration parameters.
-Most HBase configuration directives have default values, which are used unless the value is
-overridden in the _hbase-site.xml_. See "<>" for more information.
-
-[source,xml]
-----
-<configuration>
-  <property>
-    <name>hbase.rootdir</name>
-    <value>hdfs://namenode.example.org:8020/hbase</value>
-  </property>
-  <property>
-    <name>hbase.cluster.distributed</name>
-    <value>true</value>
-  </property>
-  <property>
-    <name>hbase.zookeeper.quorum</name>
-    <value>node-a.example.com,node-b.example.com,node-c.example.com</value>
-  </property>
-</configuration>
-----
-
-This is an example _conf/regionservers_ file, which contains a list of nodes that should run a
-RegionServer in the cluster. These nodes need HBase installed and they need to use the same
-contents of the _conf/_ directory as the Master server.
-
-[source]
-----
-
-node-a.example.com
-node-b.example.com
-node-c.example.com
-----
-
-This is an example _conf/backup-masters_ file, which contains a list of each node that should run
-a backup Master instance. The backup Master instances will sit idle unless the main Master becomes
-unavailable.
-
-[source]
-----
-
-node-b.example.com
-node-c.example.com
-----
-====
-
-.Distributed HBase Quickstart
-See <> for a walk-through of a simple
-three-node cluster configuration with multiple ZooKeeper, backup HMaster, and RegionServer
-instances.
-
-.Procedure: HDFS Client Configuration
-. If you have made HDFS client configuration changes on your Hadoop cluster, such as
-configuration directives for HDFS clients (as opposed to server-side configurations), you must use
-one of the following methods to enable HBase to see and use these configuration changes:
-+
-a. Add a pointer to your `HADOOP_CONF_DIR` to the `HBASE_CLASSPATH` environment variable in
-_hbase-env.sh_.
-b. Add a copy of _hdfs-site.xml_ (or _hadoop-site.xml_) or, better, symlinks, under
-_${HBASE_HOME}/conf_, or
-c. if only a small set of HDFS client configurations, add them to _hbase-site.xml_.
-
-
-An example of such an HDFS client configuration is `dfs.replication`.
-If for example, you want to run with a replication factor of 5, HBase will create files with the
-default of 3 unless you do the above to make the configuration available to HBase.
-
-[[confirm]]
-== Running and Confirming Your Installation
-
-Make sure HDFS is running first.
-Start and stop the Hadoop HDFS daemons by running _sbin/start-dfs.sh_ from the `HADOOP_HOME`
-directory. You can ensure it started properly by testing the `put` and `get` of files into the
-Hadoop filesystem. HBase does not normally use the MapReduce or YARN daemons. These do not need to
-be started.
-
-_If_ you are managing your own ZooKeeper, start it and confirm it's running, else HBase will start
-up ZooKeeper for you as part of its start process.
-
-Start HBase with the following command:
-
-----
-bin/start-hbase.sh
-----
-
-Run the above from the `HBASE_HOME` directory.
-
-You should now have a running HBase instance.
-HBase logs can be found in the _logs_ subdirectory.
-Check them out especially if HBase had trouble starting.
-
-HBase also puts up a UI listing vital attributes.
-By default it's deployed on the Master host at port 16010 (HBase RegionServers listen on port 16020
-by default and put up an informational HTTP server at port 16030). If the Master is running on a
-host named `master.example.org` on the default port, point your browser at
-pass:[http://master.example.org:16010] to see the web interface.
-
-Once HBase has started, see the <> section for how to create
-tables, add data, scan your insertions, and finally disable and drop your tables.
-
-To stop HBase after exiting the HBase shell, enter:
-
-----
-$ ./bin/stop-hbase.sh
-stopping hbase...............
-----
-
-Shutdown can take a moment to complete.
-It can take longer if your cluster comprises many machines.
-If you are running a distributed operation, be sure to wait until HBase has shut down completely
-before stopping the Hadoop daemons.
-
-[[config.files]]
-== Default Configuration
-
-[[hbase.site]]
-=== _hbase-site.xml_ and _hbase-default.xml_
-
-Just as in Hadoop where you add site-specific HDFS configuration to the _hdfs-site.xml_ file, for
-HBase, site specific customizations go into the file _conf/hbase-site.xml_. For the list of
-configurable properties, see <> below
-or view the raw _hbase-default.xml_ source file in the HBase source code at _src/main/resources_.
-
-Not all configuration options make it out to _hbase-default.xml_.
-Some configurations appear only in source code; the only way to identify these is
-through code review.
-
-Currently, changes here will require a cluster restart for HBase to notice the change.
-// hbase/src/main/asciidoc
-//
-include::{docdir}/../../../target/asciidoc/hbase-default.adoc[]
-
-
-[[hbase.env.sh]]
-=== _hbase-env.sh_
-
-Set HBase environment variables in this file. Examples include options to pass to the JVM when an
-HBase daemon starts, such as heap size and garbage collector configs.
-You can also configure other settings, such as log directories, niceness, ssh options, and
-where to locate process pid files. Open the file at _conf/hbase-env.sh_ and peruse its content.
-Each option is fairly well documented. Add your own environment variables here if you want them
-read by HBase daemons on startup.
-
-Changes here will require a cluster restart for HBase to notice the change.
-
-[[log4j]]
-=== _log4j.properties_
-
-Edit this file to change the rate at which HBase log files are rolled and to change the level at which
-HBase logs messages.
-
-Changes here will require a cluster restart for HBase to notice the change though log levels can
-be changed for particular daemons via the HBase UI.
-
-[[client_dependencies]]
-=== Client configuration and dependencies connecting to an HBase cluster
-
-If you are running HBase in standalone mode, you don't need to configure anything for your client
-to work, provided that the client and the server are on the same machine.
-
-Starting with release 3.0.0, the default connection registry has been switched to a master-based
-implementation. Refer to <> for more details about what a connection
-registry is and the implications of this change. Depending on your HBase version, the following is the
-expected minimal client configuration.
-
-==== Up until 2.x.y releases
-In 2.x.y releases, the default connection registry was based on ZooKeeper as the source of truth.
-This means that the clients always looked up ZooKeeper znodes to fetch the required metadata. For
-example, if the active master crashed and a new master was elected, clients looked up the master
-znode to fetch the active master address (and similarly for meta locations). This meant that
-clients needed access to ZooKeeper, and needed to know the ZooKeeper ensemble information,
-before they could do anything. This can be configured in the client configuration XML as follows:
-
-[source,xml]
-----
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<configuration>
-  <property>
-    <name>hbase.zookeeper.quorum</name>
-    <value>example1,example2,example3</value>
-    <description>Zookeeper ensemble information</description>
-  </property>
-</configuration>
-----
-
-==== Starting 3.0.0 release
-
-The default implementation was switched to a master based connection registry. With this
-implementation, clients always contact the active or stand-by master RPC end points to fetch the
-connection registry information. This means that the clients should have access to the list of
-active and stand-by master end points before they can do anything. This can be configured in the client
-configuration xml as follows:
-
-[source,xml]
-----
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<configuration>
-  <property>
-    <name>hbase.masters</name>
-    <value>example1,example2,example3</value>
-    <description>List of master rpc end points for the hbase cluster.</description>
-  </property>
-</configuration>
-----
-
-The configuration value for _hbase.masters_ is a comma separated list of _host:port_ values. If no
-port value is specified, the default of _16000_ is assumed.
-
-Usually this configuration is kept out in the _hbase-site.xml_ and is picked up by the client from
-the `CLASSPATH`.
-
-If you are configuring an IDE to run an HBase client, you should include the _conf/_ directory on
-your classpath so _hbase-site.xml_ settings can be found (or add _src/test/resources_ to pick up
-the hbase-site.xml used by tests).
-
-For Java applications using Maven, including the hbase-shaded-client module is the recommended
-dependency when connecting to a cluster:
-[source,xml]
-----
-<dependency>
-  <groupId>org.apache.hbase</groupId>
-  <artifactId>hbase-shaded-client</artifactId>
-  <version>2.0.0</version>
-</dependency>
-----
-
-[[java.client.config]]
-==== Java client configuration
-
-The configuration used by a Java client is kept in an
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HBaseConfiguration[HBaseConfiguration]
-instance.
-
-The factory method on HBaseConfiguration, `HBaseConfiguration.create();`, on invocation, will read
-in the content of the first _hbase-site.xml_ found on the client's `CLASSPATH`, if one is present
-(Invocation will also factor in any _hbase-default.xml_ found; an _hbase-default.xml_ ships inside
-the _hbase.X.X.X.jar_). It is also possible to specify configuration directly without having to
-read from a _hbase-site.xml_.
-
-For example, to set the ZooKeeper ensemble for the cluster programmatically do as follows:
-
-[source,java]
-----
-Configuration config = HBaseConfiguration.create();
-config.set("hbase.zookeeper.quorum", "localhost"); // Until 2.x.y versions
-// ---- or ----
-config.set("hbase.masters", "localhost:1234"); // Starting 3.0.0 version
-----
-
-[[config_timeouts]]
-=== Timeout settings
-
-HBase provides a wide variety of timeout settings to limit the execution time of various remote
-operations.
-
-* hbase.rpc.timeout
-* hbase.rpc.read.timeout
-* hbase.rpc.write.timeout
-* hbase.client.operation.timeout
-* hbase.client.meta.operation.timeout
-* hbase.client.scanner.timeout.period
-
-The `hbase.rpc.timeout` property limits how long a single RPC call can run before timing out.
-To fine tune read or write related RPC timeouts set `hbase.rpc.read.timeout` and
-`hbase.rpc.write.timeout` configuration properties. In the absence of these properties
-`hbase.rpc.timeout` will be used.
-
-A higher-level timeout is `hbase.client.operation.timeout`, which is valid for each client call.
-When an RPC call fails, for instance because of a timeout due to `hbase.rpc.timeout`, it is retried
-until `hbase.client.operation.timeout` is reached. The client operation timeout for system tables can
-be fine-tuned by setting the `hbase.client.meta.operation.timeout` configuration value.
-When this is not set, it falls back to `hbase.client.operation.timeout`.
-
-Timeout for scan operations is controlled differently. Use `hbase.client.scanner.timeout.period`
-property to set this timeout.
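-
-As a sketch, a few of these properties might be tuned together in _hbase-site.xml_; the values
-below are illustrative only (not recommendations) and should be adapted to your workload:
-
-[source,xml]
-----
-<property>
-  <name>hbase.rpc.timeout</name>
-  <value>60000</value>
-</property>
-<property>
-  <name>hbase.client.operation.timeout</name>
-  <value>120000</value>
-</property>
-<property>
-  <name>hbase.client.scanner.timeout.period</name>
-  <value>120000</value>
-</property>
-----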
-
-[[example_config]]
-== Example Configurations
-
-=== Basic Distributed HBase Install
-
-Here is a basic configuration example for a distributed ten node cluster:
-
-* The nodes are named `example0`, `example1`, etc., through node `example9` in this example.
-* The HBase Master and the HDFS NameNode are running on the node `example0`.
-* RegionServers run on nodes `example1`-`example9`.
-* A 3-node ZooKeeper ensemble runs on `example1`, `example2`, and `example3` on the default ports.
-* ZooKeeper data is persisted to the directory _/export/zookeeper_.
-
-Below we show what the main configuration files -- _hbase-site.xml_, _regionservers_, and
-_hbase-env.sh_ -- found in the HBase _conf_ directory might look like.
-
-[[hbase_site]]
-==== _hbase-site.xml_
-
-[source,xml]
-----
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<configuration>
-  <property>
-    <name>hbase.zookeeper.quorum</name>
-    <value>example1,example2,example3</value>
-    <description>Comma-separated list of servers in the ZooKeeper ensemble.
-    </description>
-  </property>
-  <property>
-    <name>hbase.zookeeper.property.dataDir</name>
-    <value>/export/zookeeper</value>
-    <description>Property from ZooKeeper config zoo.cfg.
-    The directory where the snapshot is stored.
-    </description>
-  </property>
-  <property>
-    <name>hbase.rootdir</name>
-    <value>hdfs://example0:8020/hbase</value>
-    <description>The directory shared by RegionServers.
-    </description>
-  </property>
-  <property>
-    <name>hbase.cluster.distributed</name>
-    <value>true</value>
-    <description>The mode the cluster will be in. Possible values are
-      false: standalone and pseudo-distributed setups with managed ZooKeeper
-      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
-    </description>
-  </property>
-</configuration>
-----
-
-[[regionservers]]
-==== _regionservers_
-
-In this file you list the nodes that will run RegionServers.
-In our case, these nodes are `example1`-`example9`.
-
-[source]
-----
-example1
-example2
-example3
-example4
-example5
-example6
-example7
-example8
-example9
-----
-
-[[hbase_env]]
-==== _hbase-env.sh_
-
-The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` environment variable
-(required for HBase) and set the heap to 4 GB (rather than the default value of 1 GB). If you copy
-and paste this example, be sure to adjust the `JAVA_HOME` to suit your environment.
-
-----
-# The java implementation to use.
-export JAVA_HOME=/usr/java/jdk1.8.0/
-
-# The maximum amount of heap to use. Default is left to JVM default.
-export HBASE_HEAPSIZE=4G
-----
-
-Use +rsync+ to copy the content of the _conf_ directory to all nodes of the cluster.
-
-[[important_configurations]]
-== The Important Configurations
-
-Below we list some _important_ configurations.
-We've divided this section into required configuration and worth-a-look recommended configs.
-
-[[required_configuration]]
-=== Required Configurations
-
-Review the <> and <> sections.
-
-[[big.cluster.config]]
-==== Big Cluster Configurations
-
-If you have a cluster with a lot of regions, it is possible that a RegionServer checks in briefly
-after the Master starts while all the remaining RegionServers lag behind. This first server to
-check in will be assigned all regions, which is not optimal. To prevent the above scenario from
-happening, up the `hbase.master.wait.on.regionservers.mintostart` property from its default value
-of 1. See link:https://issues.apache.org/jira/browse/HBASE-6389[HBASE-6389 Modify the
- conditions to ensure that Master waits for sufficient number of Region Servers before
- starting region assignments] for more detail.
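-
-For example, if you want the Master to wait for a larger fraction of the fleet before assigning
-regions, you might add something like the following to _hbase-site.xml_ (the value is illustrative;
-pick one appropriate to your cluster size):
-
-[source,xml]
-----
-<property>
-  <name>hbase.master.wait.on.regionservers.mintostart</name>
-  <value>20</value>
-</property>
-----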
-
-[[recommended_configurations]]
-=== Recommended Configurations
-
-[[recommended_configurations.zk]]
-==== ZooKeeper Configuration
-
-[[sect.zookeeper.session.timeout]]
-===== `zookeeper.session.timeout`
-
-The default timeout is 90 seconds (specified in milliseconds). This means that if a server crashes,
-it will be 90 seconds before the Master notices the crash and starts recovery. You might need to
-tune the timeout down to a minute or even less so the Master notices failures sooner. Before
-changing this value, be sure you have your JVM garbage collection configuration under control,
-otherwise, a long garbage collection that lasts beyond the ZooKeeper session timeout will take out
-your RegionServer. (You might be fine with this -- you probably want recovery to start on the
-server if a RegionServer has been in GC for a long period of time).
-
-To change this configuration, edit _hbase-site.xml_, copy the changed file across the cluster and
-restart.
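-
-For example, to lower the session timeout to one minute, the _hbase-site.xml_ entry might look like
-the following (value in milliseconds, illustrative only):
-
-[source,xml]
-----
-<property>
-  <name>zookeeper.session.timeout</name>
-  <value>60000</value>
-</property>
-----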
-
-We set this value high to save our having to field questions up on the mailing lists asking why a
-RegionServer went down during a massive import. The usual cause is that their JVM is untuned and
-they are running into long GC pauses. Our thinking is that while users are getting familiar with
-HBase, we'd save them having to know all of its intricacies. Later when they've built some
-confidence, then they can play with configuration such as this.
-
-[[zookeeper.instances]]
-===== Number of ZooKeeper Instances
-
-See <>.
-
-[[recommended.configurations.hdfs]]
-==== HDFS Configurations
-
-[[dfs.datanode.failed.volumes.tolerated]]
-===== `dfs.datanode.failed.volumes.tolerated`
-
-This is the "...number of volumes that are allowed to fail before a DataNode stops offering
-service. By default, any volume failure will cause a datanode to shutdown" from the
-_hdfs-default.xml_ description. You might want to set this to about half the amount of your
-available disks.
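-
-For example, on a DataNode with twelve data disks you might tolerate up to half of them failing;
-the _hdfs-site.xml_ snippet below is a sketch with an illustrative value:
-
-[source,xml]
-----
-<property>
-  <name>dfs.datanode.failed.volumes.tolerated</name>
-  <value>6</value>
-</property>
-----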
-
-[[hbase.regionserver.handler.count]]
-===== `hbase.regionserver.handler.count`
-
-This setting defines the number of threads that are kept open to answer incoming requests to user
-tables. The rule of thumb is to keep this number low when the payload per request approaches the MB
-(big puts, scans using a large cache) and high when the payload is small (gets, small puts, ICVs,
-deletes). The total size of the queries in progress is limited by the setting
-`hbase.ipc.server.max.callqueue.size`.
-
-It is safe to set that number to the maximum number of incoming clients if their payload is small,
-the typical example being a cluster that serves a website since puts aren't typically buffered and
-most of the operations are gets.
-
-The reason why it is dangerous to keep this setting high is that the aggregate size of all the puts
-that are currently happening in a region server may impose too much pressure on its memory, or even
-trigger an OutOfMemoryError. A RegionServer running on low memory will trigger its JVM's garbage
-collector to run more frequently up to a point where GC pauses become noticeable (the reason being
-that all the memory used to keep all the requests' payloads cannot be trashed, no matter how hard
-the garbage collector tries). After some time, the overall cluster throughput is affected since
-every request that hits that RegionServer will take longer, which exacerbates the problem even more.
-
-You can get a sense of whether you have too few or too many handlers by
-<> on an individual RegionServer and then tailing its logs (queued requests
-consume memory).
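-
-If you decide to adjust the handler count, it is set in _hbase-site.xml_; the value below is only
-an illustration, not a recommendation:
-
-[source,xml]
-----
-<property>
-  <name>hbase.regionserver.handler.count</name>
-  <value>60</value>
-</property>
-----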
-
-[[big_memory]]
-==== Configuration for large memory machines
-
-HBase ships with a reasonable, conservative configuration that will work on nearly all machine
-types that people might want to test with. If you have larger machines -- HBase has 8G and larger
-heap -- you might find the following configuration options helpful.
-TODO.
-
-[[config.compression]]
-==== Compression
-
-You should consider enabling ColumnFamily compression.
-There are several options that are near-frictionless and in almost all cases boost performance by
-reducing the size of StoreFiles and thus reducing I/O.
-
-See <> for more information.
-
-[[config.wals]]
-==== Configuring the size and number of WAL files
-
-HBase uses <> to recover the memstore data that has not been flushed to disk in case of
-an RS failure. These WAL files should be configured to be slightly smaller than the HDFS block size (by
-default an HDFS block is 64MB and a WAL file is ~60MB).
-
-HBase also has a limit on the number of WAL files, designed to ensure there's never too much data
-that needs to be replayed during recovery. This limit needs to be set according to memstore
-configuration, so that all the necessary data would fit. It is recommended to allocate enough WAL
-files to store at least that much data (when all memstores are close to full). For example, with
-a 16GB RS heap, default memstore settings (0.4), and default WAL file size (~60MB), the
-starting point for WAL file count is 16384 * 0.4 / 60 ≈ 109. However, as all memstores are not
-expected to be full all the time, fewer WAL files can be allocated.
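-
-Depending on your HBase version, the cap on the number of WAL files is controlled by
-`hbase.regionserver.maxlogs` (newer releases may compute it automatically from heap and memstore
-settings). A sketch of raising it in _hbase-site.xml_, with an illustrative value, follows:
-
-[source,xml]
-----
-<property>
-  <name>hbase.regionserver.maxlogs</name>
-  <value>100</value>
-</property>
-----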
-
-[[disable.splitting]]
-==== Managed Splitting
-
-HBase generally handles splitting of your regions based upon the settings in your
-_hbase-default.xml_ and _hbase-site.xml_ configuration files. Important settings include
-`hbase.regionserver.region.split.policy`, `hbase.hregion.max.filesize`,
-and `hbase.regionserver.regionSplitLimit`. A simplistic view of splitting is that when a region grows
-to `hbase.hregion.max.filesize`, it is split. For most usage patterns, you should use automatic
-splitting. See <> for more
-information about manual region splitting.
-
-Instead of allowing HBase to split your regions automatically, you can choose to manage the
-splitting yourself. Manually managing splits works if you know your keyspace well; otherwise, let
-HBase figure out where to split for you. Manual splitting can mitigate region creation and movement
-under load. It also makes it so region boundaries are known and invariant (if you disable region
-splitting). If you use manual splits, it is easier doing staggered, time-based major compactions
-to spread out your network IO load.
-
-.Disable Automatic Splitting
-To disable automatic splitting, set the region split policy, in either the cluster configuration
-or the table configuration, to `org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy`.
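-
-For instance, a minimal cluster-wide sketch in _hbase-site.xml_ might look like the following; the
-same class can also be set per table in the table descriptor:
-
-[source,xml]
-----
-<property>
-  <name>hbase.regionserver.region.split.policy</name>
-  <value>org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy</value>
-</property>
-----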
-
-.Automatic Splitting Is Recommended
-[NOTE]
-====
-If you disable automatic splits to diagnose a problem or during a period of fast data growth, it
-is recommended to re-enable them when your situation becomes more stable. The potential benefits
-of managing region splits yourself are not undisputed.
-====
-
-.Determine the Optimal Number of Pre-Split Regions
-The optimal number of pre-split regions depends on your application and environment. A good rule of
-thumb is to start with 10 pre-split regions per server and watch as data grows over time. It is
-better to err on the side of too few regions and perform rolling splits later. The optimal number
-of regions depends upon the largest StoreFile in your region. The size of the largest StoreFile
-will increase with time if the amount of data grows. The goal is for the largest region to be just
-large enough that the compaction selection algorithm only compacts it during a timed major
-compaction. Otherwise, the cluster can be prone to compaction storms with a large number of regions
-under compaction at the same time. It is important to understand that it is the data growth, not
-the manual split decision, that causes compaction storms.
-
-If the regions are split into too many large regions, you can increase the major compaction
-interval by configuring `HConstants.MAJOR_COMPACTION_PERIOD`. The
-`org.apache.hadoop.hbase.util.RegionSplitter` utility also provides a network-IO-safe rolling
-split of all regions.
-
-[[managed.compactions]]
-==== Managed Compactions
-
-By default, major compactions are scheduled to run once in a 7-day period.
-
-If you need to control exactly when and how often major compaction runs, you can disable managed
-major compactions. See the entry for `hbase.hregion.majorcompaction` in the
-<> table for details.
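-
-As a sketch, setting `hbase.hregion.majorcompaction` to `0` in _hbase-site.xml_ disables the
-time-based major compactions (minor compactions and manually requested major compactions still run):
-
-[source,xml]
-----
-<property>
-  <name>hbase.hregion.majorcompaction</name>
-  <value>0</value>
-</property>
-----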
-
-.Do Not Disable Major Compactions
-[WARNING]
-====
-Major compactions are absolutely necessary for StoreFile clean-up. Do not disable them altogether.
-You can run major compactions manually via the HBase shell or via the
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact-org.apache.hadoop.hbase.TableName-[Admin API].
-====
-
-For more information about compactions and the compaction file selection process, see
-<>.
-
-[[spec.ex]]
-==== Speculative Execution
-
-Speculative Execution of MapReduce tasks is on by default, and for HBase clusters it is generally
-advised to turn off Speculative Execution at a system-level unless you need it for a specific case,
-where it can be configured per-job. Set the properties `mapreduce.map.speculative` and
-`mapreduce.reduce.speculative` to false.
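-
-A sketch of the corresponding entries for _mapred-site.xml_ (they can also be set per job):
-
-[source,xml]
-----
-<property>
-  <name>mapreduce.map.speculative</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapreduce.reduce.speculative</name>
-  <value>false</value>
-</property>
-----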
-
-[[other_configuration]]
-=== Other Configurations
-
-[[balancer_config]]
-==== Balancer
-
-The balancer is a periodic operation which is run on the master to redistribute regions on the
-cluster. It is configured via `hbase.balancer.period` and defaults to 300000 (5 minutes).
-
-See <> for more information on the
-LoadBalancer.
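-
-For example, to run the balancer every ten minutes instead of the default five, the
-_hbase-site.xml_ entry might look like this (value in milliseconds, illustrative only):
-
-[source,xml]
-----
-<property>
-  <name>hbase.balancer.period</name>
-  <value>600000</value>
-</property>
-----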
-
-[[disabling.blockcache]]
-==== Disabling Blockcache
-
-Do not turn off block cache (You'd do it by setting `hfile.block.cache.size` to zero). Currently,
-we do not do well if you do this because the RegionServer will spend all its time loading HFile
-indices over and over again. If your working set is such that block cache does you no good, at
-least size the block cache such that HFile indices will stay up in the cache (you can get a rough
-idea on the size you need by surveying RegionServer UIs; you'll see index block size accounted near
-the top of the webpage).
-
-[[nagles]]
-==== link:http://en.wikipedia.org/wiki/Nagle's_algorithm[Nagle's] or the small package problem
-
-If an occasional delay of around 40ms is seen in operations against HBase, try the Nagle's setting.
-For example, see the user mailing list thread,
-link:https://lists.apache.org/thread.html/3d7ceb41c04a955b1b1c80480cdba95208ca3e97bf6895a40e0c1bbb%401346186127%40%3Cuser.hbase.apache.org%3E[Inconsistent scan performance with caching set to 1]
-and the issue cited therein where setting `notcpdelay` improved scan speeds. You might also see the
-graphs on the tail of
-link:https://issues.apache.org/jira/browse/HBASE-7008[HBASE-7008 Set scanner caching to a better default]
-where our Lars Hofhansl tries various data sizes w/ Nagle's on and off measuring the effect.
-
-[[mttr]]
-==== Better Mean Time to Recover (MTTR)
-
-This section is about configurations that will make servers come back faster after a failure. See the
-Devaraj Das and Nicolas Liochon blog post
-link:http://hortonworks.com/blog/introduction-to-hbase-mean-time-to-recover-mttr/[Introduction to HBase Mean Time to Recover (MTTR)]
-for a brief introduction.
-
-The issue
-link:https://issues.apache.org/jira/browse/HBASE-8389[HBASE-8354 forces Namenode into loop with lease recovery requests]
-is messy but has a bunch of good discussion toward the end on low timeouts and how to cause faster
-recovery, including citation of fixes added to HDFS. Read the Varun Sharma comments. The
-configurations suggested below are Varun's suggestions, distilled and tested. Make sure you are running
-on a late-version HDFS so you have the fixes he refers to and himself added to HDFS to help HBase
-MTTR (e.g. HDFS-3703, HDFS-3712, and HDFS-4791 -- Hadoop 2 certainly has them and late Hadoop 1 has
-some). Set the following in the RegionServer:
-
-[source,xml]
-----
-<property>
-  <name>hbase.lease.recovery.dfs.timeout</name>
-  <value>23000</value>
-  <description>How much time we allow elapse between calls to recover lease.
-    Should be larger than the dfs timeout.</description>
-</property>
-<property>
-  <name>dfs.client.socket-timeout</name>
-  <value>10000</value>
-  <description>Down the DFS timeout from 60 to 10 seconds.</description>
-</property>
-----
-
-And on the NameNode/DataNode side, set the following to enable 'staleness' introduced in HDFS-3703,
-HDFS-3912.
-
-[source,xml]
-----
-<property>
-  <name>dfs.client.socket-timeout</name>
-  <value>10000</value>
-  <description>Down the DFS timeout from 60 to 10 seconds.</description>
-</property>
-<property>
-  <name>dfs.datanode.socket.write.timeout</name>
-  <value>10000</value>
-  <description>Down the DFS timeout from 8 * 60 to 10 seconds.</description>
-</property>
-<property>
-  <name>ipc.client.connect.timeout</name>
-  <value>3000</value>
-  <description>Down from 60 seconds to 3.</description>
-</property>
-<property>
-  <name>ipc.client.connect.max.retries.on.timeouts</name>
-  <value>2</value>
-  <description>Down from 45 seconds to 3 (2 == 3 retries).</description>
-</property>
-<property>
-  <name>dfs.namenode.avoid.read.stale.datanode</name>
-  <value>true</value>
-  <description>Enable stale state in hdfs</description>
-</property>
-<property>
-  <name>dfs.namenode.stale.datanode.interval</name>
-  <value>20000</value>
-  <description>Down from default 30 seconds</description>
-</property>
-<property>
-  <name>dfs.namenode.avoid.write.stale.datanode</name>
-  <value>true</value>
-  <description>Enable stale state in hdfs</description>
-</property>
-----
-
-[[jmx_config]]
-==== JMX
-
-JMX (Java Management Extensions) provides built-in instrumentation that enables you to monitor and
-manage the Java VM. To enable monitoring and management from remote systems, you need to set system
-property `com.sun.management.jmxremote.port` (the port number through which you want to enable JMX
-RMI connections) when you start the Java VM. See the
-link:http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html[official documentation]
-for more information. Historically, besides the port mentioned above, JMX opens two additional random
-TCP listening ports, which can lead to port conflicts. (See
-link:https://issues.apache.org/jira/browse/HBASE-10289[HBASE-10289] for details.)
-
-As an alternative, you can use the coprocessor-based JMX implementation provided by HBase. To
-enable it, add the property below to _hbase-site.xml_:
-
-[source,xml]
-----
-<property>
-  <name>hbase.coprocessor.regionserver.classes</name>
-  <value>org.apache.hadoop.hbase.JMXListener</value>
-</property>
-----
-
-NOTE: DO NOT set `com.sun.management.jmxremote.port` for Java VM at the same time.
-
-Currently it supports the Master and RegionServer Java VMs.
-By default, JMX listens on TCP port 10102; you can further configure the port using the
-properties below:
-
-[source,xml]
-----
-<property>
-  <name>regionserver.rmi.registry.port</name>
-  <value>61130</value>
-</property>
-<property>
-  <name>regionserver.rmi.connector.port</name>
-  <value>61140</value>
-</property>
-----
-
-The registry port can be shared with the connector port in most cases, so you only need to configure
-`regionserver.rmi.registry.port`. However, if you want to use SSL communication, the two ports must
-be configured to different values.
-
-By default, password authentication and SSL communication are disabled.
-To enable password authentication, update _hbase-env.sh_ as shown below:
-[source,bash]
-----
-export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.authenticate=true \
- -Dcom.sun.management.jmxremote.password.file=your_password_file \
- -Dcom.sun.management.jmxremote.access.file=your_access_file"
-
-export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE "
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "
-----
-
-See the example password and access files under _$JRE_HOME/lib/management_.
-
-To enable SSL communication with password authentication, follow the steps below:
-
-[source,bash]
-----
-#1. generate a key pair, stored in myKeyStore
-keytool -genkey -alias jconsole -keystore myKeyStore
-
-#2. export it to file jconsole.cert
-keytool -export -alias jconsole -keystore myKeyStore -file jconsole.cert
-
-#3. copy jconsole.cert to jconsole client machine, import it to jconsoleKeyStore
-keytool -import -alias jconsole -keystore jconsoleKeyStore -file jconsole.cert
-----
-
-And then update _hbase-env.sh_ like below:
-
-[source,bash]
-----
-export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=true \
- -Djavax.net.ssl.keyStore=/home/tianq/myKeyStore \
- -Djavax.net.ssl.keyStorePassword=your_password_in_step_1 \
- -Dcom.sun.management.jmxremote.authenticate=true \
- -Dcom.sun.management.jmxremote.password.file=your_password_file \
- -Dcom.sun.management.jmxremote.access.file=your_access_file"
-
-export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE "
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "
-----
-
-Finally start `jconsole` on the client using the key store:
-
-[source,bash]
-----
-jconsole -J-Djavax.net.ssl.trustStore=/home/tianq/jconsoleKeyStore
-----
-
-NOTE: To enable the HBase JMX implementation on the Master, you also need to add the property below to
-_hbase-site.xml_:
-
-[source,xml]
-----
-<property>
-  <name>hbase.coprocessor.master.classes</name>
-  <value>org.apache.hadoop.hbase.JMXListener</value>
-</property>
-----
-
-The corresponding properties for port configuration are `master.rmi.registry.port` (by default
-10101) and `master.rmi.connector.port` (by default the same as the registry port).
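-
-For example, to pin the Master JMX ports, an _hbase-site.xml_ sketch with illustrative values might
-look like the following:
-
-[source,xml]
-----
-<property>
-  <name>master.rmi.registry.port</name>
-  <value>61110</value>
-</property>
-<property>
-  <name>master.rmi.connector.port</name>
-  <value>61120</value>
-</property>
-----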
-
-[[dyn_config]]
-== Dynamic Configuration
-
-It is possible to change a subset of the configuration without requiring a server restart. In the
-HBase shell, the operations `update_config` and `update_all_config` will prompt a server or all
-servers to reload configuration.
-
-Only a subset of all configurations can currently be changed in the running server.
-Here are those configurations:
-
-.Configurations that support dynamic change
-[cols="1",options="header"]
-|===
-| Key
-| hbase.ipc.server.fallback-to-simple-auth-allowed
-| hbase.cleaner.scan.dir.concurrent.size
-| hbase.coprocessor.master.classes
-| hbase.coprocessor.region.classes
-| hbase.coprocessor.regionserver.classes
-| hbase.coprocessor.user.region.classes
-| hbase.regionserver.thread.compaction.large
-| hbase.regionserver.thread.compaction.small
-| hbase.regionserver.thread.split
-| hbase.regionserver.throughput.controller
-| hbase.regionserver.thread.hfilecleaner.throttle
-| hbase.regionserver.hfilecleaner.large.queue.size
-| hbase.regionserver.hfilecleaner.small.queue.size
-| hbase.regionserver.hfilecleaner.large.thread.count
-| hbase.regionserver.hfilecleaner.small.thread.count
-| hbase.regionserver.hfilecleaner.thread.timeout.msec
-| hbase.regionserver.hfilecleaner.thread.check.interval.msec
-| hbase.regionserver.flush.throughput.controller
-| hbase.hstore.compaction.max.size
-| hbase.hstore.compaction.max.size.offpeak
-| hbase.hstore.compaction.min.size
-| hbase.hstore.compaction.min
-| hbase.hstore.compaction.max
-| hbase.hstore.compaction.ratio
-| hbase.hstore.compaction.ratio.offpeak
-| hbase.regionserver.thread.compaction.throttle
-| hbase.hregion.majorcompaction
-| hbase.hregion.majorcompaction.jitter
-| hbase.hstore.min.locality.to.skip.major.compact
-| hbase.hstore.compaction.date.tiered.max.storefile.age.millis
-| hbase.hstore.compaction.date.tiered.incoming.window.min
-| hbase.hstore.compaction.date.tiered.window.policy.class
-| hbase.hstore.compaction.date.tiered.single.output.for.minor.compaction
-| hbase.hstore.compaction.date.tiered.window.factory.class
-| hbase.offpeak.start.hour
-| hbase.offpeak.end.hour
-| hbase.oldwals.cleaner.thread.size
-| hbase.oldwals.cleaner.thread.timeout.msec
-| hbase.oldwals.cleaner.thread.check.interval.msec
-| hbase.procedure.worker.keep.alive.time.msec
-| hbase.procedure.worker.add.stuck.percentage
-| hbase.procedure.worker.monitor.interval.msec
-| hbase.procedure.worker.stuck.threshold.msec
-| hbase.regions.slop
-| hbase.regions.overallSlop
-| hbase.balancer.tablesOnMaster
-| hbase.balancer.tablesOnMaster.systemTablesOnly
-| hbase.util.ip.to.rack.determiner
-| hbase.ipc.server.max.callqueue.length
-| hbase.ipc.server.priority.max.callqueue.length
-| hbase.ipc.server.callqueue.type
-| hbase.ipc.server.callqueue.codel.target.delay
-| hbase.ipc.server.callqueue.codel.interval
-| hbase.ipc.server.callqueue.codel.lifo.threshold
-| hbase.master.balancer.stochastic.maxSteps
-| hbase.master.balancer.stochastic.stepsPerRegion
-| hbase.master.balancer.stochastic.maxRunningTime
-| hbase.master.balancer.stochastic.runMaxSteps
-| hbase.master.balancer.stochastic.numRegionLoadsToRemember
-| hbase.master.loadbalance.bytable
-| hbase.master.balancer.stochastic.minCostNeedBalance
-| hbase.master.balancer.stochastic.localityCost
-| hbase.master.balancer.stochastic.rackLocalityCost
-| hbase.master.balancer.stochastic.readRequestCost
-| hbase.master.balancer.stochastic.writeRequestCost
-| hbase.master.balancer.stochastic.memstoreSizeCost
-| hbase.master.balancer.stochastic.storefileSizeCost
-| hbase.master.balancer.stochastic.regionReplicaHostCostKey
-| hbase.master.balancer.stochastic.regionReplicaRackCostKey
-| hbase.master.balancer.stochastic.regionCountCost
-| hbase.master.balancer.stochastic.primaryRegionCountCost
-| hbase.master.balancer.stochastic.moveCost
-| hbase.master.balancer.stochastic.moveCost.offpeak
-| hbase.master.balancer.stochastic.maxMovePercent
-| hbase.master.balancer.stochastic.tableSkewCost
-| hbase.master.regions.recovery.check.interval
-| hbase.regions.recovery.store.file.ref.count
-| hbase.rsgroup.fallback.enable
-|===
-
-ifdef::backend-docbook[]
-[index]
-== Index
-// Generated automatically by the DocBook toolchain.
-endif::backend-docbook[]
diff --git a/src/main/asciidoc/_chapters/cp.adoc b/src/main/asciidoc/_chapters/cp.adoc
deleted file mode 100644
index 43aa55137b9b..000000000000
--- a/src/main/asciidoc/_chapters/cp.adoc
+++ /dev/null
@@ -1,812 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[[cp]]
-= Apache HBase Coprocessors
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-HBase Coprocessors are modeled after Google BigTable's coprocessor implementation
-(link:http://research.google.com/people/jeff/SOCC2010-keynote-slides.pdf[SOCC 2010 keynote slides], pages 41-42).
-
-The coprocessor framework provides mechanisms for running your custom code directly on
-the RegionServers managing your data. Efforts are ongoing to bridge gaps between HBase's
-implementation and BigTable's architecture. For more information see
-link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
-
-The information in this chapter is primarily sourced and heavily reused from the following
-resources:
-
-. Mingjie Lai's blog post
-link:https://blogs.apache.org/hbase/entry/coprocessor_introduction[Coprocessor Introduction].
-. Gaurav Bhardwaj's blog post
-link:http://www.3pillarglobal.com/insights/hbase-coprocessors[The How To Of HBase Coprocessors].
-
-[WARNING]
-.Use Coprocessors At Your Own Risk
-====
-Coprocessors are an advanced feature of HBase and are intended to be used by system
-developers only. Because coprocessor code runs directly on the RegionServer and has
-direct access to your data, coprocessors introduce the risk of data corruption, man-in-the-middle
-attacks, or other malicious data access. Currently, there is no mechanism to prevent
-data corruption by coprocessors, though work is underway on
-link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
-
-In addition, there is no resource isolation, so a well-intentioned but misbehaving
-coprocessor can severely degrade cluster performance and stability.
-====
-
-== Coprocessor Overview
-
-In HBase, you fetch data using a `Get` or `Scan`, whereas in an RDBMS you use a SQL
-query. In order to fetch only the relevant data, you filter it using an HBase
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html[Filter],
-whereas in an RDBMS you use a `WHERE` predicate.
-
-After fetching the data, you perform computations on it. This paradigm works well
-for "small data" with a few thousand rows and several columns. However, when you scale
-to billions of rows and millions of columns, moving large amounts of data across your
-network will create bottlenecks at the network layer, and the client needs to be powerful
-enough and have enough memory to handle the large amounts of data and the computations.
-In addition, the client code can grow large and complex.
-
-In this scenario, coprocessors might make sense. You can put the business computation
-code into a coprocessor which runs on the RegionServer, in the same location as the
-data, and returns the result to the client.
-
-This is only one scenario where using coprocessors can provide benefit. Following
-are some analogies which may help to explain some of the benefits of coprocessors.
-
-[[cp_analogies]]
-=== Coprocessor Analogies
-
-Triggers and Stored Procedures::
-  An Observer coprocessor is similar to a trigger in an RDBMS in that it executes
-  your code either before or after a specific event (such as a `Get` or `Put`)
-  occurs. An Endpoint coprocessor is similar to a stored procedure in an RDBMS
-  because it allows you to perform custom computations on the data on the
-  RegionServer itself, rather than on the client.
-
-MapReduce::
- MapReduce operates on the principle of moving the computation to the location of
- the data. Coprocessors operate on the same principle.
-
-AOP::
- If you are familiar with Aspect Oriented Programming (AOP), you can think of a coprocessor
- as applying advice by intercepting a request and then running some custom code,
- before passing the request on to its final destination (or even changing the destination).
-
-
-=== Coprocessor Implementation Overview
-
-. Your class should implement one of the Coprocessor interfaces -
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/Coprocessor.html[Coprocessor],
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver],
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorService.html[CoprocessorService] - to name a few.
-
-. Load the coprocessor, either statically (from the configuration) or dynamically,
-using HBase Shell. For more details see <<cp_loading>>.
-
-. Call the coprocessor from your client-side code. HBase handles the coprocessor
-transparently.
-
-The framework API is provided in the
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/package-summary.html[coprocessor]
-package.
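-
-For orientation, here is a minimal, do-nothing sketch of the first step; the class name is an
-assumption, and a real implementation would override one or more hook methods.
-
-[source,java]
-----
-import java.util.Optional;
-import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
-import org.apache.hadoop.hbase.coprocessor.RegionObserver;
-
-// A skeleton coprocessor: it registers itself as a RegionObserver but overrides no hooks yet.
-public class MinimalRegionCoprocessor implements RegionCoprocessor, RegionObserver {
-
-  @Override
-  public Optional<RegionObserver> getRegionObserver() {
-    // Returning this instance tells the framework to deliver region-level events to this class.
-    return Optional.of(this);
-  }
-}
-----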
-
-== Types of Coprocessors
-
-=== Observer Coprocessors
-
-Observer coprocessors are triggered either before or after a specific event occurs.
-Observers that happen before an event use methods that start with a `pre` prefix,
-such as link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#prePut-org.apache.hadoop.hbase.coprocessor.ObserverContext-org.apache.hadoop.hbase.client.Put-org.apache.hadoop.hbase.wal.WALEdit-org.apache.hadoop.hbase.client.Durability-[`prePut`]. Observers that happen just after an event override methods that start
-with a `post` prefix, such as link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#postPut-org.apache.hadoop.hbase.coprocessor.ObserverContext-org.apache.hadoop.hbase.client.Put-org.apache.hadoop.hbase.wal.WALEdit-org.apache.hadoop.hbase.client.Durability-[`postPut`].
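-
-The sketch below shows the pre/post pattern with empty hook bodies; the class name is an
-assumption, and the method signatures follow the HBase 2.x `RegionObserver` interface.
-
-[source,java]
-----
-import java.io.IOException;
-import java.util.Optional;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.coprocessor.ObserverContext;
-import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
-import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.coprocessor.RegionObserver;
-import org.apache.hadoop.hbase.wal.WALEdit;
-
-public class PrePostPutObserver implements RegionCoprocessor, RegionObserver {
-
-  @Override
-  public Optional<RegionObserver> getRegionObserver() {
-    return Optional.of(this);
-  }
-
-  @Override
-  public void prePut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
-      WALEdit edit, Durability durability) throws IOException {
-    // Runs before the Put is applied to the region; validation or rejection could happen here.
-  }
-
-  @Override
-  public void postPut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
-      WALEdit edit, Durability durability) throws IOException {
-    // Runs after the Put has been applied; auditing or counter updates could happen here.
-  }
-}
-----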
-
-
-==== Use Cases for Observer Coprocessors
-Security::
- Before performing a `Get` or `Put` operation, you can check for permission using
- `preGet` or `prePut` methods.
-
-Referential Integrity::
- HBase does not directly support the RDBMS concept of referential integrity, also known
- as foreign keys. You can use a coprocessor to enforce such integrity. For instance,
- if you have a business rule that every insert to the `users` table must be followed
- by a corresponding entry in the `user_daily_attendance` table, you could implement
- a coprocessor that uses the `prePut` method on `users` to insert a record into
- `user_daily_attendance`; a sketch of this idea follows this list.
-
-Secondary Indexes::
- You can use a coprocessor to maintain secondary indexes. For more information, see
- link:https://cwiki.apache.org/confluence/display/HADOOP2/Hbase+SecondaryIndexing[SecondaryIndexing].
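-
-The following sketch illustrates the referential-integrity idea from the list above. It assumes
-it is loaded only on the `users` table, that the attendance table has a column family named
-`att`, and it deliberately omits error handling; treat it as an outline, not a complete solution.
-
-[source,java]
-----
-import java.io.IOException;
-import java.util.Optional;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.coprocessor.ObserverContext;
-import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
-import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.coprocessor.RegionObserver;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.wal.WALEdit;
-
-public class AttendanceIntegrityObserver implements RegionCoprocessor, RegionObserver {
-
-  private static final TableName ATTENDANCE = TableName.valueOf("user_daily_attendance");
-
-  @Override
-  public Optional<RegionObserver> getRegionObserver() {
-    return Optional.of(this);
-  }
-
-  @Override
-  public void prePut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
-      WALEdit edit, Durability durability) throws IOException {
-    // Mirror every Put on the users table into user_daily_attendance, keyed by the same row.
-    try (Table attendance = c.getEnvironment().getConnection().getTable(ATTENDANCE)) {
-      Put marker = new Put(put.getRow());
-      marker.addColumn(Bytes.toBytes("att"), Bytes.toBytes("seen"), Bytes.toBytes("true"));
-      attendance.put(marker);
-    }
-  }
-}
-----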
-
-
-==== Types of Observer Coprocessor
-
-RegionObserver::
- A RegionObserver coprocessor allows you to observe events on a region, such as `Get`
- and `Put` operations. See
- link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver].
-
-RegionServerObserver::
- A RegionServerObserver allows you to observe events related to the RegionServer's
- operation, such as starting, stopping, or performing merges, commits, or rollbacks.
- See
- link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.html[RegionServerObserver].
-
-MasterObserver::
- A MasterObserver allows you to observe events related to the HBase Master, such
- as table creation, deletion, or schema modification. See
- link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/MasterObserver.html[MasterObserver].
-
-WalObserver::
- A WalObserver allows you to observe events related to writes to the Write-Ahead
- Log (WAL). See
- link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/WALObserver.html[WALObserver].
-
-<<cp_example>> provides working examples of observer coprocessors.
-
-
-
-[[cpeps]]
-=== Endpoint Coprocessor
-
-Endpoint processors allow you to perform computation at the location of the data.
-See <<cp_analogies>>. An example is the need to calculate a running
-average or summation for an entire table which spans hundreds of regions.
-
-In contrast to observer coprocessors, where your code is run transparently, endpoint
-coprocessors must be explicitly invoked using the
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html#coprocessorService-java.util.function.Function-org.apache.hadoop.hbase.client.ServiceCaller-byte:A-[CoprocessorService()]
-method available in
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html[AsyncTable].
-
-[WARNING]
-.On using coprocessorService method with sync client
-====
-The coprocessorService method in link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html[Table]
-has been deprecated.
-
-In link:https://issues.apache.org/jira/browse/HBASE-21512[HBASE-21512]
-we reimplemented the sync client on top of the async client. The `coprocessorService`
-method defined in the `Table` interface directly references a method from protobuf's
-`BlockingInterface`, which means we would need to use a separate thread pool to execute
-the method in order to avoid blocking the async client (we want to avoid blocking calls in
-our async implementation).
-
-Since coprocessor is an advanced feature, we believe it is OK for coprocessor users to
-instead switch over to use `AsyncTable`. There is a lightweight
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Connection.html#toAsyncConnection--[toAsyncConnection]
-method to get an `AsyncConnection` from `Connection` if needed.
-====
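-
-Below is a minimal sketch of obtaining an `AsyncTable` from an existing synchronous `Connection`
-before invoking an Endpoint, assuming an HBase version that provides
-`Connection.toAsyncConnection()`. The table name is an assumption, and the actual
-`coprocessorService` call is only indicated in a comment because it depends on your generated
-protobuf stubs.
-
-[source,java]
-----
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer;
-import org.apache.hadoop.hbase.client.AsyncConnection;
-import org.apache.hadoop.hbase.client.AsyncTable;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-
-public class AsyncEndpointSketch {
-  public static void main(String[] args) throws Exception {
-    Configuration conf = HBaseConfiguration.create();
-    try (Connection connection = ConnectionFactory.createConnection(conf)) {
-      // Wrap the existing synchronous connection; no second cluster connection is created.
-      AsyncConnection asyncConnection = connection.toAsyncConnection();
-      AsyncTable<AdvancedScanResultConsumer> table =
-          asyncConnection.getTable(TableName.valueOf("users"));
-      // table.coprocessorService(stubFactory, serviceCaller, rowKey) would go here, using the
-      // stub and ServiceCaller generated for your Endpoint's protobuf service.
-      System.out.println("Async table obtained for: " + table.getName());
-    }
-  }
-}
-----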
-
-Starting with HBase 0.96, endpoint coprocessors are implemented using Google Protocol
-Buffers (protobuf). For more details on protobuf, see Google's
-link:https://developers.google.com/protocol-buffers/docs/proto[Protocol Buffer Guide].
-Endpoint Coprocessors written in version 0.94 are not compatible with version 0.96 or later
-(see link:https://issues.apache.org/jira/browse/HBASE-5448[HBASE-5448]). To upgrade your
-HBase cluster from 0.94 or earlier to 0.96 or later, you need to reimplement your
-coprocessor.
-
-In HBase 2.x, we made use of a shaded version of protobuf 3.x, but kept the
-protobuf for coprocessors on 2.5.0. In HBase 3.0.0, we removed all dependencies on
-non-shaded protobuf so you need to reimplement your coprocessor to make use of the
-shaded protobuf version provided in hbase-thirdparty. Please see
-the <> section for more details.
-
-Coprocessor Endpoints should make no use of HBase internals and should
-rely only on public APIs; ideally a CPEP should depend on Interfaces
-and data structures only. This is not always possible, but beware
-that relying on internals makes the Endpoint brittle, liable to breakage as HBase
-internals evolve. HBase internal APIs annotated as private or evolving
-do not have to respect semantic versioning rules or general Java rules on
-deprecation before removal. While generated protobuf files lack the HBase
-audience annotations (they are created by the protobuf protoc tool, which
-knows nothing of how HBase works), they should be considered
-`@InterfaceAudience.Private` and so are liable to
-change.
-
-<> provides working examples of endpoint coprocessors.
-
-[[cp_loading]]
-== Loading Coprocessors
-
-To make your coprocessor available to HBase, it must be _loaded_, either statically
-(through the HBase configuration) or dynamically (using HBase Shell or the Java API).
-
-=== Static Loading
-
-Follow these steps to statically load your coprocessor. Keep in mind that you must
-restart HBase to unload a coprocessor that has been loaded statically.
-
-. Define the Coprocessor in _hbase-site.xml_, with a `<property>` element with a `<name>`
-and a `<value>` sub-element. The `<name>` should be one of the following:
-+
-- `hbase.coprocessor.region.classes` for RegionObservers and Endpoints.
-- `hbase.coprocessor.wal.classes` for WALObservers.
-- `hbase.coprocessor.master.classes` for MasterObservers.
-+
-`<value>` must contain the fully-qualified class name of your coprocessor's implementation
-class.
-+
-For example, to load a Coprocessor (implemented in the class SumEndPoint.java), you have to
-create the following entry in the RegionServer's _hbase-site.xml_ file (generally located under
-the 'conf' directory):
-+
-[source,xml]
-----
-<property>
-    <name>hbase.coprocessor.region.classes</name>
-    <value>org.myname.hbase.coprocessor.endpoint.SumEndPoint</value>
-</property>
-----
-+
-If multiple classes are specified for loading, the class names must be comma-separated.
-The framework attempts to load all the configured classes using the default class loader.
-Therefore, the jar file must reside on the server-side HBase classpath.
-
-+
-Coprocessors which are loaded in this way will be active on all regions of all tables.
-These are also called system Coprocessors.
-The first listed Coprocessor will be assigned the priority `Coprocessor.PRIORITY_SYSTEM`.
-Each subsequent coprocessor in the list will have its priority value incremented by one (which
-reduces its priority, because priorities have the natural sort order of Integers).
-
-+
-These priority values can be manually overridden in _hbase-site.xml_. This can be useful if you
-want to guarantee that a coprocessor will execute after another. For example, in the following
-configuration `SumEndPoint` would be guaranteed to go last, except in the case of a tie with
-another coprocessor:
-+
-[source,xml]
-----
-<property>
-    <name>hbase.coprocessor.region.classes</name>
-    <value>org.myname.hbase.coprocessor.endpoint.SumEndPoint|2147483647</value>
-</property>
-----
-
-+
-When calling out to registered observers, the framework executes their callback methods in the
-sorted order of their priority. +
-Ties are broken arbitrarily.
-
-. Put your code on HBase's classpath. One easy way to do this is to drop the jar
- (containing your code and all the dependencies) into the `lib/` directory in the
- HBase installation.
-
-. Restart HBase.
-
-
-=== Static Unloading
-
-. Delete the coprocessor's element, including sub-elements, from `hbase-site.xml`.
-. Restart HBase.
-. Optionally, remove the coprocessor's JAR file from the classpath or HBase's `lib/`
- directory.
-
-
-=== Dynamic Loading
-
-You can also load a coprocessor dynamically, without restarting HBase. This may seem
-preferable to static loading, but dynamically loaded coprocessors are loaded on a
-per-table basis, and are only available to the table for which they were loaded. For
-this reason, dynamically loaded coprocessors are sometimes called *Table Coprocessors*.
-
-In addition, dynamically loading a coprocessor acts as a schema change on the table,
-and the table must be taken offline to load the coprocessor.
-
-There are three ways to dynamically load a Coprocessor.
-
-[NOTE]
-.Assumptions
-====
-The instructions below make the following assumptions:
-
-* A JAR called `coprocessor.jar` contains the Coprocessor implementation along with all of its
-dependencies.
-* The JAR is available in HDFS in some location like
-`hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar`.
-====
-
-[[load_coprocessor_in_shell]]
-==== Using HBase Shell
-
-. Load the Coprocessor, using a command like the following:
-+
-[source]
-----
-hbase alter 'users', METHOD => 'table_att', 'Coprocessor' =>
-  'hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar|org.myname.hbase.Coprocessor.RegionObserverExample|1073741823|arg1=1,arg2=2'
-----
-+
-The Coprocessor framework will try to read the class information from the coprocessor table
-attribute value.
-The value contains four pieces of information which are separated by the pipe (`|`) character.
-+
-* File path: The jar file containing the Coprocessor implementation must be in a location where
-all region servers can read it. +
-You could copy the file onto the local disk on each region server, but it is recommended to store
-it in HDFS. +
-https://issues.apache.org/jira/browse/HBASE-14548[HBASE-14548] allows a directory containing the jars
-or some wildcards to be specified, such as: hdfs://<namenode>:<port>/user/<hadoop-user>/ or
-hdfs://<namenode>:<port>/user/<hadoop-user>/*.jar. Please note that if a directory is specified,
-all jar files (.jar) in the directory are added. It does not search for files in sub-directories.
-Do not use a wildcard if you would like to specify a directory. This enhancement applies to
-usage via the Java API as well.
-* Class name: The full class name of the Coprocessor.
-* Priority: An integer. The framework will determine the execution sequence of all configured
-observers registered at the same hook using priorities. This field can be left blank. In that
-case the framework will assign a default priority value.
-* Arguments (Optional): This field is passed to the Coprocessor implementation. This is optional.
-
-. Verify that the coprocessor loaded:
-+
-----
-hbase(main):04:0> describe 'users'
-----
-+
-The coprocessor should be listed in the `TABLE_ATTRIBUTES`.
-
-==== Using the Java API (all HBase versions)
-
-The following Java code shows how to use the `setValue()` method of `HTableDescriptor`
-to load a coprocessor on the `users` table.
-
-[source,java]
-----
-TableName tableName = TableName.valueOf("users");
-String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
-Configuration conf = HBaseConfiguration.create();
-Connection connection = ConnectionFactory.createConnection(conf);
-Admin admin = connection.getAdmin();
-HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
-HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
-columnFamily1.setMaxVersions(3);
-hTableDescriptor.addFamily(columnFamily1);
-HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
-columnFamily2.setMaxVersions(3);
-hTableDescriptor.addFamily(columnFamily2);
-hTableDescriptor.setValue("COPROCESSOR$1", path + "|"
-+ RegionObserverExample.class.getCanonicalName() + "|"
-+ Coprocessor.PRIORITY_USER);
-admin.modifyTable(tableName, hTableDescriptor);
-----
-
-==== Using the Java API (HBase 0.96+ only)
-
-In HBase 0.96 and newer, the `addCoprocessor()` method of `HTableDescriptor` provides
-an easier way to load a coprocessor dynamically.
-
-[source,java]
-----
-TableName tableName = TableName.valueOf("users");
-Path path = new Path("hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar");
-Configuration conf = HBaseConfiguration.create();
-Connection connection = ConnectionFactory.createConnection(conf);
-Admin admin = connection.getAdmin();
-HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
-HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
-columnFamily1.setMaxVersions(3);
-hTableDescriptor.addFamily(columnFamily1);
-HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
-columnFamily2.setMaxVersions(3);
-hTableDescriptor.addFamily(columnFamily2);
-hTableDescriptor.addCoprocessor(RegionObserverExample.class.getCanonicalName(), path,
-Coprocessor.PRIORITY_USER, null);
-admin.modifyTable(tableName, hTableDescriptor);
-----
-
-WARNING: There is no guarantee that the framework will load a given Coprocessor successfully.
-For example, the shell command neither guarantees a jar file exists at a particular location nor
-verifies whether the given class is actually contained in the jar file.
-
-
-=== Dynamic Unloading
-
-==== Using HBase Shell
-
-. Alter the table to remove the coprocessor.
-+
-[source]
-----
-hbase> alter 'users', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
-----
-
-==== Using the Java API
-
-Reload the table definition without setting the coprocessor value, that is, without calling
-`setValue()` or `addCoprocessor()`. This will remove any coprocessor
-attached to the table.
-
-[source,java]
-----
-TableName tableName = TableName.valueOf("users");
-String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
-Configuration conf = HBaseConfiguration.create();
-Connection connection = ConnectionFactory.createConnection(conf);
-Admin admin = connection.getAdmin();
-HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
-HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
-columnFamily1.setMaxVersions(3);
-hTableDescriptor.addFamily(columnFamily1);
-HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
-columnFamily2.setMaxVersions(3);
-hTableDescriptor.addFamily(columnFamily2);
-admin.modifyTable(tableName, hTableDescriptor);
-----
-
-In HBase 0.96 and newer, you can instead use the `removeCoprocessor()` method of the
-`HTableDescriptor` class.
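-
-Below is a minimal sketch using `removeCoprocessor()`, in the same `HTableDescriptor` style as the
-examples above; the class name matches the earlier shell example, and a modifiable copy of the
-descriptor is made because the descriptor returned by `Admin` may be read-only.
-
-[source,java]
-----
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.HTableDescriptor;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-
-public class RemoveCoprocessorSketch {
-  public static void main(String[] args) throws Exception {
-    TableName tableName = TableName.valueOf("users");
-    Configuration conf = HBaseConfiguration.create();
-    try (Connection connection = ConnectionFactory.createConnection(conf);
-         Admin admin = connection.getAdmin()) {
-      // Copy the current schema so it can be modified, drop the coprocessor, and push it back.
-      HTableDescriptor hTableDescriptor = new HTableDescriptor(admin.getTableDescriptor(tableName));
-      hTableDescriptor.removeCoprocessor("org.myname.hbase.Coprocessor.RegionObserverExample");
-      admin.modifyTable(tableName, hTableDescriptor);
-    }
-  }
-}
-----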
-
-
-[[cp_example]]
-== Examples
-HBase ships examples for Observer Coprocessors.
-
-A more detailed example is given below.
-
-These examples assume a table called `users`, which has two column families `personalDet`
-and `salaryDet`, containing personal and salary details. Below is the graphical representation
-of the `users` table.
-
-.Users Table
-[width="100%",cols="7",options="header,footer"]
-|====================
-| 3+|personalDet 3+|salaryDet
-|*rowkey* |*name* |*lastname* |*dob* |*gross* |*net* |*allowances*
-|admin |Admin |Admin | 3+|
-|cdickens |Charles |Dickens |02/07/1812 |10000 |8000 |2000
-|jverne |Jules |Verne |02/08/1828 |12000 |9000 |3000
-|====================
-
-
-=== Observer Example
-
-The following Observer coprocessor prevents the details of the user `admin` from being
-returned in a `Get` or `Scan` of the `users` table.
-
-. Write a class that implements the
-link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionCoprocessor.html[RegionCoprocessor]
-and link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver]
-interfaces.
-
-. Override the `preGetOp()` method (the `preGet()` method is deprecated) to check
-whether the client has queried for the rowkey with value `admin`. If so, return an
-empty result. Otherwise, process the request as normal.
-
-. Put your code and dependencies in a JAR file.
-
-. Place the JAR in HDFS where HBase can locate it.
-
-. Load the Coprocessor.
-
-. Write a simple program to test it.
-
-The following code implements the above steps:
-
-[source,java]
-----
-public class RegionObserverExample implements RegionCoprocessor, RegionObserver {
-
- private static final byte[] ADMIN = Bytes.toBytes("admin");
- private static final byte[] COLUMN_FAMILY = Bytes.toBytes("details");
- private static final byte[] COLUMN = Bytes.toBytes("Admin_det");
- private static final byte[] VALUE = Bytes.toBytes("You can't see Admin details");
-
- @Override
- public Optional<RegionObserver> getRegionObserver() {
- return Optional.of(this);
- }
-
- @Override
- public void preGetOp(final ObserverContext |