
Commit e0d4b3f

meszibalu authored and petersomogyi committed
HBASE-22221 Extend kafka-proxy documentation with required hbase settings
1 parent 1cf301a commit e0d4b3f

1 file changed: +32 -17 lines changed


kafka/README.md (+32 -17)
@@ -1,6 +1,6 @@
 # Apache HBase™ Kafka Proxy
 
-Welcome to the hbase kafka proxy. The purpose of this proxy is to act as a _fake peer_.
+Welcome to the HBase kafka proxy. The purpose of this proxy is to act as a _fake peer_.
 It receives replication events from a peer cluster and applies a set of rules (stored in
 a _kafka-route-rules.xml_ file) to determine if the event should be forwarded to a
 kafka topic. If the mutation matches a rule, the mutation is converted to an avro object
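
As background for the hunk above: the _kafka-route-rules.xml_ file it refers to is a small XML document of _route_ and _drop_ rules. The sketch below is illustrative only; the element and attribute names (`rules`, `rule`, `action`, `table`, `columnFamily`, `qualifier`, `topic`) and the `secret` qualifier are assumptions based on the rule examples discussed later in the README, not part of this diff.

```
<rules>
  <!-- assumed schema; the README's own examples are the authoritative format -->
  <!-- drop mutations on default:mytable, column family mycf, qualifier secret -->
  <rule action="drop" table="default:mytable" columnFamily="mycf" qualifier="secret"/>
  <!-- route everything else on that column family to the mykafka topic -->
  <rule action="route" table="default:mytable" columnFamily="mycf" topic="mykafka"/>
</rules>
```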
@@ -12,9 +12,9 @@ pass them as properties on the command line; i.e `-Dkey=value`.
 
 ## Usage
 
-1. Make sure the hbase command is in your path. The proxy runs `hbase classpath` to find hbase libraries.
+1. Make sure the `hbase` command is in your path. The proxy runs `hbase classpath` to find hbase libraries.
 2. Create any topics in your kafka broker that you wish to use.
-3. set up _kafka-route-rules.xml_. This file controls how the mutations are routed. There are two kinds of rules: _route_ and _drop_.
+3. Set up _kafka-route-rules.xml_. This file controls how the mutations are routed. There are two kinds of rules: _route_ and _drop_.
 * _drop_: any mutation that matches this rule will be dropped.
 * _route_: any mutation that matches this rule will be routed to the configured topic.
 
@@ -57,6 +57,17 @@ This combination will route all mutations from `default:mytable` columnFamily `m
 The way the rule is written, all other mutations for column family `mycf` will be routed
 to the `mykafka` topic.
 
+### Setting up HBase
+
+1. Enable replication `hbase.replication=true`.
+2. Enable table replication in shell. Table name is `table` and column family is `cf` in the
+following example:
+```
+disable 'table'
+alter 'table', {NAME => 'cf', REPLICATION_SCOPE => 1}
+enable 'table'
+```
+
 ## Service Arguments
 
 ```
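
A note on step 1 of the new "Setting up HBase" section: `hbase.replication` is an hbase-site.xml property, so on the source cluster it would typically be set with a standard configuration entry like the minimal sketch below (placing it in the cluster's conf/hbase-site.xml is an assumption, not stated in the diff).

```
<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>
```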
@@ -69,27 +80,31 @@ to the `mykafka` topic.
 --auto (or -a) auto create peer
 ```
 
+## Starting the Service
 
-## Starting the Service.
-* make sure the hbase command is in your path
-* by default, the service looks for route-rules.xml in the conf directory. You can specify a different file or location with the `-r` argument
+* Make sure the `hbase` command is in your path.
+* By default, the service looks for _kafka-route-rules.xml_ in the conf directory. You can
+specify a different file or location with the `-r` argument.
 
+For example:
 
-### Example
 ```
-$ bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p wootman -b localhost:9092 -r ~/kafka-route-rules.xml
+$ bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p <peer> -b <kafka.address>:<kafka.port>
 ```
 
 This:
-* starts the kafka proxy
-* passes -a so proxy will create the replication peer specified by -p if it does not exist (not required, but it saves some busy work).
-* enables the peer (-e) when the service starts (not required, you can manually enable the peer in the hbase shell)
+* Starts the kafka proxy.
+* Passes `-a` so proxy will create the replication peer specified by `-p` if it does not exist
+(not required, but it saves some busy work).
+* Enables the peer (`-e`) when the service starts (not required, you can manually enable the
+peer in the shell).
+* The proxy will use _conf/kafka-route-rules.xml_ by default.
 
 ## Notes
 
-1. The proxy will connect to the zookeeper in `hbase-site.xml` by default. You can override this by passing `-Dhbase.zookeeper.quorum`
+1. The proxy will connect to the zookeeper in `hbase-site.xml` by default. You can override this
+by passing `-Dhbase.zookeeper.quorum`.
 2. Route rules only support unicode characters.
-3. I do not have access to a secured hadoop clsuter to test this on.
 
 ### Message Format
 
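Regarding the "(not required, you can manually enable the peer in the shell)" note added in this hunk: the standard HBase shell replication commands can be used for that step. A brief sketch, with the peer name as a placeholder:

```
hbase> list_peers
hbase> enable_peer '<peer>'
```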
@@ -120,10 +135,10 @@ A utility is included to test the routing rules.
 $ bin/hbase-connectors-daemon.sh start kafkaproxytest -k <kafka.broker> -t <topic to listen to>
 ```
 
-The messages will be dumped in string format under `logs/`
+The messages will be dumped in string format under `logs/`.
 
-## TODO:
+## TODO
 1. Some properties passed into the region server are hard-coded.
-2. The avro objects should be generic
-3. Allow rules to be refreshed without a restart
+2. The avro objects should be generic.
+3. Allow rules to be refreshed without a restart.
 4. Get this tested on a secure (TLS & Kerberos) enabled cluster.
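
As a concrete, purely illustrative invocation of the test utility shown in this hunk, using a local broker and the `mykafka` topic from the routing-rules discussion (both values are assumptions); the consumed messages are then dumped in string form under `logs/` as noted above:

```
$ bin/hbase-connectors-daemon.sh start kafkaproxytest -k localhost:9092 -t mykafka
```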
