# Apache HBase&trade; Kafka Proxy

- Welcome to the hbase kafka proxy. The purpose of this proxy is to act as a _fake peer_.
+ Welcome to the HBase kafka proxy. The purpose of this proxy is to act as a _fake peer_.
It receives replication events from a peer cluster and applies a set of rules (stored in
a _kafka-route-rules.xml_ file) to determine if the event should be forwarded to a
kafka topic. If the mutation matches a rule, the mutation is converted to an avro object
@@ -12,9 +12,9 @@ pass them as properties on the command line; i.e `-Dkey=value`.

## Usage

- 1. Make sure the hbase command is in your path. The proxy runs `hbase classpath` to find hbase libraries.
+ 1. Make sure the `hbase` command is in your path. The proxy runs `hbase classpath` to find hbase libraries.
2. Create any topics in your kafka broker that you wish to use.
- 3. set up _kafka-route-rules.xml_. This file controls how the mutations are routed. There are two kinds of rules: _route_ and _drop_.
+ 3. Set up _kafka-route-rules.xml_. This file controls how the mutations are routed. There are two kinds of rules: _route_ and _drop_.
* _drop_: any mutation that matches this rule will be dropped.
* _route_: any mutation that matches this rule will be routed to the configured topic.

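For illustration, a rules file that combines both kinds of rule might look like the sketch below. The element and attribute names (`rules`, `rule`, `action`, `table`, `columnFamily`, `topic`) are assumptions here, and the table, column family, and topic values are placeholders; the worked rule examples later in this README are the authoritative reference for the format.

```
<rules>
  <!-- drop: discard every mutation for this (placeholder) table and column family -->
  <rule action="drop" table="default:mytable" columnFamily="private"/>
  <!-- route: send mutations for this table and column family to the mykafka topic -->
  <rule action="route" table="default:mytable" columnFamily="mycf" topic="mykafka"/>
</rules>
```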
@@ -57,6 +57,17 @@ This combination will route all mutations from `default:mytable` columnFamily `m
The way the rule is written, all other mutations for column family `mycf` will be routed
to the `mykafka` topic.

+ ### Setting up HBase
+
+ 1. Enable replication by setting `hbase.replication=true` (see the sketch after this list).
+ 2. Enable table replication in the shell. The table name is `table` and the column family is `cf` in the
+ following example:
+ ```
+ disable 'table'
+ alter 'table', {NAME => 'cf', REPLICATION_SCOPE => 1}
+ enable 'table'
+ ```
+
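A minimal sketch for step 1, assuming the property is set in _hbase-site.xml_ on the source (peer) cluster (where it usually lives; your deployment may differ):

```
<!-- assumed to live in hbase-site.xml on the source cluster -->
<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>
```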
## Service Arguments

```
@@ -69,27 +80,31 @@ to the `mykafka` topic.
--auto (or -a) auto create peer
```

+ ## Starting the Service

- ## Starting the Service.
- * make sure the hbase command is in your path
- * by default, the service looks for route-rules.xml in the conf directory. You can specify a different file or location with the `-r` argument
+ * Make sure the `hbase` command is in your path.
+ * By default, the service looks for _kafka-route-rules.xml_ in the conf directory. You can
+ specify a different file or location with the `-r` argument.

+ For example:

- ### Example
```
- $ bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p wootman -b localhost:9092 -r ~/kafka-route-rules.xml
+ $ bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p <peer> -b <kafka.address>:<kafka.port>
```

This:
- * starts the kafka proxy
- * passes -a so proxy will create the replication peer specified by -p if it does not exist (not required, but it saves some busy work).
- * enables the peer (-e) when the service starts (not required, you can manually enable the peer in the hbase shell)
+ * Starts the kafka proxy.
+ * Passes `-a` so the proxy will create the replication peer specified by `-p` if it does not exist
+ (not required, but it saves some busy work).
+ * Enables the peer (`-e`) when the service starts (not required, you can manually enable the
+ peer in the shell).
+ * The proxy will use _conf/kafka-route-rules.xml_ by default.

## Notes

- 1. The proxy will connect to the zookeeper in `hbase-site.xml` by default. You can override this by passing `-Dhbase.zookeeper.quorum`
+ 1. The proxy will connect to the ZooKeeper quorum in `hbase-site.xml` by default. You can override this
+ by passing `-Dhbase.zookeeper.quorum` (see the example after these notes).
2. Route rules only support unicode characters.
- 3. I do not have access to a secured hadoop clsuter to test this on.

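For example, the override in note 1 could be appended to the start command shown earlier (a sketch only; the quorum host and port are placeholders):

```
$ bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p <peer> -b <kafka.address>:<kafka.port> \
    -Dhbase.zookeeper.quorum=<zk.host>:<zk.port>
```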
### Message Format

@@ -120,10 +135,10 @@ A utility is included to test the routing rules.
$ bin/hbase-connectors-daemon.sh start kafkaproxytest -k <kafka.broker> -t <topic to listen to>
```

- The messages will be dumped in string format under `logs/`
+ The messages will be dumped in string format under `logs/`.

- ## TODO:
+ ## TODO
1. Some properties passed into the region server are hard-coded.
- 2. The avro objects should be generic
- 3. Allow rules to be refreshed without a restart
+ 2. The avro objects should be generic.
+ 3. Allow rules to be refreshed without a restart.
4. Get this tested on a secure (TLS & Kerberos) enabled cluster.