
Commit 1ffb6b2 (1 parent 1ed9e6d)

[docs] update docs


6 files changed, +84 -41 lines


examples/fagcn/readme.md (+9 -6)

@@ -5,7 +5,7 @@
 
 # Dataset Statics
 | Dataset  | # Nodes | # Edges | # Classes |
-| -------- | ------- | ------- | --------- |
+|----------|---------|---------|-----------|
 | Cora     | 2,708   | 10,556  | 7         |
 | Citeseer | 3,327   | 9,228   | 6         |
 | Pubmed   | 19,717  | 88,651  | 3         |
@@ -21,9 +21,12 @@ TL_BACKEND="tensorflow" python fagcn_trainer.py --dataset pubmed --lr 0.01 --l2_
 TL_BACKEND="paddle" python fagcn_trainer.py --dataset cora --lr 0.01 --l2_coef 0.001 --drop_rate 0.6 --hidden_dim 16 --eps 0.2 --num_layers 3
 TL_BACKEND="paddle" python fagcn_trainer.py --dataset citeseer --lr 0.01 --l2_coef 0.001 --drop_rate 0.6 --hidden_dim 16 --eps 0.2 --num_layers 5
 TL_BACKEND="paddle" python fagcn_trainer.py --dataset pubmed --lr 0.01 --l2_coef 0.001 --drop_rate 0.4 --hidden_dim 16 --eps 0.2 --num_layers 6
+TL_BACKEND="torch" python fagcn_trainer.py --dataset cora --lr 0.005 --l2_coef 0.0005 --drop_rate 0.4 --hidden_dim 16 --eps 0.3 --num_layers 5
+TL_BACKEND="torch" python fagcn_trainer.py --dataset citeseer --lr 0.005 --l2_coef 0.001 --drop_rate 0.4 --hidden_dim 16 --eps 0.3 --num_layers 3
+TL_BACKEND="torch" python fagcn_trainer.py --dataset pubmed --lr 0.005 --l2_coef 0.001 --drop_rate 0.4 --hidden_dim 16 --eps 0.5 --num_layers 6
 ```
-| Dataset  | Paper    | Our(tf)  | Our(pd)  |
-|----------|----------|----------|----------|
-| cora     | 84.1±0.5 | 83.1±0.4 | 82.1±0.4 |
-| citeseer | 72.7±0.8 | 68.3±0.8 | 68.2±0.8 |
-| pubmed   | 79.4±0.3 | 79.2±0.1 | 79.7±0.3 |
+| Dataset  | Paper    | Our(tf)  | Our(pd)  | Our(torch) |
+|----------|----------|----------|----------|------------|
+| cora     | 84.1±0.5 | 83.1±0.4 | 82.1±0.4 | 78.1±0.7   |
+| citeseer | 72.7±0.8 | 68.3±0.8 | 68.2±0.8 | 65.3±1.3   |
+| pubmed   | 79.4±0.3 | 79.2±0.1 | 79.7±0.3 | 77.9±0.8   |
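For readers comparing the backends above, the layer these trainers share can be sketched in a few lines. This is a simplified NumPy illustration of FAGCN's signed aggregation (the `--eps` residual plus a tanh-gated edge coefficient), not the GammaGL implementation; the gate vector `g` and the edge-list representation are assumptions for the sketch.

```python
import numpy as np

def fagcn_layer(h, h0, edges, g, eps, deg):
    """One FAGCN propagation step (simplified sketch).

    h:     (N, F) current node features
    h0:    (N, F) layer-0 features, kept as an eps-scaled residual
    edges: list of (i, j) pairs meaning "j sends a message to i"
    g:     (2F,) gate vector producing a signed edge coefficient
    deg:   (N,) node degrees for symmetric normalization
    """
    out = eps * h0
    for i, j in edges:
        # Coefficient in (-1, 1): negative values act as a high-pass
        # filter, positive values as a low-pass filter.
        alpha = np.tanh(g @ np.concatenate([h[i], h[j]]))
        out[i] += alpha / np.sqrt(deg[i] * deg[j]) * h[j]
    return out
```

With a zero gate the tanh coefficient vanishes and only the residual term survives, which is an easy sanity check on the sketch.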

examples/gat/readme.md (+12 -5)

@@ -24,10 +24,17 @@ Results
 ```bash
 TL_BACKEND="paddle" python gat_trainer.py --dataset cora --lr 0.01 --l2_coef 0.01 --drop_rate 0.7
 TL_BACKEND="paddle" python gat_trainer.py --dataset citeseer --lr 0.006 --l2_coef 0.04 --drop_rate 0.6
+TL_BACKEND="paddle" python gat_trainer.py --dataset pubmed --lr 0.05 --l2_coef 0.0015 --drop_rate 0.6
+TL_BACKEND="torch" python gat_trainer.py --dataset cora --lr 0.01 --l2_coef 0.005 --drop_rate 0.7
+TL_BACKEND="torch" python gat_trainer.py --dataset citeseer --lr 0.01 --l2_coef 0.01 --drop_rate 0.6
+TL_BACKEND="torch" python gat_trainer.py --dataset pubmed --lr 0.01 --l2_coef 0.001 --drop_rate 0.2
+TL_BACKEND="tensorflow" python gat_trainer.py --dataset cora --lr 0.01 --l2_coef 0.01 --drop_rate 0.7
+TL_BACKEND="tensorflow" python gat_trainer.py --dataset citeseer --lr 0.01 --l2_coef 0.04 --drop_rate 0.7
+TL_BACKEND="tensorflow" python gat_trainer.py --dataset pubmed --lr 0.005 --l2_coef 0.003 --drop_rate 0.6
 ```
 
-| Dataset  | Paper      | Our(pd)      | Our(tf)      |
-| -------- | ---------- | ------------ | ------------ |
-| cora     | 83.0(±0.7) | 83.54(±0.75) | 83.26(±0.96) |
-| pubmed   | 72.5(±0.7) | 72.74(±0.76) | 72.5(±0.65)  |
-| citeseer | 79.0(±0.3) | OOM          | OOM          |
+| Dataset  | Paper      | Our(pd)      | Our(torch)   | Our(tf)      |
+| -------- | ---------- | ------------ | ------------ | ------------ |
+| cora     | 83.0(±0.7) | 83.54(±0.75) | 82.44(±0.43) | 83.26(±0.96) |
+| citeseer | 72.5(±0.7) | 72.74(±0.76) | 70.94(±0.43) | 72.5(±0.65)  |
+| pubmed   | 79.0(±0.3) | 78.82(±0.71) | 78.5(±0.75)  | 78.2(±0.38)  |
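The quantity the `drop_rate` flag regularizes in these runs is the attention distribution each node places over its neighborhood. A minimal single-head NumPy sketch of that attention computation (not the GammaGL code; `W`, `a`, and the neighbor list are assumptions for illustration):

```python
import numpy as np

def gat_attention(h, W, a, neighbors, i, slope=0.2):
    """Attention weights of node i over `neighbors` (one GAT head).

    h: (N, F) input features, W: (F, F') shared projection,
    a: (2F',) attention vector, neighbors: indices including i itself.
    """
    z = h @ W
    # e_ij = LeakyReLU(a^T [z_i || z_j]) for each neighbor j
    e = np.array([np.concatenate([z[i], z[j]]) @ a for j in neighbors])
    e = np.where(e > 0, e, slope * e)   # LeakyReLU
    e = np.exp(e - e.max())             # numerically stable softmax
    return e / e.sum()
```

The returned weights are non-negative and sum to one, so each node's output is a convex combination of its (projected) neighborhood.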

examples/gcnii/gcnii_trainer.py (+1)

@@ -73,6 +73,7 @@ def main(args):
                 drop_rate=args.drop_rate,
                 name="GCNII")
 
+    # Note: we do not use the same regularization method as the paper does, as TensorLayerX does not currently support it.
     optimizer = tlx.optimizers.Adam(lr=args.lr, weight_decay=args.l2_coef)
     metrics = tlx.metrics.Accuracy()
     train_weights = net.trainable_weights
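The substitute regularizer here is Adam's `weight_decay`, fed from `l2_coef`. For intuition: weight decay `wd` adds `wd * w` to each parameter's gradient, which under plain gradient descent is the same as penalizing the loss with `(wd / 2) * ||w||^2` (Adam rescales the term through its moment estimates, so the equivalence is only exact for SGD). A quick framework-free check of that gradient identity:

```python
import numpy as np

w = np.array([0.5, -1.0])
wd = 5e-4  # plays the role of l2_coef

def l2_penalty(w):
    return 0.5 * wd * np.sum(w ** 2)

# Central-difference gradient of the L2 penalty; analytically it is wd * w,
# exactly the term optimizer-side weight decay adds to the gradient.
eps = 1e-6
numeric = np.array([
    (l2_penalty(w + eps * e) - l2_penalty(w - eps * e)) / (2 * eps)
    for e in np.eye(len(w))
])
assert np.allclose(numeric, wd * w, atol=1e-8)
```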

examples/gcnii/readme.md (+20 -8)

@@ -6,19 +6,31 @@
 
 Dataset Statics
 -------
+| Dataset  | # Nodes | # Edges | # Classes |
+| -------- | ------- | ------- | --------- |
+| Cora     | 2,708   | 10,556  | 7         |
+| Citeseer | 3,327   | 9,228   | 6         |
+| Pubmed   | 19,717  | 88,651  | 3         |
 Refer to [Planetoid](https://gammagl.readthedocs.io/en/latest/api/gammagl.datasets.html#gammagl.datasets.Planetoid).
 
 
 Results
 -------
 ```bash
-python gcnii_trainer.py --dataset cora --lr 0.01 --num_layers 64 --alpha 0.1 --hidden_dim 64 --lambd 0.5 --drop_rate 0.3 --l2_coef 0.001
-python gcnii_trainer.py --dataset pubmed --lr 0.01 --num_layers 32 --alpha 0.1 --hidden_dim 256 --lambd 0.5 --drop_rate 0.3 --l2_coef 0.001
-python gcnii_trainer.py --dataset citeseer --lr 0.01 --num_layers 16 --alpha 0.1 --hidden_dim 256 --lambd 0.4 --drop_rate 0.3 --l2_coef 0.001
+TL_BACKEND="tensorflow" python gcnii_trainer.py --dataset cora --lr 0.01 --num_layers 64 --alpha 0.1 --hidden_dim 64 --lambd 0.5 --drop_rate 0.3 --l2_coef 0.001
+TL_BACKEND="tensorflow" python gcnii_trainer.py --dataset citeseer --lr 0.01 --num_layers 32 --alpha 0.1 --hidden_dim 256 --lambd 0.5 --drop_rate 0.3 --l2_coef 0.001
+TL_BACKEND="tensorflow" python gcnii_trainer.py --dataset pubmed --lr 0.01 --num_layers 16 --alpha 0.1 --hidden_dim 256 --lambd 0.4 --drop_rate 0.3 --l2_coef 0.001
+TL_BACKEND="paddle" python gcnii_trainer.py --dataset cora --lr 0.01 --num_layers 64 --alpha 0.1 --hidden_dim 64 --lambd 0.5 --drop_rate 0.3 --l2_coef 0.001
+TL_BACKEND="paddle" python gcnii_trainer.py --dataset citeseer --lr 0.01 --num_layers 32 --alpha 0.1 --hidden_dim 256 --lambd 0.4 --drop_rate 0.4 --l2_coef 0.001
+TL_BACKEND="paddle" python gcnii_trainer.py --dataset pubmed --lr 0.01 --num_layers 16 --alpha 0.1 --hidden_dim 256 --lambd 0.5 --drop_rate 0.7 --l2_coef 0.001
+TL_BACKEND="torch" python gcnii_trainer.py --dataset cora --lr 0.01 --num_layers 64 --alpha 0.1 --hidden_dim 64 --lambd 0.5 --drop_rate 0.3 --l2_coef 0.001
+TL_BACKEND="torch" python gcnii_trainer.py --dataset citeseer --lr 0.01 --num_layers 64 --alpha 0.1 --hidden_dim 64 --lambd 0.6 --drop_rate 0.4 --l2_coef 0.001
+TL_BACKEND="torch" python gcnii_trainer.py --dataset pubmed --lr 0.01 --num_layers 64 --alpha 0.1 --hidden_dim 64 --lambd 0.4 --drop_rate 0.6 --l2_coef 0.001
 ```
-| Dataset  | Paper | Our(pd)      | Our(tf)      |
-|----------|-------|--------------|--------------|
-| cora     | 85.5  | 83.12(±0.47) | 83.23(±0.76) |
-| pubmed   | 73.4  | 72.04(±0.91) | 71.9(±0.7)   |
-| citeseer | 80.3  | 80.36(±0.65) | 80.1(±0.5)   |
+| Dataset  | Paper | Our(pd)      | Our(tf)      | Our(torch) |
+|----------|-------|--------------|--------------|------------|
+| cora     | 85.5  | 83.12(±0.47) | 83.23(±0.76) | 83.1(±0.9) |
+| pubmed   | 73.4  | 72.04(±0.91) | 71.9(±0.7)   | 71.4(±0.6) |
+| citeseer | 80.3  | 80.36(±0.65) | 80.1(±0.5)   | 80.5(±0.3) |
 
+> Note that we do not use the same regularization method as the paper does, as TensorLayerX does not currently support it.
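The `--alpha` and `--lambd` flags above map directly onto GCNII's two mechanisms: an initial residual to the layer-0 features and an identity mapping whose strength decays with depth. A hedged NumPy sketch of one layer (shapes and the ReLU placement are illustrative assumptions, not the GammaGL code):

```python
import numpy as np

def gcnii_layer(H, H0, P, W, l, alpha=0.1, lambd=0.5):
    """One GCNII layer: initial residual + identity mapping.

    H:  (N, F) current features, H0: (N, F) layer-0 features,
    P:  (N, N) normalized adjacency, W: (F, F) layer weight,
    l:  1-based layer index (the deeper the layer, the smaller beta).
    """
    beta = np.log(lambd / l + 1)                 # identity-mapping strength
    support = (1 - alpha) * P @ H + alpha * H0   # initial residual connection
    out = (1 - beta) * support + beta * support @ W
    return np.maximum(out, 0)                    # ReLU
```

When both `P` and `W` are identities the layer is a fixed point, which makes the decay of `beta = log(lambd / l + 1)` with depth easy to probe in isolation.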

examples/rgcn/readme.md (+21 -12)

@@ -1,11 +1,12 @@
 # Relational Graph Convolutional Network
-Paper: Modeling Relational Data with Graph Convolutional Networks
-Author's code for entity classification: [https://github.com/tkipf/relational-gcn](https://github.com/tkipf/relational-gcn)
-Author's code for link prediction: [https://github.com/MichSchli/RelationPrediction](https://github.com/MichSchli/RelationPrediction)
+- Paper: Modeling Relational Data with Graph Convolutional Networks
+
+- Author's code for entity classification: [https://github.com/tkipf/relational-gcn](https://github.com/tkipf/relational-gcn)
+- Author's code for link prediction: [https://github.com/MichSchli/RelationPrediction](https://github.com/MichSchli/RelationPrediction)
 
 # Dataset Statics
 | Dataset | #Nodes    | #Edges     | #Relations | #Labeled |
-|---------|-----------|------------|------------|----------|
+| ------- | --------- | ---------- | ---------- | -------- |
 | AIFB    | 8,285     | 58,086     | 90         | 176      |
 | MUTAG   | 23,644    | 148,454    | 46         | 340      |
 | BGS     | 333,845   | 1,832,398  | 206        | 146      |
@@ -15,14 +16,22 @@ Results
 -------
 
 ```bash
+TL_BACKEND="pytorch" python rgcn_trainer.py --dataset aifb --l2_coef 5e-5
+TL_BACKEND="pytorch" python rgcn_trainer.py --dataset mutag --l2_coef 5e-2
+TL_BACKEND="pytorch" python rgcn_trainer.py --dataset bgs --lr 0.0001 --l2_coef 5e-2
+
+TL_BACKEND="tensorflow" python rgcn_trainer.py --dataset aifb
+TL_BACKEND="tensorflow" python rgcn_trainer.py --dataset mutag --l2_coef 5e-2
+TL_BACKEND="tensorflow" python rgcn_trainer.py --dataset bgs --l2_coef 5e-2
+
 TL_BACKEND="paddle" python rgcn_trainer.py --dataset aifb
-TL_BACKEND="paddle" python rgcn_trainer.py --dataset mutag --lr 0.001 --l2_coef 5e-2
-TL_BACKEND="paddle" python rgcn_trainer.py --dataset bgs --lr 0.001 --l2_coef 1e-2
+TL_BACKEND="paddle" python rgcn_trainer.py --dataset mutag --l2_coef 5e-2
+TL_BACKEND="paddle" python rgcn_trainer.py --dataset bgs --l2_coef 5e-2
 ```
 
-| Dataset | Paper | Our(th)    | Our(tf)   |
-|---------|-------|------------|-----------|
-| AIFB    | 95.83 | 93.8(±2.0) | 94.44(±0) |
-| MUTAG   | 73.23 | 82.3(±1.8) |           |
-| BGS     | 83.10 | 74.1(±1.7) |           |
-| AM      | 89.29 |            |           |
+| Dataset | Paper | Our(th)      | Our(tf)      | Our(pd)     |
+|---------|-------|--------------|--------------|-------------|
+| AIFB    | 95.83 | 96.11(±1.52) | 94.17(±2.05) | 95.56(±2.3) |
+| MUTAG   | 73.23 | 85.0(±0.66)  | 85.29(±1.20) | 85.00(±1.9) |
+| BGS     | 83.10 | 74.1(±1.7)   | 73.79(±1.9)  | 73.56(±3.8) |
+| AM      | 89.29 |              |              |             |
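What distinguishes the model these trainers run from a plain GCN is that each relation type in the #Relations column gets its own weight matrix, plus a self-loop transform. A simplified NumPy sketch of that per-relation aggregation (without the paper's basis decomposition; the dict-based graph encoding is an assumption for the sketch):

```python
import numpy as np

def rgcn_layer(h, edges_by_rel, weights, w_self):
    """R-GCN propagation: one weight matrix per relation type.

    h:            (N, F) node features
    edges_by_rel: {rel: [(src, dst), ...]} typed edge lists
    weights:      {rel: (F, F')} per-relation weights
    w_self:       (F, F') self-loop weight; messages are averaged per
                  relation-specific neighborhood (1 / |N_i^r| normalization).
    """
    out = h @ w_self
    for rel, edges in edges_by_rel.items():
        # in-degree of each destination under this relation
        deg = {}
        for _, d in edges:
            deg[d] = deg.get(d, 0) + 1
        for s, d in edges:
            out[d] += (h[s] @ weights[rel]) / deg[d]
    return np.maximum(out, 0)   # ReLU
```

With identity weights and a single edge, the destination node simply accumulates its own feature plus the source's, which makes the normalization easy to verify by hand.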

examples/sgc/readme.md (+21 -10)

@@ -1,24 +1,35 @@
-Simple Graph Convolution (SGC)
-============
+# Simple Graph Convolution (SGC)
 
 - Paper link: [Simplifying Graph Convolutional Networks](https://arxiv.org/abs/1902.07153)
 - Author's code repo: [https://github.com/Tiiiger/SGC](https://github.com/Tiiiger/SGC).
 
 Dataset Statics
 -------
+
+| Dataset  | # Nodes | # Edges | # Classes |
+|----------|---------|---------|-----------|
+| Cora     | 2,708   | 10,556  | 7         |
+| Citeseer | 3,327   | 9,228   | 6         |
+| Pubmed   | 19,717  | 88,651  | 3         |
 Refer to [Planetoid](https://gammagl.readthedocs.io/en/latest/api/gammagl.datasets.html#gammagl.datasets.Planetoid).
 
 Results
 -------
 ```bash
-TL_BACKEND="paddle" python3 sgc_trainer.py --dataset cora --lr 0.2 --n_epoch 250 --iter_K 2 --l2_coef 0.005
-TL_BACKEND="paddle" python3 sgc_trainer.py --dataset citeseer --lr 0.2 --n_epoch 250 --iter_K 2 --l2_coef 0.005
-TL_BACKEND="paddle" python3 sgc_trainer.py --dataset pubmed --lr 0.1 --n_epoch 250 --iter_K 2 --l2_coef 0.0005
+TL_BACKEND="paddle" python sgc_trainer.py --dataset cora --lr 0.2 --n_epoch 250 --iter_K 2 --l2_coef 0.005 --self_loops 1
+TL_BACKEND="paddle" python sgc_trainer.py --dataset citeseer --lr 0.01 --n_epoch 250 --iter_K 5 --l2_coef 0.05 --self_loops 1
+TL_BACKEND="paddle" python sgc_trainer.py --dataset pubmed --lr 0.1 --n_epoch 200 --iter_K 2 --l2_coef 0.00005 --self_loops 1
+TL_BACKEND="tensorflow" python sgc_trainer.py --dataset cora --lr 0.1 --n_epoch 250 --iter_K 5 --l2_coef 0.0005 --self_loops 5
+TL_BACKEND="tensorflow" python sgc_trainer.py --dataset citeseer --lr 0.01 --n_epoch 200 --iter_K 15 --l2_coef 0.00005 --self_loops 1
+TL_BACKEND="tensorflow" python sgc_trainer.py --dataset pubmed --lr 0.1 --n_epoch 200 --iter_K 15 --l2_coef 0.0005 --self_loops 1
+TL_BACKEND="torch" python sgc_trainer.py --dataset cora --lr 0.2 --n_epoch 250 --iter_K 2 --l2_coef 0.005 --self_loops 1
+TL_BACKEND="torch" python sgc_trainer.py --dataset citeseer --lr 0.1 --n_epoch 250 --iter_K 2 --l2_coef 0.00005 --self_loops 1
+TL_BACKEND="torch" python sgc_trainer.py --dataset pubmed --lr 0.1 --n_epoch 200 --iter_K 2 --l2_coef 0.00005 --self_loops 1
 ```
 
-| dataset  | paper      | our(tf)      | our(pd)       |
-|----------|------------|--------------|---------------|
-| cora     | 81.0(±0)   | 81.45(±0.37) | 81.65(±0.2)   |
-| citeseer | 71.9(±0.1) | 69.03(±0.27) | 71.08(±0.04)  |
-| pubmed   | 78.9(±0)   | 79.1(±0)     | 79.71(±0.05)  |
+| dataset  | paper      | our(tf)      | our(pd)      | our(th)      |
+|----------|------------|--------------|--------------|--------------|
+| cora     | 81.0(±0)   | 81.45(±0.37) | 81.65(±0.2)  | 81.69(±0.18) |
+| citeseer | 71.9(±0.1) | 69.03(±0.27) | 71.08(±0.04) | 71.63(±0.38) |
+| pubmed   | 78.9(±0)   | 79.1(±0)     | 79.17(±0.05) | 79.16(±0.05) |
