Commit ec6dd72

[Update] fix bugs (#159)

* Update README.md
* update
* update paper name
* remove comments

Co-authored-by: BuptTab <gyzhou2000@gmail.com>

1 parent 487c778 commit ec6dd72

5 files changed: +18 −33 lines changed

README.md (+3 −1)

```diff
@@ -383,7 +383,7 @@ CUDA_VISIBLE_DEVICES="1" TL_BACKEND="paddle" python gcn_trainer.py
 | [GaAN [UAI 2018]](./examples/gaan) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
 | [GRADE [NeurIPS 2022]](./examples/grade) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
 | [GMM [CVPR 2017]](./examples/gmm) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [TADW [IJCAI 2015]](./examples/tadw) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [TADW [IJCAI 2015]](./examples/tadw) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [MGNNI [NeurIPS 2022]](./examples/mgnni) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
 | [MAGCL [AAAI 2023]](./examples/magcl) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
 
@@ -404,6 +404,8 @@ CUDA_VISIBLE_DEVICES="1" TL_BACKEND="paddle" python gcn_trainer.py
 | [CompGCN [ICLR 2020]](./examples/compgcn) | | :heavy_check_mark: | :heavy_check_mark: | |
 | [HPN [TKDE 2021]](./examples/hpn) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [ieHGCN [TKDE 2021]](./examples/iehgcn) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [MetaPath2Vec [KDD 2017]](./examples/metapath2vec) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [HERec [TKDE 2018]](./examples/herec) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
 
 > Note
 >
```

examples/herec/herec_trainer_aminer.py (−1)

```diff
@@ -16,7 +16,6 @@
 from sklearn.linear_model import LogisticRegression
 import numpy as np
 
-
 if tlx.BACKEND == 'torch': # when the backend is torch and you want to use GPU
     try:
         tlx.set_device(device='GPU', id=0)
```
examples/herec/readme.md (+8 −10)

````diff
@@ -1,11 +1,9 @@
-# Heterogeneous Information Network Embedding
-for Recommendation
-
+# Heterogeneous Information Network Embedding for Recommendation
 - Paper link: [https://arxiv.org/pdf/1711.10730.pdf](https://arxiv.org/pdf/1711.10730.pdf)
 
 - Author's code repo:
 
-https://github.com/librahu/HERec.
+https://github.com/librahu/HERec
 
 
 Dataset Statics
@@ -21,14 +19,14 @@ Dataset Statics
 
 ```bash
 TL_BACKEND="torch" python herec_trainer_aminer.py --lr 0.1 --embedding_dim 32 --walk_length 60 --window_size 3 --num_walks 800 --n_epoch 5 --num_negative_samples 10 --batch_size 128 --train_ratio 0.5 --dataset aminer
-TL_BACKEND="torch" python herec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
-TL_BACKEND="torch" python herec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 64 --walk_length 100 --window_size 5 --num_walks 10 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
+TL_BACKEND="torch" python herec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
+TL_BACKEND="torch" python herec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 64 --walk_length 100 --window_size 5 --num_walks 10 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
 TL_BACKEND="tensorflow" python herec_trainer_aminer.py --lr 0.1 --embedding_dim 32 --walk_length 80 --window_size 3 --num_walks 800 --n_epoch 5 --num_negative_samples 15 --batch_size 128 --train_ratio 0.5 --dataset aminer
-TL_BACKEND="tensorflow" python herec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 20 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
-TL_BACKEND="tensorflow" python herec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 64 --walk_length 100 --window_size 5 --num_walks 10 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
+TL_BACKEND="tensorflow" python herec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 20 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
+TL_BACKEND="tensorflow" python herec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 64 --walk_length 100 --window_size 5 --num_walks 10 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
 TL_BACKEND="paddle" python herec_trainer_aminer.py --lr 0.1 --embedding_dim 32 --walk_length 200 --window_size 5 --num_walks 800 --n_epoch 5 --num_negative_samples 20 --batch_size 128 --train_ratio 0.5 --dataset aminer
-TL_BACKEND="paddle" python herec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
-TL_BACKEND="paddle" python herec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 64 --walk_length 100 --window_size 5 --num_walks 10 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
+TL_BACKEND="paddle" python herec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
+TL_BACKEND="paddle" python herec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 64 --walk_length 100 --window_size 5 --num_walks 10 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
 ```
 
 
````
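The substantive fix in this readme is the rename from `herec_trainer_imdb&dblp.py` to `herec_trainer_imdb_dblp.py`: an unquoted `&` is a shell control operator, so the old commands never invoked the intended script. A quick illustration of what the shell actually did with the old name:

```bash
# An unquoted '&' terminates the command and runs it in the background,
# so this line is parsed as two separate commands:
python herec_trainer_imdb&dblp.py --dataset imdb
#   1) 'python herec_trainer_imdb' runs in the background and fails
#      (no such file), and
#   2) 'dblp.py --dataset imdb' is looked up as its own command
#      (command not found).

# Quoting the old name would have worked too, but the rename removes
# the trap entirely:
python herec_trainer_imdb_dblp.py --dataset imdb
```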

examples/metapath2vec/metapath2vec_trainer_aminer.py (−11)

```diff
@@ -90,7 +90,6 @@ def main(args, log_steps=10):
         model.set_train()
         train_loss = train_one_step(data, tlx.convert_to_tensor(0, dtype=tlx.float32))
         total_loss += train_loss.item()
-        # print(i)
         if (i + 1) % log_steps == 0:
             model.set_eval()
             z = model.campute('author', batch=graph['author'].y_index)
@@ -99,15 +98,8 @@ def main(args, log_steps=10):
             z = tlx.convert_to_tensor(z)
             y = tlx.convert_to_tensor(y)
             perm = np.random.permutation(z.shape[0])
-            # train_perm = perm[:int(z.size(0) * args.train_ratio)]
             train_perm = perm[:int(z.shape[0] * args.train_ratio)]
-            # test_perm = perm[int(z.size(0) * args.train_ratio):]
             test_perm = perm[int(z.shape[0] * args.train_ratio):]
-            # train_perm = tlx.convert_to_tensor(train_perm)
-            # test_perm = tlx.convert_to_tensor(test_perm)
-            # y = np.array(y)
-            # val_acc = calculate_acc(z[train_perm], y[train_perm], z[test_perm], y[test_perm], max_iter=300)
-            # val_acc = calculate_acc(tlx.gather(z, train_perm), tlx.gather(y, train_perm), tlx.gather(z, test_perm), tlx.gather(y, test_perm), max_iter=300)
             if tlx.BACKEND == "paddle":
                 val_acc = calculate_acc(tlx.gather(z, tlx.convert_to_tensor(train_perm)),
                                         tlx.gather(y, tlx.convert_to_tensor(train_perm)),
@@ -138,11 +130,8 @@ def main(args, log_steps=10):
     z = tlx.convert_to_tensor(z)
     y = tlx.convert_to_tensor(y)
     perm = np.random.permutation(z.shape[0])
-    # train_perm = perm[:int(z.size(0) * args.train_ratio)]
     train_perm = perm[:int(z.shape[0] * args.train_ratio)]
-    # test_perm = perm[int(z.size(0) * args.train_ratio):]
     test_perm = perm[int(z.shape[0] * args.train_ratio):]
-    # test_acc = calculate_acc(z[train_perm], y[train_perm], z[test_perm], y[test_perm], max_iter=300)
     if tlx.BACKEND == "paddle":
         test_acc = calculate_acc(tlx.gather(z, tlx.convert_to_tensor(train_perm)),
                                  tlx.gather(y, tlx.convert_to_tensor(train_perm)),
```
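
The surviving lines implement the usual embedding evaluation protocol: shuffle the node indices, train a logistic-regression probe on a `train_ratio` fraction of the embeddings, and report accuracy on the rest. The paddle branch presumably exists because Paddle tensors cannot be fancy-indexed with a NumPy index array, hence the `tlx.convert_to_tensor` plus `tlx.gather` detour. A self-contained sketch of the protocol on toy data (`calculate_acc`'s body is an assumption; only its call signature appears in the diff):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calculate_acc(train_z, train_y, test_z, test_y, max_iter=300):
    # Fit a logistic-regression probe on the training embeddings and
    # score plain accuracy on the held-out ones.
    clf = LogisticRegression(max_iter=max_iter).fit(train_z, train_y)
    return clf.score(test_z, test_y)

# Toy stand-ins for the learned author embeddings z and labels y.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)
train_ratio = 0.5

perm = np.random.permutation(z.shape[0])            # shuffle node indices
train_perm = perm[:int(z.shape[0] * train_ratio)]   # probe training split
test_perm = perm[int(z.shape[0] * train_ratio):]    # held-out split
val_acc = calculate_acc(z[train_perm], y[train_perm], z[test_perm], y[test_perm])
print(f"val_acc: {val_acc:.4f}")
```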

examples/metapath2vec/readme.md (+7 −10)

````diff
@@ -1,7 +1,4 @@
-# metapath2vec: Scalable Representation Learning for
-Heterogeneous Networks
-for Recommendation
-
+# metapath2vec: Scalable Representation Learning for Heterogeneous Networks
 - Paper link: [https://ericdongyx.github.io/papers/KDD17-dong-chawla-swami-metapath2vec.pdf](https://ericdongyx.github.io/papers/KDD17-dong-chawla-swami-metapath2vec.pdf)
 
 - Author's code repo (in Tensorflow):
@@ -22,14 +19,14 @@ Dataset Statics
 
 ```bash
 TL_BACKEND="torch" python metapath2vec_trainer_aminer.py --lr 0.1 --embedding_dim 16 --walk_length 60 --window_size 3 --num_walks 600 --n_epoch 5 --num_negative_samples 6 --batch_size 128 --train_ratio 0.5 --dataset aminer
-TL_BACKEND="torch" python metapath2vec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
-TL_BACKEND="torch" python metapath2vec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
+TL_BACKEND="torch" python metapath2vec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
+TL_BACKEND="torch" python metapath2vec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
 TL_BACKEND="tensorflow" python metapath2vec_trainer_aminer.py --lr 0.1 --embedding_dim 16 --walk_length 60 --window_size 3 --num_walks 500 --n_epoch 5 --num_negative_samples 6 --batch_size 128 --train_ratio 0.5 --dataset aminer
-TL_BACKEND="tensorflow" python metapath2vec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
-TL_BACKEND="tensorflow" python metapath2vec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
+TL_BACKEND="tensorflow" python metapath2vec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
+TL_BACKEND="tensorflow" python metapath2vec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
 TL_BACKEND="paddle" python metapath2vec_trainer_aminer.py --lr 0.1 --embedding_dim 16 --walk_length 60 --window_size 3 --num_walks 600 --n_epoch 5 --num_negative_samples 6 --batch_size 128 --train_ratio 0.5 --dataset aminer
-TL_BACKEND="paddle" python metapath2vec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
-TL_BACKEND="paddle" python metapath2vec_trainer_imdb&dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
+TL_BACKEND="paddle" python metapath2vec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset imdb
+TL_BACKEND="paddle" python metapath2vec_trainer_imdb_dblp.py --lr 0.01 --embedding_dim 16 --walk_length 50 --window_size 7 --num_walks 5 --n_epoch 50 --num_negative_samples 5 --batch_size 128 --dataset dblp
 ```
 
 
````
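All of these commands choose the compute backend through the `TL_BACKEND` environment variable, which TensorLayerX reads when it is first imported. A minimal sketch of the same selection made from inside a script; the only requirement is that the variable is set before the import:

```python
import os

# The backend choice must be in the environment before tensorlayerx is
# imported; afterwards it is visible to example code as tlx.BACKEND.
os.environ["TL_BACKEND"] = "torch"  # or "tensorflow" / "paddle"

import tensorlayerx as tlx

print(tlx.BACKEND)  # -> torch
```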
