Commit eaa51ac

gyzhou2000 and dddg617 authored

[setup] fix bugs (#191)

* fix bugs
* fix torch backend pytest bugs
* update
* update
* add `Github Action` and update file name
* add `gspmm` func
* update
* [Model] Update Examples
* [Model] Update examples

Co-authored-by: BuptTab <gyzhou2000@gmail.com>
Co-authored-by: dddg617 <996179900@qq.com>

1 parent: c13439c

File tree: 75 files changed, +724 −466 lines


.github/workflows/test_push.yml (new file, +51 lines)

```yaml
name: Build and Test

on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
          submodules: 'recursive'

      - name: Checkout master and HEAD
        run: |
          git checkout ${{ github.event.pull_request.head.sha }}

      - name: Set up Python 3.9
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r .circleci/requirements.txt

      - name: Install TensorLayerX
        run: |
          pip install git+https://github.com/dddg617/TensorLayerX.git@nightly

      - name: Install PyTorch, torchvision and torchaudio
        run: |
          pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121

      - name: Install llvmlite
        run: |
          pip install llvmlite

      - name: Install package
        run: |
          python setup.py install build_ext --inplace

      - name: Run TF tests
        run: |
          TL_BACKEND=tensorflow pytest

      - name: Run TH tests
        run: |
          TL_BACKEND=torch pytest
```
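The two pytest steps switch deep-learning backends purely through the `TL_BACKEND` environment variable. A minimal sketch of that env-var dispatch pattern; the helper name and the backend set here are illustrative assumptions, not TensorLayerX internals:

```python
import os

# The workflow runs `TL_BACKEND=tensorflow pytest` and `TL_BACKEND=torch pytest`;
# the library picks its backend from this variable before anything is imported.
SUPPORTED_BACKENDS = {"tensorflow", "torch", "paddle", "mindspore"}

def resolve_backend(default="tensorflow"):
    """Return the backend named by TL_BACKEND, or `default` if unset."""
    backend = os.environ.get("TL_BACKEND", default).lower()
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError(f"unknown TL_BACKEND: {backend!r}")
    return backend

os.environ["TL_BACKEND"] = "torch"
print(resolve_backend())  # -> torch
```

Because the variable is read at import time, running the same test suite twice with different values exercises both backend code paths without any test changes.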
A second new workflow file (+73 lines; its name is not shown in this view):

```yaml
name: Test Pypi Package

on: [workflow_dispatch]

jobs:
  test-pypi:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
          submodules: 'recursive'

      - name: Set up Python 3.9
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r .circleci/requirements.txt

      - name: Install TensorLayerX
        run: |
          pip install git+https://github.com/dddg617/TensorLayerX.git@nightly

      - name: Install PyTorch, torchvision and torchaudio
        run: |
          pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121

      - name: Install llvmlite
        run: |
          pip install llvmlite

      - name: Install package
        run: |
          pip install gammagl

      - name: Run Trainer Examples
        run: |
          FAILURES=""
          FILES=$(find examples/ -type f -name "*_trainer.py")
          for file in $FILES; do
            python "$file" --n_epoch 1 || FAILURES="$FAILURES$file "
          done
          if [ -n "$FAILURES" ]; then
            echo "The following trainer scripts failed: $FAILURES"
            exit 1
          fi
        shell: bash

      - name: Run Sampler Examples
        run: |
          FAILURES=""
          FILES=$(find examples/ -type f -name "*_sampler.py")
          for file in $FILES; do
            python "$file" || FAILURES="$FAILURES$file "
          done
          if [ -n "$FAILURES" ]; then
            echo "The following sampler scripts failed: $FAILURES"
            exit 1
          fi
        shell: bash

      - name: Check for Failures
        run: |
          if [ -n "$FAILURES" ]; then
            echo "Some examples failed to run: $FAILURES"
            exit 1
          fi
        shell: bash
```
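The trainer and sampler steps deliberately keep looping after a failure and only exit non-zero at the end, so one broken example cannot mask the rest. The same collect-then-fail pattern, sketched in Python rather than bash (the helper name is illustrative; the glob pattern and `--n_epoch 1` flag mirror the workflow):

```python
import glob
import subprocess
import sys

def run_examples(pattern, extra_args=("--n_epoch", "1")):
    """Run every script matching `pattern`; return the list of failures."""
    failures = []
    for script in sorted(glob.glob(pattern, recursive=True)):
        # A non-zero exit code marks this script as failed, but the loop continues
        # so every remaining example still gets a chance to run.
        result = subprocess.run([sys.executable, script, *extra_args])
        if result.returncode != 0:
            failures.append(script)
    return failures

if __name__ == "__main__":
    failed = run_examples("examples/**/*_trainer.py")
    if failed:
        print("The following trainer scripts failed:", " ".join(failed))
        sys.exit(1)
```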

README.md (+49 −35)

````diff
@@ -357,66 +357,68 @@ CUDA_VISIBLE_DEVICES="1" TL_BACKEND="paddle" python gcn_trainer.py
 > Set `CUDA_VISIBLE_DEVICES=" "` if you want to run it in CPU.
 
 ## Supported Models
+<details>
+<summary>
+Now, GammaGL supports over 50 models, we welcome everyone to use or contribute models.</summary>
 
 |                                                  | TensorFlow         | PyTorch            | Paddle             | MindSpore          |
 | ------------------------------------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
 | [GCN [ICLR 2017]](./examples/gcn)                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [GAT [ICLR 2018]](./examples/gat)                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [GraphSAGE [NeurIPS 2017]](./examples/graphsage) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [ChebNet [NeurIPS 2016]](./examples/chebnet)     | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [GCNII [ICLR 2017]](./examples/gcnii)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [ChebNet [NeurIPS 2016]](./examples/chebnet)     | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [GCNII [ICLR 2017]](./examples/gcnii)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [JKNet [ICML 2018]](./examples/jknet)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [DiffPool [NeurIPS 2018]](./examples/diffpool)   |                    |                    |                    |                    |
 | [SGC [ICML 2019]](./examples/sgc)                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [GIN [ICLR 2019]](./examples/gin)                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [APPNP [ICLR 2019]](./examples/appnp)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [AGNN [arxiv]](./examples/agnn)                  | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [SIGN [ICML 2020 Workshop]](./examples/sign)     | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [DropEdge [ICLR 2020]](./examples/dropedge)      | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [GPRGNN [ICLR 2021]](./examples/gprgnn)          | :heavy_check_mark: |                    |                    |                    |
+| [GPRGNN [ICLR 2021]](./examples/gprgnn)          | :heavy_check_mark: | :heavy_check_mark: |                    | :heavy_check_mark: |
 | [GNN-FiLM [ICML 2020]](./examples/film)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [GraphGAN [AAAI 2018]](./examples/graphgan)      | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [HardGAT [KDD 2019]](./examples/hardgat)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [HardGAT [KDD 2019]](./examples/hardgat)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [MixHop [ICML 2019]](./examples/mixhop)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [PNA [NeurIPS 2020]](./examples/pna)             | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [FAGCN [AAAI 2021]](./examples/fagcn)            | :heavy_check_mark: | :heavy_check_mark: |                    | :heavy_check_mark: |
-| [GATv2 [ICLR 2021]](./examples/gatv2)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [GEN [WWW 2021]](./examples/gen)                 | :heavy_check_mark: | :heavy_check_mark: |                    |                    |
-| [GAE [NeurIPS 2016]](./examples/vgae)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [VGAE [NeurIPS 2016]](./examples/vgae)           | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [GATv2 [ICLR 2021]](./examples/gatv2)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [GEN [WWW 2021]](./examples/gen)                 | :heavy_check_mark: | :heavy_check_mark: |                    | :heavy_check_mark: |
+| [GAE [NeurIPS 2016]](./examples/vgae)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [VGAE [NeurIPS 2016]](./examples/vgae)           | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [HCHA [PR 2021]](./examples/hcha)                |                    | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [Node2Vec [KDD 2016]](./examples/node2vec)       | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [DeepWalk [KDD 2014]](./examples/deepwalk)       | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [DGCNN [ACM T GRAPHIC 2019]](./examples/dgcnn)   | :heavy_check_mark: | :heavy_check_mark: |                    |                    |
-| [GaAN [UAI 2018]](./examples/gaan)               | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [GRADE [NeurIPS 2022]](./examples/grade)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [GaAN [UAI 2018]](./examples/gaan)               | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [GMM [CVPR 2017]](./examples/gmm)                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [TADW [IJCAI 2015]](./examples/tadw)             | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [MGNNI [NeurIPS 2022]](./examples/mgnni)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [MAGCL [AAAI 2023]](./examples/magcl)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
-| [CAGCN [NeurIPS 2021]](./examples/cagcn)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [MGNNI [NeurIPS 2022]](./examples/mgnni)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [CAGCN [NeurIPS 2021]](./examples/cagcn)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [DR-GST [WWW 2022]](./examples/drgst)            | :heavy_check_mark: | :heavy_check_mark: |                    |                    |
 | [Specformer [ICLR 2023]](./examples/specformer)  |                    | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [AM-GCN [KDD 2020]](./examples/amgcn)            |                    | :heavy_check_mark: |                    |                    |
 
-| Contrastive Learning                           | TensorFlow         | PyTorch            | Paddle             | MindSpore |
-| ---------------------------------------------- | ------------------ | ------------------ | ------------------ | --------- |
-| [DGI [ICLR 2019]](./examples/dgi)              | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |           |
-| [GRACE [ICML 2020 Workshop]](./examples/grace) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |           |
-| [MVGRL [ICML 2020]](./examples/mvgrl)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |           |
-| [InfoGraph [ICLR 2020]](./examples/infograph)  | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |           |
-| [MERIT [IJCAI 2021]](./examples/merit)         | :heavy_check_mark: |                    | :heavy_check_mark: |           |
-| [GNN-POT [NeurIPS 2023]](./examples/grace_pot) |                    | :heavy_check_mark: |                    |           |
-
-| Heterogeneous Graph Learning                 | TensorFlow         | PyTorch            | Paddle             | MindSpore |
-| -------------------------------------------- | ------------------ | ------------------ | ------------------ | --------- |
-| [RGCN [ESWC 2018]](./examples/rgcn)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |           |
+| Contrastive Learning                             | TensorFlow         | PyTorch            | Paddle             | MindSpore          |
+| ------------------------------------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
+| [DGI [ICLR 2019]](./examples/dgi)                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [GRACE [ICML 2020 Workshop]](./examples/grace)   | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [GRADE [NeurIPS 2022]](./examples/grade)         | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [MVGRL [ICML 2020]](./examples/mvgrl)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [InfoGraph [ICLR 2020]](./examples/infograph)    | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
+| [MERIT [IJCAI 2021]](./examples/merit)           | :heavy_check_mark: |                    | :heavy_check_mark: |                    |
+| [GNN-POT [NeurIPS 2023]](./examples/grace_pot)   |                    | :heavy_check_mark: |                    |                    |
+| [MAGCL [AAAI 2023]](./examples/magcl)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+
+| Heterogeneous Graph Learning                 | TensorFlow         | PyTorch            | Paddle             | MindSpore          |
+| -------------------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
+| [RGCN [ESWC 2018]](./examples/rgcn)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [HAN [WWW 2019]](./examples/han)             | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [HGT [WWW 2020]](./examples/hgt/)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |           |
-| [SimpleHGN [KDD 2021]](./examples/simplehgn) | :heavy_check_mark: |                    |                    |           |
-| [CompGCN [ICLR 2020]](./examples/compgcn)    |                    | :heavy_check_mark: | :heavy_check_mark: |           |
+| [HGT [WWW 2020]](./examples/hgt/)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [SimpleHGN [KDD 2021]](./examples/simplehgn) | :heavy_check_mark: |                    |                    |                    |
+| [CompGCN [ICLR 2020]](./examples/compgcn)    |                    | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | [HPN [TKDE 2021]](./examples/hpn)            | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [ieHGCN [TKDE 2021]](./examples/iehgcn)      | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |           |
+| [ieHGCN [TKDE 2021]](./examples/iehgcn)      | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [MetaPath2Vec [KDD 2017]](./examples/metapath2vec) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |        |
 | [HERec [TKDE 2018]](./examples/herec)        | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |                    |
 | [CoGSL [WWW 2022]](./examples/cogsl)         |                    | :heavy_check_mark: | :heavy_check_mark: |                    |
@@ -425,6 +427,7 @@ CUDA_VISIBLE_DEVICES="1" TL_BACKEND="paddle" python gcn_trainer.py
 >
 > The models can be run in mindspore backend. Howerver, the results of experiments are not satisfying due to training component issue,
 > which will be fixed in future.
+</details>
 
 ## Contributors
@@ -438,10 +441,21 @@ Contribution is always welcomed. Please feel free to open an issue or email to y
 If you use GammaGL in a scientific publication, we would appreciate citations to the following paper:
 
 ```
-@inproceedings{Liu2023gammagl,
-  title={GammaGL: A Multi-Backend Library for Graph Neural Networks},
-  author={Yaoqi Liu, Cheng Yang, Tianyu Zhao, Hui Han, Siyuan Zhang, Jing Wu, Guangyu Zhou, Hai Huang, Hui Wang, Chuan Shi},
-  booktitle={SIGIR},
-  year={2023}
+@inproceedings{10.1145/3539618.3591891,
+  author = {Liu, Yaoqi and Yang, Cheng and Zhao, Tianyu and Han, Hui and Zhang, Siyuan and Wu, Jing and Zhou, Guangyu and Huang, Hai and Wang, Hui and Shi, Chuan},
+  title = {GammaGL: A Multi-Backend Library for Graph Neural Networks},
+  year = {2023},
+  isbn = {9781450394086},
+  publisher = {Association for Computing Machinery},
+  address = {New York, NY, USA},
+  url = {https://doi.org/10.1145/3539618.3591891},
+  doi = {10.1145/3539618.3591891},
+  booktitle = {Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval},
+  pages = {2861–2870},
+  numpages = {10},
+  keywords = {graph neural networks, frameworks, deep learning},
+  location = {Taipei, Taiwan},
+  series = {SIGIR '23}
 }
+
 ```
````

examples/agnn/agnn_trainer.py (+6 −1)

```diff
@@ -7,7 +7,7 @@
 """
 import os
 # os.environ['CUDA_VISIBLE_DEVICES'] = '0'
-os.environ['TL_BACKEND'] = 'torch'
+# os.environ['TL_BACKEND'] = 'torch'
 os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
 # 0:Output all; 1:Filter out INFO; 2:Filter out INFO and WARNING; 3:Filter out INFO, WARNING, and ERROR
@@ -133,8 +133,13 @@ def main(args):
     parser.add_argument("--dataset", type = str, default = "cora")
     parser.add_argument("--dataset_path", type = str, default = r"")
     parser.add_argument("--best_model_path", type = str, default = r"./")
+    parser.add_argument("--gpu", type=int, default=0)
 
     args = parser.parse_args()
+    if args.gpu >= 0:
+        tlx.set_device("GPU", args.gpu)
+    else:
+        tlx.set_device("CPU")
 
     main(args)
```

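The new `--gpu` flag follows one convention across the updated trainers: a non-negative index selects that GPU, and any negative value falls back to CPU. A sketch of just the flag-parsing logic, with the `tlx.set_device` calls stubbed out as return values (the helper name is illustrative):

```python
import argparse

def choose_device(argv):
    """Parse the trainer's --gpu flag and decide which device to request."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--gpu", type=int, default=0)
    args = parser.parse_args(argv)
    if args.gpu >= 0:
        return ("GPU", args.gpu)   # stands in for tlx.set_device("GPU", args.gpu)
    return ("CPU", None)           # stands in for tlx.set_device("CPU")

print(choose_device(["--gpu", "-1"]))  # -> ('CPU', None)
```

With `default=0`, running a trainer with no arguments still targets GPU 0, so `--gpu -1` is the explicit opt-out for CPU-only runs.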