
Commit f951e14

update docs
1 parent 68308c7 commit f951e14

1 file changed: +17 -6 lines changed


README.md

@@ -1,5 +1,15 @@
 # g2-MLP
 
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/a-proposal-of-multi-layer-perceptron-with/node-classification-on-ppi)](https://paperswithcode.com/sota/node-classification-on-ppi?p=a-proposal-of-multi-layer-perceptron-with)
+
+This is an implementation of g2-MLP.
+
+## paper details
+
+[A Proposal of Multi-Layer Perceptron with Graph Gating Unit for Graph Representation Learning and its Application to Surrogate Model for FEM](https://www.jstage.jst.go.jp/article/pjsai/JSAI2022/0/JSAI2022_1G4OS22a03/_article/-char/ja/)
+
+GNNs are neural networks for representation learning on graph-structured data, and most of them are built by stacking graph convolutional layers. Since stacking n layers is equivalent to propagating information from n-hop neighbor nodes, GNNs need a sufficiently large number of layers to learn large graphs; however, deep stacks tend to degrade model performance due to a problem called over-smoothing. In this paper, I present a novel GNN model that stacks feedforward neural networks with gating structures built on GCNs, aiming to solve the over-smoothing problem and thereby overcome the difficulty GNNs have in learning large graphs. Experiments showed that the proposed method monotonically improved prediction accuracy up to 20 layers without over-smoothing, whereas conventional methods suffered from it at 4 to 8 layers. In two experiments on large graphs, the PPI dataset (a benchmark for inductive node classification) and an application as a surrogate model for the finite element method, the proposed method achieved the highest accuracy among the compared methods, including a state-of-the-art accuracy of 99.71% on the PPI dataset.
+
 ## Results
 
 ### PPI (inductive node classification)
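The "feedforward networks with gating structures using GCNs" described in the abstract can be sketched roughly as follows. This is a minimal NumPy sketch, not the author's code: the channel-split gating scheme (gMLP-style), the layer sizes, the ReLU expansion, the residual connection, and the symmetric adjacency normalization are all my assumptions.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def graph_gating_unit(H, A_norm, W_g):
    """Split channels in half; gate one half with a graph-propagated
    transform of the other half (element-wise)."""
    U, V = np.split(H, 2, axis=1)
    return U * (A_norm @ V @ W_g)  # gate carries neighbor information

def g2mlp_block(X, A_norm, W1, W_g, W2):
    """One feedforward block with a graph gating unit and a residual."""
    H = np.maximum(X @ W1, 0.0)          # expand + ReLU
    H = graph_gating_unit(H, A_norm, W_g)
    return X + H @ W2                    # project back + residual

# Tiny random graph to exercise the block.
rng = np.random.default_rng(0)
n, d, h = 5, 8, 16                       # nodes, features, hidden width
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                   # symmetrize
np.fill_diagonal(A, 0.0)
A_norm = normalize_adj(A)

X = rng.standard_normal((n, d))
W1 = rng.standard_normal((d, 2 * h)) * 0.1
W_g = rng.standard_normal((h, h)) * 0.1
W2 = rng.standard_normal((h, d)) * 0.1

out = g2mlp_block(X, A_norm, W1, W_g, W2)
print(out.shape)  # → (5, 8)
```

Because the gate, not the main path, carries the graph propagation, stacking many such blocks need not collapse node features the way plain stacked GCN layers do, which is the intuition behind the over-smoothing claim above.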
@@ -19,7 +29,7 @@
 | g2-MLP (20 layers, pb 1.0, 1500 epochs) | 99.598% (±0.012) |
 
 <details>
-<summary>ハイパラ詳細</summary>
+<summary>hyper parameters</summary>
 <div>
 
 | parameters | value |
@@ -46,7 +56,7 @@
 | g2-MLP (12 layers, pb 0.8, 1500 epochs) | 70.49% (±0.75) | 69.68<br>71.04<br>71.68<br>69.88<br>70.18 |
 
 <details>
-<summary>ハイパラ詳細</summary>
+<summary>hyper parameters</summary>
 <div>
 
 | parameters | value |
@@ -63,14 +73,14 @@
 </details>
 
 <details>
-<summary>データセット詳細</summary>
+<summary>About the FEM dataset</summary>
 <div>
 
-次のようなフィレット構造を対象とする。
+We target a fillet structure like the following:
 
 ![fillet](./docs/fillet.png)
 
-次のようなパターンに対して、 229 個のデータを用意した。
+We prepared 229 data samples under the following conditions:
 
 - the height of each rectangle is varied randomly from 10 to 100 in steps of 10
 - the fillet radius is varied randomly from 5 to 45 in steps of 5
@@ -308,4 +320,3 @@ $ pip install torch-geometric
 Yu, Nakai. The University of Tokyo.
 
 Contact : nakai-yu623@g.ecc.u-tokyo.ac.jp
-