
Commit 0c4bac5

started port to pyswmm

committed · 1 parent 07d40c2

17 files changed (+33, -2286 lines)

.DS_Store (mode changed 100755 → 100644; 0 bytes changed, binary file not shown)

.gitignore (+2 -1)

```diff
@@ -1,5 +1,6 @@
-*.npy
 __pycache__/
 .ipynb_checkpoints
 .ropeproject
 .DS_Store
+*.pyc
+.DS_Store
```

README.md (+15 -1)

```diff
@@ -4,7 +4,9 @@ Source code and data used in the *Deep Reinforcement Learning for the Real Time
 
 ![RLagent](./data/RL_main_fig_1.png)
 
-## Dependencies
+## Dependencies
+The code supports Python 3.7 and Python 2.7.
+
 Python dependencies for the project can be installed using **requirements.txt**
 
 Storm water network is simulated using EPA-SWMM and pyswmm/matswmm. Matswmm has been deprecated and we strongly suggest using pyswmm.
```
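Since this commit begins the move from matswmm to pyswmm, a minimal sketch of the pyswmm control loop the port is heading toward may be useful. The `.inp` file name and the node/orifice IDs below are placeholders, not identifiers from this repository:

```python
# Minimal pyswmm control-loop sketch (file name and IDs are placeholders).
from pyswmm import Simulation, Nodes, Links

with Simulation('network.inp') as sim:
    pond = Nodes(sim)['pond_1']      # storage node whose level we observe
    gate = Links(sim)['orifice_1']   # orifice whose opening we control
    for step in sim:
        depth = pond.depth           # observed water level
        # toy rule: close the gate once the pond is more than 2 m deep
        gate.target_setting = 0.0 if depth > 2.0 else 1.0
```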
```diff
@@ -13,6 +15,13 @@ pyswmm/matswmm makes function calls to a static c library. Hence, we advise gcc-
 
 ## Agents
 
+#### Classical Q Learning
+
+A classical Q-learning implementation for controlling the water level of a single tank is provided in **classical_q**
+
+This version observes the water level and sets the gate position between 0 and 100.
+
+#### DQN
 There are two types of deep RL agents.
 1. Centralized controller that can observe multiple states across the network and control the ponds in the network
 2. Localised controller that can observe the state of an individual asset and control it
```
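The Classical Q Learning agent described in the hunk above observes a single tank's water level and chooses a gate position between 0 and 100. A minimal tabular sketch of that loop follows; the depth binning, action spacing, and epsilon-greedy hyperparameters are assumptions, not values taken from **classical_q**:

```python
# Tabular Q-learning sketch for the single-tank setting (the
# discretization and all hyperparameters are illustrative assumptions).
import numpy as np

n_states = 10                        # discretized water-level bins
actions = np.arange(0, 101, 10)     # gate positions 0, 10, ..., 100
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def choose_action(state):
    if np.random.rand() < epsilon:
        return np.random.randint(len(actions))   # explore
    return int(np.argmax(Q[state]))              # exploit

def update(state, action, reward, next_state):
    # one-step Q-learning (Watkins) update
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])
```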
```diff
@@ -21,6 +30,11 @@ To use these agents to control a storm water network, you would just need to kno
 
 Refer to the example implementation for further details.
 
+#### Training
+
+University of Michigan's FLUX cluster was used to train the agents. They can also be run on a local machine, but we **strongly** recommend a GPU.
+
+
 ## Data presented in the paper
 
 Weights of the neural network used for plots can be found in **./data**
```

centralized_controller.py (-1)

```diff
@@ -5,7 +5,6 @@
 from keras.optimizers import RMSprop
 import itertools
 import sys
-import swmm
 
 # Simulation parameters
 epi_start = float(sys.argv[1])
```

lib/__init__.pyc (-161 bytes; binary file not shown)

lib/core_network.pyc (-3.1 KB; binary file not shown)

lib/dqn_agent.pyc (-3.66 KB; binary file not shown)

lib/ger_fun.py (-3)

```diff
@@ -1,5 +1,4 @@
 import numpy as np
-#import matplotlib.pyplot as plt
 import swmm
 from keras.models import Sequential
 from keras.layers import Dense, Activation, Dropout
@@ -69,7 +68,6 @@ def build_network(input_states,
     model.compile(loss='mean_squared_error', optimizer=sgd)
     return model
 
-"""
 def plot_network(Ponds_network, components_tracking,
                  components_bookkeeping, figure_num=1, show=True):
     rows = len(components_tracking) + len(components_bookkeeping)
@@ -95,7 +93,6 @@ def plot_network(Ponds_network, components_tracking,
     # else:
     #     return fig
 
-"""
 # SWMM Network finder
 def swmm_states(Network, state):
     temp = []
```
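As the hunk above shows, `build_network` in lib/ger_fun.py compiles a Keras model against a mean-squared-error loss. A sketch of the kind of fully connected Q-network such a helper might return follows; the layer sizes, activations, and optimizer settings are assumptions rather than the repository's values:

```python
# Sketch of a small fully connected Q-network in the style of
# build_network (sizes and hyperparameters are illustrative only).
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import RMSprop

def build_q_network(input_states, output_actions, hidden=50):
    model = Sequential()
    model.add(Dense(hidden, input_dim=input_states))
    model.add(Activation('relu'))
    model.add(Dense(hidden))
    model.add(Activation('relu'))
    model.add(Dense(output_actions))   # one Q-value per gate action
    model.compile(loss='mean_squared_error', optimizer=RMSprop())
    return model
```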

lib/ger_fun.pyc (-3.09 KB; binary file not shown)

lib/pond_net.pyc (-2.82 KB; binary file not shown)

lib/rewards.py (+15)

```diff
@@ -0,0 +1,15 @@
+import numpy as np
+
+# Single tank reward functions (stubs, to be implemented)
+def reward1():
+    pass
+
+def reward2():
+    pass
+
+def reward3():
+    pass
+
+# Reward Function: System Scale Control (stub, to be implemented)
+def reward_sys(depth, outflow, gate_positions_rate, flood):
+    pass
```
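All four functions are stubs at this point in the port. One illustrative way the `reward_sys` signature could be filled in, penalizing flooding most heavily, is sketched below; the weights are arbitrary and the reward actually used in the paper may differ:

```python
# Hypothetical system-scale reward: flooding dominates, with smaller
# penalties on outflows and on rapid gate movement (weights arbitrary).
import numpy as np

def reward_sys_example(depth, outflow, gate_positions_rate, flood):
    # depth is available for shaping terms but unused in this sketch
    penalty = 10.0 * np.sum(flood)                        # flooding
    penalty += 1.0 * np.sum(np.square(outflow))           # peak outflows
    penalty += 0.1 * np.sum(np.abs(gate_positions_rate))  # gate thrash
    return -penalty
```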

requirements.txt (+1)

```diff
@@ -2,3 +2,4 @@ numpy
 matplotlib
 tensorflow
 keras
+pyswmm
```
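With pyswmm now listed, all Python dependencies can still be installed in one step with `pip install -r requirements.txt`.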
