
Commit e4dd2db

Merge pull request #90 from kleok/dev
Oct 2024 improves
2 parents 8615f8f + d23d710 commit e4dd2db

File tree

8 files changed: +287 −161 lines changed

Floodpyapp_Vit.ipynb

+155 −126
Large diffs are not rendered by default.

README.md

+10 −15
````diff
@@ -1,12 +1,14 @@
 # <img src="https://github.com/kleok/FLOODPY/blob/main/figures/Floodpy_logo.png" width="58"> FLOODPY - FLOOD PYthon toolbox
 [![GitHub license](https://img.shields.io/badge/License-GNU3-green.svg)](https://github.com/kleok/FLOODPY)
-[![Release](https://img.shields.io/badge/Release-0.7.0-brightgreen)](https://github.com/kleok/FLOODPY)
+[![Release](https://img.shields.io/badge/Release-Floodpy_Oct_2024-brightgreen)](https://github.com/kleok/FLOODPY)
 [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/kleok/FLOODPY/issues)
 [![Documentation](https://readthedocs.org/projects/floodpy/badge/?version=latest)](https://floodpy.readthedocs.io/en/latest/)
 
 ## Introduction
 
-The FLOod Mapping PYthon toolbox is a free and open-source python toolbox for mapping of floodwater. It exploits the dense Sentinel-1 GRD intensity time series and is based on four processing steps. In the first step, a selection of Sentinel-1 images related to pre-flood (baseline) state and flood state is performed. In the second step, the preprocessing of the selected images is performed in order to create a co-registered stack with all the pre-flood and flood images. In the third step, a statistical temporal analysis is performed and a t-score map that represents the changes due to flood event is calculated. Finally, in the fourth step, a multi-scale iterative thresholding algorithm based on t-score map is performed to extract the final flood map. We believe that the end-user community can benefit by exploiting the FLOODPY's floodwater maps.
+The Flood mapping python toolbox (Floodpy) is a free and open-source python toolbox for mapping non-urban flooded regions. It exploits the dense Sentinel-1 GRD intensity time series using either a statistical or a ViT (Vision Transformer) approach. Before running Floodpy, make sure you know the following information about the flood event of interest:
+- Date and time of the flood event
+- Spatial extent (e.g. min/max latitude and min/max longitude) of the flood event
 
 This is research code provided to you "as is" with NO WARRANTIES OF CORRECTNESS. Use at your own risk.
 
@@ -36,21 +38,14 @@ traffic.
 
 ### 1.3 Account setup for downloading global atmospheric model data
 
-Currently, FloodPy is based on ERA-5 data. ERA-5 data set is redistributed over the Copernicus Climate Data Store (CDS).
-You have to create a new account [here](https://cds.climate.copernicus.eu/user/register?destination=%2F%23!%2Fhome) if you don't own a user account yet.
-After the creation of your profile, you will find your user id (UID) and your personal API Key on your User profile page.
+FloodPy can download meteorological data based on ERA-5.
+You have to create a new account [here](https://cds.climate.copernicus.eu/) if you don't own a user account yet.
+After the creation of your profile, you will find your Personal Access Token on your User profile page.
+Manually create a ```.cdsapirc``` file under your ```HOME``` directory with the following information:
 
-- Option 1: create manually a ```.cdsapirc``` file under your ```HOME``` directory with the following information:
-
-```
-url: https://cds.climate.copernicus.eu/api/v2
-key: UID:personal API Key
 ```
-- Option 2: Run [aux/install_CDS_key.sh](https://github.com/kleok/FLOODPY/blob/main/aux/install_CDS_key.sh) script as follows:
-
-```bash
-chmod +x install_CDS_key.sh
-./install_CDS_key.sh
+url: https://cds.climate.copernicus.eu/api
+key: Your Personal Access Token
 ```
 
 ### 1.4 Download FLOODPY
````
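For context, the `.cdsapirc` credentials above are read automatically by the CDS Python client (`cdsapi`). A minimal sketch of a retrieval this setup enables; the dataset name and request parameters are illustrative, not Floodpy's exact request:

```python
import cdsapi  # pip install cdsapi

# Client() picks up url/key from ~/.cdsapirc automatically.
c = cdsapi.Client()

# Illustrative ERA-5 request; Floodpy's actual variables and extent differ.
c.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": "total_precipitation",
        "year": "2024",
        "month": "10",
        "day": "30",
        "time": ["00:00", "06:00", "12:00", "18:00"],
        "format": "netcdf",
    },
    "ERA5_precipitation.nc",
)
```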

floodpy/Download/Download_ERA5_precipitation.py

+2 −2
```diff
@@ -163,9 +163,9 @@ def Get_ERA5_data(ERA5_variables:list,
     df_dict={}
     for ERA5_variable in ERA5_variables:
 
-        if ERA5_variable in ['longitude', 'latitude']:
+        if ERA5_variable in ['longitude', 'latitude', 'number']:
            pass
-        elif ERA5_variable=='time':
+        elif ERA5_variable=='valid_time':
            time_var=ERA5_data.variables[ERA5_variable]
            t_cal = ERA5_data.variables[ERA5_variable].calendar
            dtime = netCDF4.num2date(time_var[:],time_var.units, calendar = t_cal)
```
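This rename tracks the updated CDS delivery format, where the time coordinate is exposed as `valid_time` and an ensemble `number` variable may be present. A minimal sketch of decoding that coordinate with `netCDF4` (the filename is hypothetical):

```python
import netCDF4

# Hypothetical ERA-5 file downloaded through the updated CDS API.
ds = netCDF4.Dataset("ERA5_precipitation.nc")
time_var = ds.variables["valid_time"]

# num2date converts raw offsets (e.g. "hours since 1900-01-01") into datetimes.
dtimes = netCDF4.num2date(time_var[:], time_var.units, calendar=time_var.calendar)
print(dtimes[:3])
```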

floodpy/Download/Query_Sentinel_1_products.py

+21 −2
```diff
@@ -1,6 +1,20 @@
 import requests
 import geopandas as gpd
 import pandas as pd
+from datetime import timedelta
+
+def filter_datetimes(datetime_list, seconds_thres = 60):
+    if not datetime_list:
+        return []
+
+    filtered_list = [datetime_list[0]] # always keep the first element
+
+    for i in range(1, len(datetime_list)):
+        time_diff = datetime_list[i] - filtered_list[-1] # difference with the last kept element
+        if time_diff >= timedelta(seconds=seconds_thres):
+            filtered_list.append(datetime_list[i])
+
+    return filtered_list
 
 def get_attribute_value(attribute_column, attr_name):
     for attr_dict in attribute_column:
@@ -50,5 +64,10 @@ def query_Sentinel_1(Floodpy_app):
     query_df.index = pd.to_datetime(query_df['beginningDateTime'])
     query_df = query_df.drop_duplicates('beginningDateTime').sort_index().tz_localize(None)
 
-    flood_candidate_dates = query_df['relativeOrbitNumber'][Floodpy_app.flood_datetime_start:Floodpy_app.flood_datetime_end].index.values
-    return query_df, flood_candidate_dates
+    flood_datetimes = query_df['relativeOrbitNumber'][Floodpy_app.flood_datetime_start:Floodpy_app.flood_datetime_end].index.values
+
+    sorted_flood_datetimes = sorted([pd.to_datetime(flood_datetime) for flood_datetime in flood_datetimes])
+
+    filtered_flood_datetimes = filter_datetimes(sorted_flood_datetimes)
+
+    return query_df, filtered_flood_datetimes
```
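The new `filter_datetimes` helper de-duplicates acquisitions that fall within the same overpass, keeping only timestamps at least `seconds_thres` apart. A quick illustration with made-up timestamps:

```python
from datetime import datetime

from floodpy.Download.Query_Sentinel_1_products import filter_datetimes

# Two GRD slices from the same overpass (25 s apart) plus a later acquisition.
acquisitions = [
    datetime(2024, 10, 1, 5, 30, 0),
    datetime(2024, 10, 1, 5, 30, 25),   # < 60 s after the last kept one -> dropped
    datetime(2024, 10, 13, 5, 30, 10),  # well beyond the threshold -> kept
]

print(filter_datetimes(acquisitions))
# [datetime.datetime(2024, 10, 1, 5, 30), datetime.datetime(2024, 10, 13, 5, 30, 10)]
```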

floodpy/FLOODPYapp.py

+8 −9
```diff
@@ -42,6 +42,7 @@ def __init__(self, params_dict:dict):
 
         # Project Definition
         self.projectfolder = params_dict['projectfolder']
+        self.flood_event = params_dict['flood_event']
         self.src = params_dict['src_dir']
         self.gpt = params_dict['GPTBIN_PATH']
         self.snap_orbit_dir = params_dict['snap_orbit_dir']
@@ -137,11 +138,11 @@ def plot_ERA5_precipitation_data(self):
         self.era5_fig = plot_ERA5(self)
 
     def query_S1_data(self):
-        self.query_S1_df, self.flood_candidate_dates = query_Sentinel_1(self)
+        self.query_S1_df, self.flood_datetimes = query_Sentinel_1(self)
 
     def sel_S1_data(self, sel_flood_date):
-        if pd.to_datetime(sel_flood_date) not in self.flood_candidate_dates:
-            print('Please select one of the available dates for flood mapping: {}'.format(self.flood_candidate_dates))
+        if pd.to_datetime(sel_flood_date) not in self.flood_datetimes:
+            print('Please select one of the available dates for flood mapping: {}'.format(self.flood_datetimes))
 
         self.flood_datetime = sel_flood_date
         self.flood_datetime_str = pd.to_datetime(self.flood_datetime).strftime('%Y%m%dT%H%M%S')
@@ -193,8 +194,10 @@ def calc_floodmap_dataset(self):
     def calc_flooded_regions_ViT(self, ViT_model_filename, device = 'cuda', generate_vector = True, overwrite = True):
         assert device in ['cuda', 'cpu'], 'device parameter must be cuda or cpu'
 
-        self.Flood_map_dataset_filename = os.path.join(self.Results_dir, 'Flood_map_ViT_{}.nc'.format(self.flood_datetime_str))
-        self.Flood_map_vector_dataset_filename = os.path.join(self.Results_dir, 'Flood_map_ViT_{}.geojson'.format(self.flood_datetime_str))
+        self.Flood_map_dataset_filename = os.path.join(self.Results_dir, 'Flooded_regions_{}_{}(UTC).nc'.format(self.flood_event,
+                                                                                                                self.flood_datetime_str))
+        self.Flood_map_vector_dataset_filename = os.path.join(self.Results_dir, 'Flooded_regions_{}_{}(UTC).geojson'.format(self.flood_event,
+                                                                                                                            self.flood_datetime_str))
 
         if os.path.exists(self.Flood_map_dataset_filename):
             if overwrite:
@@ -211,9 +214,5 @@ def calc_flooded_regions_ViT(self, ViT_model_filename, device = 'cuda', generate
         else:
             convert_to_vector(self)
 
-    def plot_flood_map(self):
-        self.interactive_map = plot_interactive_map(self)
-        return self.interactive_map
-
 
 
```
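With the new `flood_event` parameter folded into the output names, products from different events no longer collide. For example (all values hypothetical):

```python
import os

Results_dir = "/data/Floodpy/Results"    # hypothetical results folder
flood_event = "Valencia"                 # hypothetical params_dict['flood_event']
flood_datetime_str = "20241030T175911"   # hypothetical acquisition time (UTC)

print(os.path.join(Results_dir, 'Flooded_regions_{}_{}(UTC).nc'.format(flood_event, flood_datetime_str)))
# /data/Floodpy/Results/Flooded_regions_Valencia_20241030T175911(UTC).nc
```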

floodpy/Preprocessing_S1_data/Preprocessing_S1_data.py

+6 −6
```diff
@@ -37,15 +37,15 @@ def Run_Preprocessing(Floodpy_app, overwrite):
                                    'preprocessing_pair_primary_2GRD_secondary_2GRD.xml'),
     }
 
-    # Find the S1 unique dates
-    S1_datetimes = Floodpy_app.query_S1_sel_df.sort_index().index.values
-    S1_dates = [pd.to_datetime(S1_datetime).date() for S1_datetime in S1_datetimes]
-    S1_unique_dates = np.unique(S1_dates)
 
+    S1_datetimes = Floodpy_app.query_S1_sel_df.sort_index().index.values
+    Pre_flood_indices = pd.to_datetime(S1_datetimes) < Floodpy_app.pre_flood_datetime_end
+    Pre_flood_datetimes = S1_datetimes[Pre_flood_indices]
+
     # Find the dates for Flood and Pre-flood S1 images
     Flood_date = pd.to_datetime(Floodpy_app.flood_datetime).date()
-    assert Flood_date in S1_unique_dates
-    Pre_flood_dates = np.delete(S1_unique_dates, np.where(S1_unique_dates == Flood_date))
+    S1_dates = [pd.to_datetime(Pre_flood_datetime).date() for Pre_flood_datetime in Pre_flood_datetimes]
+    Pre_flood_dates = np.unique(S1_dates)
 
     S1_flood_rows = Floodpy_app.query_S1_sel_df.loc[pd.to_datetime(Flood_date): pd.to_datetime(Flood_date) + pd.Timedelta(hours=24)]
     AOI_polygon = gpd.read_file(Floodpy_app.geojson_bbox)['geometry'][0]
```
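The reworked logic treats every acquisition before `pre_flood_datetime_end` as baseline data, rather than every non-flood date. The masking step in isolation, with made-up dates:

```python
import numpy as np
import pandas as pd

# Hypothetical acquisition times and pre-flood cutoff.
S1_datetimes = pd.to_datetime(["2024-09-01 05:30", "2024-09-13 05:30", "2024-10-01 05:30"]).values
pre_flood_datetime_end = pd.Timestamp("2024-09-20")

# Boolean mask keeps only acquisitions strictly before the cutoff.
Pre_flood_datetimes = S1_datetimes[pd.to_datetime(S1_datetimes) < pre_flood_datetime_end]
Pre_flood_dates = np.unique([pd.to_datetime(t).date() for t in Pre_flood_datetimes])
print(Pre_flood_dates)  # [datetime.date(2024, 9, 1) datetime.date(2024, 9, 13)]
```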
+51 (new file)
```diff
@@ -0,0 +1,51 @@
+import geopandas as gpd
+import pandas as pd
+import os
+import matplotlib.pyplot as plt
+
+def plot_flooded_area_over_time(Floodpy_app, Floodpy_app_objs):
+
+    colorTones = {
+        6: '#CC3A5D', # dark pink
+        5: '#555555', # dark grey
+        4: '#A17C44', # dark brown
+        3: '#8751A1', # dark purple
+        2: '#C1403D', # dark red
+        1: '#2E5A87', # dark blue
+        0: '#57A35D', # dark green
+    }
+
+    Flooded_regions_areas_km2 = {}
+    for flood_date in Floodpy_app_objs.keys():
+        # calculate the area of flooded regions (km²) in a UTM projection
+        Flood_map_vector_data = gpd.read_file(Floodpy_app_objs[flood_date].Flood_map_vector_dataset_filename)
+        Flood_map_vector_data_projected = Flood_map_vector_data.to_crs(Flood_map_vector_data.estimate_utm_crs())
+        area_km2 = round(Flood_map_vector_data_projected.area.sum()/1000000, 2)
+        Flooded_regions_areas_km2[flood_date] = area_km2
+
+
+    def getcolor(val):
+        return colorTones[Floodpy_app.flood_datetimes.index(val)]
+
+    Flooded_regions_areas_km2_df = pd.DataFrame.from_dict(Flooded_regions_areas_km2, orient='index', columns=['Flooded area (km2)'])
+    Flooded_regions_areas_km2_df['Datetime'] = pd.to_datetime(Flooded_regions_areas_km2_df.index)
+    Flooded_regions_areas_km2_df['color'] = Flooded_regions_areas_km2_df['Datetime'].apply(getcolor)
+
+    df = Flooded_regions_areas_km2_df.copy()
+    # Plot the data
+    fig = plt.figure(figsize=(6, 5))
+    plt.bar(df['Datetime'].astype(str), df['Flooded area (km2)'], color=df['color'], width=0.7)
+
+    # Adjust the plot
+    plt.ylabel('Flooded area (km²)', fontsize=16)
+    plt.title('Flooded Area (km²) Over Time', fontsize=16)
+    plt.xticks(df['Datetime'].astype(str), df['Datetime'].dt.strftime('%d-%b-%Y'), rotation=30, ha='right', fontsize=16) # set custom date format
+    plt.yticks(fontsize=16)
+    plt.tight_layout() # adjust layout for better fit
+
+    # Save the plot
+    fig_filename = os.path.join(Floodpy_app.Results_dir, '{}.svg'.format(Floodpy_app.flood_event))
+    plt.savefig(fig_filename, format="svg")
+    # plt.close()
+    print('The figure can be found at: {}'.format(fig_filename))
```
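A sketch of how the new helper might be driven, assuming one configured Floodpy app object per mapped acquisition (`apps_by_datetime` is illustrative):

```python
# Hypothetical: map each flood datetime to the app object whose
# calc_flooded_regions_ViT() already wrote the corresponding GeoJSON.
Floodpy_app_objs = {dt: apps_by_datetime[dt] for dt in Floodpy_app.flood_datetimes}

plot_flooded_area_over_time(Floodpy_app, Floodpy_app_objs)
# Writes <Results_dir>/<flood_event>.svg with one colored bar per acquisition date.
```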

floodpy/utils/geo_utils.py

+34 −1
```diff
@@ -4,6 +4,19 @@
 from shapely.geometry import shape
 import rasterio
 import numpy as np
+import datetime
+import json
+
+colorTones = {
+    6: '#CC3A5D', # dark pink
+    5: '#555555', # dark grey
+    4: '#A17C44', # dark brown
+    3: '#8751A1', # dark purple
+    2: '#C1403D', # dark red
+    1: '#2E5A87', # dark blue
+    0: '#57A35D', # dark green
+}
 
 def create_polygon(coordinates):
     return Polygon(coordinates['coordinates'][0])
@@ -25,4 +38,24 @@ def convert_to_vector(Floodpy_app):
     gdf = gdf.loc[gdf.flooded_regions == 1,:]
     gdf.datetime = Floodpy_app.flood_datetime_str
 
-    gdf.to_file(Floodpy_app.Flood_map_vector_dataset_filename, driver='GeoJSON')
+    # Convert the GeoDataFrame to GeoJSON format (as a dictionary)
+    geojson_str = gdf.to_json()            # this gives the GeoJSON as a string
+    geojson_dict = json.loads(geojson_str) # convert the string to a dictionary
+
+    # Find the color used for plotting this flood date
+    color_ind = Floodpy_app.flood_datetimes.index(Floodpy_app.flood_datetime)
+    plot_color = colorTones[color_ind]
+
+    # Add top-level metadata (e.g., title, description, etc.)
+    geojson_dict['flood_event'] = Floodpy_app.flood_event
+    geojson_dict['description'] = "This GeoJSON contains polygons of flooded regions using Sentinel-1 data."
+    geojson_dict['produced_by'] = "Floodpy"
+    geojson_dict['creation_date_UTC'] = datetime.datetime.now(datetime.timezone.utc).strftime('%Y%m%dT%H%M%S')
+    geojson_dict['flood_datetime_UTC'] = Floodpy_app.flood_datetime_str
+    geojson_dict['plot_color'] = plot_color
+    geojson_dict['bbox'] = Floodpy_app.bbox
+
+    # Save the modified GeoJSON with metadata to a file
+    with open(Floodpy_app.Flood_map_vector_dataset_filename, "w") as f:
+        json.dump(geojson_dict, f, indent=2)
+
+    #gdf.to_file(Floodpy_app.Flood_map_vector_dataset_filename, driver='GeoJSON')
```
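Because the metadata sits at the top level of the GeoJSON, it can be read back with plain `json` (the filename below is hypothetical):

```python
import json

with open("Flooded_regions_Valencia_20241030T175911(UTC).geojson") as f:
    fc = json.load(f)

# Top-level metadata added by convert_to_vector, next to 'type'/'features'.
print(fc["flood_event"], fc["flood_datetime_UTC"], fc["plot_color"])
print(len(fc["features"]), "flooded polygons")
```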
