update from Github

master
Kilian Vos 5 years ago
parent 0939021f3c
commit b32d7d1239

@@ -1,27 +1,28 @@
# CoastSat
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2779294.svg)](https://doi.org/10.5281/zenodo.2779294)

CoastSat is an open-source software toolkit written in Python that enables users to obtain time-series of shoreline position at any coastline worldwide from 30+ years (and growing) of publicly available satellite imagery.

![Alt text](https://github.com/kvos/CoastSat/blob/development/examples/doc/example.gif)

The underlying approach of the CoastSat toolkit is described in detail in:
* Vos K., Splinter K.D., Harley M.D., Simmons J.A., Turner I.L. (2019). CoastSat: a Google Earth Engine-enabled Python toolkit to extract shorelines from publicly available satellite imagery. *Environmental Modelling and Software*. 122, 104528. https://doi.org/10.1016/j.envsoft.2019.104528

Example applications and accuracy of the resulting satellite-derived shorelines are discussed in:
* Vos K., Harley M.D., Splinter K.D., Simmons J.A., Turner I.L. (2019). Sub-annual to multi-decadal shoreline variability from publicly available satellite imagery. *Coastal Engineering*. 150, 160–174. https://doi.org/10.1016/j.coastaleng.2019.04.004

### Description
Satellite remote sensing can provide low-cost long-term shoreline data capable of resolving the temporal scales of interest to coastal scientists and engineers at sites where no in-situ field measurements are available. CoastSat enables the non-expert user to extract shorelines from Landsat 5, Landsat 7, Landsat 8 and Sentinel-2 images.
The shoreline detection algorithm implemented in CoastSat is optimised for sandy beach coastlines. It combines a sub-pixel border segmentation and an image classification component, which refines the segmentation into four distinct categories such that the shoreline detection is specific to the sand/water interface.

The toolbox has three main functionalities:
- assisted retrieval from Google Earth Engine of all available satellite images spanning the user-defined region of interest and time period
- automated extraction of shorelines from all the selected images using a sub-pixel resolution technique
- intersection of the 2D shorelines with user-defined shore-normal transects

**If you like the repo put a star on it!**

## 1. Installation
### 1.1 Create an environment with Anaconda
@@ -44,21 +45,17 @@ conda activate coastsat
To confirm that you have successfully activated CoastSat, your terminal command line prompt should now start with (coastsat).

**In case errors are raised:** open the **Anaconda Navigator**, in the *Environments* tab click on *Import* and select the *environment.yml* file. For more details, the following [link](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands) shows how to create and manage an environment with Anaconda.

### 1.2 Activate Google Earth Engine Python API

With the `coastsat` environment activated, run the following command on the Anaconda Prompt to link your environment to the GEE server:
```
earthengine authenticate
```
A web browser will open; log in with a Gmail account and accept the terms and conditions. Then copy the authorization code into the Anaconda terminal.

Now you are ready to start using the CoastSat toolbox!
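If you want to double-check that the credentials are correctly linked, a minimal test (assuming the `earthengine-api` package installed in the `coastsat` environment) is to initialise a connection from a Python console:
```
import ee
ee.Initialize()  # raises an exception if the stored credentials are missing or invalid
print('GEE connection established')
```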
@@ -78,9 +75,9 @@ The following sections guide the reader through the different functionalities of
If using `example.py` on **Spyder**, make sure that the Graphics Backend is set to **Automatic** and not **Inline** (as this mode doesn't allow you to interact with the figures). To change this setting go under Preferences>IPython console>Graphics.

A Jupyter Notebook combines formatted text and code. To run the code, place your cursor inside one of the code sections and click on the `run cell` button (shown below) to progress forward.

![run_cell](https://user-images.githubusercontent.com/7217258/60766570-c2100080-a0ee-11e9-9675-e2aeba87e4a7.png)

### 2.1 Retrieval of the satellite images
@@ -104,24 +101,22 @@ The screenshot below shows an example of inputs that will retrieve all the image
To map the shorelines, the following user-defined settings are needed:
- `cloud_thresh`: threshold on the maximum cloud cover that is acceptable on the images (value between 0 and 1 - this may require some initial experimentation).
- `output_epsg`: EPSG code defining the spatial reference system of the shoreline coordinates. It has to be a cartesian coordinate system (i.e. projected) and not a geographical coordinate system (in latitude and longitude angles). See http://spatialreference.org/ to find the EPSG number corresponding to your local coordinate system.
- `check_detection`: if set to `True` the user can quality-control each shoreline detection interactively (recommended when mapping shorelines for the first time).
- `save_figure`: if set to `True` a figure of each mapped shoreline is saved (under *filepath/sitename/jpg_files/detection*). Note that this may slow down the process.

There are additional parameters (`min_beach_size`, `buffer_size`, `min_length_sl`, `cloud_mask_issue` and `sand_color`) that can be tuned to optimise the shoreline detection (for advanced users only). For the moment, leave these parameters set to their default values; we will see later how they can be modified.

An example of settings is provided here:

![settings](https://user-images.githubusercontent.com/7217258/65950715-f68f2080-e481-11e9-80b6-19e13f2ec179.PNG)
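For reference, the corresponding `settings` dictionary in the script looks along these lines (a sketch assembled from the parameters above and the `example.py` excerpt further down; the `cloud_thresh`/`output_epsg` values and the `'inputs': inputs` entry are illustrative assumptions, and `inputs` is the dictionary used for the image retrieval):
```
settings = {
    # general parameters
    'cloud_thresh': 0.5,        # threshold on maximum cloud cover (illustrative value)
    'output_epsg': 28356,       # EPSG code of the output spatial reference system (illustrative value)
    # quality control
    'check_detection': True,    # if True, shows each shoreline detection for the user to accept/reject
    'save_figure': True,        # if True, saves a figure of each mapped shoreline
    # add the inputs defined for the image retrieval (assumed entry)
    'inputs': inputs,
    # [ONLY FOR ADVANCED USERS] shoreline detection parameters
    'buffer_size': 150,         # radius (in metres) of the buffer around sandy pixels considered in the shoreline detection
    'min_length_sl': 200,       # minimum length (in metres) of shoreline perimeter to be valid
    'cloud_mask_issue': False,  # switch this parameter to True if sand pixels are masked (in black) on many images
    'sand_color': 'default',    # 'default', 'dark' (for grey/black sand beaches) or 'bright' (for white sand beaches)
}
```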
Once all the settings have been defined, the batch shoreline detection can be launched by calling:
```
output = SDS_shoreline.extract_shorelines(metadata, settings)
```
When `check_detection` is set to `True`, a figure like the one below appears and asks the user to manually accept/reject each detection by pressing **on the keyboard** the `right arrow` (⇨) to `keep` the shoreline or the `left arrow` (⇦) to `skip` it. The user can break the loop at any time by pressing `escape` (nothing will be saved though).

![map_shorelines](https://user-images.githubusercontent.com/7217258/60766769-fafda480-a0f1-11e9-8f91-419d848ff98d.gif)

Once all the shorelines have been mapped, the output is available in two different formats (saved under *.\data\sitename*):
- `sitename_output.pkl`: contains a list with the shoreline coordinates, the exact timestamp at which the image was captured (UTC time), the geometric accuracy and the cloud cover of each individual image. This list can be manipulated with Python; a snippet of code to plot the results is provided in the example script (see the sketch below).
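As a sketch of how this file can be inspected (the `'shorelines'` key appears elsewhere in the toolbox, while the `'dates'` key and the `NARRA` site name are assumed here for illustration):
```
import os
import pickle

sitename = 'NARRA'  # assumed example site name
with open(os.path.join('data', sitename, sitename + '_output.pkl'), 'rb') as f:
    output = pickle.load(f)

# print how many points each mapped shoreline contains
for date, sl in zip(output['dates'], output['shorelines']):
    print(date, '->', len(sl), 'points')
```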
@@ -133,7 +128,7 @@ The figure below shows how the satellite-derived shorelines can be opened in a G
#### Reference shoreline
Before running the batch shoreline detection, there is the option to manually digitize a reference shoreline on one cloud-free image. This reference shoreline helps to reject outliers and false detections when mapping shorelines, as it only considers as valid shorelines the points that are within a defined distance from this reference shoreline.

The user can manually digitize a reference shoreline on one of the images by calling:
```
settings['reference_shoreline'] = SDS_preprocess.get_reference_sl(metadata, settings)
settings['max_dist_ref'] = 100 # max distance (in meters) allowed from the reference shoreline
```
This function allows the user to click points along the shoreline on one of the satellite images, as shown in the animation below.

![reference_shoreline](https://user-images.githubusercontent.com/7217258/60766913-6c8a2280-a0f3-11e9-89e5-865e11aa26cd.gif)

The maximum distance (in metres) allowed from the reference shoreline is defined by the parameter `max_dist_ref`. This parameter is set to a default value of 100 m. If you think that a 100 m buffer from the reference shoreline will not capture the shoreline variability at your site, increase the value of this parameter. This may be the case for large nourishments or eroding/accreting coastlines.
@@ -153,7 +148,7 @@ As mentioned above, there are some additional parameters that can be modified to
- `buffer_size`: radius (in metres) that defines the buffer around sandy pixels that is considered to calculate the sand/water threshold. The default value of `buffer_size` is 150 m. This parameter should be increased if you have a very wide (>150 m) surf zone or inter-tidal zone.
- `min_length_sl`: minimum length (in metres) of shoreline perimeter to be valid. This can be used to discard small features that are detected but do not correspond to the actual shoreline. The default value is 200 m. If the shoreline that you are trying to map is shorter than 200 m, decrease the value of this parameter.
- `cloud_mask_issue`: the cloud mask algorithm applied to Landsat images by USGS, namely CFMASK, sometimes has difficulties with very bright features such as beaches or white-water in the ocean. This may result in pixels corresponding to a beach being identified as clouds and appearing as masked pixels on your images. If this issue seems to be present in a large proportion of images from your local beach, you can switch this parameter to `True` and CoastSat will remove from the cloud mask the pixels that form very thin linear features, as these are often beaches and not clouds. Only activate this parameter if you observe this very specific cloud mask issue; otherwise leave it at the default value of `False`.
- `sand_color`: this parameter can take 3 values: `default`, `dark` or `bright`. Only change this parameter if you are seeing that with `default` the sand pixels are not being classified as sand (in orange). If your beach has dark sand (grey/black sand beaches), set this parameter to `dark` and the classifier will be able to pick up the dark sand. On the other hand, if your beach has white sand and the `default` classifier is not picking it up, switch this parameter to `bright` (see the snippet after this list). At this stage this option is only available for Landsat images (soon for Sentinel-2 as well).
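For example, for a grey/black sand beach you would simply override the default value before running the detection (a minimal sketch, reusing the `settings` dictionary and the call shown above):
```
settings['sand_color'] = 'dark'  # use the classifier trained on dark-sand beaches
output = SDS_shoreline.extract_shorelines(metadata, settings)
```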
### 2.3 Shoreline change analysis
@@ -191,12 +186,17 @@ An example is shown in the animation below:
Having a problem? Post an issue in the [Issues page](https://github.com/kvos/coastsat/issues) (please do not email).

## Contributing
If you are willing to contribute, check out our to-do list in the [Projects page](https://github.com/kvos/CoastSat/projects/1).
1. Fork the repository (https://github.com/kvos/coastsat/fork). A fork is a copy on which you can make your changes.
2. Create a new branch on your fork.
3. Commit your changes and push them to your branch.
4. When the branch is ready to be merged, create a Pull Request (how to make a clean pull request is explained [here](https://gist.github.com/MarcDiethelm/7303312)).

## References

1. Vos K., Harley M.D., Splinter K.D., Simmons J.A., Turner I.L. (2019). Sub-annual to multi-decadal shoreline variability from publicly available satellite imagery. *Coastal Engineering*. 150, 160–174. https://doi.org/10.1016/j.coastaleng.2019.04.004
2. Vos K., Splinter K.D., Harley M.D., Simmons J.A., Turner I.L. (2019). CoastSat: a Google Earth Engine-enabled Python toolkit to extract shorelines from publicly available satellite imagery. *Environmental Modelling and Software*. 122, 104528. https://doi.org/10.1016/j.envsoft.2019.104528
3. Training dataset used for pixel classification in CoastSat: https://doi.org/10.5281/zenodo.3334147

@@ -29,9 +29,6 @@ from coastsat import SDS_preprocess, SDS_tools
```
np.seterr(all='ignore') # raise/ignore divisions by 0 and nans

def download_tif(image, polygon, bandsId, filepath):
    """
    Downloads a .TIF image from the ee server and stores it in a temp file
```
@@ -99,6 +96,9 @@ def retrieve_images(inputs):
```
    """
    # initialise connection with GEE server
    ee.Initialize()

    # read inputs dictionary
    sitename = inputs['sitename']
    polygon = inputs['polygon']
```
@@ -134,11 +134,18 @@ def retrieve_images(inputs):
```
        os.makedirs(filepath_meta)

    # Landsat 5 collection
    count_loop = 0
    while count_loop < 1:
        try:
            input_col = ee.ImageCollection('LANDSAT/LT05/C01/T1_TOA')
            # filter by location and dates
            flt_col = input_col.filterBounds(ee.Geometry.Polygon(polygon)).filterDate(dates[0],dates[1])
            # get all images in the filtered collection
            im_all = flt_col.getInfo().get('features')
            count_loop = 1
        except:
            count_loop = 0
    # remove very cloudy images (>95% cloud)
    cloud_cover = [_['properties']['CLOUD_COVER'] for _ in im_all]
    if np.any([_ > 95 for _ in cloud_cover]):
```
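The same retry pattern (keep issuing the request until the GEE server responds) is repeated for every collection and every download in the hunks below. As a sketch, it could equally be factored into a small helper (a hypothetical refactoring, not part of this commit):
```
def retry_until_success(func, *args, **kwargs):
    # keep retrying the GEE request until it succeeds; like the inline
    # loops in this commit, this retries forever on a persistent error
    while True:
        try:
            return func(*args, **kwargs)
        except Exception:
            continue

# e.g. im_all = retry_until_success(lambda: flt_col.getInfo().get('features'))
```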
@@ -157,9 +164,14 @@ def retrieve_images(inputs):
```
    all_names = []
    im_epsg = []
    for i in range(n_img):
        count_loop = 0
        while count_loop < 1:
            try:
                # find each image in ee database
                im = ee.Image(im_col[i]['id'])
                count_loop = 1
            except:
                count_loop = 0
        # read metadata
        im_dic = im_col[i]
        # get bands
```
@@ -189,7 +201,13 @@ def retrieve_images(inputs):
```
        all_names.append(filename)
        filenames.append(filename)
        # download .TIF image
        count_loop = 0
        while count_loop < 1:
            try:
                local_data = download_tif(im, polygon, ms_bands, filepath)
                count_loop = 1
            except:
                count_loop = 0
        # update filename
        try:
            os.rename(local_data, os.path.join(filepath, filename))
```
@@ -237,11 +255,18 @@ def retrieve_images(inputs):
```
        os.makedirs(filepath_meta)

    # landsat 7 collection
    count_loop = 0
    while count_loop < 1:
        try:
            input_col = ee.ImageCollection('LANDSAT/LE07/C01/T1_RT_TOA')
            # filter by location and dates
            flt_col = input_col.filterBounds(ee.Geometry.Polygon(polygon)).filterDate(dates[0],dates[1])
            # get all images in the filtered collection
            im_all = flt_col.getInfo().get('features')
            count_loop = 1
        except:
            count_loop = 0
    # remove very cloudy images (>95% cloud)
    cloud_cover = [_['properties']['CLOUD_COVER'] for _ in im_all]
    if np.any([_ > 95 for _ in cloud_cover]):
```
@@ -261,8 +286,14 @@ def retrieve_images(inputs):
```
    im_epsg = []
    for i in range(n_img):
        count_loop = 0
        while count_loop < 1:
            try:
                # find each image in ee database
                im = ee.Image(im_col[i]['id'])
                count_loop = 1
            except:
                count_loop = 0
        # read metadata
        im_dic = im_col[i]
        # get bands
```
@@ -295,8 +326,14 @@ def retrieve_images(inputs):
```
        all_names.append(filename_pan)
        filenames.append(filename_pan)
        # download .TIF image
        count_loop = 0
        while count_loop < 1:
            try:
                local_data_pan = download_tif(im, polygon, pan_band, filepath_pan)
                local_data_ms = download_tif(im, polygon, ms_bands, filepath_ms)
                count_loop = 1
            except:
                count_loop = 0
        # update filename
        try:
            os.rename(local_data_pan, os.path.join(filepath_pan, filename_pan))
```
@@ -349,11 +386,18 @@ def retrieve_images(inputs):
```
        os.makedirs(filepath_meta)

    # landsat 8 collection
    count_loop = 0
    while count_loop < 1:
        try:
            input_col = ee.ImageCollection('LANDSAT/LC08/C01/T1_RT_TOA')
            # filter by location and dates
            flt_col = input_col.filterBounds(ee.Geometry.Polygon(polygon)).filterDate(dates[0],dates[1])
            # get all images in the filtered collection
            im_all = flt_col.getInfo().get('features')
            count_loop = 1
        except:
            count_loop = 0
    # remove very cloudy images (>95% cloud)
    cloud_cover = [_['properties']['CLOUD_COVER'] for _ in im_all]
    if np.any([_ > 95 for _ in cloud_cover]):
```
@@ -373,8 +417,14 @@ def retrieve_images(inputs):
```
    im_epsg = []
    for i in range(n_img):
        count_loop = 0
        while count_loop < 1:
            try:
                # find each image in ee database
                im = ee.Image(im_col[i]['id'])
                count_loop = 1
            except:
                count_loop = 0
        # read metadata
        im_dic = im_col[i]
        # get bands
```
@@ -407,8 +457,15 @@ def retrieve_images(inputs):
```
        all_names.append(filename_pan)
        filenames.append(filename_pan)
        # download .TIF image
        count_loop = 0
        while count_loop < 1:
            try:
                local_data_pan = download_tif(im, polygon, pan_band, filepath_pan)
                local_data_ms = download_tif(im, polygon, ms_bands, filepath_ms)
                count_loop = 1
            except:
                count_loop = 0
        # update filename
        try:
            os.rename(local_data_pan, os.path.join(filepath_pan, filename_pan))
```
@@ -461,11 +518,18 @@ def retrieve_images(inputs):
```
        os.makedirs(filepath_meta)

    # Sentinel2 collection
    count_loop = 0
    while count_loop < 1:
        try:
            input_col = ee.ImageCollection('COPERNICUS/S2')
            # filter by location and dates
            flt_col = input_col.filterBounds(ee.Geometry.Polygon(polygon)).filterDate(dates[0],dates[1])
            # get all images in the filtered collection
            im_all = flt_col.getInfo().get('features')
            count_loop = 1
        except:
            count_loop = 0
    # remove duplicates in the collection (there are many in S2 collection)
    timestamps = [datetime.fromtimestamp(_['properties']['system:time_start']/1000,
                                         tz=pytz.utc) for _ in im_all]
```
@@ -516,9 +580,14 @@ def retrieve_images(inputs):
```
    all_names = []
    im_epsg = []
    for i in range(n_img):
        count_loop = 0
        while count_loop < 1:
            try:
                # find each image in ee database
                im = ee.Image(im_col[i]['id'])
                count_loop = 1
            except:
                count_loop = 0
        # read metadata
        im_dic = im_col[i]
        # get bands
```
@@ -547,21 +616,42 @@ def retrieve_images(inputs):
```
        filenames.append(filename10)

        # download .TIF image and update filename
        count_loop = 0
        while count_loop < 1:
            try:
                local_data = download_tif(im, polygon, bands10, os.path.join(filepath, '10m'))
                count_loop = 1
            except:
                count_loop = 0
        try:
            os.rename(local_data, os.path.join(filepath, '10m', filename10))
        except:
            os.remove(os.path.join(filepath, '10m', filename10))
            os.rename(local_data, os.path.join(filepath, '10m', filename10))
        count_loop = 0
        while count_loop < 1:
            try:
                local_data = download_tif(im, polygon, bands20, os.path.join(filepath, '20m'))
                count_loop = 1
            except:
                count_loop = 0
        try:
            os.rename(local_data, os.path.join(filepath, '20m', filename20))
        except:
            os.remove(os.path.join(filepath, '20m', filename20))
            os.rename(local_data, os.path.join(filepath, '20m', filename20))
        count_loop = 0
        while count_loop < 1:
            try:
                local_data = download_tif(im, polygon, bands60, os.path.join(filepath, '60m'))
                count_loop = 1
            except:
                count_loop = 0
        try:
            os.rename(local_data, os.path.join(filepath, '60m', filename60))
        except:
            os.remove(os.path.join(filepath, '60m', filename60))
            os.rename(local_data, os.path.join(filepath, '60m', filename60))
```

@@ -298,7 +298,15 @@ def preprocess_single(fn, satname, cloud_mask_issue):
```
        im_inf = np.isin(im_ms[:,:,k], -np.inf)
        im_nan = np.isnan(im_ms[:,:,k])
        cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
        im_nodata = np.logical_or(np.logical_or(im_nodata, im_inf), im_nan)
    # check if there are pixels with 0 intensity in the Green, NIR and SWIR bands and add those
    # to the cloud mask as otherwise they will cause errors when calculating the NDWI and MNDWI
    im_zeros = np.ones(cloud_mask.shape).astype(bool)
    for k in [1,3,4]: # loop through the Green, NIR and SWIR bands
        im_zeros = np.logical_and(np.isin(im_ms[:,:,k],0), im_zeros)
    # update cloud mask and nodata
    cloud_mask = np.logical_or(im_zeros, cloud_mask)
    im_nodata = np.logical_or(im_zeros, im_nodata)

    # no extra image for Landsat 5 (they are all 30 m bands)
    im_extra = []
```
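The zero-intensity check above exists because the normalized-difference water indices divide by a sum of bands, so a pixel where both bands are 0 would produce 0/0. A minimal sketch of such an index (generic form; the toolkit's own implementation may differ):
```
import numpy as np

def nd_index(im1, im2, mask):
    # normalized-difference index (b1 - b2) / (b1 + b2), e.g. an MNDWI-style
    # index from the SWIR and Green bands; masked pixels are set to NaN
    denom = im1 + im2
    out = np.full(im1.shape, np.nan)
    np.divide(im1 - im2, denom, out=out, where=(denom != 0))
    out[mask] = np.nan
    return out
```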
@@ -337,15 +345,19 @@ def preprocess_single(fn, satname, cloud_mask_issue):
```
                                 mode='constant').astype('bool_')
    # check if -inf or nan values on any band and eventually add those pixels to cloud mask
    im_nodata = np.zeros(cloud_mask.shape).astype(bool)
    for k in range(im_ms.shape[2]):
        im_inf = np.isin(im_ms[:,:,k], -np.inf)
        im_nan = np.isnan(im_ms[:,:,k])
        cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
        im_nodata = np.logical_or(np.logical_or(im_nodata, im_inf), im_nan)
    # check if there are pixels with 0 intensity in the Green, NIR and SWIR bands and add those
    # to the cloud mask as otherwise they will cause errors when calculating the NDWI and MNDWI
    im_zeros = np.ones(cloud_mask.shape).astype(bool)
    for k in [1,3,4]: # loop through the Green, NIR and SWIR bands
        im_zeros = np.logical_and(np.isin(im_ms[:,:,k],0), im_zeros)
    # update cloud mask and nodata
    cloud_mask = np.logical_or(im_zeros, cloud_mask)
    im_nodata = np.logical_or(im_zeros, im_nodata)

    # pansharpen Green, Red, NIR (where there is overlapping with pan band in L7)
    try:
```
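For context, pansharpening replaces the spatial detail of the lower-resolution multispectral bands with that of the 15 m panchromatic band. A generic principal-component substitution sketch (illustrative only; it assumes `im_pan` has already been resampled onto the multispectral grid, and the toolkit's own routine may differ):
```
import numpy as np
from sklearn.decomposition import PCA
from skimage import exposure

def pansharpen_sketch(im_ms, im_pan):
    # project the multispectral bands onto their principal components,
    # histogram-match the pan band to the first component, substitute it,
    # then invert the transform to recover the sharpened bands
    vec = im_ms.reshape(-1, im_ms.shape[2])
    pca = PCA(n_components=im_ms.shape[2])
    pcs = pca.fit_transform(vec)
    pcs[:, 0] = exposure.match_histograms(im_pan.ravel(), pcs[:, 0])
    return pca.inverse_transform(pcs).reshape(im_ms.shape)
```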
@@ -395,15 +407,19 @@ def preprocess_single(fn, satname, cloud_mask_issue):
```
                                 mode='constant').astype('bool_')
    # check if -inf or nan values on any band and eventually add those pixels to cloud mask
    im_nodata = np.zeros(cloud_mask.shape).astype(bool)
    for k in range(im_ms.shape[2]):
        im_inf = np.isin(im_ms[:,:,k], -np.inf)
        im_nan = np.isnan(im_ms[:,:,k])
        cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
        im_nodata = np.logical_or(np.logical_or(im_nodata, im_inf), im_nan)
    # check if there are pixels with 0 intensity in the Green, NIR and SWIR bands and add those
    # to the cloud mask as otherwise they will cause errors when calculating the NDWI and MNDWI
    im_zeros = np.ones(cloud_mask.shape).astype(bool)
    for k in [1,3,4]: # loop through the Green, NIR and SWIR bands
        im_zeros = np.logical_and(np.isin(im_ms[:,:,k],0), im_zeros)
    # update cloud mask and nodata
    cloud_mask = np.logical_or(im_zeros, cloud_mask)
    im_nodata = np.logical_or(im_zeros, im_nodata)

    # pansharpen Blue, Green, Red (where there is overlapping with pan band in L8)
    try:
```
@@ -474,7 +490,16 @@ def preprocess_single(fn, satname, cloud_mask_issue):
```
        im_inf = np.isin(im_ms[:,:,k], -np.inf)
        im_nan = np.isnan(im_ms[:,:,k])
        cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
        im_nodata = np.logical_or(np.logical_or(im_nodata, im_inf), im_nan)
    # check if there are pixels with 0 intensity in the Green, NIR and SWIR bands and add those
    # to the cloud mask as otherwise they will cause errors when calculating the NDWI and MNDWI
    im_zeros = np.ones(cloud_mask.shape).astype(bool)
    for k in [1,3,4]: # loop through the Green, NIR and SWIR bands
        im_zeros = np.logical_and(np.isin(im_ms[:,:,k],0), im_zeros)
    # update cloud mask and nodata
    cloud_mask = np.logical_or(im_zeros, cloud_mask)
    im_nodata = np.logical_or(im_zeros, im_nodata)

    # the extra image is the 20m SWIR band
    im_extra = im20
```

@@ -623,7 +623,7 @@ def show_detection(im_ms, cloud_mask, im_labels, shoreline, image_epsg, georef,
```
    # if save_figure is True, save a .jpg under /jpg_files/detection
    if settings['save_figure'] and not skip_image:
        fig.savefig(os.path.join(filepath, date + '_' + satname + '.jpg'), dpi=150)

    # Don't close the figure window, but remove all axes and settings, ready for next plot
    for ax in fig.axes:
```
@@ -699,8 +699,10 @@ def extract_shorelines(metadata, settings):
```
        # load classifiers and
        if satname in ['L5','L7','L8']:
            pixel_size = 15
            if settings['sand_color'] == 'dark':
                clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_Landsat_dark.pkl'))
            elif settings['sand_color'] == 'bright':
                clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_Landsat_bright.pkl'))
            else:
                clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_Landsat.pkl'))
```
@@ -723,21 +725,33 @@ def extract_shorelines(metadata, settings):
```
            im_ms, georef, cloud_mask, im_extra, im_QA, im_nodata = SDS_preprocess.preprocess_single(fn, satname, settings['cloud_mask_issue'])
            # get image spatial reference system (epsg code) from metadata dict
            image_epsg = metadata[satname]['epsg'][i]
            # define an advanced cloud mask (for L7 it takes into account the fact that diagonal
            # bands of no data are not clouds)
            if not satname == 'L7' or sum(sum(im_nodata)) == 0 or sum(sum(im_nodata)) > 0.5*im_nodata.size:
                cloud_mask_adv = cloud_mask
            else:
                cloud_mask_adv = np.logical_xor(cloud_mask, im_nodata)
            # calculate cloud cover
            cloud_cover = np.divide(sum(sum(cloud_mask_adv.astype(int))),
                                    (cloud_mask.shape[0]*cloud_mask.shape[1]))
            # skip image if cloud cover is above threshold
            if cloud_cover > settings['cloud_thresh']:
                continue
            # calculate a buffer around the reference shoreline (if any has been digitised)
            im_ref_buffer = create_shoreline_buffer(cloud_mask.shape, georef, image_epsg,
                                                    pixel_size, settings)
            # when running the automated mode, skip image if cloudy pixels are found in the shoreline buffer
            if not settings['check_detection'] and 'reference_shoreline' in settings.keys():
                if sum(sum(np.logical_and(im_ref_buffer, cloud_mask_adv))) > 0:
                    continue
            # classify image in 4 classes (sand, whitewater, water, other) with NN classifier
            im_classif, im_labels = classify_image_NN(im_ms, im_extra, cloud_mask,
                                                      min_beach_area_pixels, clf)
            # there are two options to map the contours:
            # if there are pixels in the 'sand' class --> use find_wl_contours2 (enhanced)
            # otherwise use find_wl_contours1 (traditional)
```
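A toy example of why `logical_xor` gives the "advanced" cloud mask for Landsat 7 (hypothetical values): XOR keeps the pixels flagged as cloud that are not part of the diagonal no-data stripes, which CFMASK otherwise counts as cloud:
```
import numpy as np

cloud_mask = np.array([True, True, False, False])   # True = flagged as cloud
im_nodata  = np.array([True, False, False, False])  # True = L7 scan-line gap (no data)
cloud_mask_adv = np.logical_xor(cloud_mask, im_nodata)
print(cloud_mask_adv)  # [False  True False False] -> only the genuine cloud pixel remains
```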

@@ -464,10 +464,12 @@ def output_to_gdf(output):
```
    """
    # loop through the mapped shorelines
    counter = 0
    for i in range(len(output['shorelines'])):
        # skip if the shoreline is empty
        if len(output['shorelines'][i]) == 0:
            continue
        else:
            # save the geometry + attributes
            geom = geometry.LineString(output['shorelines'][i])
            gdf = gpd.GeoDataFrame(geometry=gpd.GeoSeries(geom))
```
@@ -477,10 +479,11 @@ def output_to_gdf(output):
```
            gdf.loc[i,'geoaccuracy'] = output['geoaccuracy'][i]
            gdf.loc[i,'cloud_cover'] = output['cloud_cover'][i]
            # store into geodataframe
            if counter == 0:
                gdf_all = gdf
            else:
                gdf_all = gdf_all.append(gdf)
            counter = counter + 1

    return gdf_all
```
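In use, the function converts the `output` dictionary into a GeoDataFrame that can be written out for GIS software (a sketch; the CRS assignment and driver arguments are assumptions):
```
gdf = output_to_gdf(output)
# assign the spatial reference defined in the settings, then export
gdf.crs = {'init': 'epsg:' + str(settings['output_epsg'])}
gdf.to_file('NARRA_output.geojson', driver='GeoJSON', encoding='utf-8')
```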

@@ -71,7 +71,7 @@ settings = {
```
    'buffer_size': 150, # radius (in metres) of the buffer around sandy pixels considered in the shoreline detection
    'min_length_sl': 200, # minimum length (in metres) of shoreline perimeter to be valid
    'cloud_mask_issue': False, # switch this parameter to True if sand pixels are masked (in black) on many images
    'sand_color': 'default', # 'default', 'dark' (for grey/black sand beaches) or 'bright' (for white sand beaches)
}

# [OPTIONAL] preprocess images (cloud masking, pansharpening/down-sampling)
```

@@ -6,14 +6,16 @@
```
"source": [
    "# *CoastSat*: example at Narrabeen-Collaroy, Australia\n",
    "\n",
    "This software is described in detail in:\n",
    "* Vos K., Splinter K.D., Harley M.D., Simmons J.A., Turner I.L. (2019). CoastSat: a Google Earth Engine-enabled Python toolkit to extract shorelines from publicly available satellite imagery. Environmental Modelling and Software. 122, 104528. https://doi.org/10.1016/j.envsoft.2019.104528\n",
    "\n",
    "It enables users to extract time-series of shoreline change over the last 30+ years at their site of interest.\n",
    "There are three main steps:\n",
    "1. Retrieval of the satellite images of the region of interest from Google Earth Engine\n",
    "2. Shoreline extraction at sub-pixel resolution\n",
    "3. Intersection of the shorelines with cross-shore transects\n",
    "\n",
    "## Initial settings\n",
    "\n",
    "Refer to the **Installation** section of the README for instructions on how to install the Python packages necessary to run the software, including Google Earth Engine Python API. If that step has been completed correctly, the following packages should be imported without any problem."
]
```
@@ -37,7 +39,7 @@
```
"cell_type": "markdown",
"metadata": {},
"source": [
    "## 1. Retrieval of the images from GEE\n",
    "\n",
    "Define the region of interest (`polygon`), the date range (`dates`) and the satellite missions (`sat_list`) from which you wish to retrieve the satellite images. The images will be cropped on the Google Earth Engine server and only the region of interest will be downloaded as a .tif file. The files will be stored in the directory defined in `filepath`.\n",
    "\n",
```
@@ -104,7 +106,7 @@
```
"cell_type": "markdown",
"metadata": {},
"source": [
    "## 2. Shoreline extraction\n",
    "\n",
    "This section maps the position of the shoreline on the satellite images. The user can define the cloud threshold (`cloud_thresh`) and select the spatial reference system in which to output the coordinates of the mapped shorelines (`output_epsg`). See http://spatialreference.org/ to find the EPSG number corresponding to your local coordinate system. Make sure that you are using cartesian coordinates and not spherical coordinates (lat,lon) like WGS84.\n",
    "\n",
```
@@ -133,7 +135,7 @@
```
    " 'buffer_size': 150, # radius (in metres) of the buffer around sandy pixels considered in the shoreline detection\n",
    " 'min_length_sl': 200, # minimum length (in metres) of shoreline perimeter to be valid\n",
    " 'cloud_mask_issue': False, # switch this parameter to True if sand pixels are masked (in black) on many images \n",
    " 'sand_color': 'default', # 'default', 'dark' (for grey/black sand beaches) or 'bright' (for white sand beaches)\n",
    "}"
]
},
```
@@ -178,7 +180,9 @@
```
"metadata": {},
"source": [
    "### Batch shoreline detection\n",
    "Extracts the 2D shorelines from the images in the spatial reference system specified by the user in `'output_epsg'`. The mapped shorelines are saved into `output.pkl` (under *./data/sitename*) and `output.geojson` (to be used in a GIS software).\n",
    "\n",
    "If you see that the sand pixels on the images are not being identified, change the parameter `sand_color` from `default` to `dark` or `bright` depending on the color of your beach."
]
},
{
```
@@ -225,7 +229,7 @@
```
"cell_type": "markdown",
"metadata": {},
"source": [
    "## 3. Shoreline analysis\n",
    "\n",
    "In this section we show how to compute time-series of cross-shore distance along user-defined shore-normal transects."
]
```
@@ -371,7 +375,7 @@
```
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"varInspector": {
"cols": {
```

Binary file not shown (image, 16 MiB).
