update from GitHub.com

master
Kilian Vos 6 years ago
parent 8c2ba23cff
commit 0939021f3c

.gitignore vendored

*.jpg
*.pkl
*.kml
*.txt
*.geojson
*checkpoint.ipynb

# CoastSat
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2779294.svg)](https://doi.org/10.5281/zenodo.2779294)
CoastSat is an open-source software toolkit written in Python that enables users to obtain time-series of shoreline position at any coastline worldwide from 30+ years (and growing) of publicly available satellite imagery.
![Alt text](https://github.com/kvos/CoastSat/blob/development/examples/doc/example.gif)
The underlying approach and application of the CoastSat toolkit are described in detail in:
*Vos K., Splinter K.D., Harley M.D., Simmons J.A., Turner I.L. (submitted). CoastSat: a Google Earth Engine-enabled Python toolkit to extract shorelines from publicly available satellite imagery, Environmental Modelling and Software*.
There are three main steps:
- assisted retrieval from Google Earth Engine of all available satellite images spanning the user-defined region of interest and time period
- automated extraction of shorelines from all the selected images using a sub-pixel resolution technique
- intersection of the 2D shorelines with user-defined shore-normal transects
### Description
The shoreline detection algorithm implemented in CoastSat is optimised for sandy beaches.
## 1. Installation
CoastSat requires the following Python packages to run:
```
conda-forge: python=3.6 | matplotlib | scikit-image | scikit-learn | gdal | earthengine-api | oauth2client | spyder | jupyter | simplekml
PyPi: shapely
```
If you are not a regular Python user and are not sure how to install these packages from *conda-forge* and *PyPi*, the section below explains how to install them step-by-step using Anaconda. More experienced Python users can install these packages on their own and go directly to section **1.2 Activate Google Earth Engine Python API**.
### 1.1 Installing the packages (Anaconda)
If Anaconda is not already installed on your PC, it can be freely downloaded at https://www.anaconda.com/download/.
Once you have it installed on your PC, open the Anaconda prompt (in Mac and Linux, open a terminal window) and use the `cd` command (change directory) to go to the folder where you have downloaded this repository.
Create a new environment named `coastsat` with all the required packages:
```
conda env create -f environment.yml -n coastsat
```
All the required packages have now been installed in an environment called `coastsat`. Now, activate the new environment:
```
conda activate coastsat
```
On Linux systems, type `source activate coastsat` instead.
To confirm that you have successfully activated CoastSat, your terminal command line prompt should now start with (coastsat).
**In case errors are raised:** you should create a new environment and manually install the required packages, which are listed in the `environment.yml` file. The following [link](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands) shows how to create and manage an environment with Anaconda.
### 1.2 Activate Google Earth Engine Python API
Go to https://earthengine.google.com and sign up to Google Earth Engine (GEE).
![gee_capture](https://user-images.githubusercontent.com/7217258/49348457-a9271300-f6f9-11e8-8c0b-407383940e94.jpg)
Once you have created a Google Earth Engine account, go back to the Anaconda prompt and link your GEE credentials to the Python API:
```
earthengine authenticate
```
A web browser will open. Login with your GEE credentials, read and accept the terms and conditions, and then copy the authorization code into the Anaconda terminal.
Now you are ready to start using the CoastSat toolbox!
**Note**: remember to always activate the `coastsat` environment with `conda activate coastsat` each time you are preparing to use it.
Your terminal command line prompt should always start with (coastsat) to confirm that it is activated.
## 2. Usage
An example of how to run the software in a Jupyter Notebook is provided in the repository (`example_jupyter.ipynb`). To run this, first activate your `coastsat` environment with `conda activate coastsat` (if not already active), and then type:
@ -104,66 +72,65 @@ An example of how to run the software in a Jupyter Notebook is provided in the r
jupyter notebook
```
A web browser window will open. Point to the directory where you downloaded this repository and click on `example_jupyter.ipynb`.
The following sections guide the reader through the different functionalities of CoastSat with an example at Narrabeen-Collaroy beach (Australia). If you prefer to use **Spyder**, **PyCharm** or other integrated development environments (IDEs), a Python script named `example.py` is also included in the repository.
If using `example.py` on **Spyder**, make sure that the Graphics Backend is set to **Automatic** and not **Inline** (as this mode doesn't allow interacting with the figures). To change this setting, go to Preferences > IPython console > Graphics.
To run a Jupyter Notebook, place your cursor inside one of the code sections and then click on the `run` button up in the top menu to run that section and progress forward (as shown in the animation below).
![example_jupyter](https://user-images.githubusercontent.com/7217258/49705486-8dc88480-fc72-11e8-8300-c342baaf54eb.gif)
### 2.1 Retrieval of the satellite images
To retrieve from the GEE server the available satellite images cropped around the user-defined region of coastline for the time period of interest, the following variables are required:
- `polygon`: the coordinates of the region of interest (longitude/latitude pairs in WGS84)
- `dates`: dates over which the images will be retrieved (e.g., `dates = ['2017-12-01', '2018-01-01']`)
- `sat_list`: satellite missions to consider (e.g., `sat_list = ['L5', 'L7', 'L8', 'S2']` for Landsat 5, 7, 8 and Sentinel-2 collections)
- `sitename`: name of the site (this is the name of the subfolder where the images and other accompanying files will be stored)
- `filepath`: filepath to the directory where the data will be stored
The call `metadata = SDS_download.retrieve_images(inputs)` will launch the retrieval of the images and store them as .TIF files (under *filepath/sitename*). The metadata contains the exact time of acquisition (in UTC) and the geometric accuracy of each downloaded image, and is saved as `metadata_sitename.pkl`. If the images have already been downloaded and the user only wants to run the shoreline detection, the metadata can be loaded directly by running `metadata = SDS_download.get_metadata(inputs)`.
The screenshot below shows an example of inputs that will retrieve all the images of Collaroy-Narrabeen (Australia) acquired by Sentinel-2 in December 2017.
![doc1](https://user-images.githubusercontent.com/7217258/56278746-20f65700-614a-11e9-8715-ba5b8f938063.PNG)
It is now time to map the sandy shorelines!
**Note:** The area of the polygon should not exceed 100 km2, so for very long beaches split it into multiple smaller polygons.
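For reference, an input configuration along these lines would reproduce the example in the screenshot (the polygon coordinates are illustrative, and the sketch assumes the modules are imported from the `coastsat` package as in the source files below):
```
import os
from coastsat import SDS_download

# region of interest: 5 lon/lat pairs in WGS84, first point repeated at the end
polygon = [[[151.301, -33.700], [151.311, -33.702], [151.307, -33.740],
            [151.294, -33.736], [151.301, -33.700]]]
dates = ['2017-12-01', '2018-01-01']          # time period of interest
sat_list = ['S2']                             # satellite missions to consider
sitename = 'NARRA'                            # name of the output subfolder
filepath = os.path.join(os.getcwd(), 'data')  # directory where the data will be stored
inputs = {'polygon': polygon, 'dates': dates, 'sat_list': sat_list,
          'sitename': sitename, 'filepath': filepath}
metadata = SDS_download.retrieve_images(inputs)
```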
### 2.2 Shoreline detection
To map the shorelines, the following user-defined settings are needed:
- `cloud_thresh`: threshold on maximum cloud cover that is acceptable on the images (value between 0 and 1 - this may require some initial experimentation).
- `output_epsg`: epsg code defining the spatial reference system of the shoreline coordinates. It has to be a cartesian coordinate system (i.e. projected) and not a geographical coordinate system (in latitude and longitude angles). See http://spatialreference.org/ to find the EPSG number corresponding to your local coordinate system.
- `check_detection`: if set to `True` the user can quality control each shoreline detection interactively.
- `save_figure`: if set to `True` a figure of each mapped shoreline is saved (under *filepath/sitename/jpg_files/detection*). Note that this may slow down the process.
The setting `check_detection` is recommended when using the tool for the first time, as it will show the user how CoastSat maps the shorelines.
There are additional parameters (`min_beach_area`, `buffer_size`, `min_length_sl`, `cloud_mask_issue` and `dark_sand`) that can be tuned to optimise the shoreline detection (for advanced users only). For the moment, leave these parameters at their default values; we will see later how they can be modified.
An example of settings is provided here:
![doc2](https://user-images.githubusercontent.com/7217258/56278918-7a5e8600-614a-11e9-9184-77b69427b834.PNG)
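Spelled out, the settings shown in the screenshot look roughly like this (the values are illustrative, not recommendations):
```
settings = {
    # general parameters
    'cloud_thresh': 0.5,      # threshold on maximum cloud cover (between 0 and 1)
    'output_epsg': 28356,     # EPSG code of a projected coordinate system (example value)
    # quality control
    'check_detection': True,  # interactively accept/reject each mapped shoreline
    'save_figure': True,      # save a figure of each mapped shoreline
    # inputs defined in section 2.1
    'inputs': inputs,
}
```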
Once all the settings have been defined, the batch shoreline detection can be launched by calling:
```
output = SDS_shoreline.extract_shorelines(metadata, settings)
```
When `check_detection` is set to `True`, a figure like the one below appears, asking the user to manually accept or reject each detection by pressing the `right arrow` (⇨) to `keep` the shoreline or the `left arrow` (⇦) to `skip` it. The user can break the loop at any time by pressing `escape` (nothing will be saved, though).
![Alt text](https://github.com/kvos/CoastSat/blob/development/examples/doc/batch_detection.gif)
Once all the shorelines have been mapped, the output is available in two different formats (saved under *filepath/sitename*):
- `sitename_output.pkl`: contains a list with the shoreline coordinates, the exact timestamp at which the image was captured (UTC time), the geometric accuracy and the cloud cover of each individual image. This list can be manipulated with Python; a snippet of code to plot the results is provided in the example script.
- `sitename_output.geojson`: this output can be visualised in a GIS software (e.g., QGIS, ArcGIS).
The figure below shows how the satellite-derived shorelines can be opened in a GIS software (QGIS) using the `.geojson` output. Note that the coordinates in the `.geojson` file are in the spatial reference system defined by the `output_epsg`.
![gis_output](https://user-images.githubusercontent.com/7217258/49361401-15bd0480-f730-11e8-88a8-a127f87ca64a.jpeg)
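Since the `.pkl` output is written with `pickle` (like the metadata file), it can be loaded back for further analysis with a sketch along these lines:
```
import os
import pickle

with open(os.path.join(filepath, sitename, sitename + '_output.pkl'), 'rb') as f:
    output = pickle.load(f)
```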
#### Reference shoreline
There is also an option to manually digitize a reference shoreline before running the batch shoreline detection on all the images. This reference shoreline helps to reject outliers and false detections when mapping shorelines as it only considers as valid shorelines the points that are within a distance from this reference shoreline.
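A sketch of this step, using the `get_reference_sl` function defined in `SDS_preprocess` (the `reference_shoreline` settings key is an assumption of this sketch):
```
settings['reference_shoreline'] = SDS_preprocess.get_reference_sl(metadata, settings)
```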
This function allows the user to click points along the shoreline on one of the images (as shown in the animation below).
![ref_shoreline](https://user-images.githubusercontent.com/7217258/49710753-94b1c000-fc8f-11e8-9b6c-b5e96aadc5c9.gif)
The maximum distance (in metres) allowed from the reference shoreline is defined by the parameter `max_dist_ref`, which is set to a default value of 100 m. If you think that a 100 m buffer around the reference shoreline will not capture the shoreline variability at your site, increase the value of this parameter. This may be the case for large nourishments or eroding/accreting coastlines.
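For example, on a rapidly eroding or nourished coastline the buffer could be widened (the value below is illustrative):
```
settings['max_dist_ref'] = 250  # in metres, default is 100
```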
#### Advanced shoreline detection parameters
As mentioned above, there are some additional parameters that can be modified to optimise the shoreline detection:
- `min_beach_area`: minimum allowable object area (in metres^2) for the class 'sand'. During the image classification, some features (for example, building roofs) may be incorrectly labelled as sand. To correct this, all the objects classified as sand containing less than a certain number of connected pixels are removed from the sand class. The default value is 4500 m^2, which corresponds to 20 connected pixels of 15 m^2. If you are looking at a very small beach (<20 connected pixels on the images), try decreasing the value of this parameter.
- `buffer_size`: radius (in metres) that defines the buffer around sandy pixels that is considered to calculate the sand/water threshold. The default value of `buffer_size` is 150 m. This parameter should be increased if you have a very wide (>150 m) surf zone or inter-tidal zone.
- `min_length_sl`: minimum length (in metres) of shoreline perimeter to be valid. This can be used to discard small features that are detected but do not correspond to the actual shoreline. The default value is 200 m. If the shoreline that you are trying to map is shorter than 200 m, decrease the value of this parameter.
- `cloud_mask_issue`: the cloud mask algorithm applied to Landsat images by USGS, namely CFMASK, does have difficulties sometimes with very bright features such as beaches or white-water in the ocean. This may result in pixels corresponding to a beach being identified as clouds and appear as masked pixels on your images. If this issue seems to be present in a large proportion of images from your local beach, you can switch this parameter to `True` and CoastSat will remove from the cloud mask the pixels that form very thin linear features, as often these are beaches and not clouds. Only activate this parameter if you observe this very specific cloud mask issue, otherwise leave to the default value of `False`.
- `dark_sand`: if your beach has dark sand (grey/black sand beaches), you can set this parameter to `True` and the classifier will be able to pick up the dark sand. At this stage this option is only available for Landsat images (soon for Sentinel-2 as well).
### 2.3 Shoreline change analysis
This section shows how to obtain time-series of shoreline change along shore-normal transects. Each transect is defined by two points, its origin and a second point that defines its length and orientation. There are 3 options to define the coordinates of the transects:
1. Interactively draw shore-normal transects along the mapped shorelines:
```
transects = SDS_transects.draw_transects(output, settings)
```
2. Load the transect coordinates from a .geojson file:
```
transects = SDS_tools.transects_from_geojson(path_to_geojson_file)
```
3. Create the transects by manually providing the coordinates of two points (the coordinates below are illustrative and must be given in the `output_epsg` coordinate system):
```
transects = dict([])
transects['Transect 1'] = np.array([[342836, 6269215], [343315, 6269071]])
```
Once the shore-normal transects have been defined, the intersection between the 2D shorelines and the transects is computed with:
```
settings['along_dist'] = 25
cross_distance = SDS_transects.compute_intersection(output, transects, settings)
```
The parameter `along_dist` defines the along-shore distance around the transect over which shoreline points are selected to compute the intersection. The default value is 25 m, which means that the intersection is computed as the median of the points located within 25 m of the transect (50 m alongshore-median). This helps to smooth out localised water levels in the swash zone.
An example is shown in the animation below:
![transects](https://user-images.githubusercontent.com/7217258/49990925-8b985a00-ffd3-11e8-8c54-57e4bf8082dd.gif)
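The resulting `cross_distance` dictionary holds one time-series per transect, which can be plotted with a sketch along these lines (the `output['dates']` key is an assumption here):
```
import matplotlib.pyplot as plt

for key in cross_distance.keys():
    # one time-series of shoreline position per transect
    plt.plot(output['dates'], cross_distance[key], '-o', ms=4, label=key)
plt.ylabel('cross-shore distance [m]')
plt.legend()
plt.show()
```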
## Issues and Contributions
Having a problem or looking to contribute to the code? Please see the [Issues page](https://github.com/kvos/coastsat/issues).
If you like the repo put a star on it!

import ee
from urllib.request import urlretrieve
import zipfile
import copy
from coastsat import gdal_merge
# additional modules
from datetime import datetime
import pickle
import skimage.morphology as morphology
# own modules
from coastsat import SDS_preprocess, SDS_tools
np.seterr(all='ignore') # raise/ignore divisions by 0 and nans
def retrieve_images(inputs):
'sitename': str
String containing the name of the site
'polygon': list
polygon containing the lon/lat coordinates to be extracted,
longitudes in the first column and latitudes in the second column,
there are 5 pairs of lat/lon with the fifth point equal to the first point.
e.g. [[[151.3, -33.7],[151.4, -33.7],[151.4, -33.8],[151.3, -33.8],
[151.3, -33.7]]]
'dates': list of str
list that contains 2 strings with the initial and final dates in format 'yyyy-mm-dd'
e.g. ['1987-01-01', '2018-01-01']
'sat_list': list of str
list that contains the names of the satellite missions to include
e.g. ['L5', 'L7', 'L8', 'S2']
'filepath': str
Filepath to the directory where the images are downloaded
Returns:
-----------
metadata: dict
contains the information about the satellite images that were downloaded: filename,
georeferencing accuracy and image coordinate reference system
"""
@ -98,18 +104,19 @@ def retrieve_images(inputs):
polygon = inputs['polygon']
dates = inputs['dates']
sat_list= inputs['sat_list']
filepath_data = inputs['filepath']
# format in which the images are downloaded
suffix = '.tif'
# initialize metadata dictionary (stores information about each image)
metadata = dict([])
# create a new directory for this site
if not os.path.exists(os.path.join(filepath_data,sitename)):
os.makedirs(os.path.join(filepath_data,sitename))
print('Downloading images:')
#=============================================================================================#
# download L5 images
@ -119,11 +126,12 @@ def retrieve_images(inputs):
satname = 'L5'
# create a subfolder to store L5 images
filepath = os.path.join(filepath_data, sitename, satname, '30m')
filepath_meta = os.path.join(filepath_data, sitename, satname, 'meta')
if not os.path.exists(filepath):
os.makedirs(filepath)
if not os.path.exists(filepath_meta):
os.makedirs(filepath_meta)
# Landsat 5 collection
input_col = ee.ImageCollection('LANDSAT/LT05/C01/T1_TOA')
@ -135,12 +143,12 @@ def retrieve_images(inputs):
cloud_cover = [_['properties']['CLOUD_COVER'] for _ in im_all]
if np.any([_ > 95 for _ in cloud_cover]):
idx_delete = np.where([_ > 95 for _ in cloud_cover])[0]
im_col = [x for k,x in enumerate(im_all) if k not in idx_delete]
else:
im_col = im_all
n_img = len(im_col)
# print how many images there are
print('%s: %d images'%(satname,n_img))
# loop through images
timestamps = []
@ -151,11 +159,11 @@ def retrieve_images(inputs):
for i in range(n_img):
# find each image in ee database
im = ee.Image(im_col[i]['id'])
# read metadata
im_dic = im_col[i]
# get bands
im_bands = im_dic['bands']
# get time of acquisition (UNIX time)
t = im_dic['properties']['system:time_start']
# convert to datetime
@ -165,11 +173,10 @@ def retrieve_images(inputs):
# get EPSG code of reference system
im_epsg.append(int(im_dic['bands'][0]['crs'][5:]))
# get geometric accuracy
if 'GEOMETRIC_RMSE_MODEL' in im_dic['properties'].keys():
acc_georef.append(im_dic['properties']['GEOMETRIC_RMSE_MODEL'])
else:
acc_georef.append(12) # default value of accuracy (RMSE = 12m)
# delete dimensions key from dictionary, otherwise the entire image is extracted
for j in range(len(im_bands)): del im_bands[j]['dimensions']
# bands for L5
@ -189,10 +196,18 @@ def retrieve_images(inputs):
except:
os.remove(os.path.join(filepath, filename))
os.rename(local_data, os.path.join(filepath, filename))
# write metadata in .txt file
filename_txt = filename.replace('.tif','')
metadict = {'filename':filename,'acc_georef':acc_georef[i],
'epsg':im_epsg[i]}
with open(os.path.join(filepath_meta,filename_txt + '.txt'), 'w') as f:
for key in metadict.keys():
f.write('%s\t%s\n'%(key,metadict[key]))
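# the resulting .txt file is tab-separated, e.g. (illustrative values):
#   filename      2018-01-01-10-30-15_L5_SITENAME.tif
#   acc_georef    7.5
#   epsg          32656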
print('\r%d%%' % (int(((i+1)/n_img)*100)), end='')
print('')
# sort metadata (downloaded images are sorted by date in directory)
timestamps_sorted = sorted(timestamps)
idx_sorted = sorted(range(len(timestamps)), key=timestamps.__getitem__)
acc_georef_sorted = [acc_georef[j] for j in idx_sorted]
# save into dict
metadata[satname] = {'dates':timestamps_sorted, 'acc_georef':acc_georef_sorted,
'epsg':im_epsg_sorted, 'filenames':filenames_sorted}
#=============================================================================================#
# download L7 images
satname = 'L7'
# create subfolders (one for 30m multispectral bands and one for 15m pan bands)
filepath = os.path.join(filepath_data, sitename, 'L7')
filepath_pan = os.path.join(filepath, 'pan')
filepath_ms = os.path.join(filepath, 'ms')
filepath_meta = os.path.join(filepath, 'meta')
if not os.path.exists(filepath_pan):
os.makedirs(filepath_pan)
if not os.path.exists(filepath_ms):
os.makedirs(filepath_ms)
if not os.path.exists(filepath_meta):
os.makedirs(filepath_meta)
# landsat 7 collection
input_col = ee.ImageCollection('LANDSAT/LE07/C01/T1_RT_TOA')
cloud_cover = [_['properties']['CLOUD_COVER'] for _ in im_all]
if np.any([_ > 95 for _ in cloud_cover]):
idx_delete = np.where([_ > 95 for _ in cloud_cover])[0]
im_col = [x for k,x in enumerate(im_all) if k not in idx_delete]
else:
im_col = im_all
n_img = len(im_col)
# print how many images there are
print('%s: %d images'%(satname,n_img))
# loop through images
timestamps = []
for i in range(n_img):
# find each image in ee database
im = ee.Image(im_col[i]['id'])
# read metadata
im_dic = im_col[i]
# get bands
im_bands = im_dic['bands']
# get time of acquisition (UNIX time)
t = im_dic['properties']['system:time_start']
# convert to datetime
# get EPSG code of reference system
im_epsg.append(int(im_dic['bands'][0]['crs'][5:]))
# get geometric accuracy
if 'GEOMETRIC_RMSE_MODEL' in im_dic['properties'].keys():
acc_georef.append(im_dic['properties']['GEOMETRIC_RMSE_MODEL'])
else:
acc_georef.append(12) # default value of accuracy (RMSE = 12m)
# delete dimensions key from dictionary, otherwise the entire image is extracted
for j in range(len(im_bands)): del im_bands[j]['dimensions']
# bands for L7
except:
os.remove(os.path.join(filepath_ms, filename_ms))
os.rename(local_data_ms, os.path.join(filepath_ms, filename_ms))
# write metadata in .txt file
filename_txt = filename_pan.replace('_pan','').replace('.tif','')
metadict = {'filename':filename_pan,'acc_georef':acc_georef[i],
'epsg':im_epsg[i]}
with open(os.path.join(filepath_meta,filename_txt + '.txt'), 'w') as f:
for key in metadict.keys():
f.write('%s\t%s\n'%(key,metadict[key]))
print('\r%d%%' % (int(((i+1)/n_img)*100)), end='')
print('')
# sort metadata (downloaded images are sorted by date in directory)
timestamps_sorted = sorted(timestamps)
idx_sorted = sorted(range(len(timestamps)), key=timestamps.__getitem__)
acc_georef_sorted = [acc_georef[j] for j in idx_sorted]
# save into dict
metadata[satname] = {'dates':timestamps_sorted, 'acc_georef':acc_georef_sorted,
'epsg':im_epsg_sorted, 'filenames':filenames_sorted}
#=============================================================================================#
# download L8 images
satname = 'L8'
# create subfolders (one for 30m multispectral bands and one for 15m pan bands)
filepath = os.path.join(filepath_data, sitename, 'L8')
filepath_pan = os.path.join(filepath, 'pan')
filepath_ms = os.path.join(filepath, 'ms')
filepath_meta = os.path.join(filepath, 'meta')
if not os.path.exists(filepath_pan):
os.makedirs(filepath_pan)
if not os.path.exists(filepath_ms):
os.makedirs(filepath_ms)
if not os.path.exists(filepath_meta):
os.makedirs(filepath_meta)
# landsat 8 collection
input_col = ee.ImageCollection('LANDSAT/LC08/C01/T1_RT_TOA')
cloud_cover = [_['properties']['CLOUD_COVER'] for _ in im_all]
if np.any([_ > 95 for _ in cloud_cover]):
idx_delete = np.where([_ > 95 for _ in cloud_cover])[0]
im_col = [x for k,x in enumerate(im_all) if k not in idx_delete]
else:
im_col = im_all
n_img = len(im_col)
# print how many images there are
print('%s: %d images'%(satname,n_img))
# loop through images
timestamps = []
for i in range(n_img):
# find each image in ee database
im = ee.Image(im_col[i]['id'])
# read metadata
im_dic = im_col[i]
# get bands
im_bands = im_dic['bands']
# get time of acquisition (UNIX time)
t = im_dic['properties']['system:time_start']
# convert to datetime
# get EPSG code of reference system
im_epsg.append(int(im_dic['bands'][0]['crs'][5:]))
# get geometric accuracy
if 'GEOMETRIC_RMSE_MODEL' in im_dic['properties'].keys():
acc_georef.append(im_dic['properties']['GEOMETRIC_RMSE_MODEL'])
else:
acc_georef.append(12) # default value of accuracy (RMSE = 12m)
# delete dimensions key from dictionary, otherwise the entire image is extracted
for j in range(len(im_bands)): del im_bands[j]['dimensions']
# bands for L8
except:
os.remove(os.path.join(filepath_ms, filename_ms))
os.rename(local_data_ms, os.path.join(filepath_ms, filename_ms))
# write metadata in .txt file
filename_txt = filename_pan.replace('_pan','').replace('.tif','')
metadict = {'filename':filename_pan,'acc_georef':acc_georef[i],
'epsg':im_epsg[i]}
with open(os.path.join(filepath_meta,filename_txt + '.txt'), 'w') as f:
for key in metadict.keys():
f.write('%s\t%s\n'%(key,metadict[key]))
print('\r%d%%' % (int(((i+1)/n_img)*100)), end='')
print('')
# sort metadata (downloaded images are sorted by date in directory)
timestamps_sorted = sorted(timestamps)
idx_sorted = sorted(range(len(timestamps)), key=timestamps.__getitem__)
acc_georef_sorted = [acc_georef[j] for j in idx_sorted]
metadata[satname] = {'dates':timestamps_sorted, 'acc_georef':acc_georef_sorted,
'epsg':im_epsg_sorted, 'filenames':filenames_sorted}
#=============================================================================================#
# download S2 images
satname = 'S2'
# create subfolders for the 10m, 20m and 60m multispectral bands
filepath = os.path.join(filepath_data, sitename, 'S2')
if not os.path.exists(os.path.join(filepath, '10m')):
os.makedirs(os.path.join(filepath, '10m'))
if not os.path.exists(os.path.join(filepath, '20m')):
os.makedirs(os.path.join(filepath, '20m'))
if not os.path.exists(os.path.join(filepath, '60m')):
os.makedirs(os.path.join(filepath, '60m'))
filepath_meta = os.path.join(filepath, 'meta')
if not os.path.exists(filepath_meta):
os.makedirs(filepath_meta)
# Sentinel2 collection
input_col = ee.ImageCollection('COPERNICUS/S2')
cloud_cover = [_['properties']['CLOUDY_PIXEL_PERCENTAGE'] for _ in im_all_updated]
if np.any([_ > 95 for _ in cloud_cover]):
idx_delete = np.where([_ > 95 for _ in cloud_cover])[0]
im_col = [x for k,x in enumerate(im_all_updated) if k not in idx_delete]
else:
im_col = im_all_updated
n_img = len(im_col)
# print how many images there are
print('%s: %d images'%(satname,n_img))
# loop through images
timestamps = []
for i in range(n_img):
# find each image in ee database
im = ee.Image(im_col[i]['id'])
# read metadata
im_dic = im_col[i]
# get bands
im_bands = im_dic['bands']
# get time of acquisition (UNIX time)
t = im_dic['properties']['system:time_start']
# convert to datetime
acc_georef.append(-1)
else:
acc_georef.append(-1)
# write metadata in .txt file
filename_txt = filename10.replace('_10m','').replace('.tif','')
metadict = {'filename':filename10,'acc_georef':acc_georef[i],
'epsg':im_epsg[i]}
with open(os.path.join(filepath_meta,filename_txt + '.txt'), 'w') as f:
for key in metadict.keys():
f.write('%s\t%s\n'%(key,metadict[key]))
print('\r%d%%' % (int(((i+1)/n_img)*100)), end='')
print('')
# sort metadata (downloaded images are sorted by date in directory)
timestamps_sorted = sorted(timestamps)
idx_sorted = sorted(range(len(timestamps)), key=timestamps.__getitem__)
acc_georef_sorted = [acc_georef[j] for j in idx_sorted]
metadata[satname] = {'dates':timestamps_sorted, 'acc_georef':acc_georef_sorted,
'epsg':im_epsg_sorted, 'filenames':filenames_sorted}
# merge overlapping images (necessary only if the polygon is at the boundary of an image)
if 'S2' in metadata.keys():
metadata = merge_overlapping_images(metadata,inputs)
# save metadata dict
filepath = os.path.join(filepath_data, sitename)
with open(os.path.join(filepath, sitename + '_metadata' + '.pkl'), 'wb') as f:
pickle.dump(metadata, f)
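# metadata structure (illustrative):
# metadata['S2'] = {'dates': [...], 'acc_georef': [...], 'epsg': [...], 'filenames': [...]}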
def merge_overlapping_images(metadata,inputs):
'sitename': str
String containing the name of the site
'polygon': list
polygon containing the lon/lat coordinates to be extracted,
longitudes in the first column and latitudes in the second column,
there are 5 pairs of lat/lon with the fifth point equal to the first point.
e.g. [[[151.3, -33.7],[151.4, -33.7],[151.4, -33.8],[151.3, -33.8],
[151.3, -33.7]]]
'dates': list of str
list that contains 2 strings with the initial and final dates in format 'yyyy-mm-dd'
e.g. ['1987-01-01', '2018-01-01']
'sat_list': list of str
list that contains the names of the satellite missions to include
e.g. ['L5', 'L7', 'L8', 'S2']
'filepath': str
Filepath to the directory where the images are downloaded
Returns:
-----------
metadata_updated: dict
updated metadata with the information of the merged images
"""
# only for Sentinel-2 at this stage (not sure if this is needed for Landsat images)
sat = 'S2'
filepath = os.path.join(inputs['filepath'], inputs['sitename'])
# find the images that are overlapping (same date in S2 filenames)
filenames = metadata[sat]['filenames']
# loop through all the filenames and find the pairs of overlapping images (same date and time of acquisition)
pairs = []
for i,fn in enumerate(filenames):
else:
pairs.append([i,idx_dup])
# for each pair of images, merge them into one complete image
for i,pair in enumerate(pairs):
fn_im = []
for index in range(len(pair)):
# read image
fn_im.append([os.path.join(filepath, 'S2', '10m', filenames[pair[index]]),
os.path.join(filepath, 'S2', '20m', filenames[pair[index]].replace('10m','20m')),
os.path.join(filepath, 'S2', '60m', filenames[pair[index]].replace('10m','60m')),
os.path.join(filepath, 'S2', 'meta', filenames[pair[index]].replace('_10m','').replace('.tif','.txt'))])
im_ms, georef, cloud_mask, im_extra, im_QA, im_nodata = SDS_preprocess.preprocess_single(fn_im[index], sat, False)
# in Sentinel2 images close to the edge of the image there are some artefacts,
# that are squares with constant pixel intensities. They need to be masked in the
os.remove(fn_im[1][2])
os.rename(fn_merged, fn_im[0][2])
# remove the metadata .txt file of the duplicate image
os.chmod(fn_im[1][3], 0o777)
os.remove(fn_im[1][3])
print('%d pairs of overlapping Sentinel-2 images were merged' % len(pairs))
# update the metadata dict (delete all the duplicates)
metadata_updated = copy.deepcopy(metadata)
filenames_copy = metadata_updated[sat]['filenames']
index_list = []
for i in range(len(filenames_copy)):
if filenames_copy[i].find('dup') == -1:
index_list.append(i)
for key in metadata_updated[sat].keys():
metadata_updated[sat][key] = [metadata_updated[sat][key][_] for _ in index_list]
return metadata_updated
def get_metadata(inputs):
"""
Gets the metadata from the downloaded .txt files in the \meta folders.
KV WRL 2018
Arguments:
-----------
inputs: dict
dictionnary that contains the following fields:
'sitename': str
String containing the name of the site
'polygon': list
polygon containing the lon/lat coordinates to be extracted
longitudes in the first column and latitudes in the second column
'dates': list of str
list that contains 2 strings with the initial and final dates in format 'yyyy-mm-dd'
e.g. ['1987-01-01', '2018-01-01']
'sat_list': list of str
list that contains the names of the satellite missions to include
e.g. ['L5', 'L7', 'L8', 'S2']
'filepath': str
Filepath to the directory where the images are downloaded
Returns:
-----------
metadata: dict
contains the information about the satellite images that were downloaded: filename,
georeferencing accuracy and image coordinate reference system
"""
# directory containing the images
filepath = os.path.join(inputs['filepath'],inputs['sitename'])
# initialize metadata dict
metadata = dict([])
# loop through the satellite missions
for satname in ['L5','L7','L8','S2']:
# if a folder has been created for the given satellite mission
if satname in os.listdir(filepath):
# update the metadata dict
metadata[satname] = {'filenames':[], 'acc_georef':[], 'epsg':[], 'dates':[]}
# directory where the metadata .txt files are stored
filepath_meta = os.path.join(filepath, satname, 'meta')
# get the list of filenames and sort it chronologically
filenames_meta = os.listdir(filepath_meta)
filenames_meta.sort()
# loop through the .txt files
for im_meta in filenames_meta:
# read them and extract the metadata info: filename, georeferencing accuracy
# epsg code and date
with open(os.path.join(filepath_meta, im_meta), 'r') as f:
filename = f.readline().split('\t')[1].replace('\n','')
acc_georef = float(f.readline().split('\t')[1].replace('\n',''))
epsg = int(f.readline().split('\t')[1].replace('\n',''))
date_str = filename[0:19]
date = pytz.utc.localize(datetime(int(date_str[:4]),int(date_str[5:7]),
int(date_str[8:10]),int(date_str[11:13]),
int(date_str[14:16]),int(date_str[17:19])))
# store the information in the metadata dict
metadata[satname]['filenames'].append(filename)
metadata[satname]['acc_georef'].append(acc_georef)
metadata[satname]['epsg'].append(epsg)
metadata[satname]['dates'].append(date)
# save a .pkl file containing the metadata dict
with open(os.path.join(filepath, inputs['sitename'] + '_metadata' + '.pkl'), 'wb') as f:
pickle.dump(metadata, f)
return metadata

import sklearn.decomposition as decomposition
import skimage.exposure as exposure
# other modules
from osgeo import gdal
from pylab import ginput
import pickle
import matplotlib.path as mpltPath
import geopandas as gpd
from shapely import geometry
# own modules
from coastsat import SDS_tools
np.seterr(all='ignore') # raise/ignore divisions by 0 and nans
def create_cloud_mask(im_QA, satname, cloud_mask_issue):
"""
Creates a cloud mask using the information contained in the QA band.
Arguments:
-----------
im_QA: np.array
Image containing the QA band
satname: string
short name for the satellite (L5, L7, L8 or S2)
cloud_values = [1024, 2048] # 1024 = dense cloud, 2048 = cirrus clouds
# find which pixels have bits corresponding to cloud values
cloud_mask = np.isin(im_QA, cloud_values)
# remove cloud pixels that form very thin features. These are beach or swash pixels that are
# erroneously identified as clouds by the CFMASK algorithm applied to the images by the USGS.
if sum(sum(cloud_mask)) > 0 and sum(sum(~cloud_mask)) > 0:
morphology.remove_small_objects(cloud_mask, min_size=10, connectivity=1, in_place=True)
if cloud_mask_issue:
elem = morphology.square(3) # use a square of width 3 pixels
cloud_mask = morphology.binary_opening(cloud_mask,elem) # perform image opening
def preprocess_single(fn, satname, cloud_mask_issue):
im_extra : np.array
2D array containing the 20m resolution SWIR band for Sentinel-2 and the 15m resolution
panchromatic band for Landsat 7 and Landsat 8. This field is empty for Landsat 5.
im_QA: np.array
2D array containing the QA band, from which the cloud_mask can be computed.
im_nodata: np.array
2D array with True where no data values (-inf) are located
"""
ncols = im_ms.shape[1]*2
# create cloud mask
im_QA = im_ms[:,:,5]
im_ms = im_ms[:,:,:-1]
cloud_mask = create_cloud_mask(im_QA, satname, cloud_mask_issue)
# resize the image using bilinear interpolation (order 1)
im_ms = transform.resize(im_ms,(nrows, ncols), order=1, preserve_range=True,
georef[3] = georef[3] - 7.5
# check if -inf or nan values on any band and add to cloud mask
im_nodata = np.zeros(cloud_mask.shape).astype(bool)
for k in range(im_ms.shape[2]):
im_inf = np.isin(im_ms[:,:,k], -np.inf)
im_nan = np.isnan(im_ms[:,:,k])
cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
im_nodata = np.logical_or(im_nodata, im_inf)
# no extra image for Landsat 5 (they are all 30 m bands)
im_extra = []
#=============================================================================================#
# L7 images
im_ms = np.stack(bands, 2)
# create cloud mask
im_QA = im_ms[:,:,5]
cloud_mask = create_cloud_mask(im_QA, satname, cloud_mask_issue)
# resize the image using bilinear interpolation (order 1)
im_ms = im_ms[:,:,:5]
cloud_mask = transform.resize(cloud_mask, (nrows, ncols), order=0, preserve_range=True,
mode='constant').astype('bool_')
# check if -inf or nan values on any band and eventually add those pixels to cloud mask
im_nodata = np.zeros(cloud_mask.shape).astype(bool)
for k in range(im_ms.shape[2]+1):
if k == 5:
im_inf = np.isin(im_pan, -np.inf)
im_inf = np.isin(im_ms[:,:,k], -np.inf)
im_nan = np.isnan(im_ms[:,:,k])
cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
im_nodata = np.logical_or(im_nodata, im_inf)
# pansharpen Green, Red, NIR (where there is overlapping with pan band in L7)
try:
im_ms = im_ms_ps.copy()
# the extra image is the 15m panchromatic band
im_extra = im_pan
#=============================================================================================#
# L8 images
im_ms = np.stack(bands, 2)
# create cloud mask
im_QA = im_ms[:,:,5]
cloud_mask = create_cloud_mask(im_QA, satname, cloud_mask_issue)
# resize the image using bilinear interpolation (order 1)
im_ms = im_ms[:,:,:5]
cloud_mask = transform.resize(cloud_mask, (nrows, ncols), order=0, preserve_range=True,
mode='constant').astype('bool_')
# check if -inf or nan values on any band and eventually add those pixels to cloud mask
im_nodata = np.zeros(cloud_mask.shape).astype(bool)
for k in range(im_ms.shape[2]+1):
if k == 5:
im_inf = np.isin(im_pan, -np.inf)
im_inf = np.isin(im_ms[:,:,k], -np.inf)
im_nan = np.isnan(im_ms[:,:,k])
cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
im_nodata = np.logical_or(im_nodata, im_inf)
# pansharpen Blue, Green, Red (where there is overlapping with pan band in L8)
try:
im_ms = im_ms_ps.copy()
# the extra image is the 15m panchromatic band
im_extra = im_pan
#=============================================================================================#
# S2 images
georef = []
# skip the image by giving it a full cloud_mask
cloud_mask = np.ones((im10.shape[0],im10.shape[1])).astype('bool')
return im_ms, georef, cloud_mask, [], [], []
# size of 10m bands
nrows = im10.shape[0]
data = gdal.Open(fn60, gdal.GA_ReadOnly)
bands = [data.GetRasterBand(k + 1).ReadAsArray() for k in range(data.RasterCount)]
im60 = np.stack(bands, 2)
im_QA = im60[:,:,0]
cloud_mask = create_cloud_mask(im_QA, satname, cloud_mask_issue)
# resize the cloud mask using nearest neighbour interpolation (order 0)
cloud_mask = transform.resize(cloud_mask,(nrows, ncols), order=0, preserve_range=True,
mode='constant')
# check if -inf or nan values on any band and add to cloud mask
im_nodata = np.zeros(cloud_mask.shape).astype(bool)
for k in range(im_ms.shape[2]):
im_inf = np.isin(im_ms[:,:,k], -np.inf)
im_nan = np.isnan(im_ms[:,:,k])
cloud_mask = np.logical_or(np.logical_or(cloud_mask, im_inf), im_nan)
im_nodata = np.logical_or(im_nodata, im_inf)
# the extra image is the 20m SWIR band
im_extra = im20
return im_ms, georef, cloud_mask, im_extra, imQA
return im_ms, georef, cloud_mask, im_extra, im_QA, im_nodata
def create_jpg(im_ms, cloud_mask, date, satname, filepath):
"""
Saves a .jpg file with the RGB image as well as the NIR and SWIR1 grayscale images.
This function can be modified to obtain different visualisations of the multispectral images.
KV WRL 2018
@ -509,10 +508,10 @@ def create_jpg(im_ms, cloud_mask, date, satname, filepath):
# rescale image intensity for display purposes
im_RGB = rescale_image_intensity(im_ms[:,:,[2,1,0]], cloud_mask, 99.9)
im_NIR = rescale_image_intensity(im_ms[:,:,3], cloud_mask, 99.9)
im_SWIR = rescale_image_intensity(im_ms[:,:,4], cloud_mask, 99.9)
# im_NIR = rescale_image_intensity(im_ms[:,:,3], cloud_mask, 99.9)
# im_SWIR = rescale_image_intensity(im_ms[:,:,4], cloud_mask, 99.9)
# make figure
# make figure (just RGB)
fig = plt.figure()
fig.set_size_inches([18,9])
fig.set_tight_layout(True)
@ -575,13 +574,12 @@ def save_jpg(metadata, settings):
sitename = settings['inputs']['sitename']
cloud_thresh = settings['cloud_thresh']
filepath_data = settings['inputs']['filepath']
# create subfolder to store the jpg files
filepath_jpg = os.path.join(os.getcwd(), 'data', sitename, 'jpg_files', 'preprocessed')
try:
os.makedirs(filepath_jpg)
except:
print('')
filepath_jpg = os.path.join(filepath_data, sitename, 'jpg_files', 'preprocessed')
if not os.path.exists(filepath_jpg):
os.makedirs(filepath_jpg)
# loop through satellite list
for satname in metadata.keys():
@ -594,7 +592,7 @@ def save_jpg(metadata, settings):
# image filename
fn = SDS_tools.get_filenames(filenames[i],filepath, satname)
# read and preprocess image
im_ms, georef, cloud_mask, im_extra, imQA = preprocess_single(fn, satname, settings['cloud_mask_issue'])
im_ms, georef, cloud_mask, im_extra, im_QA, im_nodata = preprocess_single(fn, satname, settings['cloud_mask_issue'])
# calculate cloud cover
cloud_cover = np.divide(sum(sum(cloud_mask.astype(int))),
(cloud_mask.shape[0]*cloud_mask.shape[1]))
@ -602,14 +600,14 @@ def save_jpg(metadata, settings):
if cloud_cover > cloud_thresh or cloud_cover == 1:
continue
# save .jpg with date and satellite in the title
date = filenames[i][:10]
date = filenames[i][:19]
create_jpg(im_ms, cloud_mask, date, satname, filepath_jpg)
# print the location where the images have been saved
print('Satellite images saved as .jpg in ' + os.path.join(os.getcwd(), 'data', sitename,
print('Satellite images saved as .jpg in ' + os.path.join(filepath_data, sitename,
'jpg_files', 'preprocessed'))
def get_reference_sl_manual(metadata, settings):
def get_reference_sl(metadata, settings):
"""
Allows the user to manually digitize a reference shoreline that is used to seed the shoreline
detection algorithm. The reference shoreline helps to detect the outliers, making the shoreline
@ -638,9 +636,10 @@ def get_reference_sl_manual(metadata, settings):
"""
sitename = settings['inputs']['sitename']
filepath_data = settings['inputs']['filepath']
# check if reference shoreline already exists in the corresponding folder
filepath = os.path.join(os.getcwd(), 'data', sitename)
filepath = os.path.join(filepath_data, sitename)
filename = sitename + '_reference_shoreline.pkl'
if filename in os.listdir(filepath):
print('Reference shoreline already exists and was loaded')
@ -673,21 +672,26 @@ def get_reference_sl_manual(metadata, settings):
# read image
fn = SDS_tools.get_filenames(filenames[i],filepath, satname)
im_ms, georef, cloud_mask, im_extra, imQA = preprocess_single(fn, satname, settings['cloud_mask_issue'])
im_ms, georef, cloud_mask, im_extra, im_QA, im_nodata = preprocess_single(fn, satname, settings['cloud_mask_issue'])
# calculate cloud cover
cloud_cover = np.divide(sum(sum(cloud_mask.astype(int))),
(cloud_mask.shape[0]*cloud_mask.shape[1]))
# skip image if cloud cover is above threshold
if cloud_cover > settings['cloud_thresh']:
continue
# rescale image intensity for display purposes
im_RGB = rescale_image_intensity(im_ms[:,:,[2,1,0]], cloud_mask, 99.9)
# plot the image RGB on a figure
fig = plt.figure()
fig.set_size_inches([18,9])
fig.set_tight_layout(True)
plt.axis('off')
plt.imshow(im_RGB)
# decide if the image is good enough for digitizing the shoreline
plt.title('click <keep> if image is clear enough to digitize the shoreline.\n' +
'If not (too cloudy) click on <skip> to get another image', fontsize=14)
@ -699,13 +703,16 @@ def get_reference_sl_manual(metadata, settings):
bbox=dict(boxstyle="square", ec='k',fc='w'))
mng = plt.get_current_fig_manager()
mng.window.showMaximized()
# let user click on the image once
pt_input = ginput(n=1, timeout=1e9, show_clicks=False)
pt_input = np.array(pt_input)
# if the user clicks next to <skip>, show another image
if pt_input[0][0] > im_ms.shape[1]/2:
plt.close()
continue
else:
# remove keep and skip buttons
keep_button.set_visible(False)
@ -717,8 +724,10 @@ def get_reference_sl_manual(metadata, settings):
end_button = plt.text(1, 0.9, 'end', size=16, ha="right", va="top",
transform=plt.gca().transAxes,
bbox=dict(boxstyle="square", ec='k',fc='w'))
# add multiple reference shorelines (until user clicks on <end> button)
pts_sl = np.expand_dims(np.array([np.nan, np.nan]),axis=0)
geoms = []
while 1:
add_button.set_visible(False)
end_button.set_visible(False)
@ -727,11 +736,13 @@ def get_reference_sl_manual(metadata, settings):
'Start at one end of the beach.\n' + 'When finished digitizing, click <ENTER>',
fontsize=14)
plt.draw()
# let user click on the shoreline
pts = ginput(n=50000, timeout=1e9, show_clicks=True)
pts_pix = np.array(pts)
# convert pixel coordinates to world coordinates
pts_world = SDS_tools.convert_pix2world(pts_pix[:,[1,0]], georef)
# interpolate between points clicked by the user (1m resolution)
pts_world_interp = np.expand_dims(np.array([np.nan, np.nan]),axis=0)
for k in range(len(pts_world)-1):
@ -748,39 +759,65 @@ def get_reference_sl_manual(metadata, settings):
tf = transform.EuclideanTransform(rotation=phi, translation=pts_world[k,:])
pts_world_interp = np.append(pts_world_interp,tf(pt_coords), axis=0)
pts_world_interp = np.delete(pts_world_interp,0,axis=0)
# save as geometry (to create .geojson file later)
geoms.append(geometry.LineString(pts_world_interp))
# convert to pixel coordinates and plot
pts_pix_interp = SDS_tools.convert_world2pix(pts_world_interp, georef)
pts_sl = np.append(pts_sl, pts_world_interp, axis=0)
plt.plot(pts_pix_interp[:,0], pts_pix_interp[:,1], 'r--')
plt.plot(pts_pix_interp[0,0], pts_pix_interp[0,1],'ko')
plt.plot(pts_pix_interp[-1,0], pts_pix_interp[-1,1],'ko')
# update title and buttons
add_button.set_visible(True)
end_button.set_visible(True)
plt.title('click <add> to digitize another shoreline or <end> to finish and save the shoreline(s)',
fontsize=14)
plt.draw()
# let the user click again (<add> another shoreline or <end>)
pt_input = ginput(n=1, timeout=1e9, show_clicks=False)
pt_input = np.array(pt_input)
# if user clicks on <end>, save the points and break the loop
if pt_input[0][0] > im_ms.shape[1]/2:
add_button.set_visible(False)
end_button.set_visible(False)
plt.title('Reference shoreline saved as ' + sitename + '_reference_shoreline.pkl')
plt.title('Reference shoreline saved as ' + sitename + '_reference_shoreline.pkl and ' + sitename + '_reference_shoreline.geojson')
plt.draw()
ginput(n=1, timeout=5, show_clicks=False)
ginput(n=1, timeout=3, show_clicks=False)
plt.close()
break
pts_sl = np.delete(pts_sl,0,axis=0)
# convert world coordinates to user-defined coordinates
pts_sl = np.delete(pts_sl,0,axis=0)
# convert world image coordinates to user-defined coordinate system
image_epsg = metadata[satname]['epsg'][i]
pts_coords = SDS_tools.convert_epsg(pts_sl, image_epsg, settings['output_epsg'])
# save the reference shoreline
filepath = os.path.join(os.getcwd(), 'data', sitename)
# save the reference shoreline as .pkl
filepath = os.path.join(filepath_data, sitename)
with open(os.path.join(filepath, sitename + '_reference_shoreline.pkl'), 'wb') as f:
pickle.dump(pts_coords, f)
# also store as .geojson in case the user wants to drag-and-drop it into a GIS for verification
for k,line in enumerate(geoms):
gdf = gpd.GeoDataFrame(geometry=gpd.GeoSeries(line))
gdf.index = [k]
gdf.loc[k,'name'] = 'reference shoreline ' + str(k+1)
# store into geodataframe
if k == 0:
gdf_all = gdf
else:
gdf_all = gdf_all.append(gdf)
gdf_all.crs = {'init':'epsg:'+str(image_epsg)}
# convert from image_epsg to user-defined coordinate system
gdf_all = gdf_all.to_crs({'init': 'epsg:'+str(settings['output_epsg'])})
# save as geojson
gdf_all.to_file(os.path.join(filepath, sitename + '_reference_shoreline.geojson'),
driver='GeoJSON', encoding='utf-8')
print('Reference shoreline has been saved in ' + filepath)
break

@ -11,9 +11,6 @@ import pdb
# image processing modules
import skimage.filters as filters
import skimage.exposure as exposure
import skimage.transform as transform
import sklearn.decomposition as decomposition
import skimage.measure as measure
import skimage.morphology as morphology
@ -22,56 +19,21 @@ from sklearn.externals import joblib
from shapely.geometry import LineString
# other modules
from osgeo import gdal, ogr, osr
import scipy.interpolate as interpolate
from datetime import datetime, timedelta
import matplotlib.patches as mpatches
import matplotlib.lines as mlines
import matplotlib.cm as cm
from matplotlib import gridspec
from pylab import ginput
import pickle
import simplekml
# own modules
import SDS_tools, SDS_preprocess
from coastsat import SDS_tools, SDS_preprocess
np.seterr(all='ignore') # raise/ignore divisions by 0 and nans
def nd_index(im1, im2, cloud_mask):
"""
Computes normalised difference index on 2 images (2D), given a cloud mask (2D).
KV WRL 2018
Arguments:
-----------
im1, im2: np.array
Images (2D) with which to calculate the ND index
cloud_mask: np.array
2D cloud mask with True where cloud pixels are
Returns:
-----------
im_nd: np.array
Image (2D) containing the ND index
"""
# reshape the cloud mask
vec_mask = cloud_mask.reshape(im1.shape[0] * im1.shape[1])
# initialise with NaNs
vec_nd = np.ones(len(vec_mask)) * np.nan
# reshape the two images
vec1 = im1.reshape(im1.shape[0] * im1.shape[1])
vec2 = im2.reshape(im2.shape[0] * im2.shape[1])
# compute the normalised difference index
temp = np.divide(vec1[~vec_mask] - vec2[~vec_mask],
vec1[~vec_mask] + vec2[~vec_mask])
vec_nd[~vec_mask] = temp
# reshape into image
im_nd = vec_nd.reshape(im1.shape[0], im1.shape[1])
return im_nd
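For reference, a minimal sketch of how `nd_index` is called elsewhere in this module, where band 1 is green and band 4 is SWIR1; the arrays below are made up:
```
import numpy as np

# toy 2x2 multispectral stack with 5 bands
im_ms = np.random.rand(2, 2, 5)
cloud_mask = np.zeros((2, 2), dtype=bool)   # no cloudy pixels in this toy case
# MNDWI = (SWIR1 - G) / (SWIR1 + G); cloudy pixels would come back as NaN
im_mndwi = nd_index(im_ms[:, :, 4], im_ms[:, :, 1], cloud_mask)
```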
###################################################################################################
# IMAGE CLASSIFICATION FUNCTIONS
###################################################################################################
def calculate_features(im_ms, cloud_mask, im_bool):
"""
@ -101,19 +63,19 @@ def calculate_features(im_ms, cloud_mask, im_bool):
feature = np.expand_dims(im_ms[im_bool,k],axis=1)
features = np.append(features, feature, axis=-1)
# NIR-G
im_NIRG = nd_index(im_ms[:,:,3], im_ms[:,:,1], cloud_mask)
im_NIRG = SDS_tools.nd_index(im_ms[:,:,3], im_ms[:,:,1], cloud_mask)
features = np.append(features, np.expand_dims(im_NIRG[im_bool],axis=1), axis=-1)
# SWIR-G
im_SWIRG = nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
im_SWIRG = SDS_tools.nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
features = np.append(features, np.expand_dims(im_SWIRG[im_bool],axis=1), axis=-1)
# NIR-R
im_NIRR = nd_index(im_ms[:,:,3], im_ms[:,:,2], cloud_mask)
im_NIRR = SDS_tools.nd_index(im_ms[:,:,3], im_ms[:,:,2], cloud_mask)
features = np.append(features, np.expand_dims(im_NIRR[im_bool],axis=1), axis=-1)
# SWIR-NIR
im_SWIRNIR = nd_index(im_ms[:,:,4], im_ms[:,:,3], cloud_mask)
im_SWIRNIR = SDS_tools.nd_index(im_ms[:,:,4], im_ms[:,:,3], cloud_mask)
features = np.append(features, np.expand_dims(im_SWIRNIR[im_bool],axis=1), axis=-1)
# B-R
im_BR = nd_index(im_ms[:,:,0], im_ms[:,:,2], cloud_mask)
im_BR = SDS_tools.nd_index(im_ms[:,:,0], im_ms[:,:,2], cloud_mask)
features = np.append(features, np.expand_dims(im_BR[im_bool],axis=1), axis=-1)
# calculate standard deviation of individual bands
for k in range(im_ms.shape[2]):
@ -133,7 +95,7 @@ def calculate_features(im_ms, cloud_mask, im_bool):
return features
def classify_image_NN(im_ms, im_extra, cloud_mask, min_beach_area, satname):
def classify_image_NN(im_ms, im_extra, cloud_mask, min_beach_area, clf):
"""
Classifies every pixel in the image in one of 4 classes:
- sand --> label = 1
@ -141,7 +103,7 @@ def classify_image_NN(im_ms, im_extra, cloud_mask, min_beach_area, satname):
- water --> label = 3
- other (vegetation, buildings, rocks...) --> label = 0
The classifier is a Neural Network, trained on several sites in New South Wales, Australia.
The classifier is a Neural Network that has been previously trained.
KV WRL 2018
@ -154,7 +116,8 @@ def classify_image_NN(im_ms, im_extra, cloud_mask, min_beach_area, satname):
cloud_mask: np.array
2D cloud mask with True where cloud pixels are
min_beach_area: int
minimum number of pixels that have to be connected in the SAND class
minimum number of pixels that have to be connected to belong to the SAND class
clf: classifier object (pre-trained Neural Network, loaded with joblib)
Returns:
-----------
im_classif: np.array
@ -164,14 +127,6 @@ def classify_image_NN(im_ms, im_extra, cloud_mask, min_beach_area, satname):
"""
if satname == 'S2':
# load classifier (special classifier for Sentinel-2 images)
clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_S2.pkl'))
else:
# load classifier (special classifier for Landsat images)
clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_Landsat.pkl'))
# calculate features
vec_features = calculate_features(im_ms, cloud_mask, np.ones(cloud_mask.shape).astype(bool))
vec_features[np.isnan(vec_features)] = 1e-9 # NaN values are created when std is too close to 0
@ -202,7 +157,11 @@ def classify_image_NN(im_ms, im_extra, cloud_mask, min_beach_area, satname):
return im_classif, im_labels
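A short usage sketch, mirroring how `extract_shorelines` below loads the classifier and passes it in; `im_ms`, `im_extra`, `cloud_mask` and `min_beach_area_pixels` are assumed to come from `preprocess_single` and the settings, as in that function:
```
import os
from sklearn.externals import joblib   # same import as the module header

# load a pre-trained classifier, then classify a pre-processed image
clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_Landsat.pkl'))
im_classif, im_labels = classify_image_NN(im_ms, im_extra, cloud_mask,
                                          min_beach_area_pixels, clf)
```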
def find_wl_contours1(im_ndwi, cloud_mask):
###################################################################################################
# CONTOUR MAPPING FUNCTIONS
###################################################################################################
def find_wl_contours1(im_ndwi, cloud_mask, im_ref_buffer):
"""
Traditional method for shoreline detection.
Finds the water line by thresholding the Normalized Difference Water Index and applying
@ -216,6 +175,8 @@ def find_wl_contours1(im_ndwi, cloud_mask):
Image (2D) with the NDWI (water index)
cloud_mask: np.ndarray
2D cloud mask with True where cloud pixels are
im_ref_buffer: np.array
Binary image containing a buffer around the reference shoreline
Returns:
-----------
contours_wl: list of np.arrays
@ -231,7 +192,9 @@ def find_wl_contours1(im_ndwi, cloud_mask):
vec = vec[~np.isnan(vec)]
t_otsu = filters.threshold_otsu(vec)
# use Marching Squares algorithm to detect contours on ndwi image
contours = measure.find_contours(im_ndwi, t_otsu)
im_ndwi_buffer = np.copy(im_ndwi)
im_ndwi_buffer[~im_ref_buffer] = np.nan
contours = measure.find_contours(im_ndwi_buffer, t_otsu)
# remove contours that contain NaNs (due to cloud pixels in the contour)
contours_nonans = []
@ -247,7 +210,7 @@ def find_wl_contours1(im_ndwi, cloud_mask):
return contours
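The core of `find_wl_contours1` is Otsu thresholding followed by marching squares; a self-contained toy illustration (synthetic bimodal image, not CoastSat data):
```
import numpy as np
from skimage import filters, measure

# synthetic MNDWI-like image: "water" (low values) on the left, "sand" (high) on the right
im_ndwi = np.hstack([np.full((20, 10), -0.5), np.full((20, 10), 0.5)])
im_ndwi += 0.01 * np.random.randn(20, 20)
t_otsu = filters.threshold_otsu(im_ndwi.reshape(-1))  # threshold between the two modes
contours = measure.find_contours(im_ndwi, t_otsu)     # contours along the water/sand interface
```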
def find_wl_contours2(im_ms, im_labels, cloud_mask, buffer_size, is_reference_sl):
def find_wl_contours2(im_ms, im_labels, cloud_mask, buffer_size, im_ref_buffer):
"""
New robust method for extracting shorelines. Incorporates the classification component to
refine the threshold and make it specific to the sand/water interface.
@ -265,8 +228,8 @@ def find_wl_contours2(im_ms, im_labels, cloud_mask, buffer_size, is_reference_sl
buffer_size: int
size of the buffer around the sandy beach over which the pixels are considered in the
thresholding algorithm.
is_reference_sl: boolean
True if there is a reference shoreline, False otherwise
im_ref_buffer: np.array
Binary image containing a buffer around the reference shoreline
Returns:
-----------
contours_wi: list of np.arrays
@ -282,9 +245,9 @@ def find_wl_contours2(im_ms, im_labels, cloud_mask, buffer_size, is_reference_sl
ncols = cloud_mask.shape[1]
# calculate Normalized Difference Modified Water Index (SWIR - G)
im_mwi = nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
im_mwi = SDS_tools.nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
# calculate Normalized Difference Modified Water Index (NIR - G)
im_wi = nd_index(im_ms[:,:,3], im_ms[:,:,1], cloud_mask)
im_wi = SDS_tools.nd_index(im_ms[:,:,3], im_ms[:,:,1], cloud_mask)
# stack indices together
im_ind = np.stack((im_wi, im_mwi), axis=-1)
vec_ind = im_ind.reshape(nrows*ncols,2)
@ -316,16 +279,11 @@ def find_wl_contours2(im_ms, im_labels, cloud_mask, buffer_size, is_reference_sl
# find contour with MS algorithm
im_wi_buffer = np.copy(im_wi)
im_wi_buffer[~im_buffer] = np.nan
im_wi_buffer[~im_ref_buffer] = np.nan
im_mwi_buffer = np.copy(im_mwi)
im_mwi_buffer[~im_buffer] = np.nan
if is_reference_sl: # if there is a reference_shoreline map the shoreline on the entire image
contours_wi = measure.find_contours(im_wi, t_wi)
contours_mwi = measure.find_contours(im_mwi, t_mwi)
else: # otherwise only map the shoreline along the sandy pixels
contours_wi = measure.find_contours(im_wi_buffer, t_wi)
contours_mwi = measure.find_contours(im_mwi_buffer, t_mwi)
im_mwi_buffer[~im_ref_buffer] = np.nan
contours_wi = measure.find_contours(im_wi_buffer, t_wi)
contours_mwi = measure.find_contours(im_mwi_buffer, t_mwi)
# remove contour points that are NaNs (around clouds)
contours = contours_wi
@ -354,7 +312,66 @@ def find_wl_contours2(im_ms, im_labels, cloud_mask, buffer_size, is_reference_sl
return contours_wi, contours_mwi
def process_shoreline(contours, georef, image_epsg, settings):
###################################################################################################
# SHORELINE PROCESSING FUNCTIONS
###################################################################################################
def create_shoreline_buffer(im_shape, georef, image_epsg, pixel_size, settings):
"""
Creates a buffer around the reference shoreline. The size of the buffer is given by
settings['max_dist_ref'].
KV WRL 2018
Arguments:
-----------
im_shape: np.array
size of the image (rows,columns)
georef: np.array
vector of 6 elements [Xtr, Xscale, Xshear, Ytr, Yshear, Yscale]
image_epsg: int
spatial reference system of the image from which the contours were extracted
pixel_size: int
size of the pixel in metres (15 for Landsat, 10 for Sentinel-2)
settings: dict
contains the following fields:
output_epsg: int
output spatial reference system
reference_shoreline: np.array
coordinates of the reference shoreline
max_dist_ref: int
maximum distance from the reference shoreline in metres
Returns:
-----------
im_buffer: np.array
binary image, True where the buffer is, False otherwise
"""
# initialise the image buffer
im_buffer = np.ones(im_shape).astype(bool)
if 'reference_shoreline' in settings.keys():
# convert reference shoreline to pixel coordinates
ref_sl = settings['reference_shoreline']
ref_sl_conv = SDS_tools.convert_epsg(ref_sl, settings['output_epsg'],image_epsg)[:,:-1]
ref_sl_pix = SDS_tools.convert_world2pix(ref_sl_conv, georef)
ref_sl_pix_rounded = np.round(ref_sl_pix).astype(int)
# create binary image of the reference shoreline (1 where the shoreline is, 0 otherwise)
im_binary = np.zeros(im_shape)
for j in range(len(ref_sl_pix_rounded)):
im_binary[ref_sl_pix_rounded[j,1], ref_sl_pix_rounded[j,0]] = 1
im_binary = im_binary.astype(bool)
# dilate the binary image to create a buffer around the reference shoreline
max_dist_ref_pixels = np.ceil(settings['max_dist_ref']/pixel_size)
se = morphology.disk(max_dist_ref_pixels)
im_buffer = morphology.binary_dilation(im_binary, se)
return im_buffer
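A compact illustration of the dilation step inside `create_shoreline_buffer` (made-up shoreline raster; the 100 m buffer and 15 m pixel size are only example values):
```
import numpy as np
from skimage import morphology

# rasterised diagonal "shoreline", dilated into a search buffer
im_binary = np.zeros((50, 50), dtype=bool)
idx = np.arange(50)
im_binary[idx, idx] = True
max_dist_ref_pixels = int(np.ceil(100 / 15))   # e.g. 100 m buffer at 15 m pixels
se = morphology.disk(max_dist_ref_pixels)
im_buffer = morphology.binary_dilation(im_binary, se)
```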
def process_shoreline(contours, cloud_mask, georef, image_epsg, settings):
"""
Converts the contours from image coordinates to world coordinates. This function also removes
the contours that are too small to be a shoreline (based on the parameter
@ -366,18 +383,21 @@ def process_shoreline(contours, georef, image_epsg, settings):
-----------
contours: np.array or list of np.array
image contours as detected by the function find_contours
cloud_mask: np.array
2D cloud mask with True where cloud pixels are
georef: np.array
vector of 6 elements [Xtr, Xscale, Xshear, Ytr, Yshear, Yscale]
image_epsg: int
spatial reference system of the image from which the contours were extracted
settings: dict
contains important parameters for processing the shoreline:
output_epsg: output spatial reference system
min_length_sl: minimum length of shoreline perimeter to be kept (in meters)
reference_sl: [optional] reference shoreline coordinates
max_dist_ref: max distance (in meters) allowed from a reference shoreline
contains the following fields:
output_epsg: int
output spatial reference system
min_length_sl: float
minimum length of shoreline perimeter to be kept (in meters)
Returns: -----------
Returns:
-----------
shoreline: np.array
array of points with the X and Y coordinates of the shoreline
@ -387,8 +407,8 @@ def process_shoreline(contours, georef, image_epsg, settings):
contours_world = SDS_tools.convert_pix2world(contours, georef)
# convert world coordinates to desired spatial reference system
contours_epsg = SDS_tools.convert_epsg(contours_world, image_epsg, settings['output_epsg'])
# remove contours that have a perimeter < min_length_wl (provided in settings dict)
# this enable to remove the very small contours that do not correspond to the shoreline
# remove contours that have a perimeter < min_length_sl (provided in settings dict)
# this removes the very small contours that do not correspond to the shoreline
contours_long = []
for l, wl in enumerate(contours_epsg):
coords = [(wl[k,0], wl[k,1]) for k in range(len(wl))]
@ -403,17 +423,22 @@ def process_shoreline(contours, georef, image_epsg, settings):
y_points = np.append(y_points,contours_long[k][:,1])
contours_array = np.transpose(np.array([x_points,y_points]))
# if reference shoreline has been manually digitised
if 'reference_shoreline' in settings.keys():
# only keep the points that are within a certain distance (defined in settings) of the
# reference shoreline; this avoids false detections and removes obvious outliers
temp = np.zeros((len(contours_array))).astype(bool)
for k in range(len(settings['reference_shoreline'])):
temp = np.logical_or(np.linalg.norm(contours_array - settings['reference_shoreline'][k,[0,1]],
axis=1) < settings['max_dist_ref'], temp)
shoreline = contours_array[temp]
else:
shoreline = contours_array
shoreline = contours_array
# now remove any shoreline points that are attached to cloud pixels
if sum(sum(cloud_mask)) > 0:
# get the coordinates of the cloud pixels
idx_cloud = np.where(cloud_mask)
idx_cloud = np.array([(idx_cloud[0][k], idx_cloud[1][k]) for k in range(len(idx_cloud[0]))])
# convert to world coordinates and same epsg as the shoreline points
coords_cloud = SDS_tools.convert_epsg(SDS_tools.convert_pix2world(idx_cloud, georef),
image_epsg, settings['output_epsg'])[:,:-1]
# only keep the shoreline points that are at least 30m from any cloud pixel
idx_keep = np.ones(len(shoreline)).astype(bool)
for k in range(len(shoreline)):
if np.any(np.linalg.norm(shoreline[k,:] - coords_cloud, axis=1) < 30):
idx_keep[k] = False
shoreline = shoreline[idx_keep]
return shoreline
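The new cloud-proximity filter above in isolation, on made-up coordinates (the 30 m cut-off is the hard-coded value from the function):
```
import numpy as np

shoreline = np.array([[0., 0.], [50., 0.], [100., 0.]])   # world coordinates (m)
coords_cloud = np.array([[45., 10.]])                     # one cloud pixel
idx_keep = np.ones(len(shoreline), dtype=bool)
for k in range(len(shoreline)):
    if np.any(np.linalg.norm(shoreline[k, :] - coords_cloud, axis=1) < 30):
        idx_keep[k] = False
shoreline = shoreline[idx_keep]   # the middle point (~11 m from the cloud) is dropped
```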
@ -440,22 +465,23 @@ def show_detection(im_ms, cloud_mask, im_labels, shoreline,image_epsg, georef,
georef: np.array
vector of 6 elements [Xtr, Xscale, Xshear, Ytr, Yshear, Yscale]
settings: dict
contains important parameters for processing the shoreline
contains the following fields:
date: string
date at which the image was taken
satname: string
indicates the satname (L5,L7,L8 or S2)
Returns: -----------
Returns:
-----------
skip_image: boolean
True if the user wants to skip the image, False otherwise.
"""
sitename = settings['inputs']['sitename']
filepath_data = settings['inputs']['filepath']
# subfolder where the .jpg file is stored if the user accepts the shoreline detection
filepath = os.path.join(os.getcwd(), 'data', sitename, 'jpg_files', 'detection')
filepath = os.path.join(filepath_data, sitename, 'jpg_files', 'detection')
im_RGB = SDS_preprocess.rescale_image_intensity(im_ms[:,:,[2,1,0]], cloud_mask, 99.9)
@ -473,7 +499,7 @@ def show_detection(im_ms, cloud_mask, im_labels, shoreline,image_epsg, georef,
im_class[im_labels[:,:,k],2] = colours[k,2]
# compute MNDWI grayscale image
im_mwi = nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
im_mwi = SDS_tools.nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
# transform world coordinates of shoreline into pixel coordinates
# use try/except in case there are no coordinates to be transformed (shoreline = [])
@ -485,35 +511,46 @@ def show_detection(im_ms, cloud_mask, im_labels, shoreline,image_epsg, georef,
# if try fails, just add nan into the shoreline vector so the next parts can still run
sl_pix = np.array([[np.nan, np.nan],[np.nan, np.nan]])
# according to the image shape, decide whether it is better to have the images in the subplot
# in different rows or different columns
fig = plt.figure()
if im_RGB.shape[1] > 2*im_RGB.shape[0]:
# vertical subplots
gs = gridspec.GridSpec(3, 1)
gs.update(bottom=0.03, top=0.97, left=0.03, right=0.97)
ax1 = fig.add_subplot(gs[0,0])
ax2 = fig.add_subplot(gs[1,0])
ax3 = fig.add_subplot(gs[2,0])
if plt.get_fignums():
# get open figure if it exists
fig = plt.gcf()
ax1 = fig.axes[0]
ax2 = fig.axes[1]
ax3 = fig.axes[2]
else:
# horizontal subplots
gs = gridspec.GridSpec(1, 3)
gs.update(bottom=0.05, top=0.95, left=0.05, right=0.95)
ax1 = fig.add_subplot(gs[0,0])
ax2 = fig.add_subplot(gs[0,1])
ax3 = fig.add_subplot(gs[0,2])
# else create a new figure
fig = plt.figure()
fig.set_size_inches([12.53, 9.3])
mng = plt.get_current_fig_manager()
mng.window.showMaximized()
# according to the image shape, decide whether it is better to have the images
# in vertical subplots or horizontal subplots
if im_RGB.shape[1] > 2*im_RGB.shape[0]:
# vertical subplots
gs = gridspec.GridSpec(3, 1)
gs.update(bottom=0.03, top=0.97, left=0.03, right=0.97)
ax1 = fig.add_subplot(gs[0,0])
ax2 = fig.add_subplot(gs[1,0])
ax3 = fig.add_subplot(gs[2,0])
else:
# horizontal subplots
gs = gridspec.GridSpec(1, 3)
gs.update(bottom=0.05, top=0.95, left=0.05, right=0.95)
ax1 = fig.add_subplot(gs[0,0])
ax2 = fig.add_subplot(gs[0,1])
ax3 = fig.add_subplot(gs[0,2])
# change the color of nans to either black (0.0) or white (1.0) or somewhere in between
nan_color = 1.0
im_RGB = np.where(np.isnan(im_RGB), nan_color, im_RGB)
im_class = np.where(np.isnan(im_class), 1.0, im_class)
# create image 1 (RGB)
ax1.imshow(im_RGB)
ax1.plot(sl_pix[:,0], sl_pix[:,1], 'k.', markersize=3)
ax1.axis('off')
btn_keep = plt.text(0, 0.9, 'keep', size=16, ha="left", va="top",
transform=ax1.transAxes,
bbox=dict(boxstyle="square", ec='k',fc='w'))
btn_skip = plt.text(1, 0.9, 'skip', size=16, ha="right", va="top",
transform=ax1.transAxes,
bbox=dict(boxstyle="square", ec='k',fc='w'))
ax1.set_title(sitename + ' ' + date + ' ' + satname, fontweight='bold', fontsize=16)
ax1.set_title(sitename, fontweight='bold', fontsize=16)
# create image 2 (classification)
ax2.imshow(im_class)
@ -525,10 +562,13 @@ def show_detection(im_ms, cloud_mask, im_labels, shoreline,image_epsg, georef,
black_line = mlines.Line2D([],[],color='k',linestyle='-', label='shoreline')
ax2.legend(handles=[orange_patch,white_patch,blue_patch, black_line],
bbox_to_anchor=(1, 0.5), fontsize=10)
ax2.set_title(date, fontweight='bold', fontsize=16)
# create image 3 (MNDWI)
ax3.imshow(im_mwi, cmap='bwr')
ax3.plot(sl_pix[:,0], sl_pix[:,1], 'k.', markersize=3)
ax3.axis('off')
ax3.set_title(satname, fontweight='bold', fontsize=16)
# additional options
# ax1.set_anchor('W')
@ -538,23 +578,56 @@ def show_detection(im_ms, cloud_mask, im_labels, shoreline,image_epsg, georef,
# cb.set_label('MNDWI values')
# ax3.set_anchor('W')
fig.set_size_inches([12.53, 9.3])
mng = plt.get_current_fig_manager()
mng.window.showMaximized()
# wait for user's selection: <keep> or <skip>
pt = ginput(n=1, timeout=100000, show_clicks=True)
pt = np.array(pt)
# if user clicks around the <skip> button, return skip_image = True
if pt[0][0] > im_ms.shape[1]/2:
skip_image = True
plt.close()
else:
skip_image = False
btn_skip.set_visible(False)
btn_keep.set_visible(False)
fig.savefig(os.path.join(filepath, date + '_' + satname + '.jpg'), dpi=150)
plt.close()
# if check_detection is True, let user manually accept/reject the images
skip_image = False
if settings['check_detection']:
# set a key event to accept/reject the detections (see https://stackoverflow.com/a/15033071)
# this variable needs to be mutable so we can access it after the keypress event
key_event = {}
def press(event):
# store what key was pressed in the dictionary
key_event['pressed'] = event.key
# let the user press a key, right arrow to keep the image, left arrow to skip it
# to break the loop the user can press 'escape'
while True:
btn_keep = plt.text(1.1, 0.9, 'keep ⇨', size=12, ha="right", va="top",
transform=ax1.transAxes,
bbox=dict(boxstyle="square", ec='k',fc='w'))
btn_skip = plt.text(-0.1, 0.9, '⇦ skip', size=12, ha="left", va="top",
transform=ax1.transAxes,
bbox=dict(boxstyle="square", ec='k',fc='w'))
btn_esc = plt.text(0.5, 0, '<esc> to quit', size=12, ha="center", va="top",
transform=ax1.transAxes,
bbox=dict(boxstyle="square", ec='k',fc='w'))
plt.draw()
fig.canvas.mpl_connect('key_press_event', press)
plt.waitforbuttonpress()
# after button is pressed, remove the buttons
btn_skip.remove()
btn_keep.remove()
btn_esc.remove()
# keep/skip image according to the pressed key, 'escape' to break the loop
if key_event.get('pressed') == 'right':
skip_image = False
break
elif key_event.get('pressed') == 'left':
skip_image = True
break
elif key_event.get('pressed') == 'escape':
plt.close()
raise StopIteration('User cancelled checking shoreline detection')
else:
plt.waitforbuttonpress()
# if save_figure is True, save a .jpg under /jpg_files/detection
if settings['save_figure'] and not skip_image:
fig.savefig(os.path.join(filepath, date + '_' + satname + '.jpg'), dpi=200)
# Don't close the figure window, but remove all axes and settings, ready for next plot
for ax in fig.axes:
ax.clear()
return skip_image
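The keep/skip mechanism above relies on a small matplotlib pattern: a mutable dict that the key-press callback writes into. A minimal standalone version (requires an interactive backend):
```
import matplotlib.pyplot as plt

key_event = {}                      # mutable container the callback writes into
def press(event):
    key_event['pressed'] = event.key

fig, ax = plt.subplots()
ax.set_title('press right arrow to keep, left arrow to skip')
fig.canvas.mpl_connect('key_press_event', press)
plt.waitforbuttonpress()            # blocks until a key or mouse button is pressed
skip_image = key_event.get('pressed') == 'left'
```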
@ -596,15 +669,17 @@ def extract_shorelines(metadata, settings):
"""
sitename = settings['inputs']['sitename']
filepath_data = settings['inputs']['filepath']
# initialise output structure
output = dict([])
# create a subfolder to store the .jpg images showing the detection
filepath_jpg = os.path.join(os.getcwd(), 'data', sitename, 'jpg_files', 'detection')
try:
os.makedirs(filepath_jpg)
except:
print('')
filepath_jpg = os.path.join(filepath_data, sitename, 'jpg_files', 'detection')
if not os.path.exists(filepath_jpg):
os.makedirs(filepath_jpg)
# close all open figures
plt.close('all')
print('Mapping shorelines:')
# loop through satellite list
for satname in metadata.keys():
@ -613,7 +688,7 @@ def extract_shorelines(metadata, settings):
filepath = SDS_tools.get_filepath(settings['inputs'],satname)
filenames = metadata[satname]['filenames']
# initialise some variables
# initialise the output variables
output_timestamp = [] # datetime at which the image was acquired (UTC time)
output_shoreline = [] # vector of shoreline points
output_filename = [] # filename of the images from which the shorelines were derived
@ -621,21 +696,31 @@ def extract_shorelines(metadata, settings):
output_geoaccuracy = []# georeferencing accuracy of the images
output_idxkeep = [] # indices that were kept during the analysis (cloudy images are skipped)
# convert settings['min_beach_area'] and settings['buffer_size'] from metres to pixels
# set the pixel size and load the corresponding pre-trained classifier
if satname in ['L5','L7','L8']:
pixel_size = 15
if settings['dark_sand']:
clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_Landsat_dark.pkl'))
else:
clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_Landsat.pkl'))
elif satname == 'S2':
pixel_size = 10
clf = joblib.load(os.path.join(os.getcwd(), 'classifiers', 'NN_4classes_S2.pkl'))
# convert settings['min_beach_area'] and settings['buffer_size'] from metres to pixels
buffer_size_pixels = np.ceil(settings['buffer_size']/pixel_size)
min_beach_area_pixels = np.ceil(settings['min_beach_area']/pixel_size**2)
# loop through the images
for i in range(len(filenames)):
print('\r%s: %d%%' % (satname,int(((i+1)/len(filenames))*100)), end='')
# get image filename
fn = SDS_tools.get_filenames(filenames[i],filepath, satname)
# preprocess image (cloud mask + pansharpening/downsampling)
im_ms, georef, cloud_mask, im_extra, imQA = SDS_preprocess.preprocess_single(fn, satname, settings['cloud_mask_issue'])
im_ms, georef, cloud_mask, im_extra, im_QA, im_nodata = SDS_preprocess.preprocess_single(fn, satname, settings['cloud_mask_issue'])
# get image spatial reference system (epsg code) from metadata dict
image_epsg = metadata[satname]['epsg'][i]
# calculate cloud cover
@ -647,36 +732,44 @@ def extract_shorelines(metadata, settings):
# classify image in 4 classes (sand, whitewater, water, other) with NN classifier
im_classif, im_labels = classify_image_NN(im_ms, im_extra, cloud_mask,
min_beach_area_pixels, satname)
min_beach_area_pixels, clf)
# calculate a buffer around the reference shoreline (if any has been digitised)
im_ref_buffer = create_shoreline_buffer(cloud_mask.shape, georef, image_epsg,
pixel_size, settings)
# extract water line contours
# if there aren't any sandy pixels, use find_wl_contours1 (traditional method),
# otherwise use find_wl_contours2 (enhanced method with classification)
# there are two options to map the contours:
# if there are pixels in the 'sand' class --> use find_wl_contours2 (enhanced)
# otherwise use find_wl_contours1 (traditional)
try: # use try/except structure for long runs
if sum(sum(im_labels[:,:,0])) == 0 :
# compute MNDWI (SWIR-Green normalized index) grayscale image
im_mndwi = nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
# find water contourson MNDWI grayscale image
contours_mwi = find_wl_contours1(im_mndwi, cloud_mask)
# compute MNDWI image (SWIR-G)
im_mndwi = SDS_tools.nd_index(im_ms[:,:,4], im_ms[:,:,1], cloud_mask)
# find water contours on MNDWI grayscale image
contours_mwi = find_wl_contours1(im_mndwi, cloud_mask, im_ref_buffer)
else:
# use classification to refine threshold and extract sand/water interface
is_reference_sl = 'reference_shoreline' in settings.keys()
# use classification to refine threshold and extract the sand/water interface
contours_wi, contours_mwi = find_wl_contours2(im_ms, im_labels,
cloud_mask, buffer_size_pixels, is_reference_sl)
cloud_mask, buffer_size_pixels, im_ref_buffer)
except:
print('Could not map shoreline for this image: ' + filenames[i])
continue
# process water contours into shorelines
shoreline = process_shoreline(contours_mwi, georef, image_epsg, settings)
# process the water contours into a shoreline
shoreline = process_shoreline(contours_mwi, cloud_mask, georef, image_epsg, settings)
if settings['check_detection']:
date = filenames[i][:10]
# visualise the mapped shorelines, there are two options:
# if settings['check_detection'] = True, shows the detection to the user for accept/reject
# if settings['save_figure'] = True, saves a figure for each mapped shoreline
if settings['check_detection'] or settings['save_figure']:
date = filenames[i][:19]
skip_image = show_detection(im_ms, cloud_mask, im_labels, shoreline,
image_epsg, georef, settings, date, satname)
# if the user decides to skip the image, continue and do not save the mapped shoreline
if skip_image:
continue
# fill and save output structure
# append to output variables
output_timestamp.append(metadata[satname]['dates'][i])
output_shoreline.append(shoreline)
output_filename.append(filenames[i])
@ -684,44 +777,34 @@ def extract_shorelines(metadata, settings):
output_geoaccuracy.append(metadata[satname]['acc_georef'][i])
output_idxkeep.append(i)
# create dictionary of output
output[satname] = {
'timestamp': output_timestamp,
'shoreline': output_shoreline,
'dates': output_timestamp,
'shorelines': output_shoreline,
'filename': output_filename,
'cloudcover': output_cloudcover,
'cloud_cover': output_cloudcover,
'geoaccuracy': output_geoaccuracy,
'idxkeep': output_idxkeep
'idx': output_idxkeep
}
print('')
# add some metadata
output['meta'] = {
'timestamp': 'UTC time',
'shoreline': 'coordinate system epsg : ' + str(settings['output_epsg']),
'cloudcover': 'calculated on the cropped image',
'geoaccuracy': 'RMSE error based on GCPs',
'idxkeep': 'indices of the images that were kept to extract a shoreline'
}
# Close figure window if still open
if plt.get_fignums():
plt.close()
# change the format to have one list sorted by date with all the shorelines (easier to use)
output = SDS_tools.merge_output(output)
# save output structure as output.pkl
filepath = os.path.join(os.getcwd(), 'data', sitename)
filepath = os.path.join(filepath_data, sitename)
with open(os.path.join(filepath, sitename + '_output.pkl'), 'wb') as f:
pickle.dump(output, f)
# save output as kml for GIS applications
kml = simplekml.Kml()
for i in range(len(output['shorelines'])):
if len(output['shorelines'][i]) == 0:
continue
sl = output['shorelines'][i]
date = output['dates'][i]
newline = kml.newlinestring(name= date.strftime('%Y-%m-%d'))
newline.coords = sl
newline.description = satname + ' shoreline' + '\n' + 'acquired at ' + date.strftime('%H:%M:%S') + ' UTC'
kml.save(os.path.join(filepath, sitename + '_output.kml'))
# save output into a gpd.GeoDataFrame
gdf = SDS_tools.output_to_gdf(output)
# set projection
gdf.crs = {'init':'epsg:'+str(settings['output_epsg'])}
# save as geojson
gdf.to_file(os.path.join(filepath, sitename + '_output.geojson'), driver='GeoJSON', encoding='utf-8')
return output

@ -10,11 +10,16 @@ import matplotlib.pyplot as plt
import pdb
# other modules
from osgeo import gdal, ogr, osr
from osgeo import gdal, osr
import geopandas as gpd
from shapely import geometry
import skimage.transform as transform
import simplekml
from scipy.ndimage.filters import uniform_filter
###################################################################################################
# COORDINATES CONVERSION FUNCTIONS
###################################################################################################
def convert_pix2world(points, georef):
"""
Converts pixel coordinates (row,columns) to world projected coordinates
@ -54,8 +59,7 @@ def convert_pix2world(points, georef):
points_converted = tform(tmp)
else:
print('invalid input type')
raise
raise Exception('invalid input type')
return points_converted
@ -139,64 +143,118 @@ def convert_epsg(points, epsg_in, epsg_out):
elif type(points) is np.ndarray:
points_converted = np.array(coordTransform.TransformPoints(points))
else:
print('invalid input type')
raise
raise Exception('invalid input type')
return points_converted
def coords_from_kml(fn):
###################################################################################################
# IMAGE ANALYSIS FUNCTIONS
###################################################################################################
def nd_index(im1, im2, cloud_mask):
"""
Extracts coordinates from a .kml file.
Computes normalised difference index on 2 images (2D), given a cloud mask (2D).
KV WRL 2018
Arguments:
-----------
fn: str
filepath + filename of the kml file to be read
im1, im2: np.array
Images (2D) with which to calculate the ND index
cloud_mask: np.array
2D cloud mask with True where cloud pixels are
Returns:
-----------
polygon: list
coordinates extracted from the .kml file
im_nd: np.array
Image (2D) containing the ND index
"""
# reshape the cloud mask
vec_mask = cloud_mask.reshape(im1.shape[0] * im1.shape[1])
# initialise with NaNs
vec_nd = np.ones(len(vec_mask)) * np.nan
# reshape the two images
vec1 = im1.reshape(im1.shape[0] * im1.shape[1])
vec2 = im2.reshape(im2.shape[0] * im2.shape[1])
# compute the normalised difference index
temp = np.divide(vec1[~vec_mask] - vec2[~vec_mask],
vec1[~vec_mask] + vec2[~vec_mask])
vec_nd[~vec_mask] = temp
# reshape into image
im_nd = vec_nd.reshape(im1.shape[0], im1.shape[1])
return im_nd
def image_std(image, radius):
"""
Calculates the standard deviation of an image, using a moving window of specified radius.
# read .kml file
with open(fn) as kmlFile:
doc = kmlFile.read()
# parse to find coordinates field
str1 = '<coordinates>'
str2 = '</coordinates>'
subdoc = doc[doc.find(str1)+len(str1):doc.find(str2)]
coordlist = subdoc.split('\n')
# read coordinates
polygon = []
for i in range(1,len(coordlist)-1):
polygon.append([float(coordlist[i].split(',')[0]), float(coordlist[i].split(',')[1])])
Arguments:
-----------
image: np.array
2D array containing the pixel intensities of a single-band image
radius: int
radius defining the moving window used to calculate the standard deviation. For example,
radius = 1 will produce a 3x3 moving window.
return [polygon]
Returns:
-----------
win_std: np.array
2D array containing the standard deviation of the image
def save_kml(coords, epsg):
"""
Saves coordinates with specified spatial reference system into a .kml file in WGS84.
KV WRL 2018
# convert to float
image = image.astype(float)
# first pad the image
image_padded = np.pad(image, radius, 'reflect')
# window size
win_rows, win_cols = radius*2 + 1, radius*2 + 1
# calculate std
win_mean = uniform_filter(image_padded, (win_rows, win_cols))
win_sqr_mean = uniform_filter(image_padded**2, (win_rows, win_cols))
win_var = win_sqr_mean - win_mean**2
win_std = np.sqrt(win_var)
# remove padding
win_std = win_std[radius:-radius, radius:-radius]
return win_std
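A quick sanity check for `image_std`: with `radius=1` each interior output pixel should equal the population standard deviation of its 3x3 neighbourhood:
```
import numpy as np

img = np.arange(25, dtype=float).reshape(5, 5)
win_std = image_std(img, radius=1)
centre = img[1:4, 1:4]                           # 3x3 window around pixel (2,2)
assert np.isclose(win_std[2, 2], centre.std())   # np.std defaults to ddof=0
```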
def mask_raster(fn, mask):
"""
Masks a .tif raster using GDAL.
Arguments:
-----------
coords: np.array
coordinates (2 columns) to be converted into a .kml file
fn: str
filepath + filename of the .tif raster
mask: np.array
array of boolean where True indicates the pixels that are to be masked
Returns:
-----------
Saves 'coords.kml' in the current folder.
overwrites the .tif file directly
"""
kml = simplekml.Kml()
coords_wgs84 = convert_epsg(coords, epsg, 4326)
kml.newlinestring(name='coords', coords=coords_wgs84)
kml.save('coords.kml')
# open raster
raster = gdal.Open(fn, gdal.GA_Update)
# mask raster
for i in range(raster.RasterCount):
out_band = raster.GetRasterBand(i+1)
out_data = out_band.ReadAsArray()
out_band.SetNoDataValue(0)
no_data_value = out_band.GetNoDataValue()
out_data[mask] = no_data_value
out_band.WriteArray(out_data)
# close dataset and flush cache
raster = None
###################################################################################################
# UTILITIES
###################################################################################################
def get_filepath(inputs,satname):
"""
@ -230,38 +288,26 @@ def get_filepath(inputs,satname):
"""
sitename = inputs['sitename']
filepath_data = inputs['filepath']
# access the images
if satname == 'L5':
# access downloaded Landsat 5 images
filepath = os.path.join(os.getcwd(), 'data', sitename, satname, '30m')
filepath = os.path.join(filepath_data, sitename, satname, '30m')
elif satname == 'L7':
# access downloaded Landsat 7 images
filepath_pan = os.path.join(os.getcwd(), 'data', sitename, 'L7', 'pan')
filepath_ms = os.path.join(os.getcwd(), 'data', sitename, 'L7', 'ms')
filenames_pan = os.listdir(filepath_pan)
filenames_ms = os.listdir(filepath_ms)
if (not len(filenames_pan) == len(filenames_ms)):
raise 'error: not the same amount of files for pan and ms'
filepath_pan = os.path.join(filepath_data, sitename, 'L7', 'pan')
filepath_ms = os.path.join(filepath_data, sitename, 'L7', 'ms')
filepath = [filepath_pan, filepath_ms]
elif satname == 'L8':
# access downloaded Landsat 8 images
filepath_pan = os.path.join(os.getcwd(), 'data', sitename, 'L8', 'pan')
filepath_ms = os.path.join(os.getcwd(), 'data', sitename, 'L8', 'ms')
filenames_pan = os.listdir(filepath_pan)
filenames_ms = os.listdir(filepath_ms)
if (not len(filenames_pan) == len(filenames_ms)):
raise 'error: not the same amount of files for pan and ms'
filepath_pan = os.path.join(filepath_data, sitename, 'L8', 'pan')
filepath_ms = os.path.join(filepath_data, sitename, 'L8', 'ms')
filepath = [filepath_pan, filepath_ms]
elif satname == 'S2':
# access downloaded Sentinel 2 images
filepath10 = os.path.join(os.getcwd(), 'data', sitename, satname, '10m')
filenames10 = os.listdir(filepath10)
filepath20 = os.path.join(os.getcwd(), 'data', sitename, satname, '20m')
filenames20 = os.listdir(filepath20)
filepath60 = os.path.join(os.getcwd(), 'data', sitename, satname, '60m')
filenames60 = os.listdir(filepath60)
if (not len(filenames10) == len(filenames20)) or (not len(filenames20) == len(filenames60)):
raise 'error: not the same amount of files for 10, 20 and 60 m bands'
filepath10 = os.path.join(filepath_data, sitename, satname, '10m')
filepath20 = os.path.join(filepath_data, sitename, satname, '20m')
filepath60 = os.path.join(filepath_data, sitename, satname, '60m')
filepath = [filepath10, filepath20, filepath60]
return filepath
@ -303,105 +349,168 @@ def get_filenames(filename, filepath, satname):
return fn
def image_std(image, radius):
def merge_output(output):
"""
Calculates the standard deviation of an image, using a moving window of specified radius.
Function to merge the output dictionary, which has one key per satellite mission, into a
dictionary containing all the shorelines and dates ordered chronologically.
Arguments:
-----------
image: np.array
2D array containing the pixel intensities of a single-band image
radius: int
radius defining the moving window used to calculate the standard deviation. For example,
radius = 1 will produce a 3x3 moving window.
output: dict
contains the extracted shorelines and corresponding dates, organised by satellite mission
Returns:
-----------
win_std: np.array
2D array containing the standard deviation of the image
output_all: dict
contains the extracted shorelines in a single list sorted by date
"""
# convert to float
image = image.astype(float)
# first pad the image
image_padded = np.pad(image, radius, 'reflect')
# window size
win_rows, win_cols = radius*2 + 1, radius*2 + 1
# calculate std
win_mean = uniform_filter(image_padded, (win_rows, win_cols))
win_sqr_mean = uniform_filter(image_padded**2, (win_rows, win_cols))
win_var = win_sqr_mean - win_mean**2
win_std = np.sqrt(win_var)
# remove padding
win_std = win_std[radius:-radius, radius:-radius]
# initialize output dict
output_all = dict([])
satnames = list(output.keys())
for key in output[satnames[0]].keys():
output_all[key] = []
# create extra key for the satellite name
output_all['satname'] = []
# fill the output dict
for satname in list(output.keys()):
for key in output[satnames[0]].keys():
output_all[key] = output_all[key] + output[satname][key]
output_all['satname'] = output_all['satname'] + [_ for _ in np.tile(satname,
len(output[satname]['dates']))]
# sort chronologically
idx_sorted = sorted(range(len(output_all['dates'])), key=output_all['dates'].__getitem__)
for key in output_all.keys():
output_all[key] = [output_all[key][i] for i in idx_sorted]
return win_std
return output_all
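A minimal demo of `merge_output` with two fabricated missions (values are placeholders; real entries also carry filename, cloud_cover, geoaccuracy and idx):
```
from datetime import datetime

output = {'L8': {'dates': [datetime(2018, 1, 2)], 'shorelines': ['sl_L8']},
          'S2': {'dates': [datetime(2018, 1, 1)], 'shorelines': ['sl_S2']}}
merged = merge_output(output)
# entries are now chronological and tagged with the satellite name
assert [str(_) for _ in merged['satname']] == ['S2', 'L8']
```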
def mask_raster(fn, mask):
###################################################################################################
# CONVERSIONS FROM DICT TO GEODATAFRAME AND READ/WRITE GEOJSON
###################################################################################################
def polygon_from_kml(fn):
"""
Masks a .tif raster using GDAL.
Extracts coordinates from a .kml file.
KV WRL 2018
Arguments:
-----------
fn: str
filepath + filename of the .tif raster
mask: np.array
array of boolean where True indicates the pixels that are to be masked
fn: str
filepath + filename of the kml file to be read
Returns:
-----------
overwrites the .tif file directly
Returns:
-----------
polygon: list
coordinates extracted from the .kml file
"""
# open raster
raster = gdal.Open(fn, gdal.GA_Update)
# mask raster
for i in range(raster.RasterCount):
out_band = raster.GetRasterBand(i+1)
out_data = out_band.ReadAsArray()
out_band.SetNoDataValue(0)
no_data_value = out_band.GetNoDataValue()
out_data[mask] = no_data_value
out_band.WriteArray(out_data)
# close dataset and flush cache
raster = None
# read .kml file
with open(fn) as kmlFile:
doc = kmlFile.read()
# parse to find coordinates field
str1 = '<coordinates>'
str2 = '</coordinates>'
subdoc = doc[doc.find(str1)+len(str1):doc.find(str2)]
coordlist = subdoc.split('\n')
# read coordinates
polygon = []
for i in range(1,len(coordlist)-1):
polygon.append([float(coordlist[i].split(',')[0]), float(coordlist[i].split(',')[1])])
def merge_output(output):
return [polygon]
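A quick check of the parser on a hand-written file (the .kml content below is a hypothetical minimal example):
```
kml_text = ('<coordinates>\n'
            '151.30,-33.70,0\n'
            '151.31,-33.74,0\n'
            '</coordinates>')
with open('tmp_polygon.kml', 'w') as f:
    f.write(kml_text)
polygon = polygon_from_kml('tmp_polygon.kml')
# -> [[[151.30, -33.70], [151.31, -33.74]]]
```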
def transects_from_geojson(filename):
"""
Function to merge the output dictionary, which has one key per satellite mission, into a
dictionary containing all the shorelines and dates ordered chronologically.
Reads transect coordinates from a .geojson file.
Arguments:
-----------
output: dict
contains the extracted shorelines and corresponding dates.
filename: str
contains the path and filename of the geojson file to be loaded
Returns:
-----------
output_all_sorted: dict
contains the extracted shorelines sorted by date in a single list
transects: dict
contains the X and Y coordinates of each transect.
"""
output_all = {'dates':[], 'shorelines':[], 'geoaccuracy':[], 'satname':[], 'image_filename':[]}
for satname in list(output.keys()):
if satname == 'meta':
gdf = gpd.read_file(filename)
transects = dict([])
for i in gdf.index:
transects[gdf.loc[i,'name']] = np.array(gdf.loc[i,'geometry'].coords)
print('%d transects have been loaded' % len(transects.keys()))
return transects
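Typical usage, assuming a file previously written by `draw_transects`/`transects_to_gdf` (the path below is hypothetical):
```
import os

fn = os.path.join(os.getcwd(), 'data', 'NARRA', 'NARRA_transects.geojson')
transects = transects_from_geojson(fn)
for name, coords in transects.items():
    print(name, coords[0], coords[-1])   # origin and end point of each transect
```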
def output_to_gdf(output):
"""
Saves the mapped shorelines as a gpd.GeoDataFrame
KV WRL 2018
Arguments:
-----------
output: dict
contains the coordinates of the mapped shorelines + attributes
Returns:
-----------
gdf_all: gpd.GeoDataFrame
"""
# loop through the mapped shorelines
for i in range(len(output['shorelines'])):
# skip if the shoreline is empty
if len(output['shorelines'][i]) == 0:
continue
output_all['dates'] = output_all['dates'] + output[satname]['timestamp']
output_all['shorelines'] = output_all['shorelines'] + output[satname]['shoreline']
output_all['geoaccuracy'] = output_all['geoaccuracy'] + output[satname]['geoaccuracy']
output_all['satname'] = output_all['satname'] + [_ for _ in np.tile(satname,
len(output[satname]['timestamp']))]
output_all['image_filename'] = output_all['image_filename'] + output[satname]['filename']
# save the geometry + attributes
geom = geometry.LineString(output['shorelines'][i])
gdf = gpd.GeoDataFrame(geometry=gpd.GeoSeries(geom))
gdf.index = [i]
gdf.loc[i,'date'] = output['dates'][i].strftime('%Y-%m-%d %H:%M:%S')
gdf.loc[i,'satname'] = output['satname'][i]
gdf.loc[i,'geoaccuracy'] = output['geoaccuracy'][i]
gdf.loc[i,'cloud_cover'] = output['cloud_cover'][i]
# store into geodataframe
if i == 0:
gdf_all = gdf
else:
gdf_all = gdf_all.append(gdf)
return gdf_all
def transects_to_gdf(transects):
"""
Saves the shore-normal transects as a gpd.GeoDataFrame
# sort chronologically
output_all_sorted = {'dates':[], 'shorelines':[], 'geoaccuracy':[], 'satname':[], 'image_filename':[]}
idx_sorted = sorted(range(len(output_all['dates'])), key=output_all['dates'].__getitem__)
output_all_sorted['dates'] = [output_all['dates'][i] for i in idx_sorted]
output_all_sorted['shorelines'] = [output_all['shorelines'][i] for i in idx_sorted]
output_all_sorted['geoaccuracy'] = [output_all['geoaccuracy'][i] for i in idx_sorted]
output_all_sorted['satname'] = [output_all['satname'][i] for i in idx_sorted]
output_all_sorted['image_filename'] = [output_all['image_filename'][i] for i in idx_sorted]
KV WRL 2018
return output_all_sorted
Arguments:
-----------
transects: dict
contains the coordinates of the transects
Returns:
-----------
gdf_all: gpd.GeoDataFrame
"""
# loop through the mapped shorelines
for i,key in enumerate(list(transects.keys())):
# save the geometry + attributes
geom = geometry.LineString(transects[key])
gdf = gpd.GeoDataFrame(geometry=gpd.GeoSeries(geom))
gdf.index = [i]
gdf.loc[i,'name'] = key
# store into geodataframe
if i == 0:
gdf_all = gdf
else:
gdf_all = gdf_all.append(gdf)
return gdf_all
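A short round-trip sketch for `transects_to_gdf` (made-up coordinates; epsg:28356 is only an example of a projected CRS):
```
import numpy as np

transects = {'1': np.array([[342000., 6266000.], [342200., 6266000.]]),
             '2': np.array([[342000., 6266500.], [342200., 6266500.]])}
gdf = transects_to_gdf(transects)
gdf.crs = {'init': 'epsg:28356'}   # same crs convention used elsewhere in this commit
gdf.to_file('transects_demo.geojson', driver='GeoJSON', encoding='utf-8')
```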

@ -12,15 +12,10 @@ import pdb
# other modules
import skimage.transform as transform
from pylab import ginput
import pickle
import simplekml
import json
from osgeo import ogr
def find_indices(lst, condition):
"imitation of MATLAB find function"
return [i for i, elem in enumerate(lst) if condition(elem)]
import geopandas as gpd
# own modules
from coastsat import SDS_tools
def create_transect(origin, orientation, length):
"""
@ -66,19 +61,16 @@ def draw_transects(output, settings):
output: dict
contains the extracted shorelines and corresponding dates.
settings: dict
contains parameters defining :
transect_length: length of the transect in metres
contains the inputs
Returns:
-----------
transects: dict
contains the X and Y coordinates of all the transects drawn. These are also saved
as a .pkl and .kml (+ a .jpg figure showing the location of the transects)
as a .geojson (+ a .jpg figure showing the location of the transects)
"""
sitename = settings['inputs']['sitename']
length = settings['transect_length']
filepath = os.path.join(os.getcwd(), 'data', sitename)
filepath = os.path.join(settings['inputs']['filepath'], sitename)
# plot all shorelines
fig1 = plt.figure()
@ -103,90 +95,47 @@ def draw_transects(output, settings):
counter = 0
# loop until user breaks it by click <enter>
while 1:
try:
pts = ginput(n=2, timeout=1e9)
# let user click two points
pts = ginput(n=2, timeout=1e9)
if len(pts) > 0:
origin = pts[0]
except:
else:
fig1.gca().set_title('Transect locations', fontsize=16)
fig1.savefig(os.path.join(filepath, 'jpg_files', sitename + '_transect_locations.jpg'), dpi=200)
plt.title('Transects saved as ' + sitename + '_transects.pkl and ' + sitename + '_transects.kml ')
plt.title('Transect coordinates saved as ' + sitename + '_transects.geojson')
plt.draw()
ginput(n=1, timeout=5, show_clicks=True)
ginput(n=1, timeout=3, show_clicks=True)
plt.close(fig1)
break
counter = counter + 1
# create the transect using the origin, orientation and length
temp = np.array(pts[1]) - np.array(origin)
phi = np.arctan2(temp[1], temp[0])
orientation = -(phi*180/np.pi - 90)
transect = create_transect(origin, orientation, length)
transect = np.array([pts[0], pts[1]])
# alternative of making the transect the origin, orientation and length
# temp = np.array(pts[1]) - np.array(origin)
# phi = np.arctan2(temp[1], temp[0])
# orientation = -(phi*180/np.pi - 90)
# length = np.linalg.norm(temp)
# transect = create_transect(origin, orientation, length)
transects[str(counter)] = transect
# plot the transects on the figure
ax1.plot(transect[:,0], transect[:,1], 'b.', markersize=4)
ax1.plot(transect[:,0], transect[:,1], 'b-', lw=2.5)
ax1.plot(transect[0,0], transect[0,1], 'rx', markersize=10)
ax1.text(transect[-1,0], transect[-1,1], str(counter), size=16,
bbox=dict(boxstyle="square", ec='k',fc='w'))
plt.draw()
# save as transects.pkl
with open(os.path.join(filepath, sitename + '_transects.pkl'), 'wb') as f:
pickle.dump(transects, f)
# save as transects.kml (for GIS)
kml = simplekml.Kml()
for key in transects.keys():
newline = kml.newlinestring(name=key)
newline.coords = transects[key]
newline.description = 'user-defined cross-shore transect'
kml.save(os.path.join(filepath, sitename + '_transects.kml'))
# save as transects.geojson (for GIS)
gdf = SDS_tools.transects_to_gdf(transects)
# set projection
gdf.crs = {'init':'epsg:'+str(settings['output_epsg'])}
# save as geojson
gdf.to_file(os.path.join(filepath, sitename + '_transects.geojson'), driver='GeoJSON', encoding='utf-8')
print('Transect locations saved in ' + filepath)
return transects
def load_transects_from_kml(filename):
"""
Reads transect coordinates from a KML file.
Arguments:
-----------
filename: str
contains the path and filename of the KML file to be loaded
Returns:
-----------
transects: dict
contains the X and Y coordinates of each transect.
"""
# set driver
drv = ogr.GetDriverByName('KML')
# read file
file = drv.Open(filename)
layer = file.GetLayer()
feature = layer.GetNextFeature()
# initialise transects dictionary
transects = dict([])
while feature:
f_dict = json.loads(feature.ExportToJson())
# raise an exception if the KML file contains other features than LineString geometries
if not f_dict['geometry']['type'] == 'LineString':
raise Exception('The KML file you provided does not contain LineString geometries. Modify your KML file and try again.')
# store the name of the feature and coordinates in the transects dictionary
else:
name = f_dict['properties']['Name']
coords = np.array(f_dict['geometry']['coordinates'])[:,:-1]
transects[name] = coords
feature = layer.GetNextFeature()
print('%d transects have been loaded' % len(transects.keys()))
return transects
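`SDS_tools.transects_to_gdf` and `SDS_tools.transects_from_geojson`, which replace the KML round-trip elsewhere in this commit, are not shown in this diff. A minimal sketch of the loading side, assuming the .geojson contains LineString features with a `name` property:
```
import geopandas as gpd
import numpy as np

def transects_from_geojson(filename):
    # one row per transect in the GeoDataFrame
    gdf = gpd.read_file(filename)
    transects = dict([])
    for _, row in gdf.iterrows():
        # store each LineString as an Nx2 array of X/Y coordinates
        transects[str(row['name'])] = np.array(row.geometry.coords)
    print('%d transects have been loaded' % len(transects.keys()))
    return transects
```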
def compute_intersection(output, transects, settings):
"""
Computes the intersection between the 2D mapped shorelines and the transects, to generate
@ -240,12 +189,18 @@ def compute_intersection(output, transects, settings):
d_origin = np.array([np.linalg.norm(sl[k,:] - p1) for k in range(len(sl))])
# find the shoreline points that are close to the transects and to the origin
# the distance to the origin is hard-coded here to 1 km
logic_close = np.logical_and(d_line <= along_dist, d_origin <= 1000)
idx_close = find_indices(logic_close, lambda e: e == True)
idx_dist = np.logical_and(d_line <= along_dist, d_origin <= 1000)
# find the shoreline points that are in the direction of the transect (within 90 degrees)
temp_sl = sl - np.array(transects[key][0,:])
phi_sl = np.array([np.arctan2(temp_sl[k,1], temp_sl[k,0]) for k in range(len(temp_sl))])
diff_angle = (phi - phi_sl)
idx_angle = np.abs(diff_angle) < np.pi/2
# combine the shoreline points that are close in distance and close in orientation
idx_close = np.where(np.logical_and(idx_dist,idx_angle))[0]
idx_points_all.append(idx_close)
# in case there are no shoreline points close to the transect
if not idx_close:
if len(idx_close) == 0:
chainage_mtx[i,j,:] = np.tile(np.nan,(1,6))
else:
# change of base to shore-normal coordinate system
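The hunk ends just before the change of base itself. As an illustration of that step (made-up numbers, not the toolkit's exact code): the shoreline points retained in idx_close are expressed in the transect's own frame, where the along-transect coordinate is the chainage.
```
import numpy as np

# transect endpoints (p1 = origin, p2 = seaward end) and two nearby
# shoreline points, all in projected coordinates (illustrative values)
p1 = np.array([0.0, 0.0])
p2 = np.array([100.0, 0.0])
sl_close = np.array([[40.0, 12.0], [55.0, -8.0]])

# unit vector along the transect and its 90-degree rotation
e_along = (p2 - p1) / np.linalg.norm(p2 - p1)
e_across = np.array([-e_along[1], e_along[0]])

# change of base: coordinates of each shoreline point in the transect frame
xy = sl_close - p1
chainage = xy @ e_along   # distance along the transect from its origin
offset = xy @ e_across    # signed distance off the transect line
print(chainage, offset)   # [40. 55.] [ 12. -8.]
```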

@ -0,0 +1,18 @@
channels:
- defaults
- conda-forge
dependencies:
- python=3.7
- numpy=1.16.3
- matplotlib=3.0.3
- earthengine-api=0.1.173
- gdal=2.3.3
- pandas=0.24.2
- geopandas=0.4.1
- pytz=2019.1
- scikit-image=0.15.0
- scikit-learn=0.20.3
- shapely=1.6.4
- scipy=1.2.1
- spyder=3.3.4
- notebook=5.7.8

@ -13,31 +13,37 @@ import pickle
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import SDS_download, SDS_preprocess, SDS_shoreline, SDS_tools, SDS_transects
# region of interest (longitude, latitude in WGS84), can be loaded from a .kml polygon
polygon = SDS_tools.coords_from_kml('NARRA_polygon.kml')
#polygon = [[[151.301454, -33.700754],
# [151.311453, -33.702075],
# [151.307237, -33.739761],
# [151.294220, -33.736329],
# [151.301454, -33.700754]]]
from coastsat import SDS_download, SDS_preprocess, SDS_shoreline, SDS_tools, SDS_transects
# region of interest (longitude, latitude in WGS84)
polygon = [[[151.301454, -33.700754],
[151.311453, -33.702075],
[151.307237, -33.739761],
[151.294220, -33.736329],
[151.301454, -33.700754]]]
# can also be loaded from a .kml polygon
#kml_polygon = os.path.join(os.getcwd(), 'examples', 'NARRA_polygon.kml')
#polygon = SDS_tools.polygon_from_kml(kml_polygon)
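The notebook warns that the ROI should stay under 100 km2. A rough sanity check of the polygon area before retrieving anything, sketched with shapely and an approximate degrees-to-km conversion:
```
import numpy as np
from shapely.geometry import Polygon

# rough check: 1 deg latitude ~ 111.32 km, 1 deg longitude ~ 111.32*cos(lat) km
poly = Polygon(polygon[0])
lat = poly.centroid.y
area_km2 = poly.area * 111.32**2 * np.cos(np.radians(lat))
print('approx. ROI area: %.1f km2' % area_km2)
```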
# date range
dates = ['2017-12-01', '2018-02-01']
dates = ['2017-12-01', '2018-01-01']
# satellite missions
sat_list = ['L8','S2']
sat_list = ['S2']
# name of the site
sitename = 'NARRA'
# filepath where data will be stored
filepath_data = os.path.join(os.getcwd(), 'data')
# put all the inputs into a dictionary
inputs = {
'polygon': polygon,
'dates': dates,
'sat_list': sat_list,
'sitename': sitename
'sitename': sitename,
'filepath': filepath_data
}
#%% 2. Retrieve images
@ -46,35 +52,33 @@ inputs = {
metadata = SDS_download.retrieve_images(inputs)
# if you have already downloaded the images, just load the metadata file
#filepath = os.path.join(os.getcwd(), 'data', sitename)
#with open(os.path.join(filepath, sitename + '_metadata' + '.pkl'), 'rb') as f:
# metadata = pickle.load(f)
metadata = SDS_download.get_metadata(inputs)
#%% 3. Batch shoreline detection
# settings for the shoreline extraction
settings = {
# general parameters:
'cloud_thresh': 0.2, # threshold on maximum cloud cover
'cloud_thresh': 0.5, # threshold on maximum cloud cover
'output_epsg': 28356, # epsg code of spatial reference system desired for the output
# quality control:
'check_detection': True, # if True, shows each shoreline detection to the user for validation
'save_figure': True, # if True, saves a figure showing the mapped shoreline for each image
# add the inputs defined previously
'inputs': inputs,
# [ONLY FOR ADVANCED USERS] shoreline detection parameters:
'min_beach_area': 4500, # minimum area (in metres^2) for an object to be labelled as a beach
'buffer_size': 150, # radius (in metres) of the buffer around sandy pixels considered in the shoreline detection
'min_length_sl': 200, # minimum length (in metres) of shoreline perimeter to be valid
'cloud_mask_issue': False, # switch this parameter to True if sand pixels are masked (in black) on many images
'dark_sand': False, # only switch to True if your site has dark sand (e.g. black sand beach)
}
# [OPTIONAL] preprocess images (cloud masking, pansharpening/down-sampling)
SDS_preprocess.save_jpg(metadata, settings)
# [OPTIONAL] create a reference shoreline (helps to identify outliers and false detections)
settings['reference_shoreline'] = SDS_preprocess.get_reference_sl_manual(metadata, settings)
settings['reference_shoreline'] = SDS_preprocess.get_reference_sl(metadata, settings)
# set the max distance (in meters) allowed from the reference shoreline for a detected shoreline to be valid
settings['max_dist_ref'] = 100
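To see what `max_dist_ref` does conceptually (an illustration of the filtering idea, not the toolbox's exact code), any candidate shoreline point further than this distance from the reference shoreline is discarded:
```
import numpy as np
from scipy.spatial import cKDTree

# reference shoreline (Nx2) and two candidate shoreline pixels (illustrative)
ref_sl = np.array([[0.0, 0.0], [50.0, 5.0], [100.0, 0.0]])
candidates = np.array([[48.0, 10.0], [52.0, 300.0]])

# distance from each candidate to the nearest reference vertex
dist, _ = cKDTree(ref_sl).query(candidates)
valid = candidates[dist <= settings['max_dist_ref']]
print(valid)  # the point ~300 m away is discarded
```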
@ -99,14 +103,12 @@ fig.set_size_inches([15.76, 8.52])
#%% 4. Shoreline analysis
# if you have already mapped the shorelines, load the output.pkl file
filepath = os.path.join(os.getcwd(), 'data', sitename)
filepath = os.path.join(inputs['filepath'], sitename)
with open(os.path.join(filepath, sitename + '_output' + '.pkl'), 'rb') as f:
output = pickle.load(f)
# now we have to define cross-shore transects over which to quantify the shoreline changes
# each transect is defined by two points, its origin and a second point that defines its orientation
# the parameter transect length determines how far from the origin the transect will span
settings['transect_length'] = 500
# there are 3 options to create the transects:
# - option 1: draw the shore-normal transects along the beach
@ -116,9 +118,9 @@ settings['transect_length'] = 500
# option 1: draw origin of transect first and then a second point to define the orientation
transects = SDS_transects.draw_transects(output, settings)
# option 2: load the transects from a KML file
#kml_file = 'NARRA_transects.kml'
#transects = SDS_transects.load_transects_from_kml(kml_file)
# option 2: load the transects from a .geojson file
#geojson_file = os.path.join(os.getcwd(), 'examples', 'NARRA_transects.geojson')
#transects = SDS_tools.transects_from_geojson(geojson_file)
# option 3: create the transects by manually providing the coordinates of two points
#transects = dict([])
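The option 3 snippet is truncated in this hunk; a hypothetical completion, where each transect is a 2x2 array of [origin, end] coordinates in the `output_epsg` system (the coordinates below are placeholders):
```
#transects = dict([])
#transects['1'] = np.array([[342836, 6269215], [343315, 6269071]])
#transects['2'] = np.array([[342482, 6268466], [342958, 6268310]])
```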
@ -136,11 +138,11 @@ fig = plt.figure()
gs = gridspec.GridSpec(len(cross_distance),1)
gs.update(left=0.05, right=0.95, bottom=0.05, top=0.95, hspace=0.05)
for i,key in enumerate(cross_distance.keys()):
if np.all(np.isnan(cross_distance[key])):
continue
ax = fig.add_subplot(gs[i,0])
ax.grid(linestyle=':', color='0.5')
ax.set_ylim([-50,50])
if i != len(cross_distance.keys())-1:
ax.set_xticks([])
ax.plot(output['dates'], cross_distance[key] - np.nanmedian(cross_distance[key]), '-^', markersize=6)
ax.set_ylabel('distance [m]', fontsize=12)
ax.text(0.5,0.95,'Transect ' + key, bbox=dict(boxstyle="square", ec='k',fc='w'), ha='center',

@ -8,9 +8,10 @@
"\n",
"This software is described in *Vos K., Splinter K.D., Harley M.D., Simmons J.A., Turner I.L. (submitted). CoastSat: a Google Earth Engine-enabled software to extract shorelines from publicly available satellite imagery, Environmental Modelling and Software*. It enables the users to extract time-series of shoreline change over the last 30+ years at their site of interest.\n",
"\n",
"There are two main steps:\n",
"There are three main steps:\n",
"- retrieval of the satellite images of the region of interest from Google Earth Engine\n",
"- extraction of the shorelines from the images using a sub-pixel resolution technique\n",
"- intersection of the 2D shorelines with shore-normal transects\n",
"\n",
"## 1. Initial settings\n",
"\n",
@ -29,7 +30,7 @@
"import warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"import matplotlib.pyplot as plt\n",
"import SDS_download, SDS_preprocess, SDS_shoreline, SDS_tools, SDS_transects"
"from coastsat import SDS_download, SDS_preprocess, SDS_shoreline, SDS_tools, SDS_transects"
]
},
{
@ -38,7 +39,9 @@
"source": [
"## 2. Retrieval of the images from GEE\n",
"\n",
"Define the region of interest (`polygon`), the date range (`dates`) and the satellite missions (`sat_list`) from which you wish to retrieve the satellite images. The images will be cropped on the Google Earth Engine server and only the region of interest will be downloaded as a .TIF file. The files will be organised in the local directory under *.\\data\\sitename*."
"Define the region of interest (`polygon`), the date range (`dates`) and the satellite missions (`sat_list`) from which you wish to retrieve the satellite images. The images will be cropped on the Google Earth Engine server and only the region of interest will be downloaded as a .tif file. The files will stored in the directory defined in `filepath`.\n",
"\n",
"Make sure the area of your ROI is smaller than 100 km2 (if larger split it into smaller ROIs)."
]
},
{
@ -59,8 +62,10 @@
"sat_list = ['S2']\n",
"# name of the site\n",
"sitename = 'NARRA'\n",
"# directory where the data will be stored\n",
"filepath = os.path.join(os.getcwd(), 'data')\n",
"# put all the inputs into a dictionnary\n",
"inputs = {'polygon': polygon, 'dates': dates, 'sat_list': sat_list, 'sitename': sitename}"
"inputs = {'polygon': polygon, 'dates': dates, 'sat_list': sat_list, 'sitename': sitename, 'filepath':filepath}"
]
},
{
@ -92,9 +97,7 @@
"metadata": {},
"outputs": [],
"source": [
"filepath = os.path.join(os.getcwd(), 'data', sitename)\n",
"with open(os.path.join(filepath, sitename + '_metadata' + '.pkl'), 'rb') as f:\n",
" metadata = pickle.load(f) "
"metadata = SDS_download.get_metadata(inputs) "
]
},
{
@ -103,7 +106,11 @@
"source": [
"## 3. Shoreline extraction\n",
"\n",
"Maps the position of the shoreline on the satellite images. The user can define the cloud threhold (`cloud_thresh`) and select the spatial reference system in which he would like to output the coordinates of the mapped shorelines (`output_epsg`). See http://spatialreference.org/ to find the EPSG number corresponding to your local coordinate system. To quality control each shoreline detection and manually validate the mapped shorelines, the user has the option to set the parameter `check_detection` to **True**. The other parameters are for advanced users only and are described in the last section of the README."
"This section maps the position of the shoreline on the satellite images. The user can define the cloud threhold (`cloud_thresh`) and select the spatial reference system in which to output the coordinates of the mapped shorelines (`output_epsg`). See http://spatialreference.org/ to find the EPSG number corresponding to your local coordinate system. Make sure that your are using cartesian coordinates and not spherical coordinates (lat,lon) like WGS84. \n",
"\n",
"To quality control each shoreline detection and manually validate the mapped shorelines, the user has the option to set the parameter `check_detection` to **True**. To save a figure for each mapped shoreline set `save_figure` to **True**. \n",
"\n",
"The other parameters are for advanced users only and are described in the README."
]
},
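As a quick check that the chosen `output_epsg` is indeed cartesian, a single (lon, lat) point can be projected with pyproj's legacy transform API (matching the `{'init': 'epsg:...'}` style used elsewhere in this commit); EPSG:28356 is GDA94 / MGA zone 56:
```
from pyproj import Proj, transform

# project one WGS84 point into the cartesian output system (metres)
x, y = transform(Proj(init='epsg:4326'), Proj(init='epsg:28356'),
                 151.30, -33.70)
print(x, y)
```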
{
@ -114,19 +121,19 @@
"source": [
"settings = { \n",
" # general parameters:\n",
" 'cloud_thresh': 0.2, # threshold on maximum cloud cover\n",
" 'cloud_thresh': 0.5, # threshold on maximum cloud cover\n",
" 'output_epsg': 28356, # epsg code of spatial reference system desired for the output \n",
" # quality control:\n",
" 'check_detection': True, # if True, shows each shoreline detection to the user for validation\n",
"\n",
" 'save_figure': True, # if True, saves a figure showing the mapped shoreline for each image\n",
" # add the inputs defined previously\n",
" 'inputs': inputs,\n",
" \n",
" # [ONLY FOR ADVANCED USERS] shoreline detection parameters:\n",
" 'min_beach_area': 4500, # minimum area (in metres^2) for an object to be labelled as a beach\n",
" 'buffer_size': 150, # radius (in metres) of the buffer around sandy pixels considered in the shoreline detection\n",
" 'min_length_sl': 200, # minimum length (in metres) of shoreline perimeter to be valid\n",
" 'cloud_mask_issue': False, # switch this parameter to True if sand pixels are masked (in black) on many images \n",
" 'cloud_mask_issue': False, # switch this parameter to True if sand pixels are masked (in black) on many images \n",
" 'dark_sand': False, # only switch to True if your site has dark sand (e.g. black sand beach)\n",
"}"
]
},
@ -152,7 +159,7 @@
"metadata": {},
"source": [
"### [OPTIONAL] Digitize a reference shoreline\n",
"Creates a reference shoreline which helps to identify outliers and false detections. The reference shoreline is manually digitised by the user on one of the images. The parameter `max_dist_ref` defines the maximum distance from the reference shoreline (in metres) at which a valid detected shoreline can be. If you think that you shoreline will move more than the default value of 100 m, please change this parameter to an appropriate distance."
"Creates a reference shoreline which helps to identify outliers and false detections. The reference shoreline is manually digitised by the user on one of the images. The parameter `max_dist_ref` defines the maximum distance from the reference shoreline (in metres) at which a valid detected shoreline can be. If you think that the default value of 100 m will not capture the full shoreline variability of your site, increase this value to an appropriate distance."
]
},
{
@ -162,7 +169,7 @@
"outputs": [],
"source": [
"%matplotlib qt\n",
"settings['reference_shoreline'] = SDS_preprocess.get_reference_sl_manual(metadata, settings)\n",
"settings['reference_shoreline'] = SDS_preprocess.get_reference_sl(metadata, settings)\n",
"settings['max_dist_ref'] = 100 # max distance (in meters) allowed from the reference shoreline"
]
},
@ -171,7 +178,7 @@
"metadata": {},
"source": [
"### Batch shoreline detection\n",
"Extracts the shorelines from the images. The mapped shorelines are saved into `output.pkl` (under *./data/sitename*) which contains the shoreline coordinates for each date in the spatial reference system specified by the user in `'output_epsg'`."
"Extracts the 2D shorelines from the images in the spatial reference system specified by the user in `'output_epsg'`. The mapped shorelines are saved into `output.pkl` (under *./data/sitename*) and `output.geojson` to use in GIS softwares."
]
},
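Since the shorelines are now also written to a .geojson, they can be inspected directly with geopandas (a sketch; the filename and *./data/NARRA* layout are assumed defaults):
```
import os
import geopandas as gpd

# hypothetical default path for the GeoJSON written next to output.pkl
gdf = gpd.read_file(os.path.join(os.getcwd(), 'data', 'NARRA', 'NARRA_output.geojson'))
print(gdf.head())
```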
{
@ -190,7 +197,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Simple plot of the mapped shorelines"
"Simple plot of the mapped shorelines. The coordinates are stored in the output dictionnary together with the exact dates in UTC time, the georeferencing accuracy and the cloud cover."
]
},
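A short way to inspect what the output dictionary holds per image (the key names below are assumed from the description: dates in UTC, georeferencing accuracy, cloud cover):
```
# iterate over the mapped shorelines and their metadata (assumed keys)
for date, sl, acc, cc in zip(output['dates'], output['shorelines'],
                             output['geoaccuracy'], output['cloud_cover']):
    print(date, len(sl), 'points, geo-accuracy:', acc, ', cloud cover:', cc)
```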
{
@ -227,7 +234,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**If you have already mapped the shorelines**, just load the output file by only running the section below"
"**If you have already mapped the shorelines**, just load the output file (`output.pkl`) by running the section below"
]
},
{
@ -236,27 +243,11 @@
"metadata": {},
"outputs": [],
"source": [
"filepath = os.path.join(os.getcwd(), 'data', sitename)\n",
"filepath = os.path.join(inputs['filepath'], sitename)\n",
"with open(os.path.join(filepath, sitename + '_output' + '.pkl'), 'rb') as f:\n",
" output = pickle.load(f) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The shore-normal transects are defined by two points, the origin of the transect and a second point that defines its orientaion. The parameter *transect_length* determines how far from the origin the transects span."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"settings['transect_length'] = 500 # defines the length of the transects in metres"
]
},
{
"cell_type": "markdown",
"metadata": {},
@ -280,7 +271,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Option 2**: the user can load the transect coordinates (make sure the spatial reference system is the same as defined by *output_epsg* previously) from a .kml file by calling:"
"**Option 2**: the user can load the transect coordinates (make sure the spatial reference system is the same as defined previously by the parameter *output_epsg*) from a .geojson file by calling:"
]
},
{
@ -289,8 +280,8 @@
"metadata": {},
"outputs": [],
"source": [
"kml_file = 'NARRA_transects.kml'\n",
"transects = SDS_transects.load_transects_from_kml(kml_file)"
"geojson_file = os.path.join(os.getcwd(), 'examples', 'NARRA_transects.geojson')\n",
"transects = SDS_tools.transects_from_geojson(geojson_file)"
]
},
{
@ -349,14 +340,14 @@
"gs = gridspec.GridSpec(len(cross_distance),1)\n",
"gs.update(left=0.05, right=0.95, bottom=0.05, top=0.95, hspace=0.05)\n",
"for i,key in enumerate(cross_distance.keys()):\n",
" if np.all(np.isnan(cross_distance[key])):\n",
" continue\n",
" ax = fig.add_subplot(gs[i,0])\n",
" ax.grid(linestyle=':', color='0.5')\n",
" ax.set_ylim([-50,50])\n",
" if not i == len(cross_distance.keys()):\n",
" ax.set_xticks = []\n",
" ax.plot(output['dates'], cross_distance[key]- np.nanmedian(cross_distance[key]), '-^', markersize=6)\n",
" ax.set_ylabel('distance [m]', fontsize=12)\n",
" ax.text(0.5,0.95,'Transect ' + key, bbox=dict(boxstyle=\"square\", ec='k',fc='w'), ha='center',\n",
" ax.text(0.5,0.95, key, bbox=dict(boxstyle=\"square\", ec='k',fc='w'), ha='center',\n",
" va='top', transform=ax.transAxes, fontsize=14)\n",
"mng = plt.get_current_fig_manager() \n",
"mng.window.showMaximized() \n",
@ -380,7 +371,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.7.3"
},
"varInspector": {
"cols": {


@ -1,215 +0,0 @@
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
@EXPLICIT
https://conda.anaconda.org/conda-forge/linux-64/libgfortran-3.0.0-1.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/blas-1.0-mkl.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/ca-certificates-2018.11.29-ha4d7672_0.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/intel-openmp-2019.1-144.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-7.3.0-hdf63c60_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libgfortran-ng-7.2.0-hdf63c60_3.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libssh2-1.8.0-1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-7.3.0-hdf63c60_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pandoc-2.6-1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/poppler-data-0.4.9-1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.6-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/expat-2.2.5-hf484d3e_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/freexl-1.0.5-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/geos-3.7.1-hf484d3e_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/giflib-5.1.4-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/graphite2-1.3.13-hf484d3e_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/icu-58.2-hf484d3e_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/jpeg-9c-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/json-c-0.13.1-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libffi-3.2.1-hf484d3e_1005.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libiconv-1.15-h14c3975_1004.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libsodium-1.0.16-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libuuid-2.32.1-h14c3975_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/mkl-2019.1-144.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.1-hf484d3e_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/openblas-0.3.3-h9ac9557_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/openssl-1.0.2p-h14c3975_1002.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/pcre-8.42-h439df22_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pixman-0.34.0-h14c3975_1003.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/proj4-5.2.0-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pthread-stubs-0.4-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/tzcode-2018g-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-kbproto-1.0.7-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libice-1.0.9-h14c3975_1004.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libxau-1.0.9-h14c3975_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libxdmcp-1.1.2-h14c3975_1007.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-renderproto-0.11.1-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-xextproto-7.3.0-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-xproto-7.0.31-h14c3975_1007.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xz-5.2.4-h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/zlib-1.2.11-h14c3975_1004.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/boost-cpp-1.68.0-h11c811c_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/gettext-0.19.8.1-h9745a5d_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/hdf4-4.2.13-h9a582f1_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.10.4-nompi_h11e915b_1105.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libedit-3.1.20170329-hf8c457e_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libpng-1.6.36-h84994c4_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libtiff-4.0.10-h648cc4a_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libxcb-1.13-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libxml2-2.9.8-h143f9aa_1005.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/readline-7.0-hf8c457e_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.9-h84994c4_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xerces-c-3.2.2-hac72e42_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libsm-1.2.3-h4937e3b_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/zeromq-4.2.5-hf484d3e_1006.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/freetype-2.9.1-h94bbf69_1005.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/geotiff-1.4.3-h1105359_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/glib-2.56.2-had28632_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/kealib-1.4.10-he7154bc_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/krb5-1.16.3-hc83ff2d_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libkml-1.3.0-h328b03d_1009.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/openjpeg-2.3.0-hf38bd82_1003.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/sqlite-3.26.0-h67949de_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libx11-1.6.7-h14c3975_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/dbus-1.13.2-h714fa37_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/fontconfig-2.13.1-h2176d3f_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/gstreamer-1.14.0-hb453b48_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libcurl-7.64.0-h01ee5af_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libpq-10.6-h13b8bad_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libspatialite-4.3.0a-hb5ec416_1026.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/python-3.6.7-hd21baee_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libxext-1.3.3-h14c3975_1004.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libxrender-0.9.10-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/xorg-libxt-1.1.5-h14c3975_1002.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/alabaster-0.7.12-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/asn1crypto-0.24.0-py36_1003.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/attrs-18.2.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/backcall-0.1.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/cachetools-2.1.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/cairo-1.14.12-h80bd089_1005.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/certifi-2018.11.29-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/chardet-3.0.4-py36_1003.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/cloudpickle-0.7.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/curl-7.64.0-h646f8bb_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/dask-core-1.1.1-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/decorator-4.3.2-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/docutils-0.14-py36_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/entrypoints-0.3-py36_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/gst-plugins-base-1.14.0-hbbd80ab_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/httplib2-0.12.0-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/idna-2.8-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/imagesize-1.1.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/ipython_genutils-0.2.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/jeepney-0.4-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/kiwisolver-1.0.1-py36h6bb024c_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/lazy-object-proxy-1.3.1-py36h14c3975_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/markupsafe-1.1.0-py36h14c3975_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/mccabe-0.6.1-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/mistune-0.8.4-py36h14c3975_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/numpy-base-1.15.4-py36hde5b4d6_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/olefile-0.46-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pandocfilters-1.4.2-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/parso-0.3.3-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pickleshare-0.7.5-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/postgresql-10.6-h66cca7a_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/prometheus_client-0.5.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/psutil-5.5.0-py36h14c3975_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/ptyprocess-0.6.0-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyasn1-0.4.4-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pycodestyle-2.5.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pycparser-2.19-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyflakes-2.1.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyparsing-2.3.1-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pysocks-1.6.8-py36_1002.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pytz-2018.9-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pyzmq-17.1.2-py36h6afc9c9_1001.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/qtpy-1.6.0-pyh8a2030e_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/rope-0.10.7-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/send2trash-1.5.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/simplejson-3.16.1-py36h470a237_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/simplekml-1.3.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/sip-4.18.1-py36hf484d3e_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/six-1.12.0-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/snowballstemmer-1.2.1-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/sphinxcontrib-websupport-1.1.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/testpath-0.4.2-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/toolz-0.9.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/tornado-5.1.1-py36h14c3975_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/typed-ast-1.3.1-py36h14c3975_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/wcwidth-0.1.7-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/webencodings-0.5.1-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/wrapt-1.11.1-py36h14c3975_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/wurlitzer-1.0.2-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/babel-2.6.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/cffi-1.11.5-py36h9745a5d_1001.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/cycler-0.10.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/cytoolz-0.9.0.1-py36h14c3975_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/gobject-introspection-1.56.1-py36h9e29830_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/harfbuzz-1.9.0-he243708_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/jedi-0.13.2-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libdap4-3.19.1-hd48c02d_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libnetcdf-4.6.2-hbdf4f91_1001.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/packaging-19.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pexpect-4.6.0-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pillow-5.4.1-py36h00a061d_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/poppler-0.67.0-h2fc8fa2_1002.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyasn1-modules-0.2.3-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pyrsistent-0.14.10-py36h14c3975_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/python-dateutil-2.8.0-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/qt-5.6.3-h8bf5577_3.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/qtawesome-0.5.6-pyh8a2030e_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/rsa-3.4.2-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/setuptools-40.8.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/terminado-0.8.1-py36_1001.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/traitlets-4.3.2-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/uritemplate-3.0.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/astroid-2.1.0-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/atk-2.25.90-hf2eb9ee_1001.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/bleach-3.1.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/cryptography-2.5-py36hb7f436b_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/gdk-pixbuf-2.36.12-h4f1c04b_1001.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-auth-1.6.2-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/isort-4.3.4-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/jinja2-2.10-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/jsonschema-3.0.0a3-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/jupyter_core-4.4.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/libgdal-2.4.0-h982c1cc_1002.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/networkx-2.2-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/oauth2client-4.1.3-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pango-1.40.14-hf0c64fd_1003.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pygments-2.3.1-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pyqt-5.6.0-py36h13b7fb3_1008.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/wheel-0.32.3-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-auth-httplib2-0.0.3-py_2.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/gtk2-2.24.31-h5baeb44_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/jupyter_client-5.2.4-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/nbformat-4.4.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pip-19.0.2-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/prompt_toolkit-2.0.8-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pylint-2.2.2-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pyopenssl-19.0.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/secretstorage-3.1.1-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-api-python-client-1.7.8-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/ipython-7.2.0-py36h24bf2e0_1000.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/keyring-17.1.1-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/nbconvert-5.3.1-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/urllib3-1.24.1-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/earthengine-api-0.1.167-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/ipykernel-5.1.0-py36h24bf2e0_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/requests-2.21.0-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/jupyter_console-6.0.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/notebook-5.7.4-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/qtconsole-4.4.3-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/sphinx-1.8.4-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/spyder-kernels-0.4.2-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/numpydoc-0.8.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/widgetsnbextension-3.4.2-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/ipywidgets-7.4.2-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/spyder-3.3.3-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/jupyter-1.0.0-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/gdal-2.4.0-py36h1c6dbfb_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/imageio-2.5.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/matplotlib-3.0.2-py36_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/matplotlib-base-3.0.2-py36h167e16e_1002.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/mkl_fft-1.0.10-py36h14c3975_1.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/mkl_random-1.0.2-py36h637b7d7_2.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/numpy-1.15.4-py36h7e9f1db_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/pywavelets-1.0.1-py36h3010b51_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/scipy-1.2.0-py36h7c811a0_0.tar.bz2
https://conda.anaconda.org/conda-forge/linux-64/scikit-image-0.14.2-py36hf484d3e_1.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/scikit-learn-0.20.2-py36hd81dba3_0.tar.bz2

@ -1,184 +0,0 @@
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: osx-64
@EXPLICIT
https://repo.anaconda.com/pkgs/main/osx-64/blas-1.0-mkl.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/bzip2-1.0.6-1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/ca-certificates-2018.03.07-0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/expat-2.2.5-hfc679d8_2.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/freexl-1.0.5-h470a237_2.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/geos-3.7.0-hfc679d8_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/gettext-0.19.8.1-h1f1d5ed_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/giflib-5.1.4-h470a237_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/icu-58.2-hfc679d8_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/intel-openmp-2019.1-144.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/jpeg-9c-h470a237_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/json-c-0.13.1-h470a237_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/libcxxabi-4.0.1-hcfea43d_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/libgfortran-3.0.1-h93005f0_2.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libiconv-1.15-h470a237_3.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/libsodium-1.0.16-h3efe00b_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pandoc-2.2.3.2-0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/pcre-8.41-hfc679d8_3.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/pixman-0.34.0-h470a237_3.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/poppler-data-0.4.9-0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/proj4-5.2.0-h470a237_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/xz-5.2.4-h1de35cc_4.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/zlib-1.2.11-h1de35cc_3.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/boost-cpp-1.68.0-h3a22d5f_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/hdf4-4.2.13-h951d187_2.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/hdf5-1.10.4-nompi_h5598ddc_1003.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/libcxx-4.0.1-hcfea43d_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/libpng-1.6.35-ha441bb4_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libtiff-4.0.10-he6b73bb_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libxml2-2.9.8-h422b904_5.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/mkl-2018.0.3-1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/openssl-1.0.2p-h1de35cc_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/tk-8.6.8-ha441bb4_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/xerces-c-3.2.2-h5d6a6da_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/freetype-2.9.1-hb4e5f40_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/geotiff-1.4.2-h9c44c65_7.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/kealib-1.4.10-heffcb4b_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/libffi-3.2.1-h475c297_4.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libkml-1.3.0-he469717_9.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libssh2-1.8.0-h5b517e9_3.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/ncurses-6.1-h0a44026_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/openjpeg-2.3.0-h316dc23_3.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/zeromq-4.2.5-h0a44026_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/fontconfig-2.13.1-hce039c3_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/glib-2.56.2-h464dc38_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/libedit-3.1.20170329-hb402a30_2.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/readline-7.0-h1de35cc_5.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/cairo-1.14.12-h276e583_5.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/dbus-1.13.2-h760590f_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/krb5-1.16.2-hbb41f41_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libpq-10.5-hf16a0db_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/sqlite-3.26.0-hb1c47c0_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libcurl-7.62.0-hbdb9355_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libspatialite-4.3.0a-h201a3a7_25.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/postgresql-10.5-ha408888_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/python-3.6.6-h5001a0f_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/qt-5.9.6-h45cd832_2.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/alabaster-0.7.12-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/appnope-0.1.0-py36hf537a9a_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/asn1crypto-0.24.0-py36_1003.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/backcall-0.1.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/cachetools-2.1.0-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/certifi-2018.10.15-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/chardet-3.0.4-py36_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/cloudpickle-0.6.1-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/cryptography-vectors-2.3.1-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/curl-7.62.0-h74213dd_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/dask-core-1.0.0-py_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/decorator-4.3.0-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/docutils-0.14-py36hbfde631_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/entrypoints-0.2.3-py36_2.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/httplib2-0.12.0-py36_1000.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/idna-2.8-py36_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/imagesize-1.1.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/ipython_genutils-0.2.0-py36h241746c_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/kiwisolver-1.0.1-py36h0a44026_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/lazy-object-proxy-1.3.1-py36h1de35cc_2.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/markupsafe-1.1.0-py36h1de35cc_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/mccabe-0.6.1-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/mistune-0.8.4-py36h1de35cc_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/numpy-base-1.15.4-py36h8a80b8c_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/olefile-0.46-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pandocfilters-1.4.2-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/parso-0.3.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pickleshare-0.7.5-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/prometheus_client-0.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/psutil-5.4.8-py36h1de35cc_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/ptyprocess-0.6.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyasn1-0.4.4-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pycodestyle-2.4.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pycparser-2.19-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pyflakes-2.0.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pyparsing-2.3.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/python.app-2-py36_9.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pytz-2018.7-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pyzmq-17.1.2-py36h1de35cc_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/qtpy-1.5.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/free/osx-64/requests-2.14.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/rope-0.11.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/send2trash-1.5.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/simplejson-3.16.1-py36h470a237_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/simplekml-1.3.0-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/sip-4.19.8-py36h0a44026_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/six-1.11.0-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/snowballstemmer-1.2.1-py36h6c7b616_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/sphinxcontrib-1.0-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/testpath-0.4.2-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/toolz-0.9.0-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/tornado-5.1.1-py36h1de35cc_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/typed-ast-1.1.0-py36h1de35cc_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/wcwidth-0.1.7-py36h8c6ec74_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/webencodings-0.5.1-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/wrapt-1.10.11-py36h1de35cc_2.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/wurlitzer-1.0.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/astroid-2.1.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/babel-2.6.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/cffi-1.11.5-py36h5e8e0c9_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/cycler-0.10.0-py36hfc81398_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/cytoolz-0.9.0.1-py36h470a237_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/jedi-0.13.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/keyring-16.1.1-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libdap4-3.19.1-h18059cb_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libnetcdf-4.6.2-h45f6246_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/packaging-18.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pexpect-4.6.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/pillow-5.3.0-py36hc736899_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/poppler-0.67.0-hdf8a1b3_2.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyasn1-modules-0.2.1-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pyqt-5.9.2-py36h655552a_2.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/python-dateutil-2.7.5-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/qtawesome-0.5.3-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/rsa-3.4.2-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/setuptools-40.6.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/sphinxcontrib-websupport-1.1.0-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/terminado-0.8.1-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/traitlets-4.3.2-py36h65bd3ce_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/uritemplate-3.0.0-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/bleach-3.0.2-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/cryptography-2.3.1-py36hdffb7b8_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-auth-1.6.1-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/isort-4.3.4-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/jinja2-2.10-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/jsonschema-2.6.0-py36hb385e00_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/jupyter_core-4.4.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/libgdal-2.3.2-h42efa9e_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/networkx-2.2-py_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/oauth2client-4.1.2-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pygments-2.2.0-py36h240cd3f_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/wheel-0.32.3-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-auth-httplib2-0.0.3-py_2.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/jupyter_client-5.2.3-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/nbformat-4.4.0-py36h827af21_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pip-18.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/prompt_toolkit-2.0.7-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/pylint-2.2.2-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/pyopenssl-18.0.0-py36_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/sphinx-1.8.2-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-api-python-client-1.7.5-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/ipython-7.2.0-py36h39e3cac_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/nbconvert-5.3.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/numpydoc-0.8.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/earthengine-api-0.1.152-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/ipykernel-5.1.0-py36h39e3cac_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/jupyter_console-6.0.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/notebook-5.7.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/qtconsole-4.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/spyder-kernels-0.3.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/widgetsnbextension-3.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/ipywidgets-7.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/jupyter-1.0.0-py36_7.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/gdal-2.3.2-py36hfc77a4a_1.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/imageio-2.4.1-py36_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/matplotlib-3.0.1-py36h54f8f79_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/mkl_fft-1.0.6-py36hb8a8100_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/mkl_random-1.0.1-py36h5d10147_1.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/numpy-1.15.4-py36h6a91979_0.tar.bz2
https://conda.anaconda.org/conda-forge/osx-64/pywavelets-1.0.1-py36h7eb728f_0.tar.bz2
https://repo.anaconda.com/pkgs/main/osx-64/scipy-1.1.0-py36h28f7352_1.tar.bz2
https://conda.anaconda.org/anaconda/osx-64/scikit-image-0.14.0-py36h0a44026_1.tar.bz2
https://conda.anaconda.org/anaconda/osx-64/scikit-learn-0.20.1-py36h4f467ca_0.tar.bz2

@ -1,174 +0,0 @@
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
@EXPLICIT
https://repo.anaconda.com/pkgs/main/win-64/blas-1.0-mkl.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/ca-certificates-2018.03.07-0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/icc_rt-2017.0.4-h97af966_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/intel-openmp-2019.1-144.tar.bz2
https://repo.anaconda.com/pkgs/msys2/win-64/msys2-conda-epoch-20160418-1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pandoc-2.2.3.2-0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/vs2015_runtime-14.15.26706-h3a45250_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/winpty-0.4.3-4.tar.bz2
https://repo.anaconda.com/pkgs/msys2/win-64/m2w64-gmp-6.1.0-2.tar.bz2
https://repo.anaconda.com/pkgs/msys2/win-64/m2w64-libwinpthread-git-5.0.0.4634.697f757-2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/mkl-2018.0.3-1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/vc-14.1-h0510ff6_4.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/expat-2.2.5-he025d50_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/geos-3.6.2-h9ef7328_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/icu-58.2-ha66f8fd_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/jpeg-9b-hb83a4c4_2.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/libiconv-1.15-h1df5818_7.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libsodium-1.0.16-h9d3ae62_0.tar.bz2
https://repo.anaconda.com/pkgs/msys2/win-64/m2w64-gcc-libs-core-5.3.0-7.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/openssl-1.0.2p-hfa6e2cd_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/proj4-4.9.3-hcf24537_7.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/python-3.6.7-h33f27b4_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/sqlite-3.25.3-he774522_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/tk-8.6.8-hfa6e2cd_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/xerces-c-3.2.2-ha925a31_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/xz-5.2.4-h2fa13f4_4.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/zlib-1.2.11-h62dcd97_3.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/alabaster-0.7.12-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/asn1crypto-0.24.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/backcall-0.1.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/cachetools-2.1.0-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/certifi-2018.10.15-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/chardet-3.0.4-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/cloudpickle-0.6.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/colorama-0.4.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/dask-core-1.0.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/decorator-4.3.0-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/docutils-0.14-py36h6012d8f_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/entrypoints-0.2.3-py36_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/freexl-1.0.5-hfa6e2cd_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/hdf4-4.2.13-h712560f_2.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/hdf5-1.10.1-h98b8871_1.tar.bz2
https://conda.anaconda.org/conda-forge/win-64/httplib2-0.12.0-py36_1000.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/idna-2.7-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/imagesize-1.1.0-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/ipython_genutils-0.2.0-py36h3c5d0ee_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/kiwisolver-1.0.1-py36h6538335_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/krb5-1.16.1-h038dc86_6.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/lazy-object-proxy-1.3.1-py36hfa6e2cd_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libboost-1.67.0-hd9e427e_4.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libpng-1.6.35-h2a8f88b_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libssh2-1.8.0-hd619d38_4.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libtiff-4.0.9-h36446d0_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libxml2-2.9.8-hadb2253_1.tar.bz2
https://repo.anaconda.com/pkgs/msys2/win-64/m2w64-gcc-libgfortran-5.3.0-6.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/markupsafe-1.1.0-py36he774522_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/mccabe-0.6.1-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/mistune-0.8.4-py36he774522_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/numpy-base-1.15.4-py36h8128ebf_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/olefile-0.46-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pandocfilters-1.4.2-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/parso-0.3.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pickleshare-0.7.5-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/prometheus_client-0.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/psutil-5.4.8-py36he774522_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyasn1-0.4.4-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pycodestyle-2.4.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pycparser-2.19-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pyflakes-2.0.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pyparsing-2.3.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pytz-2018.7-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pywin32-223-py36hfa6e2cd_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/qtpy-1.5.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/rope-0.11.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/send2trash-1.5.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/win-64/simplejson-3.16.1-py36hfa6e2cd_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/simplekml-1.3.0-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/sip-4.19.8-py36h6538335_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/six-1.11.0-py36_1.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/snowballstemmer-1.2.1-py36h763602f_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/sphinxcontrib-1.0-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/testpath-0.4.2-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/toolz-0.9.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/tornado-5.1.1-py36hfa6e2cd_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/typed-ast-1.1.0-py36hfa6e2cd_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/wcwidth-0.1.7-py36h3d5aa90_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/webencodings-0.5.1-py36_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/win_inet_pton-1.0.1-py36_1.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/wincertstore-0.2-py36h7fe50ca_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/wrapt-1.10.11-py36hfa6e2cd_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/zeromq-4.2.5-he025d50_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/astroid-2.1.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/babel-2.6.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/cffi-1.11.5-py36h74b6da3_1.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/cycler-0.10.0-py36h009560c_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/freetype-2.9.1-ha9979f8_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/jedi-0.13.1-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/kealib-1.4.7-ha5b336b_5.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/keyring-16.1.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libcurl-7.62.0-h2a8f88b_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libkml-1.3.0-he5f2a48_4.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/libpq-10.5-h5fe2233_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/libspatialite-4.3.0a-h383548d_18.tar.bz2
https://repo.anaconda.com/pkgs/msys2/win-64/m2w64-gcc-libs-5.3.0-7.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/openjpeg-2.3.0-h5ec785f_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/packaging-18.0-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/pyasn1-modules-0.2.1-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pysocks-1.6.8-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/python-dateutil-2.7.5-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pyzmq-17.1.2-py36hfa6e2cd_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/qt-5.9.6-vc14h1e9a669_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/qtawesome-0.5.3-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/rsa-3.4.2-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/setuptools-40.6.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/sphinxcontrib-websupport-1.1.0-py36_1.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/traitlets-4.3.2-py36h096827d_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/uritemplate-3.0.0-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/bleach-3.0.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/cryptography-2.3.1-py36h74b6da3_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/curl-7.62.0-h2a8f88b_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-auth-1.6.1-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/isort-4.3.4-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/jinja2-2.10-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/jsonschema-2.6.0-py36h7636477_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/jupyter_core-4.4.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/networkx-2.2-py36_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/oauth2client-4.1.2-py_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pillow-5.3.0-py36hdc69c19_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/pygments-2.2.0-py36hb010967_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pyqt-5.9.2-py36h6538335_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pywinpty-0.5.4-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/wheel-0.32.3-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-auth-httplib2-0.0.3-py_2.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/jupyter_client-5.2.3-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/libnetcdf-4.4.1.1-h825a56a_8.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/nbformat-4.4.0-py36h3a5bc1b_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pip-18.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/prompt_toolkit-2.0.7-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pylint-2.2.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pyopenssl-18.0.0-py36_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/terminado-0.8.1-py36_1.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/google-api-python-client-1.7.5-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/ipython-7.2.0-py36h39e3cac_0.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/libgdal-2.2.2-h2727f2b_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/nbconvert-5.3.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/urllib3-1.23-py36_0.tar.bz2
https://conda.anaconda.org/conda-forge/noarch/earthengine-api-0.1.152-py_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/ipykernel-5.1.0-py36h39e3cac_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/requests-2.20.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/jupyter_console-6.0.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/notebook-5.7.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/qtconsole-4.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/sphinx-1.8.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/spyder-kernels-0.3.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/numpydoc-0.8.0-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/widgetsnbextension-3.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/ipywidgets-7.4.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/spyder-3.3.2-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/jupyter-1.0.0-py36_7.tar.bz2
https://repo.continuum.io/pkgs/main/win-64/gdal-2.2.2-py36hcebd033_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/imageio-2.4.1-py36_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/matplotlib-3.0.1-py36hc8f65d3_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/mkl_fft-1.0.6-py36hdbbee80_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/mkl_random-1.0.1-py36h77b88f5_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/numpy-1.15.4-py36ha559c80_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/pywavelets-1.0.1-py36h8c2d366_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/scipy-1.1.0-py36h4f6bf74_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/shapely-1.6.4-py36hc90234e_0.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/scikit-image-0.14.0-py36h6538335_1.tar.bz2
https://repo.anaconda.com/pkgs/main/win-64/scikit-learn-0.20.1-py36hb854c30_0.tar.bz2
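These URLs are the tail of a pinned, Windows 64-bit package list in the format produced by `conda list --explicit`, with every package resolved to an exact build on *anaconda.com* or *conda-forge*. As a minimal sketch of how such a list is consumed, assuming the full file begins with conda's `@EXPLICIT` marker and is saved under a hypothetical name such as `requirements_win64.txt`, the environment can be recreated package-for-package with:
```
conda create --name coastsat --file requirements_win64.txt
```
Because every line pins an exact build for one platform, a file like this reproduces the environment byte-for-byte on win-64 only; the `environment.yml` route described in section 1.1 is the portable alternative for other operating systems.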