# Comprises a fully working version of the major BC Generation R and Python codes

The project is still being set up, so most of the code will still undergo
significant changes.
Development
Valentin Heimhuber 7 years ago
parent 1f4a0bd6fb
commit 46ade05763

@ -0,0 +1,3 @@
(
echo HWQ027.s
) | C:\Users\z3509544\AppData\Local\Continuum\Anaconda3\python hunter_rma_preprocessing.py
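A minimal sketch of how hunter_rma_preprocessing.py presumably consumes the line piped in above (an assumption; that script is not part of this commit):
import sys

# assumption: the script reads the RMA water-quality site file name from stdin
site_file = sys.stdin.readline().strip()  # e.g. 'HWQ027.s'
print('preprocessing RMA site file:', site_file)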

@ -0,0 +1,39 @@
We are using a suite of Python and R codes to download and preprocess ET and P data from NARCliM for modeling the climate change scenarios.
The codes are:
P1_NARCliM_NC_to_CSV_CCRC_SS.py: the Python code that downloads NARCliM data (any variable) for a given point location (in lat/lon coordinates).
R_Preparing_BASH_script_for_NARCLIM_batch_download.R: creates a Linux BASH script
for executing the NARCliM download Python code (P1_NARCliM_NC_to_CSV_CCRC_SS.py) for each of the 31 catchments and per variable. It takes the NARCliM data type and climate variable as inputs
and creates 3 text files per variable; each text file contains BASH code for 10 catchments.
Info for the P1_NARCliM_NC_to_CSV_CCRC_SS.py code:
Variables available from NARCliM (output):
'evspsblmean'   water evaporation flux (actual ET); long_name: Surface evaporation; standard_name: water_evaporation_flux; units: kg m-2 s-1
'potevpmean'    potential ET; standard_name: water_potential_evaporation_flux; units: kg m-2 s-1
'tasmean'       mean near-surface temperature
'tasmax'        maximum near-surface temperature
'pracc'         daily precipitation sum (sum of convective prcacc and stratiform prncacc precipitation)
'pr1Hmaxtstep'  maximum 1-hour interval rainfall in a one-day period; max. 1-hour time-window moving averaged precipitation rate; units: kg m-2 s-1 (moving averages from point values at 60.0 s)
'wss1Hmaxtstep' max. 1-hour time-window moving averaged surface wind speed; units: m s-1 (moving averages from point values at 60.0 s)
'wssmax'        surface wind speed; standard_name: air_velocity; units: m s-1; height: 10 m
'wssmean'       surface wind speed; standard_name: air_velocity; units: m s-1
Code Input Variables:
Datatype: Choose 'T_NNRP' for reanalysis or 'T_GCMS' for GCM forcing data
BiasBool: Choose 'True' for bias corrected data, 'False' for normal model outputs
Execution of the code in BASH for netCDF interrogation:
1st step: log into the storm servers (hurricane.ccrc.unsw.edu.au, typhoon.ccrc.unsw.edu.au or cyclone.ccrc.unsw.edu.au) via PuTTY with your UNSW credentials (zID)
In BASH copy and enter:
module load python
latitude=-32.91
longitude=151.80
name='HunterRiver'
Datatype='T_NNRP'
Biasboolean='False'
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'pracc' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;

@ -0,0 +1,233 @@
# -*- coding: utf-8 -*-
from netCDF4 import Dataset
import numpy as np
import os
import pandas as pd
import glob
import argparse
#
# Set working directory (where the postprocessed NARCliM data is located)
os.chdir('/srv/ccrc/data30/z3393020/NARCliM/postprocess/')
#
# User input for location and variable type - from the command line
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--lat", help="latitude of the point of interest")
    parser.add_argument("--lon", help="longitude of the point of interest")
    parser.add_argument("--varName", help="NARCliM climate variable name")
    parser.add_argument("--timestep", help="timestep of the data (e.g. DAY)")
    parser.add_argument("--domain", help="NARCliM model domain (d01 or d02)")
    parser.add_argument("--LocationName", help="name used for the output folder and files")
    parser.add_argument("--Datatype", help="T_NNRP for reanalysis or T_GCMS for GCM forcing")
    parser.add_argument("--BiasBool", help="'True' for bias-corrected data, 'False' for raw model outputs")
    args = parser.parse_args()
    print(args.lat)
    print(args.lon)
    print(args.varName)
    mylat = float(args.lat)
    mylon = float(args.lon)
    Clim_var_type = args.varName
    NC_Domain = args.domain
    Timestep = args.timestep
    Location = args.LocationName
    Data_Type = args.Datatype
    Bias_Correction_BOOL = args.BiasBool
    print("Extracting all NARCliM time series for variable: ", Clim_var_type, " for lat lon: ", mylat, mylon, Location, "domain", NC_Domain, " timestep ", Timestep, " Datatype ", Data_Type, " biascorrected? ", Bias_Correction_BOOL)
    lat_equal_len_string = "%.3f" % abs(mylat)
    lon_equal_len_string = "%.3f" % mylon
if Bias_Correction_BOOL == 'False':
    # set directory path for output files
    output_directory = '/srv/ccrc/data02/z5025317/NARCliM_out/' + Location + '_' + lat_equal_len_string + '_' + lon_equal_len_string + '/'
    print('---------------------------------------------------------')
    if not os.path.exists(output_directory):
        os.makedirs(output_directory)
        print("output directory folder didn't exist and was generated here:")
        print(output_directory)
    print('---------------------------------------------------------')
    #
    # set up the loop variables for interrogating the entire NARCliM raw data
    NC_Periods = ('1990-2009', '2020-2039', '2060-2079')
    if Data_Type == 'T_NNRP':
        NC_Periods = ('1950-2009', 'Stop')
    #
    # define empty pandas data frames
    Full_df = pd.DataFrame()
    GCM_df = pd.DataFrame()
    R13_df = pd.DataFrame()
    MultiNC_df = pd.DataFrame()
    #
    # loop through models and construct one CSV per site
    for NC_Period in NC_Periods:
        if NC_Period != "Stop":
            Period_short = NC_Period[:4]
            GCMs = os.listdir('./' + NC_Period)
            for GCM in GCMs:
                print(GCM)
                Warf_runs = os.listdir('./' + NC_Period + '/' + GCM + '/')
                for Warf_run in Warf_runs:
                    Current_input_dir = './' + NC_Period + '/' + GCM + '/' + Warf_run + '/' + NC_Domain + '/'
                    print(Current_input_dir)
                    Climvar_ptrn = '*' + Timestep + '_*' + Clim_var_type + '.nc'
                    Climvar_NCs = glob.glob(Current_input_dir + Climvar_ptrn)
                    for netcdf in Climvar_NCs:
                        f = Dataset(netcdf)
                        # based on the desired inputs, find the nearest grid centre-point index (x, y) in the *.nc file
                        # (Manhattan distance in degrees on the 2-D curvilinear lat/lon grids)
                        dist_x = np.abs(f.variables['lon'][:, :] - float(mylon))
                        dist_y = np.abs(f.variables['lat'][:, :] - float(mylat))
                        dist = dist_x + dist_y
                        latindex = np.where(dist_y == np.min(dist_y))
                        lonindex = np.where(dist_x == np.min(dist_x))
                        index = np.where(dist == np.min(dist))
                        print('---------------------------------------------------------')
                        print(netcdf)
                        print('Information on the nearest point')
                        print('Your desired lat,lon = ', mylat, mylon)
                        print('The nearest lat,lon = ', f.variables['lat'][latindex[0], latindex[1]], f.variables['lon'][lonindex[0], lonindex[1]])
                        # construct a pandas data frame with the time/day as index and one column per GCM-RCM run
                        d = {}
                        d[GCM + '_' + Warf_run + '_' + Period_short] = f.variables[Clim_var_type][:, int(index[0]), int(index[1])]
                        # NARCliM time is stored as hours since 1949-12-01
                        timestamp = f.variables['time'][:]
                        timestamp_dates = pd.to_datetime(timestamp, unit='h', origin=pd.Timestamp('1949-12-01'))
                        df1 = pd.DataFrame(d, index=timestamp_dates)
                        f.close()
                        print('closing ' + os.path.basename(os.path.normpath(netcdf)) + ' - moving to next netcdf file')
                        print('---------------------------------------------------------')
                        # append each new time series to the data frame in the time direction
                        MultiNC_df = pd.concat([MultiNC_df, df1], axis=0)
                    # append the individual GCM-RCM-123 run time series as new columns (along the x axis)
                    MultiNC_df = MultiNC_df.sort_index(axis=0, ascending=True)
                    R13_df = pd.concat([R13_df, MultiNC_df], axis=1)
                    MultiNC_df = pd.DataFrame()
                # append blocks of R1, R2 and R3 in the x axis direction
                R13_df = R13_df.sort_index(axis=0, ascending=True)
                GCM_df = pd.concat([GCM_df, R13_df], axis=1)
                R13_df = pd.DataFrame()
            # append time periods in the x axis direction (change axis=1 to axis=0 if periods of the same model should be added to the same model R123 column)
            GCM_df = GCM_df.sort_index(axis=0, ascending=True)
            Full_df = pd.concat([Full_df, GCM_df], axis=1)
            GCM_df = pd.DataFrame()
    Full_df = Full_df.sort_index(axis=0, ascending=True)
    # add a column with the NARCliM period (the NNRP reanalysis spans 1950-2009 and is labelled '1990-2009')
    Full_df.loc[(Full_df.index > '1950-01-01') & (Full_df.index < '2010-01-01'), 'period'] = '1990-2009'
    Full_df.loc[(Full_df.index > '2020-01-01') & (Full_df.index < '2040-01-01'), 'period'] = '2020-2039'
    Full_df.loc[(Full_df.index > '2060-01-01') & (Full_df.index < '2080-01-01'), 'period'] = '2060-2079'
#
if Bias_Correction_BOOL == 'True':
    os.chdir('/srv/ccrc/data30/z3393020/NARCliM/Bias_corrected/')
    # set directory path for output files
    output_directory = '/srv/ccrc/data02/z5025317/NARCliM_out/' + Location + '_' + lat_equal_len_string + '_' + lon_equal_len_string + '/Bias_corrected/'
    if not os.path.exists(output_directory):
        os.makedirs(output_directory)
        print("output directory folder didn't exist and was generated here:")
        print(output_directory)
    # set up the loop variables for interrogating the entire NARCliM bias-corrected data
    GCMs = ('CCCMA3.1', 'CSIRO-MK3.0', 'ECHAM5', 'MIROC3.2', 'NNRP')
    #
    # define empty pandas data frames
    Full_df = pd.DataFrame()
    GCM_df = pd.DataFrame()
    R13_df = pd.DataFrame()
    MultiNC_df = pd.DataFrame()
    #
    # loop through models and construct one CSV per site
    for GCM in GCMs:
        print(GCM)
        Warf_runs = os.listdir('./' + GCM + '/')
        for Warf_run in Warf_runs:
            NC_Periods = os.listdir('./' + GCM + '/' + Warf_run + '/')
            for NC_Period in NC_Periods:
                Period_short = NC_Period[:4]
                Current_input_dir = './' + GCM + '/' + Warf_run + '/' + NC_Period + '/' + NC_Domain + '/'
                print(Current_input_dir)
                Climvar_ptrn = '*' + Timestep + '_*' + Clim_var_type + '_bc.nc'
                Climvar_NCs = glob.glob(Current_input_dir + Climvar_ptrn)
                print(Climvar_NCs)
                for netcdf in Climvar_NCs:
                    f = Dataset(netcdf)
                    # print information contained in the headers of the file
                    print('---------------------------------------------------------')
                    print(f.ncattrs())
                    print(f.title)
                    print(f.variables)
                    print()
                    for varname in f.variables:
                        print(varname, ' -> ', np.shape(f.variables[varname]))
                    print('---------------------------------------------------------')
                    # based on the desired inputs, find the nearest grid centre-point index (x, y) in the *.nc file
                    dist_x = np.abs(f.variables['lon'][:, :] - float(mylon))
                    dist_y = np.abs(f.variables['lat'][:, :] - float(mylat))
                    dist = dist_x + dist_y
                    latindex = np.where(dist_y == np.min(dist_y))
                    lonindex = np.where(dist_x == np.min(dist_x))
                    index = np.where(dist == np.min(dist))
                    print('---------------------------------------------------------')
                    print(netcdf)
                    print('Information on the nearest point')
                    print('Your desired lat,lon = ', mylat, mylon)
                    print('The nearest lat,lon = ', f.variables['lat'][latindex[0], latindex[1]], f.variables['lon'][lonindex[0], lonindex[1]])
                    print('The index of the nearest lat,lon (x,y) = ', index[0], index[1])
                    # construct a pandas data frame with the time/day as index and one column per GCM-RCM run
                    d = {}
                    d[GCM + '_' + Warf_run + '_' + Period_short] = f.variables[Clim_var_type + '_bc'][:, int(index[0]), int(index[1])]
                    # NARCliM time is stored as hours since 1949-12-01
                    timestamp = f.variables['time'][:]
                    timestamp_dates = pd.to_datetime(timestamp, unit='h', origin=pd.Timestamp('1949-12-01'))
                    df1 = pd.DataFrame(d, index=timestamp_dates)
                    f.close()
                    print('closing ' + os.path.basename(os.path.normpath(netcdf)) + ' - moving to next netcdf file')
                    print('---------------------------------------------------------')
                    # append each new time series to the data frame in the time direction
                    MultiNC_df = pd.concat([MultiNC_df, df1], axis=0)
                # append the individual GCM-RCM-123 run time series as new columns (along the x axis)
                MultiNC_df = MultiNC_df.sort_index(axis=0, ascending=True)
                R13_df = pd.concat([R13_df, MultiNC_df], axis=1)
                MultiNC_df = pd.DataFrame()
            # append blocks of R1, R2 and R3 in the x axis direction
            R13_df = R13_df.sort_index(axis=0, ascending=True)
            GCM_df = pd.concat([GCM_df, R13_df], axis=1)
            R13_df = pd.DataFrame()
        # append time periods in the x axis direction (change axis=1 to axis=0 if periods of the same model should be added to the same model R123 column)
        GCM_df = GCM_df.sort_index(axis=0, ascending=True)
        Full_df = pd.concat([Full_df, GCM_df], axis=1)
        GCM_df = pd.DataFrame()
    Full_df = Full_df.sort_index(axis=0, ascending=True)
#
# export the pandas data frame as a CSV file within the output directory
out_file_name = Clim_var_type + '_' + Data_Type[2:] + '_' + Location + '_' + lat_equal_len_string + '_' + lon_equal_len_string + '_NARCliM_summary.csv'
out_path = output_directory + '/' + out_file_name
Full_df.to_csv(out_path)
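As a standalone illustration of the nearest-grid-cell lookup used in both branches above, a minimal sketch on a synthetic 2-D lat/lon grid (the grid values here are made up; the script reads the real ones from the netCDF 'lat'/'lon' variables):
import numpy as np

# synthetic stand-in for the 2-D 'lat'/'lon' variables of a NARCliM netCDF file
lats, lons = np.meshgrid(np.arange(-37.0, -28.0, 0.1),
                         np.arange(149.0, 154.0, 0.1), indexing='ij')
mylat, mylon = -32.91, 151.80  # Hunter River example from the readme

# same approach as the script: Manhattan distance in degrees, then argmin
dist = np.abs(lats - mylat) + np.abs(lons - mylon)
iy, ix = np.unravel_index(np.argmin(dist), dist.shape)
print('nearest cell index:', iy, ix, '-> lat/lon:', lats[iy, ix], lons[iy, ix])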

@ -0,0 +1,89 @@
Variables available from NARCliM (output):
'evspsblmean'   water evaporation flux (actual ET); long_name: Surface evaporation; standard_name: water_evaporation_flux; units: kg m-2 s-1
'potevpmean'    potential ET; standard_name: water_potential_evaporation_flux; units: kg m-2 s-1
'tasmean'       mean near-surface temperature
'tasmax'        maximum near-surface temperature
'pracc'         daily precipitation sum (sum of convective prcacc and stratiform prncacc precipitation)
'pr1Hmaxtstep'  maximum 1-hour interval rainfall in a one-day period; max. 1-hour time-window moving averaged precipitation rate; units: kg m-2 s-1 (moving averages from point values at 60.0 s)
'wss1Hmaxtstep' max. 1-hour time-window moving averaged surface wind speed; units: m s-1 (moving averages from point values at 60.0 s)
'wssmax'        surface wind speed; standard_name: air_velocity; units: m s-1; height: 10 m
'wssmean'       surface wind speed; standard_name: air_velocity; units: m s-1
Sites:
Northern NSW:
Tweed River: -28.17, 153.56
Cudgera Creek: -28.36, 153.58
Belongil Creek: -28.63, 153.59
Central NSW:
Port Stephens: -32.71, 152.20
Terrigal Lagoon: -33.4, 151.44
Hunter River: -32.91, 151.80
Hunter River near Mahmods site: -32.843, 151.706
Southern NSW:
Batemans Bay: -35.76, 150.25
Towamba River: -37.1, 149.91
Nadgee Lake: -37.47, 149.97
Code Input Variables:
Datatype: Choose 'T_NNRP' for reanalysis or 'T_GCMS' for GCM forcing data
BiasBool: Choose 'True' for bias corrected data, 'False' for normal model outputs
Execution of the code in BASH for netCDF interrogation:
1st step: log into the storm servers (hurricane.ccrc.unsw.edu.au, typhoon.ccrc.unsw.edu.au or cyclone.ccrc.unsw.edu.au) via PuTTY with your UNSW credentials (zID)
In BASH copy and enter:
module load python
latitude=-32.91
longitude=151.80
name='HunterRiver'
Datatype='T_NNRP'
Biasboolean='False'
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'pracc' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'tasmean' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'tasmax' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'pr1Hmaxtstep' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'wssmean' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'wss1Hmaxtstep' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'evspsblmean' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean;
python /srv/ccrc/data02/z5025317/Code_execution/P1_NARCliM_NC_to_CSV_CCRC_SS.py --lat $latitude --lon $longitude --varName 'potevpmean' --domain 'd02' --timestep 'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Biasboolean
#1 The above code extracts time series from the full model ensemble over a single model grid cell (based on the lat/lon input) for the above variables of interest and stores them in CSV files.
Example of output name = evspsblmean_35.76_150.25_NARCliM_summary.csv
#2 The "P1_NARCliM_plots_Windows" code takes these CSV files as input and creates a) a PDF with a number of time series and box plots and b) another CSV file containing the deltas between present day, near and far future
for each model in the ensemble. Output: C:\Users\z5025317\WRL_Postdoc\Projects\Paper#1\Output\Nadgee\Nadgee_tasmax_NARCliM_ensemble_changes.csv
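For illustration, a minimal sketch of the kind of per-ensemble-member delta computation #2 describes (an assumption of how it might be done, not the actual P1_NARCliM_plots_Windows code; the input file name matches the step #1 example above):
import pandas as pd

# assumption: the summary CSV from step #1, with one column per GCM-RCM run
# plus the 'period' column added by P1_NARCliM_NC_to_CSV_CCRC_SS.py
df = pd.read_csv('evspsblmean_35.76_150.25_NARCliM_summary.csv',
                 index_col=0, parse_dates=True)
means = df.groupby('period').mean(numeric_only=True)  # per-period mean of every ensemble member
# deltas of the near and far future relative to present day, per ensemble member
deltas = means.loc[['2020-2039', '2060-2079']] - means.loc['1990-2009']
print(deltas)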
#3 The "P1_NARCliM_First_Pass_variab_deviation_plots" code takes those delta CSV files as input and generates the future climate deviation plots that were originally developed by Duncan Rayner.
Run the code with different combinations of estuary (study site) and climate variables:
e.g. Clim_var_type = "tasmax*" # '*' will create a PDF for all variables in the folder
Present_Day_Clim_Var = 'MaxT' # MaxT, MinT, Rainfall... (the name of the present-day climate variable refers to the name of the observed climate data that is used for the baseline-period variability)
#!!!# Important!
#For the present-day variability data, only rainfall and temperature actually correspond to the study sites. ET and wind are taken from the existing project folder and hence are from a Hunter weather station,
#e.g. Terrigal_Wind and Terrigal_ET are actually Hunter data. This is because we don't have ET for sites other than Hunter at this stage.
##PROBLEM: Without changing anything, P1_NARCliM_NC_to_CSV_CCRC_SS.py stopped working properly on the CCRC storm servers. It does not give an error, but loading the nc files with Dataset(nc) takes unlimited time.
It used to take only a few seconds. NOT solved yet as of 7th of May 2018. ### This was solved for the /postprocessed folder at the end of May 2018, but the problem persists with the /bias_corrected/ data folder.
Running a simple netCDF info script:
python /srv/ccrc/data02/z5025317/Code_execution/P1_Basic_NETCDF_Interrogation.py
deactivate
conda env create --name EEenv --file C:\Users\z5025317\WRL_Postdoc\Software\EE\ee-jupyter-examples-master\kilian_env

@ -0,0 +1,59 @@
#Code for preparing a text file with BASH code for batch download of NARCliM data for the Hunter WQ modeling of
#future climate scenarios
#NARCliM variables:
#evspsblmean   water evaporation flux (actual ET); long_name: Surface evaporation; standard_name: water_evaporation_flux; units: kg m-2 s-1
#tasmean       mean near-surface temperature
#pracc         daily precipitation sum (sum of convective prcacc and stratiform prncacc precipitation)
Clim_Var <- 'pracc'
Datatype <- 'T_GCMS' #T_GCMS for GCM forcing, T_NNRP for reanalysis (only 1950-2009)
Biasboolean <- 'True' #use bias-corrected data? 'True' or 'False' (Python boolean as string)
Directory <- 'C:/Users/z5025317/OneDrive - UNSW/Hunter_CC_Modeling/07_Modelling/01_Input/BC_Generation/catchments/'
Filename <- 'Catchment_Prev_Hunter_Model_Centroids_VH_WGS84_attribute_Table.csv'
#Load CSV with location names and lat lon coordinates
Location.df <- data.frame(read.csv(paste(Directory, Filename, sep=""), header=T))
#create empty vector for storing the command line text and open file
Vector.for.command.line.txt <- c()
Vector.for.command.line.txt <- c(Vector.for.command.line.txt, "module load python")
text1 <- c(paste("Datatype='",Datatype,"'", sep=""),
paste("Bias_corrected='",Biasboolean,"'", sep=""), paste("ClimVarName='",Clim_Var,"'", sep=""))
Vector.for.command.line.txt <- c(Vector.for.command.line.txt, text1)
for (i in 1:(length(Location.df$Name))){
  #name <- as.character(Location.df$Name[i])
  #name <- gsub('([[:punct:]])|\\s+', '_', name)
  if(i < 10){
    name <- paste('Catchment_0', as.character(i), sep="")
  } else {
    name <- paste('Catchment_', as.character(i), sep="")
  }
  latitude <- round(as.numeric(Location.df$Lat[i]), 3)
  longitude <- round(as.numeric(Location.df$Long[i]), 3)
  text <- c(paste("latitude=", latitude, sep=""), paste("longitude=", longitude, sep=""),
            paste("name='", name, "'", sep=""),
            "python /srv/ccrc/data02/z5025317/Code_execution/\\
P1_NARCliM_NC_to_CSV_CCRC_SS.py \\
--lat $latitude --lon $longitude --varName $ClimVarName --domain 'd02' --timestep \\
'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Bias_corrected")
  Vector.for.command.line.txt <- c(Vector.for.command.line.txt, text)
  #write out one text file per batch of 10 catchments (after catchments 10, 20 and 31)
  if(i==10 | i==20 | i==31){
    Vector.for.command.line.txt <- c(Vector.for.command.line.txt, " ")
    text.file.name <- paste('C:/Users/z5025317/OneDrive - UNSW/Hunter_CC_Modeling/07_Modelling/01_Input/BC_Generation/Code/NARCLIM_Download_and_Processing/', Clim_Var, "_", Datatype, "_", Biasboolean, substring(as.character(i), 1, 1), ".txt", sep="")
    #open and fill the text file
    fileConn <- file(text.file.name)
    writeLines(Vector.for.command.line.txt, fileConn)
    close(fileConn)
    #reset the vector and start the next batch file with the header lines
    if(i==10 | i==20){
      Vector.for.command.line.txt <- c()
      Vector.for.command.line.txt <- c(Vector.for.command.line.txt, "module load python")
      text1 <- c(paste("Datatype='", Datatype, "'", sep=""),
                 paste("Bias_corrected='", Biasboolean, "'", sep=""), paste("ClimVarName='", Clim_Var, "'", sep=""))
      Vector.for.command.line.txt <- c(Vector.for.command.line.txt, text1)
    }
  }
}
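For reference, the head of one generated text file (e.g. pracc_T_GCMS_True1.txt) should look roughly as below; the catchment coordinates shown are placeholders, since the real values come from the centroid CSV:
module load python
Datatype='T_GCMS'
Bias_corrected='True'
ClimVarName='pracc'
latitude=-32.843
longitude=151.706
name='Catchment_01'
python /srv/ccrc/data02/z5025317/Code_execution/\
P1_NARCliM_NC_to_CSV_CCRC_SS.py \
--lat $latitude --lon $longitude --varName $ClimVarName --domain 'd02' --timestep \
'DAY' --LocationName $name --Datatype $Datatype --BiasBool $Bias_corrected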

@ -0,0 +1,26 @@
#This is the readme document for the modeling of climate change impacts in the Hunter River estuary project
#Participants: Mat Deiber, Tino Heimhuber + 2-3 CVEN Masters thesis students
#Goal: use state-of-the-art, high-resolution climate change projection data (the NARCliM regional climate model ensemble)
in conjunction with sea level rise scenarios to model the potential impacts of climate change on the hydrodynamics,
salinity and temperature (and potentially water quality) of the estuary system.
Folder Structure:
Modeling
    01_Input
        BC_Generation (this is where all the rainfall-runoff modeling etc. is done to generate the boundary condition input file for RMA)
Key steps:
The hydrological and hydrodynamic model is already fully set up and calibrated. We basically just need to create a plausible
range of future boundary condition scenarios and run them through the model.
The first step will be to generate new freshwater inflow time series for the small catchments by using NARCliM forcing data.
For each catchment, we'll use the grid cell whose centre point is closest to the catchment centroid.
NARCliM provides 12 rainfall time series per grid point, so we need to adjust the Python codes to automate the whole workflow even more.
To begin with, we can pick 1-3 scenarios and test how well the "present-day" reanalysis
data can reproduce the observed catchment flow time series, and also how different the NARCliM ET is from the observed ET.
Once we have generated 12 RMA boundary condition files, one for each NARCliM ensemble member,
the next step will be to automate the climate change scenario runs for NARCliM.