arbutus.cloud
Deployment
In April 2019 the Ocean Networks Canada private cloud computing facility was migrated from west.cloud
to the Digital Research Alliance of Canada
(aka the Alliance, formerly Compute Canada)
arbutus.cloud.
arbutus.cloud
runs on OpenStack.
The OpenStack dashboard provides a web interface to manage and report on cloud resources.
The arbutus.cloud
dashboard is at https://arbutus.cloud.computecanada.ca/.
Authentication and authorization for arbutus.cloud are managed by the Alliance, so the userid/password required to log in to the dashboard are the same as those used for the CCDB.
Web Interface
Initial setup was done via the https://arbutus.cloud.computecanada.ca/ web interface with guidance from the Compute Canada Cloud Quickstart Guide and the OpenStack End User Guide.
The project (aka tenant) name for the SalishSeaCast system is ctb-onc-allen.
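The project can also be worked with from the command line by installing the OpenStack client and downloading an OpenRC file for the project from the dashboard; a hedged sketch, assuming the OpenRC file was saved as ctb-onc-allen-openrc.sh (the file name is whatever the dashboard offers, not something defined in this document):
$ source ctb-onc-allen-openrc.sh
$ openstack server list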
Network
The network configuration was done for us by Compute Canada.
Its configuration can be inspected via the Network section of the web interface.
The subnet of the VMs is rrg-allen-network and it routes to the public network via the rrg-allen-router.
There is 1 floating IP address available for assignment to provide access from the public network to a VM.
Access & Security
Generate an ssh key pair on a Linux or OS/X system using the command:
$ cd $HOME/.ssh/
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/arbutus.cloud_id_rsa -C <yourname>-arbutus.cloud
Assign a strong passphrase to the key pair when prompted. Passphraseless keys have their place, but they are a bad idea for general use.
Import the public key into the web interface via the Compute > Key Pairs > Import Key Pair button.
Use the Compute > Network > Security Groups > Manage Rules button associated with the default security group to add security rules to allow:
ssh
ping
ZeroMQ distributed logging subscriptions
access to the image instances.
ssh Rule:
Rule: SSH
Remote: CIDR
CIDR: 0.0.0.0/0
ping Rule:
Rule: ALL ICMP
Direction: Ingress
Remote: CIDR
CIDR: 0.0.0.0/0
ZeroMQ distributed logging subscription Rules:
For run_NEMO and watch_NEMO:
Rule: Custom TCP
Direction: Ingress
Port range: 5556 - 5557
Remote: CIDR
CIDR: 142.103.36.0/24
For make_ww3_wind_file, make_ww3_current_file, run_ww3, and watch_ww3:
Rule: Custom TCP
Direction: Ingress
Port range: 5570 - 5573
Remote: CIDR
CIDR: 142.103.36.0/24
For make_fvcom_boundary, make_fvcom_rivers_forcing, run_fvcom, and watch_fvcom:
Rule: Custom TCP
Direction: Ingress
Port range: 5580 - 5587
Remote: CIDR
CIDR: 142.103.36.0/24
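If the OpenStack command line client is set up for the project, equivalent rules can be added from a shell instead of the dashboard; a sketch for the ssh, ping, and run_NEMO/watch_NEMO rules, assuming the client is already authenticated (the remaining port ranges follow the same pattern):
$ openstack security group rule create --ingress --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 default
$ openstack security group rule create --ingress --protocol icmp --remote-ip 0.0.0.0/0 default
$ openstack security group rule create --ingress --protocol tcp --dst-port 5556:5557 --remote-ip 142.103.36.0/24 default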
Head Node Instance
Use the Compute > Instances section of the web interface to manage instances.
To launch an instance to use as the head node use the Launch Instance button. On the Details tab set the following parameters:
Instance Name:
nowcast0
Description:
SalishSeaCast system head node
Availability Zone:
Any Availability Zone
Count:
1
On the Source tab set the following parameters:
Select Boot Source:
Image
Create New Volume:
No
Image:
Ubuntu-18.04-Bionic-x64-2018-09
Note
We have to use the Ubuntu-18.04-Bionic-x64-2018-09
image,
not the Ubuntu-18.04-Bionic-minimal-x64-2018-08
image because the latter does not include the kernel elements required for the head node to run the NFS server service.
On the Flavor tab choose: nemo-c16-60gb-90-numa-test
On the Network tab confirm that rrg-allen-network
is selected.
On the Security Groups tab confirm that default
is selected.
On the Key Pairs tab confirm that the key pair you imported in the Access & Security section above is selected.
Note
If only 1 key pair has been imported it will be used by default. If there is more than 1 key pair available, one must be selected. Only 1 key can be loaded automatically into an instance on launch. Additional public keys can be loaded once an instance is running.
Click the Launch button to launch the instance.
Once the instance is running use the More > Associate Floating IP menu item to associate a public IP address with the instance.
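The launch and floating IP association can also be done with the OpenStack command line client; a sketch, assuming the client is authenticated and the key pair name matches the one imported in the Access & Security section:
$ openstack server create \
    --image Ubuntu-18.04-Bionic-x64-2018-09 \
    --flavor nemo-c16-60gb-90-numa-test \
    --network rrg-allen-network \
    --security-group default \
    --key-name <your-key-pair-name> \
    nowcast0
$ openstack server add floating ip nowcast0 <floating-ip-address>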
Compute Node Instance
Use the Compute > Instances section of the web interface to manage instances.
To launch an instance to use as a compute node template use the Launch Instance button. On the Details tab set the following parameters:
Instance Name:
nowcast1
Description:
SalishSeaCast system compute node
Availability Zone:
Any Availability Zone
Count:
1
On the Source tab set the following parameters:
Select Boot Source:
Image
Create New Volume:
No
Image:
Ubuntu-18.04-Bionic-x64-2018-09
On the Flavor tab choose: nemo-c16-60gb-90-numa-test
On the Network tab confirm that rrg-allen-network
is selected.
On the Security Groups tab confirm that default
is selected.
On the Key Pairs tab confirm that the key pair you imported in the Access & Security section above is selected.
Note
If only 1 key pair has been imported it will be used by default. If there is more than 1 key pair available, one must be selected. Only 1 key can be loaded automatically into an instance on launch. Additional public keys can be loaded once an instance is running.
Click the Launch button to launch the instance.
ssh Access
Log in to the publicly accessible head node instance with the command:
$ ssh -i $HOME/.ssh/arbutus.cloud_id_rsa ubuntu@<ip-address>
The first time you connect to an instance you will be prompted to accept its RSA host key fingerprint.
You can verify the fingerprint by looking for the SSH HOST KEY FINGERPRINT
section in the instance log in the Instances > nowcast0 > Log tab.
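If the OpenStack command line client is available, the same log can be retrieved without the dashboard; a sketch (the grep pattern matches the section heading mentioned above):
$ openstack console log show nowcast0 | grep -A 5 "SSH HOST KEY"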
If you have previously associated a different instance with the IP address you may receive a message about host key verification failure and potential man-in-the-middle attacks.
To resolve the issue, delete the prior host key from your $HOME/.ssh/known_hosts file; the message will tell you what line it is on.
You will also be prompted for the passphrase that you assigned to the ssh key pair when you created it. On Linux and OS/X authenticating the ssh key with your passphrase has the side-effect of adding it to the ssh-agent instance that was started when you logged into the system. You can add the key to the agent yourself with the command:
$ ssh-add $HOME/.ssh/arbutus.cloud_id_rsa
You can list the keys that the agent is managing for you with:
$ ssh-add -l
You can simplify logins to the instance by adding the following lines to your $HOME/.ssh/config
file:
Host arbutus.cloud
Hostname <ip-address>
User ubuntu
IdentityFile ~/.ssh/arbutus.cloud_id_rsa
ForwardAgent yes
With that in place you should be able to connect to the instance with:
$ ssh arbutus.cloud
Provisioning and Configuration
Head Node
Fetch and apply any available updates on the nowcast0
Head Node Instance
that you launched above with:
$ sudo apt update
$ sudo apt upgrade
$ sudo apt auto-remove
Set the timezone with:
$ sudo timedatectl set-timezone America/Vancouver
Confirm the date, time, time zone, and that the systemd-timesyncd.service is active with:
$ timedatectl status
Provision the Head Node Instance with the following packages:
$ sudo apt update
$ sudo apt install -y mercurial git
$ sudo apt install -y gfortran
$ sudo apt install -y libopenmpi2 libopenmpi-dev openmpi-bin
$ sudo apt install -y libnetcdf-dev libnetcdff-dev netcdf-bin
$ sudo apt install -y nco
$ sudo apt install -y liburi-perl m4
$ sudo apt install -y make cmake ksh mg
$ sudo apt install -y python3-pip python3-dev
$ sudo apt install -y nfs-common nfs-kernel-server
Copy the public key of the passphrase-less ssh key pair that will be used for nowcast cloud operations into $HOME/.ssh/authorized_keys on the head node:
# on a system where the key pair is stored
$ ssh-copy-id -f -i $HOME/.ssh/SalishSeaNEMO-nowcast_id_rsa arbutus.cloud
Copy the passphrase-less ssh key pair that will be used for nowcast cloud operations into $HOME/.ssh/
as id_rsa
and id_rsa.pub
for mpirun to use for communication with the compute instances:
# on a system where the key pair is stored
$ scp $HOME/.ssh/SalishSeaNEMO-nowcast_id_rsa arbutus.cloud:.ssh/id_rsa
$ scp $HOME/.ssh/SalishSeaNEMO-nowcast_id_rsa.pub arbutus.cloud:.ssh/id_rsa.pub
The nowcast operations key pair could have been used as the default key pair in the OpenStack web interface, but using a key pair with a passphrase there allows for more flexibility: in particular, the possibility of revoking the passphrase-less key pair without losing access to the instances.
Add code to $HOME/.profile
to add wwatch3 bin/
and exe/
paths to PATH
if they exist,
and export environment variables to enable wwatch3 to use netCDF4:
# Add wwatch3 bin/ and exe/ paths to PATH if they exist
if [ -d "/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin" ] ; then
PATH="/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin:$PATH"
fi
if [ -d "/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/exe" ] ; then
PATH="/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/exe:$PATH"
fi
# Enable wwatch3 to use netCDF4
export WWATCH3_NETCDF=NC4
export NETCDF_CONFIG=$(which nc-config)
Create $HOME/.bash_aliases
containing a command to make rm default to prompting for confirmation:
alias rm="rm -i"
Compute Node Template
Fetch and apply any available updates on the nowcast1
Compute Node Instance that you launched above with:
$ sudo apt update
$ sudo apt upgrade
$ sudo apt auto-remove
Set the timezone with:
$ sudo timedatectl set-timezone America/Vancouver
Confirm the date, time, time zone, and that the systemd-timesyncd.service is active with:
$ timedatectl status
Provision the Compute Node Instance with the following packages:
$ sudo apt update
$ sudo apt install -y gfortran
$ sudo apt install -y libopenmpi2 libopenmpi-dev openmpi-bin
$ sudo apt install -y libnetcdf-dev libnetcdff-dev netcdf-bin
$ sudo apt install -y mg
$ sudo apt install -y nfs-common
Add code to $HOME/.profile
to add wwatch3 bin/
and exe/
paths to PATH
if they exist,
and export environment variables to enable wwatch3 to use netCDF4:
# Add wwatch3 bin/ and exe/ paths to PATH if they exist
if [ -d "/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin" ] ; then
PATH="/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin:$PATH"
fi
if [ -d "/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/exe" ] ; then
PATH="/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/exe:$PATH"
fi
# Enable wwatch3 to use netCDF4
export WWATCH3_NETCDF=NC4
export NETCDF_CONFIG=$(which nc-config)
Create $HOME/.bash_aliases
containing a command to make rm default to prompting for confirmation:
alias rm="rm -i"
Create the /nemoShare/
mount point,
and set the owner and group:
$ sudo mkdir -p /nemoShare/MEOPAR
$ sudo chown ubuntu:ubuntu /nemoShare/ /nemoShare/MEOPAR/
From the head node,
copy the public key of the passphrase-less ssh key pair that will be used for nowcast cloud operations into $HOME/.ssh/authorized_keys
on the compute node:
# on nowcast0
$ ssh-copy-id -f -i $HOME/.ssh/id_rsa nowcast1
Capture a snapshot image of the instance to use as the boot image for the other compute nodes using the Create Snapshot button on the Compute > Instances page.
Use a name like nowcast-c16-60g-numa-compute-v0
for the image.
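The snapshot can also be captured with the OpenStack command line client; a sketch, assuming the client is authenticated against the project:
$ openstack server image create --name nowcast-c16-60g-numa-compute-v0 nowcast1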
Hosts Mappings
Once all of the compute node VMs have been launched so that we know their IP addresses,
create an .ssh/config
file,
and MPI hosts mapping files for NEMO/WAVEWATCH VMs and FVCOM VMs on the head node.
Head Node .ssh/config
Host *
StrictHostKeyChecking no
# Head node and XIOS host
Host nowcast0
HostName 192.168.238.14
# NEMO compute nodes
Host nowcast1
HostName 192.168.238.10
Host nowcast2
HostName 192.168.238.13
Host nowcast3
HostName 192.168.238.8
Host nowcast4
HostName 192.168.238.16
Host nowcast5
HostName 192.168.238.5
Host nowcast6
HostName 192.168.238.6
Host nowcast7
HostName 192.168.238.18
Host nowcast8
HostName 192.168.238.15
# FVCOM compute nodes
Host fvcom0
HostName 192.168.238.12
Host fvcom1
HostName 192.168.238.7
Host fvcom2
HostName 192.168.238.20
Host fvcom3
HostName 192.168.238.11
Host fvcom4
HostName 192.168.238.9
Host fvcom5
HostName 192.168.238.28
Host fvcom6
HostName 192.168.238.27
MPI Hosts Mappings
$HOME/mpi_hosts
for NEMO/WAVEWATCH VMs containing:
192.168.238.10 slots=15 max-slots=16
192.168.238.13 slots=15 max-slots=16
192.168.238.8 slots=15 max-slots=16
192.168.238.16 slots=15 max-slots=16
192.168.238.5 slots=15 max-slots=16
192.168.238.6 slots=15 max-slots=16
192.168.238.18 slots=15 max-slots=16
192.168.238.15 slots=15 max-slots=16
$HOME/mpi_hosts.fvcom.x2
for FVCOM VMs used for x2
model configuration runs containing:
192.168.238.12 slots=15 max-slots=16
192.168.238.7 slots=15 max-slots=16
$HOME/mpi_hosts.fvcom.r12
for FVCOM VMs used for r12
model configuration runs containing:
192.168.238.20 slots=15 max-slots=16
192.168.238.11 slots=15 max-slots=16
192.168.238.9 slots=15 max-slots=16
192.168.238.28 slots=15 max-slots=16
192.168.238.27 slots=15 max-slots=16
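These host files are handed to mpirun via its --hostfile option when the workers launch runs; a hedged sketch of the general pattern only, with a placeholder executable and process count rather than the actual worker command line:
$ mpirun --hostfile $HOME/mpi_hosts -np 120 ./nemo.exe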
Git Repositories
Clone the following repos into /nemoShare/MEOPAR/nowcast-sys/:
$ cd /nemoShare/MEOPAR/nowcast-sys/
$ git clone git@github.com:SalishSeaCast/grid.git
$ git clone git@github.com:UBC-MOAD/moad_tools.git
$ git clone git@github.com:43ravens/NEMO_Nowcast.git
$ git clone git@github.com:SalishSeaCast/NEMO-Cmd.git
$ git clone git@github.com:SalishSeaCast/rivers-climatology.git
$ git clone git@github.com:SalishSeaCast/SalishSeaCmd.git
$ git clone git@github.com:SalishSeaCast/SalishSeaNowcast.git
$ git clone git@github.com:SalishSeaCast/SalishSeaWaves.git
$ git clone git@github.com:SalishSeaCast/SS-run-sets.git
$ git clone git@github.com:SalishSeaCast/tides.git
$ git clone git@github.com:SalishSeaCast/tools.git
$ git clone git@github.com:SalishSeaCast/tracers.git
$ git clone git@gitlab.com:mdunphy/FVCOM41.git
$ git clone git@gitlab.com:mdunphy/FVCOM-VHFR-config.git
$ git clone git@github.com:SalishSeaCast/FVCOM-Cmd.git
$ git clone git@gitlab.com:douglatornell/OPPTools.git
$ git clone git@github.com:SalishSeaCast/NEMO-3.6-code.git
$ git clone git@github.com:SalishSeaCast/XIOS-ARCH.git
$ git clone git@github.com:SalishSeaCast/XIOS-2.git
Build XIOS-2
Symlink the XIOS-2 build configuration files for arbutus.cloud
from the XIOS-ARCH
repo clone into the XIOS-2/arch/
directory:
$ cd /nemoShare/MEOPAR/nowcast-sys/XIOS-2/arch
$ ln -s ../../XIOS-ARCH/COMPUTECANADA/arch-GCC_ARBUTUS.fcm
$ ln -s ../../XIOS-ARCH/COMPUTECANADA/arch-GCC_ARBUTUS.path
Build XIOS-2 with:
$ cd /nemoShare/MEOPAR/nowcast-sys/XIOS-2
$ ./make_xios --arch GCC_ARBUTUS --netcdf_lib netcdf4_seq --job 8
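A successful build leaves the XIOS server executable under the repo clone's bin/ directory; a quick check (the bin/xios_server.exe path is the usual XIOS-2 build output location, assumed here rather than stated elsewhere in this section):
$ ls /nemoShare/MEOPAR/nowcast-sys/XIOS-2/bin/xios_server.exe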
Build NEMO-3.6
Build NEMO-3.6 and rebuild_nemo.exe:
$ cd /nemoShare/MEOPAR/nowcast-sys/NEMO-3.6-code/NEMOGCM/CONFIG
$ XIOS_HOME=/nemoShare/MEOPAR/nowcast-sys/XIOS-2 ./makenemo -m GCC_ARBUTUS -n SalishSeaCast -j8
$ XIOS_HOME=/nemoShare/MEOPAR/nowcast-sys/XIOS-2 ./makenemo -m GCC_ARBUTUS -n SalishSeaCast_Blue -j8
$ cd /nemoShare/MEOPAR/nowcast-sys/NEMO-3.6-code/NEMOGCM/TOOLS/
$ XIOS_HOME=/nemoShare/MEOPAR/nowcast-sys/XIOS-2 ./maketools -m GCC_ARBUTUS -n REBUILD_NEMO
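The builds should leave nemo.exe executables in the configurations' BLD/bin/ directories and a rebuild_nemo.exe under the REBUILD_NEMO tool tree; a quick check (these are the usual NEMO-3.6 build output locations, assumed here):
$ ls /nemoShare/MEOPAR/nowcast-sys/NEMO-3.6-code/NEMOGCM/CONFIG/SalishSeaCast/BLD/bin/nemo.exe
$ find /nemoShare/MEOPAR/nowcast-sys/NEMO-3.6-code/NEMOGCM/TOOLS/REBUILD_NEMO -name rebuild_nemo.exe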
Build WAVEWATCH III ®
Access to download WAVEWATCH III ® (wwatch3 hereafter) code tarballs is obtained by sending an email request via https://polar.ncep.noaa.gov/waves/wavewatch/license.shtml.
The eventual reply will provide a username and password that can be used to access https://polar.ncep.noaa.gov/waves/wavewatch/distribution/ from which the wwatch3.v5.16.tar.gz file can be downloaded with:
$ cd /nemoShare/MEOPAR/nowcast-sys/
$ curl -u username:password -LO download_url
where username, password, and download_url are those provided in the reply to the email request.
Follow the instructions in the Installing Files section of the wwatch3 manual to unpack the tarball to create a local installation in /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/
that will use the gfortran and gcc compilers:
$ mkdir /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16
$ cd /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16
$ tar -xvzf /nemoShare/MEOPAR/nowcast-sys/wwatch3.v5.16.tar.gz
$ ./install_ww3_tar
install_ww3_tar is an interactive shell script. Accept the defaults that it offers other than to choose:
local installation in
/nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/
gfortran as the Fortran 77 compiler
gcc as the C compiler
Ensure that /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin and /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/exe are in PATH.
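If the $HOME/.profile additions from the provisioning steps above are in place and the shared storage is mounted, the directories should already be on PATH; a quick check:
$ echo $PATH | tr ':' '\n' | grep wwatch3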
Change the comp and link scripts in /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin to point to comp.gnu and link.gnu, and make comp.gnu executable:
$ cd /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin
$ ln -sf comp.gnu comp && chmod +x comp.gnu
$ ln -sf link.gnu link
Symlink the SalishSeaWaves/switch file in /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin:
$ cd /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/bin
$ ln -sf /nemoShare/MEOPAR/nowcast-sys/SalishSeaWaves/switch switch
Export the WWATCH3_NETCDF
and NETCDF_CONFIG
environment variables:
export WWATCH3_NETCDF=NC4
export NETCDF_CONFIG=$(which nc-config)
Build the suite of wwatch3 programs with:
$ cd /nemoShare/MEOPAR/nowcast-sys/wwatch3-5.16/work
$ w3_make
Build FVCOM-4.1
Build FVCOM with:
$ cd /nemoShare/MEOPAR/nowcast-sys/FVCOM41/Configure
$ ./setup -c VancouverHarbourX2 -a UBUNTU-18.04-GCC
$ make libs gotm fvcom
Update FVCOM-4.1
Fetch and merge changes from the FVCOM41 repo on GitLab and do a clean build:
$ cd /nemoShare/MEOPAR/nowcast-sys/FVCOM41/
$ git pull origin master
$ cd Configure/
$ ./setup -c VancouverHarbourX2 -a UBUNTU-18.04-GCC
$ make clean
$ make libs gotm fvcom
Python Packages
Install the Miniconda environment and package manager:
$ cd /nemoShare/MEOPAR/nowcast-sys/
$ curl -LO https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh
Answer /nemoShare/MEOPAR/nowcast-sys/miniconda3
when the installer asks for an installation location.
Answer no when the installer asks Do you wish the installer to initialize Miniconda3 by running conda init? [yes|no].
The Python packages that the system depends on are installed in a conda environment with:
$ cd /nemoShare/MEOPAR/nowcast-sys/
$ conda update -n base -c defaults conda
$ conda env create \
--prefix /nemoShare/MEOPAR/nowcast-sys/nowcast-env \
-f SalishSeaNowcast/envs/environment-prod.yaml
$ source /nemoShare/MEOPAR/nowcast-sys/miniconda3/bin/activate /nemoShare/MEOPAR/nowcast-sys/nowcast-env/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable NEMO_Nowcast/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable moad_tools/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable tools/SalishSeaTools/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ cd OPPTools/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ git switch SalishSeaCast-prod
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ cd /nemoShare/MEOPAR/nowcast-sys/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable OPPTools/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable NEMO-Cmd/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable SalishSeaCmd/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable FVCOM-Cmd/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ python3 -m pip install --editable SalishSeaNowcast/
Environment Variables
Add the following files to the /nemoShare/MEOPAR/nowcast-sys/nowcast-env
environment to automatically export the environment variables required by the nowcast system when the environment is activated:
$ cd /nemoShare/MEOPAR/nowcast-sys/nowcast-env
$ mkdir -p etc/conda/activate.d
$ cat << EOF > etc/conda/activate.d/envvars.sh
export NOWCAST_ENV=/nemoShare/MEOPAR/nowcast-sys/nowcast-env
export NOWCAST_CONFIG=/nemoShare/MEOPAR/nowcast-sys/SalishSeaNowcast/config
export NOWCAST_YAML=/nemoShare/MEOPAR/nowcast-sys/SalishSeaNowcast/config/nowcast.yaml
export NOWCAST_LOGS=/nemoShare/MEOPAR/nowcast-sys/logs/nowcast
export NUMEXPR_MAX_THREADS=8
export SENTRY_DSN=a_valid_sentry_dsn_url
EOF
and unset them when it is deactivated.
$ mkdir -p etc/conda/deactivate.d
$ cat << EOF > etc/conda/deactivate.d/envvars.sh
unset NOWCAST_ENV
unset NOWCAST_CONFIG
unset NOWCAST_YAML
unset NOWCAST_LOGS
unset NUMEXPR_MAX_THREADS
unset SENTRY_DSN
EOF
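Activating the environment should now export the variables, and deactivating it should clear them; a quick check, using the activation command shown above:
$ source /nemoShare/MEOPAR/nowcast-sys/miniconda3/bin/activate /nemoShare/MEOPAR/nowcast-sys/nowcast-env/
(/nemoShare/MEOPAR/nowcast-sys/nowcast-env)$ echo $NOWCAST_YAML
/nemoShare/MEOPAR/nowcast-sys/SalishSeaNowcast/config/nowcast.yaml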
NEMO Runs Directory
Create a runs/
directory for the NEMO runs and populate it with:
$ cd /nemoShare/MEOPAR/nowcast-sys/
$ mkdir -p logs/nowcast/
$ mkdir runs
$ chmod g+ws runs
$ cd runs/
$ mkdir -p LiveOcean NEMO-atmos rivers ssh
$ chmod -R g+s LiveOcean NEMO-atmos rivers ssh
$ ln -s ../grid
$ ln -s ../rivers-climatology
$ ln -s ../tides
$ ln -s ../tracers
$ cp ../SS-run-sets/v201702/nowcast-green/namelist.time_nowcast_template namelist.time
WaveWatch Runs Directories
Create a wwatch3-runs/
directory tree and populate it with:
The wwatch3 grid:
$ mkdir -p /nemoShare/MEOPAR/nowcast-sys/wwatch3-runs/grid
$ cd /nemoShare/MEOPAR/nowcast-sys/wwatch3-runs/
$ ln -s /nemoShare/MEOPAR/nowcast-sys/SalishSeaWaves/ww3_grid_SoG.inp ww3_grid.inp
$ cd /nemoShare/MEOPAR/nowcast-sys/wwatch3-runs/grid
$ ln -sf /nemoShare/MEOPAR/nowcast-sys/SalishSeaWaves/SoG_BCgrid_00500m.bot
$ ln -sf /nemoShare/MEOPAR/nowcast-sys/SalishSeaWaves/SoG_BCgrid_00500m.msk
$ cd /nemoShare/MEOPAR/nowcast-sys/wwatch3-runs/
$ ww3_grid | tee ww3_grid.out
Directory for wind forcing:
$ mkdir -p /nemoShare/MEOPAR/nowcast-sys/wwatch3-runs/wind
The make_ww3_wind_file worker:
Uses files from /nemoShare/MEOPAR/GEM2.5/ops/NEMO-atmos/ appropriate for the wwatch3 run date and type to produce a SoG_wind_yyyymmdd.nc file in the wind/ directory
The run_ww3 worker:
Generates in the temporary run directory a ww3_prnc_wind.inp file containing the path to the file produced by the make_ww3_wind_file worker
Symlinks ww3_prnc_wind.inp as ww3_prnc.inp
Runs ww3_prnc to produce the wwatch3 wind forcing files for the run. The output of ww3_prnc is stored in the run’s stdout file.
Directory for current forcing:
$ mkdir -p /nemoShare/MEOPAR/nowcast-sys/wwatch3-runs/current
The make_ww3_current_file worker:
Uses files from the /nemoShare/MEOPAR/SalishSea/ NEMO results storage tree appropriate for the wwatch3 run date and type to produce a SoG_current_yyyymmdd.nc file in the current/ directory
The run_ww3 worker:
Generates in the temporary run directory a ww3_prnc_current.inp file containing the path to the file produced by the make_ww3_current_file worker
Symlinks ww3_prnc_current.inp as ww3_prnc.inp
Runs ww3_prnc to produce the wwatch3 current forcing files for the run. The output of ww3_prnc is stored in the run’s stdout file.
FVCOM Runs Directory
Create an fvcom-runs/
directory for the VHFR FVCOM runs and populate it with:
$ cd /nemoShare/MEOPAR/nowcast-sys/
$ mkdir fvcom-runs
$ chmod g+ws fvcom-runs
$ cd fvcom-runs/
$ cp ../FVCOM-VHFR-config/namelists/namelist.case.template namelist.case
$ cp ../FVCOM-VHFR-config/namelists/namelist.grid.template namelist.grid
$ cp ../FVCOM-VHFR-config/namelists/namelist.nesting.template namelist.nesting
$ cp ../FVCOM-VHFR-config/namelists/namelist.netcdf.template namelist.netcdf
$ cp ../FVCOM-VHFR-config/namelists/namelist.numerics.template namelist.numerics
$ cp ../FVCOM-VHFR-config/namelists/namelist.obc.template namelist.obc
$ cp ../FVCOM-VHFR-config/namelists/namelist.physics.template namelist.physics
$ cp ../FVCOM-VHFR-config/namelists/namelist.restart.template namelist.restart
$ cp ../FVCOM-VHFR-config/namelists/namelist.rivers.template namelist.rivers.x2
$ cp ../FVCOM-VHFR-config/namelists/namelist.rivers.template namelist.rivers.r12
$ cp ../FVCOM-VHFR-config/namelists/namelist.startup.hotstart.template namelist.startup.hotstart
$ cp ../FVCOM-VHFR-config/namelists/namelist.station_timeseries.template namelist.station_timeseries
$ cp ../FVCOM-VHFR-config/namelists/namelist.surface.template namelist.surface
Managing Compute Nodes
Here are some useful bash loops for operating on collections of compute nodes.
If compute node instances are group-launched, their hostnames can be set with:
for n in {1..8}
do
echo nowcast${n}
ssh nowcast${n} "sudo hostnamectl set-hostname nowcast${n}"
done
Mount shared storage via NFS from head node:
for n in {1..8}
do
echo nowcast${n}
ssh nowcast${n} \
"sudo mount -t nfs -o proto=tcp,port=2049 192.168.238.14:/MEOPAR /nemoShare/MEOPAR"
done
Confirm whether or not /nemoShare/MEOPAR/
is a mount point:
for n in {1..8}
do
echo nowcast${n}
ssh nowcast${n} "mountpoint /nemoShare/MEOPAR"
done
Confirm that /nemoShare/MEOPAR/
has the shared storage mounts:
for n in {1..8}
do
echo nowcast${n}
ssh nowcast${n} "ls -l /nemoShare/MEOPAR"
done