Simulation
Overview
Requesting New Applications
DesignSafe regularly adds new software applications in support of natural hazards engineering research. You may contact DesignSafe by submitting a help ticket if you would like to request the addition of a software application to the Workspace.
Getting Your Own HPC Application
For those researchers with larger computational needs on the order of tens of thousands, or even millions of core-hours, or if you have a software application that we don't support in the web portal, you may request your own allocation of computing time on TACC's HPC systems. Your files can still be stored in the Data Depot, allowing you to share your research results with your team members, as well as curate and publish your findings.
Commercial/Licensed Applications
The DesignSafe infrastructure includes support for commercial/licensed software. While in some cases licenses can be provided by the DesignSafe project itself, not all vendors will make licenses available to larger open communities at reasonable cost. You may contact DesignSafe by submitting a help ticket if you have questions regarding a commercial software application.
ADCIRC User Guide
The ADCIRC (ADvanced CIRCulation) model is a system of computer programs, often used in coastal engineering storm surge research, for solving time-dependent, free-surface circulation and transport problems in two and three dimensions. These programs utilize the finite element method in space, allowing the use of highly flexible, unstructured grids.
An Example ADCIRC Unstructured Triangular 2D Mesh for the Houston/Galveston Bay, TX region
ADCIRC is often coupled with the wind wave model SWAN (Simulating WAves Nearshore), especially in storm-surge applications where wave radiation stress can have important effects on ocean circulation and vice versa. Typical research topics include:
- prediction of storm surge and flooding
- modeling tides and wind driven circulation
- larval transport studies
- near shore marine operations
- dredging feasibility and material disposal studies
The following user guide gives a brief overview of ADCIRC and how it and supporting programs can be run on DesignSafe.
ADCIRC Applications
ADCIRC is a suite of Fortran programs for either parallel or serial execution. The main components are:
adcirc
- Serial (non-parallelized) version of ADCIRC. This version of the application is ideal for smaller simulations and runs on a single node on Frontera. Runtimes are subject to current wait times in the Frontera job queue.
- The serial adcirc program can also be run within DesignSafe via the ADCIRC Interactive VM.
padcirc
- Parallelized version of ADCIRC. This version uses multiple compute nodes on TACC's Frontera or Lonestar6 HPC resources and is ideal for larger simulations. Runtimes are subject to current wait times in the HPC job queues.
- Within DesignSafe, padcirc simulations can be run within the ADCIRC Interactive VM, in the HPC JupyterHub, and via the TACC HPC queues on TACC's Frontera, Stampede3, and Lonestar6 HPC resources.
adcswan/padcswan
- Serial/parallelized versions of ADCIRC coupled with SWAN. The tightly coupled SWAN + ADCIRC paradigm allows both wave and circulation interactions to be solved on the same unstructured mesh, resulting in a more accurate and efficient solution technique.
- This version of the application uses multiple nodes on TACC's Frontera or Lonestar6 HPC resource and is ideal for larger simulations. Runtimes are subject to current wait times in the HPC job queues.
adcprep
- adcprep is a utility program that prepares input files for PADCIRC and PADCSWAN simulations. It partitions the mesh across the parallel processes, distributing the necessary input files, such as fort.15, fort.14, and fort.13, through a user-friendly interface.
- Note: this utility only needs to be run when using the parallel versions of ADCIRC and ADCIRC+SWAN.
Along with the above programs, utilities commonly used in conjunction with ADCIRC include:
- FigureGen - A Fortran program for visualizing ADCIRC inputs and outputs over the grid. It has a variety of capabilities and can be run within DesignSafe as a stand-alone app or through the Interactive ADCIRC VM. See the FigureGen documentation for more information.
- Kalpana - A Python package for visualizing ADCIRC inputs/outputs and converting them into shapefiles and Google KMZ files for visualization in QGIS. Kalpana can also be run through the Interactive ADCIRC VM, or as a standalone application. See the Kalpana documentation for more information.
Decision Matrix for ADCIRC Applications
Deciding which DesignSafe application to run depends on your problem domain and size. In general, the serial adcirc application is only used for testing and benchmarking, as most problems of interest require large grids. The easiest way to determine the size of your ADCIRC problem is to measure it in terms of the number of finite elements in your grid. This can be found at the top of the fort.14 file (see input files for more information):
❯ head -n 5 fort.14
Quarter Annular Grid - Example 1 ! ALPHANUMERIC DESCRIPTOR FOR GRID FILE
96 63 ! NE,NP - NUMBER OF ELEMENTS AND NUMBER OF NODAL POINTS
1 60960.0 0.0 3.0480 ! NODE NO., X, Y, DEPTH
2 76200.0 0.0 4.7625
3 91440.0 0.0 6.8580
The Quarter Annular Grid example can be found in the CommunityData folder at CommunityData/Use Case Products/ADCIRC/adcirc/adcirc_quarterannular-2d
For example, for the common benchmark test case involving a hypothetical quarter annular grid, we can see that the problem size is 96 finite elements. This case is small enough to run using the serial adcirc version, and no prior adcprep run is required.
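The element count can also be read programmatically. Below is a minimal Python sketch (the helper name read_grid_size is ours, not an ADCIRC utility); it assumes the standard fort.14 layout shown above, where the second line begins with NE and NP:

```python
# Parse the number of elements (NE) and nodal points (NP) from a fort.14 file.
# Line 1 is a free-form grid description; line 2 starts with "NE NP".

def read_grid_size(lines):
    """Return (num_elements, num_nodes) from the lines of a fort.14 file."""
    fields = lines[1].split()  # second line: NE NP, possibly followed by a comment
    return int(fields[0]), int(fields[1])

# Using the quarter annular grid header shown above:
sample = [
    "Quarter Annular Grid - Example 1 ! ALPHANUMERIC DESCRIPTOR FOR GRID FILE",
    "96 63 ! NE,NP - NUMBER OF ELEMENTS AND NUMBER OF NODAL POINTS",
]
print(read_grid_size(sample))  # (96, 63)
```

In practice you would pass `open("fort.14").readlines()` instead of the inline sample.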
For parallel runs, the main deciding factor is how many parallel processes to use.
Scaling studies have shown that targeting about 2000 nodes per process is ideal for these scenarios.
Thus the following table can be helpful for deciding where and when to run each application.
# Elements | ADCPREP? | # Nodes per Process | Sequential ADCIRC | Parallel ADCIRC | SWAN + ADCIRC |
---|---|---|---|---|---|
< 1000 | No | 1 | ✅ | 🔄 1 | 🔄 2 |
1000 - 1 million | Yes | 2000 | ❌ | ✅ | ✅ |
> 1 million | Yes 3 | 2000 | ❌ | ✅ 4 | ✅ |
- ✅: Recommended for this scenario.
- ❌: Not recommended or not yet available.
- 🔄: Viable under certain conditions or for certain job sizes. Please refer to footnotes.
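Following the roughly 2000-nodes-per-process guidance above, a rough process count can be computed directly. A small Python sketch (the function name and the target constant are illustrative, not an official sizing tool):

```python
import math

NODES_PER_PROCESS = 2000  # target from the scaling guidance above

def suggested_num_processes(num_mesh_nodes):
    """Suggest a parallel process count, with a floor of 1 for small meshes."""
    return max(1, math.ceil(num_mesh_nodes / NODES_PER_PROCESS))

print(suggested_num_processes(63))       # 1   (quarter annular test case)
print(suggested_num_processes(500_000))  # 250
```

Treat the result as a starting point; actual performance depends on the mesh and the HPC system.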
ADCIRC On DesignSafe
DesignSafe offers a variety of platforms on which to run and test ADCIRC-related applications. Behind the scenes, everything is powered by the Tapis API, which connects the compute resources with the analysis environments. In the context of how ADCIRC can be run, the two important things to keep in mind are (1) where the computation is running and (2) through what interface you are interacting with that compute platform. At a high level, the compute platforms, from most powerful to least, that DesignSafe offers for computation are:
- High Performance Computing (HPC) Job Queues - Job queues that are configured to handle jobs requiring multiple compute nodes, with GPU compute nodes also available. ADCIRC can be run on HPC job queues in the following manner:
- HPC ADCIRC Applications - By using the pre-configured HPC applications either via the Web Portal or through the Tapis API. These HPC applications run pre-built versions of ADCIRC on inputs that you can upload to your MyData on DesignSafe.
- HPC Jupyter Instances - These are Jupyter images running on an HPC queue, and can provide GPU support. For the moment, no native ADCIRC applications are supported in the HPC Jupyter instances, but ADCIRC can be installed in these environments.
- TACC Allocations - By requesting a specific allocation on TACC. Usually this is done if more resources are required for larger runs. Please open a ticket if your use case requires more than the resources provided by the pre-configured HPC applications. For more information on requesting HPC allocations, please refer to the HPC Allocations documentation.
- JupyterHub Images - These run on dedicated VMs, so they can handle more computation, but not as much as the HPC job queues, which have access to multiple nodes for massively parallel jobs.
- Interactive VMs - These run on shared VM resources, and therefore handle the lightest form of computations. The Interactive VM is launched from the web-portal, and offers a convenient environment for testing ADCIRC applications before running in a production environment.
ADCIRC Through the Interactive VM
The Interactive VM is a Docker image running on a shared VM with ADCIRC and supporting utilities pre-built for easy testing and development of ADCIRC-related applications within the DesignSafe environment.
Advantages and Disadvantages of Interactive ADCIRC VM
A few advantages of using the ADCIRC VM include:
- No queue wait time - Don't have to wait in the HPC queue to test input files.
- Pre-compiled versions of ADCIRC and supporting utilities such as FigureGen and Kalpana.
- Convenient JupyterLab interface, with plugins for GitHub repo management, code formatting, and more.
Disadvantages include:
- VM runs on a shared resource - Can be slow if many users are on the VM at once.
- Limited compute power - To simulate hurricanes at high fidelity, ADCIRC needs to run on very large grids, which may take too long in the VM. Furthermore, memory requirements for plotting and visualizing the grids and associated data may be too large for the Interactive VM.
Overall, the Interactive VM is meant to be a testing and learning environment. It is ideal for configuring and testing smaller versions of large jobs before submitting to the HPC queue, to verify that inputs/outputs are configured correctly for ADCIRC and supporting programs.
Getting Started
You can access the interactive VM via the DesignSafe-CI workspace by selecting "Workspace" > "Tools & Applications" > "Simulation" > "ADCIRC" > Select "Jupyter" > "Interactive VM for ADCIRC" to start the interactive VM.
Selecting the Interactive VM for ADCIRC
The Interactive VM will spawn a JupyterLab instance for you on a shared VM outside the HPC queues, so wait time should be minimal (although it may be a little longer if it's your first time). After your job enters the "Running" stage, a dialogue box should prompt you to connect to your instance.
Once the job is running and the window appears, click Connect
Once you click connect you should see a familiar JupyterLab interface:
Jupyter Lab Interface Launcher screen provides Kalpana kernel and terminal environment with ADCIRC and FigureGen
Example - Running an ADCIRC simulation
One way to run an ADCIRC simulation in the Interactive VM is via the Linux terminal available from the JupyterLab interface. Open a new terminal from the launcher window and navigate to your MyData directory. We then copy in some example ADCIRC input files corresponding to the ADCIRC Shinnecock Inlet test case (see ADCIRC data available on DesignSafe for more example cases).
cd ~/work/MyData
cp -r ~/work/CommunityData/Use\ Case\ Products/ADCIRC/adcirc/adcirc_shinnecock_inlet .
Serial Run
Now, from within this directory, we can run the code in serial by simply running the adcirc command from the root directory containing the ADCIRC input files.
cd ~/work/MyData/adcirc_shinnecock_inlet
adcirc
You should see an output similar to:
Example ADCIRC output indicating the max elevation and maximum water velocity values and location at each time step.
Note the outputs are created in the same directory as the inputs (see left folder bar in Jupyter Lab interface).
Parallel Run
To run the same simulation in parallel, we must first run adcprep to prep the files for a parallel run. If we want to run the same simulation with four parallel processes, we must (from a clean simulation directory) run adcprep twice.
cd ~/work/MyData
cp -r ~/work/CommunityData/Use\ Case\ Products/ADCIRC/adcirc/adcirc_shinnecock_inlet .
adcprep
Note that adcprep is an interactive program.
Example of running adcprep the first time to partition the mesh.
On the first run, you want to partition the mesh, entering in order:
- Number of processes for the parallel run - Be careful that it does not exceed the number of processes available to your mpirun command.
- Action to perform - Option 1 on the first run to partition the mesh, which must be done first.
- Name of the fort.14 file - In our case, the default name of fort.14
After partitioning the mesh, a partmesh.txt file and a metis_graph.txt file should be created.
On the second run, you will input, in the following order:
- Number of processes for the parallel run - Be careful that it does not exceed the number of processes available to your mpirun command.
- Action to perform - Option 2 on the second run to prep the rest of the input files.
Example of running adcprep the second time to prep individual PE run directories.
Note how after the second adcprep run, PE* directories are created for the input/output files corresponding to each individual process.
After both runs of adcprep, padcirc can now be run. Note that it must be launched using the mpirun command, specifying the number of MPI processes.
mpirun -np 4 padcirc
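The interactive prompts above can also be bypassed: adcprep accepts command-line flags (documented in the reference section later in this guide). Assuming adcprep, padcirc, and mpirun are on your PATH and you are in a clean directory of input files, a sketch of the full non-interactive parallel workflow is:

```shell
# Step 1: partition the mesh into 4 subdomains (creates partmesh.txt).
adcprep --np 4 --partmesh

# Step 2: decompose the remaining input files over the partitioned mesh
# (creates the PE* subdirectories).
adcprep --np 4 --prepall

# Step 3: launch the parallel run with 4 MPI processes.
mpirun -np 4 padcirc
```

The --np value must match the process count given to mpirun.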
Running ADCIRC on HPC Resources
ADCIRC can be run on HPC resources at TACC through DesignSafe using the pre-configured HPC applications. Currently these are all configured to run only on TACC's Frontera supercomputer.
App ID | App Name |
---|---|
adcirc_netcdf_55_Frontera-55.01u4 | ADCIRC-V55 (Frontera) |
padcirc_swan-net_frontera_v55-55.00u4 | PADCIRC SWAN (Frontera) - V55 |
padcirc-frontera-55.01u4 | PADCIRC (Frontera) - V55 |
Note that while the web portal provides a convenient interface for submitting HPC jobs, the Tapis API provides a more programmatic way to interact with and launch jobs. The corresponding Tapis application IDs of the available web-portal apps are listed in the table above. We will review how to run HPC jobs through both of these interfaces below.
Using the Web Portal
To run ADCIRC via the DesignSafe web portal:
- Select the appropriate ADCIRC application from the Simulation tab in the Workspace.
- Locate your Input Directory (Folder) with your input files that are in the Data Depot and follow the onscreen directions to enter this directory in the form.
- For the Parallel versions, enter your Mesh File into the form (usually fort.14 file).
- Enter a maximum job runtime in the form. See guidance on form for selecting a runtime.
- Enter a job name.
- Enter an output archive location or use the default provided.
- For the Parallel versions, select the number of nodes to be used for your job. Larger data files run more efficiently on higher node counts.
- Click Run to submit your job.
- Check the job status by clicking on the arrow in the upper right of the job submission form.
Using Tapis
Note: These instructions are for Tapis v2. See the Tapis v2 documentation for how to install the Tapis API and for more in-depth documentation. The steps below use the Tapis Command Line Interface (CLI) and assume you have authenticated with Tapis using the tapis auth init command. Note that DesignSafe's native JupyterHub environment comes with the Tapis API pre-installed, so the following can be run from within a regular Jupyter Analysis Environment from the Web Portal.
The same ADCIRC applications that are run through the front-end interface can be run via the Tapis v2 API.
For example, to view configurations for the PADCIRC (Frontera) v55 application, we can simply perform a tapis apps show command:
$ tapis apps show padcirc-frontera-55.01u4
+--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | padcirc-frontera-55.01u4 |
| name | padcirc-frontera |
| version | 55.01 |
| revision | 4 |
| label | PADCIRC (Frontera) - V55 |
| lastModified | a year ago |
| shortDescription | Parallel ADCIRC is a computer program for solving systems of shallow water equations. |
| longDescription | PADCIRC is the parallel version of the ADCIRC which is optimized for enhanced performance on multiple computer nodes to run very large models. It includes |
| | MPI library calls to allow it to operate at high efficiency on parallel machines. |
| owner | ds_admin |
| isPublic | True |
| executionType | HPC |
| executionSystem | designsafe.community.exec.frontera |
| deploymentSystem | designsafe.storage.default |
| available | True |
| parallelism | PARALLEL |
| defaultProcessorsPerNode | 168 |
| defaultMemoryPerNode | 192 |
| defaultNodeCount | 3 |
| defaultMaxRunTime | 02:00:00 |
| defaultQueue | normal |
| helpURI | https://www.designsafe-ci.org/rw/user-guides/tools-applications/simulation/adcirc/ |
| deploymentPath | /applications/padcirc-frontera-55.01u4.zip |
| templatePath | wrapper-frontera.sh |
| testPath | test/test.sh |
| checkpointable | False |
| uuid | 4548497563320577555-242ac11b-0001-005 |
| icon | None |
+--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+
To get an example job .json config for submitting this job, we can use the tapis jobs init command:
$ tapis jobs init padcirc-frontera-55.01u4 > test_job.json
$ cat test_job.json
{
"name": "padcirc-frontera-job-1715717562412",
"appId": "padcirc-frontera-55.01u4",
"batchQueue": "normal",
"maxRunTime": "01:00:00",
"memoryPerNode": "192GB",
"nodeCount": 1,
"processorsPerNode": 168,
"archive": true,
"inputs": {
"inputDirectory": "agave://designsafe.storage.community/app_examples/adcirc/EC2001"
},
"parameters": {},
"notifications": [
{
"event": "*",
"persistent": true,
"url": "carlosd@tacc.utexas.edu"
}
]
}
Note how the input directory is a DesignSafe Agave URI (see the documentation on how to use Agave URIs).
Now modify the test job as follows:
- Change the queue to the development queue (to wait less time)
- Set the node count to 1 and the processors per node to 40
- Set the runtime to 30 minutes
- Update the notification email accordingly (note that it should default to the email address associated with your DesignSafe account)
The resulting json file should look like:
{
"name": "padcirc-frontera-job-1715717815835",
"appId": "padcirc-frontera-55.01u4",
"batchQueue": "development",
"maxRunTime": "00:30:00",
"memoryPerNode": "192GB",
"nodeCount": 1,
"processorsPerNode": 40,
"archive": true,
"inputs": {
"inputDirectory": "agave://designsafe.storage.community/app_examples/adcirc/EC2001"
},
"parameters": {},
"notifications": [
{
"event": "*",
"persistent": true,
"url": "carlosd@tacc.utexas.edu"
}
]
}
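These edits can also be scripted rather than made by hand. A minimal Python sketch using only the standard library (the dictionary below is abbreviated to the fields being changed; in practice you would load your own test_job.json with json.load):

```python
import json

# Abbreviated job config, with defaults as produced by `tapis jobs init` above.
job = {
    "appId": "padcirc-frontera-55.01u4",
    "batchQueue": "normal",
    "maxRunTime": "01:00:00",
    "nodeCount": 1,
    "processorsPerNode": 168,
}

# Apply the modifications described above.
job["batchQueue"] = "development"  # shorter queue wait time
job["processorsPerNode"] = 40      # matches the example run
job["maxRunTime"] = "00:30:00"     # 30-minute maximum runtime

print(json.dumps(job, indent=2))
```

Writing the result back with json.dump gives a config ready for tapis jobs submit.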
Now, to submit the job, you can perform a tapis jobs submit command, specifying the job config file just created:
$ tapis jobs submit -F test_job.json
fatal: not a git repository (or any of the parent directories): .git
+--------+------------------------------------------+
| Field | Value |
+--------+------------------------------------------+
| id | f6949b3e-a5c9-4c8d-b985-c4bfb4baccb4-007 |
| name | padcirc-frontera-job-1715717815835 |
| status | ACCEPTED |
+--------+------------------------------------------+
Note: You can ignore the "fatal" git repository error.
To view the status of your job, you can list all your jobs (using a head command to get the first couple of jobs):
$ tapis jobs list | head -n 4
+------------------------------------------+--------------------------------------------------------------+----------+
| id | name | status |
+------------------------------------------+--------------------------------------------------------------+----------+
| f6949b3e-a5c9-4c8d-b985-c4bfb4baccb4-007 | padcirc-frontera-job-1715717815835 | RUNNING |
Note your job will be viewable from the front-end web interface as well:
Jobs submitted via the Tapis API will be visible on the web portal.
And to see the complete job config, you can always do a tapis jobs show command:
$ tapis jobs show f6949b3e-a5c9-4c8d-b985-c4bfb4baccb4-007
+--------------------+----------------------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------+----------------------------------------------------------------------------------------------------------------+
| accepted | 2024-05-14T21:59:43.351Z |
| appId | padcirc-frontera-55.01u4 |
| appUuid | 4548497563320577555-242ac11b-0001-005 |
| archive | True |
| archiveOnAppError | False |
| archivePath | clos21/archive/jobs/job-f6949b3e-a5c9-4c8d-b985-c4bfb4baccb4-007 |
| archiveSystem | designsafe.storage.default |
| blockedCount | 0 |
| created | 2024-05-14T21:59:43.354Z |
| ended | 21 minutes ago |
| failedStatusChecks | 0 |
| id | f6949b3e-a5c9-4c8d-b985-c4bfb4baccb4-007 |
| lastStatusCheck | 21 minutes ago |
| lastStatusMessage | Transitioning from status ARCHIVING to FINISHED in phase ARCHIVING. |
| lastUpdated | 2024-05-14T22:01:22.494Z |
| maxHours | 0.5 |
| memoryPerNode | 192.0 |
| name | padcirc-frontera-job-1715717815835 |
| nodeCount | 1 |
| owner | clos21 |
| processorsPerNode | 40 |
| remoteEnded | 21 minutes ago |
| remoteJobId | 6316891 |
| remoteOutcome | FINISHED |
| remoteQueue | development |
| remoteStarted | 2024-05-14T22:00:05.629Z |
| remoteStatusChecks | 2 |
| remoteSubmitted | 22 minutes ago |
| schedulerJobId | None |
| status | FINISHED |
| submitRetries | 0 |
| systemId | designsafe.community.exec.frontera |
| tenantId | designsafe |
| tenantQueue | aloe.jobq.designsafe.submit.DefaultQueue |
| visible | True |
| workPath | /scratch1/05400/ds_apps/clos21/job-f6949b3e-a5c9-4c8d-b985-c4bfb4baccb4-007-padcirc-frontera-job-1715717815835 |
+--------------------+----------------------------------------------------------------------------------------------------------------+
Note that once the status of the job reaches the FINISHED state, you should be able to find the job outputs in the archive directory. Here we see it is on the storage system designsafe.storage.default at the path clos21/archive/jobs/job-f6949b3e-a5c9-4c8d-b985-c4bfb4baccb4-007. This path is in my MyData directory, which I can view from the front end to see my output ADCIRC files:
Outputs will be found in MyData within the Data Depot.
See the FigureGen and Kalpana documentation for info on how to visualize ADCIRC output files.
ADCIRC Reference
The section below provides a brief overview of more technical aspects of ADCIRC for quick reference. Links to supporting documentation are included or can be found below in the external documentation.
ADCIRC Command Line Options
adcirc Command Line Options
Option | Description | Special Notes |
---|---|---|
-I INPUTDIR |
Set the directory for input files. | |
-O GLOBALDIR |
Set the directory for fulldomain output files. | |
-W NUM_WRITERS |
Dedicate NUM_WRITERS MPI processes to writing ascii output files. | Affects ascii formatted fort.63, fort.64, fort.73, and fort.74 files. |
adcprep Command Line Options
Option | Description | Special Notes |
---|---|---|
--np NUM_SUBDOMAINS |
Decompose the domain into NUM_SUBDOMAINS subdomains. | Required for parallel computation. |
--partmesh |
Partition the mesh only, resulting in a partmesh.txt file. | Should be done first. Generates partmesh.txt for subdomain assignments. |
--prepall |
Decompose all ADCIRC input files using the partmesh.txt file. | Requires previous execution with --partmesh . Expects default input file names. |
adcprep Runs
The usual workflow of running adcprep consists of two steps: (1) partitioning the mesh into sub-domains that each core will work on, and (2) decomposing the other input files over the partitioned mesh.
Note that running adcprep alone with no command line options will bring up an interactive menu.
Common adcprep options used include:
- Partitioning Mesh Only
adcprep --partmesh --np 32
This command partitions the mesh into 32 subdomains, creating a partmesh.txt file.
- Preparing All Input Files
adcprep --prepall --np 32
Utilizes the previously created partmesh.txt file to decompose all input files into PE* subdirectories.
PADCIRC Runs
Some common options when running PADCIRC are the following:
- Specifying Input/Output Directories
padcirc -I /path/to/input -O /path/to/output
Looks for input files in /path/to/input and writes output files to /path/to/output.
- Adjusting Writer Cores
padcirc -W 4
Dedicates 4 MPI processes to write ASCII output files.
For more information see - ADCIRC Webpage Documentation
ADCIRC Input Files
Input File Table Summary
Default File Name(s) | Description | Condition |
---|---|---|
fort.14 |
Grid and Boundary Information File | Required |
fort.15 |
Model Parameter and Periodic Boundary Condition File | Required |
fort.10 |
Passive Scalar Transport Input File | Conditional |
fort.11 |
Density Initial Condition Input File | Conditional |
fort.13 |
Nodal Attributes File | Conditional |
fort.19 |
Non-periodic Elevation Boundary Condition File | Conditional |
fort.20 |
Non-periodic, Normal Flux Boundary Condition File | Conditional |
fort.22 |
Meteorological Forcing Data | Conditional |
fort.200, ... |
Multiple File Meteorological Forcing Input | Conditional |
fort.23 |
Wave Radiation Stress Forcing File | Conditional |
fort.24 |
Self Attraction/Earth Load Tide Forcing File | Conditional |
fort.25 , 225/227 |
Ice Coverage Input Files | Conditional |
fort.35 |
Level of No Motion Boundary Condition Input | Conditional |
fort.36 |
Salinity Boundary Condition Input | Conditional |
fort.37 |
Temperature Boundary Condition Input | Conditional |
fort.38 |
Surface Temperature Boundary Values | Conditional |
fort.39 |
Salinity and Temperature River Boundary Values | Conditional |
fort.67 or fort.68 |
2DDI Hot Start Files | Conditional |
fort.141 |
Time Varying Bathymetry Input File | Conditional |
elev_stat.151 |
Elevation Station Location input file | Conditional |
vel_stat.151 |
Velocity Station Location input file | Conditional |
conc_stat.151 |
Concentration Station Location input file | Conditional |
met_stat.151 |
Meteorological Recording Station Location Input file | Conditional |
N/A | Time-Varying Weir Input File | Conditional |
N/A | Time Varying Weirs Schedule File | Conditional |
ADCIRC Output Files
ADCIRC Outputs Summary
Default File Name(s) | Description | Simulation Type |
---|---|---|
fort.6 | Screen Output | Always |
fort.16 | General Diagnostic Output | Always |
fort.33 | Iterative Solver ITPACKV 2D Diagnostic Output | Specific setting |
fort.41 | 3D Density, Temperature and/or Salinity at Specified Recording Stations | 3D simulation |
fort.42 | 3D Velocity at Specified Recording Stations | 3D simulation |
fort.43 | 3D Turbulence at Specified Recording Stations | 3D simulation |
fort.44 | 3D Density, Temperature and/or Salinity at All Nodes in the Model Grid | 3D simulation |
fort.45 | 3D Velocity at All Nodes in the Model Grid | 3D simulation |
fort.46 | 3D Turbulence at All Nodes in the Model Grid | 3D simulation |
fort.47 | Temperature Values at the Surface Layer | Specific setting |
fort.51 | Elevation Harmonic Constituents at Specified Elevation Recording Stations | Harmonic analysis |
fort.52 | Depth-averaged Velocity Harmonic Constituents at Specified Velocity Stations | Harmonic analysis |
fort.53 | Elevation Harmonic Constituents at All Nodes in the Model Grid | Harmonic analysis |
fort.54 | Depth-averaged Velocity Harmonic Constituents at All Nodes in the Model Grid | Harmonic analysis |
fort.55 | Harmonic Constituent Diagnostic Output | Harmonic analysis |
fort.61 | Elevation Time Series at Specified Elevation Recording Stations | Time series output |
fort.62 | Depth-averaged Velocity Time Series at Specified Velocity Recording Stations | Time series output |
fort.63 | Elevation Time Series at All Nodes in the Model Grid | Time series output |
fort.64 | Depth-averaged Velocity Time Series at All Nodes in the Model Grid | Time series output |
maxele.63, maxvel.63, maxwvel.63, maxrs.63, minpr.63 | Global Maximum and Minimum files for the Model Run | Specific setting |
fort.67, fort.68 | Hot Start Output | Restart capability |
fort.71 | Atmospheric Pressure Time Series at Specified Meteorological Recording Stations | Meteorological input |
fort.72 | Wind Velocity Time Series at Specified Meteorological Recording Stations | Meteorological input |
fort.73 | Atmospheric Pressure Time Series at All Nodes in the Model Grid | Meteorological input |
fort.74 | Wind Stress or Velocity Time Series at All Nodes in the Model Grid | Meteorological input |
fort.75 | Bathymetry Time Series at Specified Bathymetry Recording Stations | Specific setting |
fort.76 | Bathymetry Time Series at All Nodes in the Model Grid | Specific setting |
fort.77 | Time-varying weir output file | Specific structure |
fort.81 | Depth-averaged Scalar Concentration Time Series at Specified Concentration Recording Stations | Scalar transport |
fort.83 | Depth-averaged Scalar Concentration Time Series at All Nodes in the Model Grid | Scalar transport |
fort.90 | Primitive Weighting in Continuity Equation Time Series at All Nodes in the Model Grid | Specific setting |
fort.91 | Ice Coverage Fields at Specified Recording Stations | Ice modeling |
fort.93 | Ice Coverage Fields at All Nodes in the Model Grid | Ice modeling |
ADCIRC Examples
Quarter Annular Harbor with Tidal Forcing Example: ADCIRC Simulation Guide
The Quarter Annular Harbor is commonly used as a test case to assess the performance of finite element numerical schemes applied to shallow water equations.
Problem Setup
The Quarter Annular Harbor problem features a domain that is a quarter of an annulus, bounded by land on three sides and an open ocean boundary. The setup includes:
- Inner radius (r1): 60,960 m
- Outer radius (r2): 152,400 m
- Bathymetry: Varies quadratically from h1 = 3.048 m at r1 to h2 = 19.05 m at r2
- Finite element grid: Radial spacing of 15,240 m and angular spacing of 11.25 degrees
The problem's geometry tests the model's performance in both horizontal coordinate directions, with an emphasis on identifying spurious modes and numerical dissipation.
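The quadratic bathymetry can be written as h(r) = h1 · (r/r1)², a closed form that is our reading of "varies quadratically" and is consistent with both stated depths. A quick Python check of the numbers:

```python
# Quarter annular test case geometry from the setup above.
R1, R2 = 60_960.0, 152_400.0  # inner and outer radii (m)
H1 = 3.048                    # depth at the inner radius (m)

def depth(r):
    """Quadratic bathymetry: depth in meters at radius r."""
    return H1 * (r / R1) ** 2

print(depth(R1))  # 3.048 at the inner radius
print(depth(R2))  # ~19.05 at the outer radius, since (r2/r1)**2 = 2.5**2 = 6.25
```

This reproduces h2 = 19.05 m exactly, confirming the stated values are mutually consistent.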
ADCIRC Inputs
Two primary input files are required:
- Grid and Boundary Information File (fort.14) - This file outlines the mesh configuration, including:
- Grid Information: 96 elements and 63 nodes.
- Nodal Information: Node number, horizontal coordinates, and depth.
- Elemental Information: Element number, nodes per element, and comprising node numbers.
- Boundary Conditions:
- Elevation specified boundary: 1 segment with 9 nodes (Node 7 to Node 63).
- Normal flow boundary: 1 segment with 21 nodes (Node 63 to Node 7).
- Model Parameter and Periodic Boundary Condition File (fort.15) - Specifies model parameters, including:
- Initialization: Cold started from a state of rest.
- Coordinate System: Cartesian.
- Nonlinearities: Finite amplitude, advection, and quadratic bottom friction.
- Forcings: No tidal potential or wind stress. Gravity in m/s².
- Boundary Forcing: Sinusoidal elevation with a period of 44,712 s, amplitude of 0.3048 m, and phase of 0 degrees, ramped up over the first two days.
- Simulation Duration: 5 days with a time step of 174.656 s.
- Output Settings: Water level and velocity time series output at specified intervals and locations. Harmonic analysis of model elevation and velocity fields for the M2 constituent on the final day. Hot start files generated every 512 time steps.
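One observation about these parameters (easy to verify, though not stated in the original guide): the time step very nearly divides the forcing period into 256 equal steps, which is convenient for the harmonic analysis of the M2 constituent, and the hot start interval of 512 steps is then two tidal cycles:

```python
# Forcing period (44,712 s, roughly the M2 tide) and time step (174.656 s)
# from the fort.15 parameters above.
period = 44_712.0
dt = 174.656

steps_per_cycle = period / dt
print(round(steps_per_cycle))  # 256 time steps per tidal cycle
```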
ADCIRC Outputs
The simulation generates several output files, briefly summarized as follows:
- General Diagnostic Output (fort.16): Echoes input file information, ADCIRC processing data, and error messages.
- Iterative Solver Diagnostic (fort.33): Contains solver diagnostics, typically empty after successful runs.
- Harmonic Constituents:
  - Elevation at specified stations (fort.51).
  - Velocity at specified stations (fort.52).
  - Elevation at all nodes (fort.53).
  - Velocity at all nodes (fort.54).
- Time Series Output:
  - Elevation at specified stations (fort.61).
  - Velocity at specified stations (fort.62).
  - Elevation at all nodes (fort.63).
  - Velocity at all nodes (fort.64).
- Hot Start Files (fort.67, fort.68): Facilitate restarting simulations from specific states.
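The time-series files above share a simple ASCII layout: a header, then for each output time a time-stamp line followed by one value per node. A minimal reader sketch, assuming that standard layout and using a small inline sample (the values are made up for illustration):

```python
# Hypothetical minimal reader for an ADCIRC elevation time series (fort.63).
# ASSUMPTION: the standard ASCII layout of two header lines, then for each
# output time a "time  timestep" line followed by one "node  value" row per node.
SAMPLE_FORT63 = """\
quarter annulus test run
2 3 1800.0 300 1
1800.0 300
1 0.0123
2 0.0456
3 0.0789
3600.0 600
1 0.0234
2 0.0567
3 0.0891
"""

def read_elevation_series(text: str):
    """Return {time: {node: elevation}} parsed from fort.63-style text."""
    lines = text.strip().splitlines()
    n_sets, n_nodes = int(lines[1].split()[0]), int(lines[1].split()[1])
    series, pos = {}, 2
    for _ in range(n_sets):
        time = float(lines[pos].split()[0])
        pos += 1
        nodal = {}
        for _ in range(n_nodes):
            node, value = lines[pos].split()
            nodal[int(node)] = float(value)
            pos += 1
        series[time] = nodal
    return series

series = read_elevation_series(SAMPLE_FORT63)
print(series[1800.0][2])  # 0.0456
```

The station files (fort.61/fort.62) follow the same pattern with stations in place of nodes.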
Running the Example
This simulation example is best run from the ADCIRC Interactive VM.
- Start the ADCIRC Interactive VM
- Copy the example input files (fort.14 and fort.15) into your working directory.
- Execute ADCIRC, specifying the input files and any runtime options as needed.
References
- ADCIRC Website Examples
- Lynch, D.R. and W.G. Gray. 1979. A wave equation model for finite element tidal computations. Computers and Fluids. 7:207-228.
Shinnecock Inlet, NY with Tidal Forcing Example: ADCIRC Simulation Guide
This documentation outlines the procedure and details for setting up and running an ADCIRC simulation focused on the tidal hydrodynamics in the vicinity of Shinnecock Inlet, NY. This example derives from a study conducted at the U.S. Army Corps of Engineers Coastal Hydraulics Laboratory. It is commonly used as a test-case for ADCIRC releases.
Problem Setup
Shinnecock Inlet is a geographical feature located along the outer shore of Long Island, New York. The simulation utilizes a finite element grid to model the hydrodynamics in this area, reflecting the following characteristics:
- The grid's discretization varies from approximately 2 km offshore to around 75 m in nearshore areas.
- Due to the coarse resolution, this model does not accurately resolve circulation near the inlet and the back bay.
The input files for this simulation can be found in the CommunityData directory at CommunityData/Use Case Products/ADCIRC/adcirc/adcirc_shinnecock_inlet.
ADCIRC Input
- Grid and Boundary Information File (fort.14): This file defines the simulation's spatial domain, containing:
- 5780 elements and 3070 nodes, detailing the mesh used for the simulation.
- Nodal and elemental information, including node numbers, horizontal coordinates, depth, and elements' composition.
- Boundary specifications:
- An elevation specified open boundary with 75 nodes (from node 75 to node 1).
- A normal flow mainland boundary with 285 nodes (from node 1 to node 75).
- Model Parameter and Periodic Boundary Condition File (fort.15): This file outlines the simulation's parameters:
- Initialization from a state of rest (cold start).
- Use of a longitude-latitude coordinate system.
- Inclusion of nonlinearities such as finite amplitude (with elemental wetting and drying), advection, and hybrid bottom friction.
- The model is forced using tidal potential terms and along the elevation boundary with 5 tidal constituents (M2, S2, N2, O1, K1), ramped up over the first two days.
- The simulation duration is 5 days with a time step of 6 seconds.
- Output of water level and velocity time series every 300 time steps (half-hour) at all nodes from days 3.8 to 5. No harmonic output or hot start files are produced.
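The fort.14 layout described above (a title line, an element/node count line, a nodal table, then an element table) can be read with a short script. A minimal sketch, assuming the standard ASCII format and a tiny made-up mesh in place of the real 5780-element grid:

```python
# Minimal fort.14 (grid file) reader sketch.
# ASSUMPTION: the standard layout of a title line, an "NE NP" count line,
# NP rows of "node x y depth", then NE rows of "element 3 n1 n2 n3".
SAMPLE_FORT14 = """\
tiny two-element mesh
2 4
1 0.0 0.0 5.0
2 1.0 0.0 5.0
3 1.0 1.0 7.5
4 0.0 1.0 7.5
1 3 1 2 3
2 3 1 3 4
"""

def read_grid(text: str):
    """Return (nodes, elements): node -> (x, y, depth), element -> node triple."""
    lines = text.strip().splitlines()
    n_elements, n_nodes = map(int, lines[1].split()[:2])
    nodes, elements = {}, {}
    for line in lines[2:2 + n_nodes]:
        jn, x, y, depth = line.split()[:4]
        nodes[int(jn)] = (float(x), float(y), float(depth))
    for line in lines[2 + n_nodes:2 + n_nodes + n_elements]:
        je, _nhy, n1, n2, n3 = line.split()[:5]
        elements[int(je)] = (int(n1), int(n2), int(n3))
    return nodes, elements

nodes, elements = read_grid(SAMPLE_FORT14)
print(len(nodes), len(elements))  # 4 2
```

The boundary segment blocks follow the element table in a real fort.14 and are omitted from this sketch.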
ADCIRC Output
The simulation generates the following output files:
- General Diagnostic Output (fort.16): Includes input file information, ADCIRC processing data, and error messages.
- Iterative Solver Diagnostic (fort.33): Contains diagnostic information from the iterative solver, typically empty upon successful completion.
- Elevation Time Series (fort.63): Elevation time series at all nodes every 300 time steps.
- Depth-averaged Velocity Time Series (fort.64): Depth-averaged velocity time series at all nodes every 300 time steps.
References
- ADCIRC Website Examples
- Militello, A., and Kraus, N. C. (2000). Shinnecock Inlet, New York, Site Investigation, Report 4, Evaluation of Flood and Ebb Shoal Sediment Source Alternatives for the West of Shinnecock Interim Project, New York. Technical Report CHL-98-32. U.S. Army Engineer Research and Development Center, Vicksburg, Mississippi.
- Morang, A. (1999). Shinnecock Inlet, New York, Site Investigation Report 1, Morphology and Historical Behavior. Technical Report CHL-98-32, US Army Engineer Waterways Experiment Station, Vicksburg, Mississippi.
- Williams, G. L., Morang, A., Lillycrop, L. (1998). Shinnecock Inlet, New York, Site Investigation Report 2, Evaluation of Sand Bypass Options. Technical Report CHL-98-32, US Army Engineer Waterways Experiment Station, Vicksburg, Mississippi.
ADCIRC Installation (Advanced)
For the advanced user, below is a guide on how to install ADCIRC locally. These instructions can be executed within a user's JupyterHub environment (HPC and non-HPC) to obtain a local install of ADCIRC. Note that this is for advanced users only.
Spack ADCIRC Installation (DesignSafe JupyterHub)
The instructions below are for DesignSafe JupyterHub instances (non-HPC). They allow you to test and run ADCIRC examples within a Jupyter session, without having to use HPC resources.
Move into your MyData directory and clone the spack repo. Note we put the spack repo in MyData so that it persists over Jupyter sessions.
cd ~/MyData
git clone -c feature.manyFiles=true https://github.com/spack/spack.git ~/MyData/spack
After cloning Spack, initialize it with:
source ~/MyData/spack/share/spack/setup-env.sh
This needs to be run every time a new Jupyter terminal environment is spawned. To do this automatically, add the command to your ~/.bashrc, or alternatively set up an alias:
alias spack-setup='source ~/MyData/spack/share/spack/setup-env.sh'
Now clone the ADCIRC Spack repo and add it as a repository in Spack:
cd ~/MyData
git clone https://github.com/adcirc/adcirc-spack.git
spack repo add ~/MyData/adcirc-spack
Now to install ADCIRC:
spack install adcirc@main +swan +grib
Note: The installation above may take a long time!
To activate ADCIRC in your environment just run:
spack load adcirc
That should make the padcirc, adcirc, adcprep, and padcswan executables available in your path.
For more information on how to use Spack, see the Spack documentation. For more information on ADCIRC's Spack repository and build options, see the ADCIRC Spack Repository.
Resources and Documentation
The following sections provide further information on useful resources for using ADCIRC.
ADCIRC Data Hosted on DesignSafe
A wealth of ADCIRC-related data can already be found on DesignSafe, in both the CommunityData and Published Projects folders. The following are a few notable locations with data ready to use for ADCIRC simulations. Note that you will most likely want to copy these files to your MyData or HPC work directory before using them, since the CommunityData and Published Projects directories are read-only, which can lead to issues when running jobs/notebooks from those directories.
- Community Data:
  - Use Case Products - CommunityData/Use Case Products/ADCIRC/adcirc
  - App Examples - CommunityData/app_examples/
- Notable ADCIRC Published Projects:
To see a full list of ADCIRC-related data in the Data Depot, search for ADCIRC in the keyword search bar.
External Documentation
There are a wide variety of ADCIRC resources on the web. Below are a few to help navigate and learn more about ADCIRC.
- ADCIRC GitHub Page - As of v54, the official central information hub for all things ADCIRC. Contains source code, utility programs, and issue tracking; a good place for developers and users interested in staying up to date with the latest developments in ADCIRC, along with bug fixes and issues. Useful links include:
- Issues - For reporting bugs, searching for common issues with ADCIRC, or asking questions/feature requests.
- Test Suite - Test suite of ADCIRC examples, used for testing new releases of ADCIRC.
- ADCIRC Official Website - Older primary source for all things ADCIRC, including model description, capabilities, and latest updates. Useful Sub pages include:
- Input File Descriptions/Output File Descriptions - Mostly correct for basic inputs/outputs, barring any changes since v54+.
- Parameter Definitions
- Example Problems
- ADCIRC Wiki - Out of date, but still contains some useful information.
Other ADCIRC Utilities and Libraries
The ADCIRC community is vast, with utility libraries being developed at different institutions around the world. Below we highlight a few other third-party ADCIRC utilities and libraries that are not currently supported on DesignSafe, but can be useful and may be supported in the future.
Do you have an ADCIRC utility or library you'd like to add to the list? Open a ticket to contribute to the user guide!
ClawPack User Guide
Clawpack (“Conservation Laws Package”) is a collection of finite volume methods for linear and nonlinear hyperbolic systems of conservation laws. Clawpack employs high-resolution Godunov-type methods with limiters in a general framework applicable to many kinds of waves.
More detailed information and Clawpack user documentation can be found at the Clawpack website.
How to Start a Clawpack Interactive Session in the Workspace
The Clawpack 5.4.0 suite has been installed into the DesignSafe Jupyter Hub environment. It is available for use with Python2 both from the command-line and in Jupyter notebooks. If using python2 from the terminal, you'll want to use the binaries located in '/opt/conda/envs/python2/bin'. To start using Clawpack:
- Select the Clawpack application from the Simulation tab in the Workspace.
- Click on Launch Jupyter.
- Open a GeoClaw notebook (see below to access an example notebook to get you started).
Example Clawpack Use Case
An example GeoClaw notebook can be found by navigating to 'community / Jupyter Notebooks / Workspace Application Sample Notebooks / GeoClaw' and opening 'GeoClaw_topotools_example.ipynb'; it can be previewed and copied to your own space from there.
Dakota User Guide
The Dakota project delivers both state-of-the-art research and robust, usable software for optimization and uncertainty quantification. Broadly, the Dakota software's advanced parametric analyses enable design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models. The Dakota toolkit provides a flexible, extensible interface between simulation codes (e.g. OpenSees) and its iterative systems analysis methods, which include:
- optimization with gradient and nongradient-based methods;
- uncertainty quantification with sampling, reliability, stochastic expansion, and epistemic methods;
- parameter estimation using nonlinear least squares (deterministic) or Bayesian inference (stochastic);
- and sensitivity/variance analysis with design of experiments and parameter study methods.
These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty.
More detailed information and Dakota user documentation can be found at the Dakota website.
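As a point of reference before the form-based workflow below, a Dakota study is driven by a text input file organized into environment, method, variables, interface, and responses blocks. A hypothetical sketch (the driver script and variable names are placeholders, not part of the DesignSafe example):

```text
# hypothetical_dakota.in -- illustrative only; the block names follow the
# Dakota keyword reference, but driver and variable names are assumptions.
environment
  tabular_data

method
  sampling
    sample_type lhs
    samples 20

variables
  uniform_uncertain 2
    lower_bounds  0.0  0.0
    upper_bounds  1.0  1.0
    descriptors  'x1' 'x2'

interface
  fork
    analysis_drivers 'run_opensees.sh'

responses
  response_functions 1
  no_gradients
  no_hessians
```

A file of this shape is what the form below refers to as the Dakota Drive File.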
How to Submit a Dakota Job in the Workspace
- Select the Dakota application from the Simulation tab in the Workspace.
- Locate your Input Directory (Folder) with your input files that are in the Data Depot and enter this directory in the form.
- Enter the name of your Dakota Drive File located in your Input Directory into the form.
- Enter a comma separated list of modules to load.
- Enter a name for your Dakota Output File.
- Enter a name for your Dakota Input File.
- Enter a name for your Dakota Error File.
- Enter a maximum job runtime in the form. See guidance on form for selecting a runtime.
- Enter a Job name.
- Enter an output archive location or use the default provided.
- Select the number of nodes to be used for your job. Larger data files run more efficiently on higher node counts.
- Click Run to submit your job.
- Check the job status by clicking on the arrow in the upper right of the job submission form.
IN-CORE User Guide
The Interdependent Networked Community Resilience Modeling Environment (IN-CORE) platform is in continuous development by the Center of Excellence for Risk-Based Community Resilience Planning (CoE), a multi-university research center funded by the National Institute of Standards and Technology (NIST). The platform focuses on measurement science to support community resilience assessment through a risk-based approach, supporting decision-making for the definition, prioritization, and comparison of resilience strategies at the community level. Moreover, the platform is intended to offer the potential for community-contributed code as resilience modeling research evolves.
The IN-CORE platform's main analysis tools correspond to the Python libraries pyincore and pyincore-viz. Users can access these using IN-CORE lab (hosted on the NCSA cloud system) or by installing the Python libraries on local computers; the latter allows the user to run the job locally or submit the job through the NCSA cloud system.
This user guide presents how to launch IN-CORE with DesignSafe resources, leveraging the computational capabilities within the DesignSafe Cyberinfrastructure. Moreover, advantages of launching IN-CORE within DesignSafe include the potential to integrate shared data, streamline data curation and publication of results that emerge from simulation with IN-CORE, or even couple IN-CORE simulations and codes with those from other DesignSafe tools and resources.
IN-CORE on DesignSafe Cyberinfrastructure (DesignSafe-CI)
The JupyterLab shell on DesignSafe-CI can be used to access the pyincore and pyincore-viz functions on DesignSafe-CI. Computational capabilities within the DesignSafe-CI are leveraged to enhance the regional-scale assessment tools within IN-CORE. DesignSafe users can also use the seamless communication of intermediate and final results from IN-CORE python packages with other DesignSafe tools through the DesignSafe-CI Jupyter Notebooks and Data Depot repositories. For example, high-fidelity hazard estimates can be obtained from different resources at DesignSafe and used as input data for risk and resilience analysis using IN-CORE Python packages. Monte Carlo simulations or optimization can be run leveraging the HPC resources of DesignSafe. The interaction between the data archived in Data Depot, tools and applications’ workflow in DesignSafe-CI, and the use of IN-CORE tools through JupyterLab allows the users to create different roadmaps for analysis, visualization, and results publication to advance the field of regional-scale community resilience estimation.
Through a client-based design, the IN-CORE Python libraries can connect directly to the NCSA cloud system to retrieve published models and run analyses. However, to leverage the resources at DesignSafe-CI, the client mode must be disabled (more information is presented below), and the models must be created "locally" (on DesignSafe-CI JupyterHub).
Installation of pyincore on DesignSafe
The user can install pyincore using either of two options:
1) a temporary user installation, or 2) creating a dedicated kernel for pyincore.
While option 1 may be faster, option 2 is the formal (recommended) approach for installing the IN-CORE packages. Additionally, some packages related to pyincore, e.g. pyincore-viz, may present installation conflicts when using the temporary option (option 1). For more information about installing Python libraries on DesignSafe-CI, refer to Installing Packages.
To start, access DesignSafe JupyterHub via the DesignSafe-CI. Select "Tools & Applications" > "Analysis" > "Jupyter". When asked to select a notebook image, select the "Updated Jupyter Image" and click "Start My Server".
Figure 1. Access to the JupyterHub on DesignSafe-CI
Installing pyincore without creating a new environment (temporary installation)
Installing the pyincore package on DesignSafe directly in the "base" environment in Jupyter can be done using the pip invocation presented below.
!pip3 -q install pyincore --user
After this, you may need to restart your kernel (click on Kernel/Restart Kernel and Clear All Outputs).
Installing pyincore creating a new environment (recommended)
To install the maintained version of the pyincore and the pyincore-viz packages, a particular environment using conda
must be created. This step requires installing the kernelutility
Python package as follows:
!pip3 -q install kernelutility
After this, you may need to restart your kernel (click on Kernel/Restart Kernel and Clear All Outputs). For more information on the use of kernelutility
refer to Custom User-Defined Kernels.
Next, use the kernelutility
package to create a sharable kernel supported by the Updated Jupyter Image on DesignSafe. Using the following command, create a new environment called 'pyincore_on_DS':
from kernelutility import kernelset
kernelset.create('pyincore_on_DS')
Once the previous cell has finished running, select the newly created environment in the "switch kernel" panel (upper right corner of the notebook, as shown in Figure 2). Select specifically the one named Python: [conda env:pyincore_on_DS]. Then, restart the kernel (click on Kernel/Restart Kernel and Clear All Outputs).
Figure 2. Selecting the newly created conda environment
Use the %conda install command to install pyincore and pyincore-viz in the newly created environment.
%conda install -c in-core pyincore
%conda install -c in-core pyincore-viz
At this point, you have created a new environment and installed pyincore and pyincore-viz with their respective dependencies; one last restart of the kernel is required. The created environment remains accessible throughout the current and future sessions.
Reproducibility after shutting down your server (if you installed pyincore using kernelutility)
The Jupyter session will end after a few days without activity, or when the user shuts down the server ("File" > "Hub Control Panel" > "Stop My Server" > "Log Out"). In that case, the next time the user accesses the Updated Jupyter Image, the user-defined kernels (pre-existing conda environments, such as the newly created 'pyincore_on_DS') will not be immediately visible. If this happens, run the following commands:
!pip -q install kernelutility
from kernelutility import kernelset
After waiting a few seconds, the pre-existing user-defined kernels may appear after clicking on the "switch kernel" panel (right upper corner, as shown in Figure 2). If not, refresh your browser and check the "switch kernel" panel again.
For more information on accessing created environments, refer to Custom User-Defined Kernels.
Example: IN-CORE tools within DesignSafe-CI
The following example leverages the use case published in the Data Depot as PRJ-4675 “IN-CORE on DesignSafe”. The notebook presents a use case focused on the risk analysis of a regional-scale portfolio of bridges exposed to seismic events. The goal of this use case is to show the interaction of DesignSafe with the IN-CORE Python tools. You can copy this folder to your “My Data” folder to enable editing permissions, allowing you to work directly on the Jupyter Notebook. To access the main Jupyter notebook of the published use case (called main.ipynb), click on the button below.
For more information about advanced analyses in IN-CORE, including housing unit allocation, population dislocation evaluation, recovery analyses, and computable general equilibrium modeling for quantifying community-level recovery, the reader is referred to the IN-CORE user documentation at the IN-CORE website.
Loading a hazard scenario with pyincore
Create an Earthquake
object from an existing hazard map (in *.tif format) available in the use case folder '/hazard'. The hazard map can be obtained from other tools within DesignSafe or IN-CORE, or developed manually by the user.
In this example, use the files '/hazard/eq-mmsa-pga.tif' and '/hazard/eq-dataset.json' to create the hazard map scenario as follows:
import os
from pyincore import Earthquake

route_haz = 'hazard/'
# Create the local earthquake object
eqset = Earthquake.from_json_file(os.path.join(route_haz, "eq-dataset.json"))
# Add the local files that describe the intensities
eqset.hazardDatasets[0].from_file(os.path.join(route_haz, "eq-mmsa-pga.tif"),
                                  data_type="ergo:probabilisticEarthquakeRaster")
Definition of the exposed assets
The illustrative example uses a set of hypothetical bridges located in the Memphis Metropolitan Statistical Area (MMSA). The input data is located in the '/exposure' folder of the use case. First, create a new shapefile adding an identifier to each asset within the 'MMSA_bridges' shapefile. Then, a pyincore Dataset object is created using this newly defined shapefile.
import geopandas as gpd
from pyincore import GeoUtil, Dataset

# Read the original shapefile
gdf = gpd.read_file("exposure/MMSA_bridges.shp")
# Add a unique identifier (guid) to each asset
GeoUtil.add_guid("exposure/MMSA_bridges.shp", "exposure/MMSA_bridges_w_guid.shp")
# Create a Dataset object from the modified shapefile
MMSA_bridges = Dataset.from_file("exposure/MMSA_bridges_w_guid.shp",
                                 data_type="ergo:bridgesVer3")
You can check the exposed assets using Python libraries for the visualization of geospatially distributed data (see, for example, the PRJ-3939 Jupyter notebook for visualization of spatially distributed data in risk and resilience analysis). An interactive exploration using the Plotly package can be obtained as follows:
# Retrieve the GeoDataframe
bridges_gdf = MMSA_bridges.get_dataframe_from_shapefile()
# Create the interactive plot
import plotly.express as px
fig = px.scatter_mapbox(bridges_gdf, lat="Latitude", lon="Longitude",
hover_name="STRUCTURE_",
hover_data=["DECK_WIDTH","SPEED_AVG",
"YEAR_BUILT","SPEED_AVG"],
color="MAIN_UNIT_", size="MAIN_UNIT_", size_max=15,
color_continuous_scale=px.colors.sequential.Cividis_r,
title="Bridges in Memphis, TN-MS-AR Metropolitan Statistical Area")
fig.layout.coloraxis.colorbar.title = 'Number of spans'
fig.update_layout(mapbox_style="carto-positron",
height=350, width=600, margin={"r":0,"t":30,"l":0,"b":0},
title_x=0.45, title_y=1)
fig
The obtained figure is presented in Figure 3.
Figure 3. Visualization of the exposed bridges using interactive plots
Fragility models description using pyincore JSON structure
A wide variety of fragility curve models can be created using pyincore (from univariate to parametric functions). A system-level fragility curve (see Figure 4) is created below by passing the “fragility description” string to the function FragilityCurveSet.from_json_str(). Check the details in the use case example.
from pyincore import FragilityCurveSet
from pyincore_viz.plotutil import PlotUtil as plot
# Create the fragility description string
definition_frag1 = """{
"description": "Bridge Class 1",
"authors": ["Rincon, R., Padgett, J.E."],
"paperReference": null,
"resultType": "Limit State",
"hazardType": "earthquake",
"inventoryType": "bridges",
…
}"""
# Create the FragilityCurveSet object and visualize it
fragility_class1 = FragilityCurveSet.from_json_str(definition_frag1)
plt = plot.get_fragility_plot(fragility_class1, start=0, end=1.5)
Figure 4. Univariate visualization of the created fragility functions
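Univariate fragility curves like the ones plotted above are commonly lognormal. As an illustrative stand-in (not pyincore's internal representation), the probability of reaching a damage state at a given intensity measure can be computed with the standard normal CDF; the median and dispersion values below are placeholders:

```python
# Illustrative lognormal fragility evaluation (NOT pyincore's internal model).
# The median and beta values here are placeholders, not use-case parameters.
import math

def lognormal_fragility(im: float, median: float, beta: float) -> float:
    """P(damage state reached | intensity measure im) for a lognormal curve."""
    z = math.log(im / median) / beta
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the median intensity the exceedance probability is 0.5 by definition.
print(lognormal_fragility(0.4, median=0.4, beta=0.6))  # 0.5
```

Larger beta values flatten the curve, spreading the transition over a wider range of intensities.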
Fragility Mapping (fragility functions link to exposure model)
The link between the Dataset exposure model and the available FragilityCurveSets is obtained through the MappingSet() function. Here the link is assumed to be the column 'archetype' of the bridge Dataset object.
from pyincore import Mapping, MappingSet
fragility_entry_class1 = {"Non-Retrofit Fragility ID Code": fragility_class1}
fragility_rules_class1 = {"OR":["int archetype EQUALS 1"]}
fragility_mapping_class1 = Mapping(fragility_entry_class1, fragility_rules_class1)
… # Here the rest of fragilities should be created
fragility_mapping_definition = {
"id": "N/A",
"name": "MMSA mapping object",
"hazardType": "earthquake",
"inventoryType": "bridges",
'mappings': [fragility_mapping_class1,
… # Here the rest of fragilities should be added to the mapping set
],
"mappingType": "fragility"
}
fragility_mapping_set = MappingSet(fragility_mapping_definition)
Bridge Damage Analysis
The BridgeDamage object in pyincore is used to compute the damage probabilities conditioned on a given set of hazard events. To run this analysis on DesignSafe, the IncoreClient object has to be set to 'offline', as explained in the introduction of this user guide:
client = IncoreClient(offline=True)
The BridgeDamage object is used to link the hazard, datasets, and computation parameters for the desired type of analysis as well as to run the damage analysis. The following commands are used for this purpose:
from pyincore.analyses.bridgedamage import BridgeDamage
from pyincore import IncoreClient
# Sets the use of INCORE to "offline". This is required to use the capabilities at DesignSafe
client = IncoreClient(offline=True)
# Create a BridgeDamage object and set the input parameters
bridge_dmg = BridgeDamage(client)
bridge_dmg.set_input_dataset("bridges", MMSA_bridges)
bridge_dmg.set_input_dataset('dfr3_mapping_set', fragility_mapping_set)
bridge_dmg.set_input_hazard("hazard", eqset)
bridge_dmg.set_parameter("result_name", "MMSA_bridges_dmg_result")
bridge_dmg.set_parameter("num_cpu", 8)
# Run bridge damage analysis
bridge_dmg.run_analysis()
The obtained results can later be passed to subsequent models within the pyincore library, such as MonteCarloFailureProbability, PopulationDislocation, and CapitalShocks, among others. For example, using the MonteCarloFailureProbability model, the scenario-based bridge failure probabilities for the previously created Earthquake, Dataset, FragilityCurveSet, and MappingSet objects are presented in Figure 5.
Figure 5. Visualization of the (Monte Carlo-based) failure probability using interactive plots
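The Monte Carlo step can be illustrated with a self-contained sketch; this is a simplified stand-in for pyincore's MonteCarloFailureProbability model, not its actual implementation: draw uniform samples per realization and average the failures.

```python
# Simplified stand-in for a Monte Carlo failure-probability estimate
# (NOT pyincore's MonteCarloFailureProbability implementation).
import random

def mc_failure_probability(p_fail: float, n_samples: int, seed: int = 42) -> float:
    """Estimate a failure probability by Monte Carlo sampling.

    Each realization fails when a uniform draw falls below p_fail; the
    estimate is the fraction of failed realizations.
    """
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples) if rng.random() < p_fail)
    return failures / n_samples

estimate = mc_failure_probability(p_fail=0.3, n_samples=100_000)
print(estimate)
```

With 100,000 samples the estimate lands very close to the underlying probability of 0.3; the sampling error shrinks proportionally to 1/sqrt(n).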
LS-DYNA User Guide
Ls-Dyna is a general-purpose multi-physics simulation software package. It originated from DYNA3D, developed by John Hallquist at the Lawrence Livermore National Laboratory in 1976. The software was commercialized as LS-Dyna in 1988 by Livermore Software Technology Corporation (LSTC).
The main Ls-Dyna capabilities are:
- 2D and 3D;
- Nonlinear dynamics (implicit and explicit solvers);
- Static and quasi-static analysis;
- Thermal analysis and electromagnetic analysis;
- Computational fluid dynamics:
- Incompressible and compressible solver;
- Fluid-structure interactions;
- Smoothed-particle hydrodynamics;
- Geotechnical and soil-structure interactions;
- Contact algorithms;
- Large library of elements (shell, discrete, solid, etc).
Ls-Dyna on DesignSafe
The following apps are available in the Workspace:
- LS-PrePost: Pre-processing and post-processing of models (Workspace > Simulation > LS-DYNA > LS-Pre/Post);
- LS-Dyna: Actual solver (version 9.1.0), serial and parallel versions (after activation - Workspace > My Apps > LS-DYNA).
Activate Ls-Dyna on DesignSafe
DesignSafe (through TACC) has negotiated with LSTC to allow LS-DYNA access on TACC systems for academic research. Users can submit a ticket (https://www.designsafe-ci.org/help/new-ticket/) requesting LS-DYNA access and are granted access upon verification with LSTC that they have, or will acquire, an academic departmental license.
A Request Activation button is also available in Workspace > Simulation > LS-DYNA:
Once activated, Ls-Dyna will appear in Workspace > My Apps tab.
How to launch LS-Dyna
Examples in this guide:
- Launching LS-Pre/Post to generate or visualize model via DesignSafe web portal;
- Launching a single job via DesignSafe web portal;
- Launching batch of jobs via DesignSafe web portal;
- Launching batch of jobs via Command Line Interface.
Launching LS-Pre/Post
- Select the LS-Pre/Post app from the drop-down menu at (Workspace > Simulation > LS-DYNA):
- Fill the form with the following information:
- Working directory: the directory that contains the files that you want to work on;
- Desktop resolution: select the desktop screen size for the visualization;
- Maximum Job runtime: The maximum time you expect this job to run for. Note that after this amount of time your job will be killed by the job scheduler;
- Job name;
- Job output archive location (optional): location where the job output should be archived;
- Node Count: Number of requested process nodes for the job;
- Processors per Node: number of cores per node for the job. The total number of cores used is equal to NodeCount x ProcessorsPerNode.
- Click on Run;
- Once ready, a new window will pop up.
- Click on Connect! and start using LS-Dyna Pre/Post.
Launching a single job via DesignSafe web portal
- Select LS-DYNA (Serial) from the LS-DYNA app in (Workspace > My Apps > LS-DYNA):
- Fill the form with the following information:
- Working directory: the directory that contains the files that you want to work on;
- LS-DYNA Input: provide the input file name;
- Maximum Job runtime: The maximum time you expect this job to run for. Note that after this amount of time your job will be killed by the job scheduler;
- Job name;
- Job output archive location (optional): location where the job output should be archived;
- Node Count: Number of requested process nodes for the job;
- Processors per Node: number of cores per node for the job. The total number of cores used is equal to NodeCount x ProcessorsPerNode;
- Click on Run.
- Follow the Job Status on the right tab.
- When the analysis is completed, two options are available:
- Launching LS-PrePost again to visualize/extract results;
- Transfer output files via Globus (see details at: https://www.designsafe-ci.org/rw/user-guides/globus-data-transfer-guide/).
Launching batch of jobs via DesignSafe web portal
- Select LS-DYNA (Parallel) from the LS-DYNA app in (Workspace > My Apps > LS-DYNA):
- Fill the form with the following information:
- Working directory: the directory that contains the files that you want to work on;
- Number of Processors: insert the number of processors required to solve each job;
- LS-DYNA Input: provide the input file name. This file should be a text file that lists the names of all jobs to run. All jobs need to be in the same directory (see example file below);
- Maximum Job runtime: The maximum time you expect this job to run for. Note that after this amount of time your job will be killed by the job scheduler;
- Job name;
- Job output archive location (optional): location where the job output should be archived;
- Node Count: Number of requested process nodes for the batch of jobs;
- Processors per Node: number of cores per node for the batch of jobs. The total number of cores used is equal to NodeCount x ProcessorsPerNode;
- Click on Run;
Example Ls-Dyna input file for parallel jobs.
- Follow the Job Status on the right tab.
- When the analysis is completed, two options are available:
- Launching LS-PrePost again to visualize/extract results;
- Transfer output files via Globus (see details at: https://www.designsafe-ci.org/rw/user-guides/globus-data-transfer-guide/).
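For reference, the batch input file mentioned above is a plain text file listing the job input files, one per line, all in the same directory. A hypothetical example (the file names are placeholders, not the actual DesignSafe example file):

```text
job1.k
job2.k
job3.k
job4.k
```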
Launching batch of jobs via Command Line Interface (CLI)
- Connect to Frontera using an SSH client. See TACC's [Data Transfer & Management Guide](https://docs.tacc.utexas.edu/hpc/frontera/):
- Host name: frontera.tacc.utexas.edu;
- Username and Password should be the same ones as for DesignSafe.
- Transfer LS-Dyna k files to /scratch or /work directory on Frontera (via Globus);
- Generate 3 files:
    - A batch file (launcherLs_Dyna.slurm) that contains all the information about the resources needed for the parallel job and calls the input files (see example file below);
    - 2 Python routines that launch pylauncher, a Python-based parametric job launcher:
        - myparallellines.py: generates a my_parallel_lines file listing the n jobs to run (for n=4 in this example; see example files below);
        - launcher_file.py: calls pylauncher to submit all the jobs in parallel. The number of cores assigned to each job is specified by the cores parameter (34 in this example). This script reads the file generated by myparallellines.py (see example file below);
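The first routine above can be sketched as follows. This is a hypothetical version of myparallellines.py; the `lsdyna` executable name and its `i=`/`ncpu=` arguments are assumptions based on common LS-DYNA command-line usage, so adjust them to your system:

```python
# Hypothetical sketch of myparallellines.py: writes one LS-DYNA command
# per line to my_parallel_lines, for pylauncher to run in parallel.
n = 4  # number of jobs to launch

with open("my_parallel_lines", "w") as f:
    for i in range(1, n + 1):
        # "lsdyna", "i=", and "ncpu=" are assumed names/options; replace
        # them with the executable and flags available on your system.
        f.write(f"lsdyna i=job{i}.k ncpu=34\n")
```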
Example batch file for LS-DYNA via CLI.
Example myparallellines.py routine.
Example my_parallel_lines file.
Example launcher_file.py routine.
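For orientation, a batch file along the lines of launcherLs_Dyna.slurm might look like the sketch below. The queue name, module names, email address, and resource numbers are placeholders, not verified Frontera settings; consult the example file above and TACC's documentation for actual values:

```
#!/bin/bash
#SBATCH -J lsdyna_batch          # job name
#SBATCH -o lsdyna_batch.o%j      # stdout file (%j = job ID)
#SBATCH -N 1                     # node count
#SBATCH -n 56                    # total number of cores
#SBATCH -p normal                # queue (placeholder)
#SBATCH -t 02:00:00              # maximum runtime
#SBATCH --mail-user=you@example.edu
#SBATCH --mail-type=begin,end    # emails at job begin and end

# Module names are placeholders; check "module spider" on your system.
module load python3
module load lsdyna

python3 myparallellines.py       # write the my_parallel_lines file
python3 launcher_file.py         # start pylauncher
```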
- Launch jobs using SSH client:
- cd directory_where_your_inputs_are
- sbatch launcherLs_Dyna.slurm (slurm file)
- squeue -u user_name (see status of job)
- Emails at the beginning and end of the job will be sent to the email address provided in the batch file.
- Once the analysis is completed, output files can be transferred using Globus (or an SCP client) and visualized with LS-PrePost (or another post-processor).
OpenFOAM User Guide
The OpenFOAM (Open Field Operation and Manipulation) CFD (Computational Fluid Dynamics) Toolbox is a free, open source CFD software package which has a large user base across most areas of engineering and science, from both commercial and academic organizations. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. It includes tools for meshing, notably snappyHexMesh, a parallelised mesher for complex CAD geometries, and for pre- and post-processing. Almost everything (including meshing, and pre- and post-processing) runs in parallel as standard, enabling users to take full advantage of high performance computing resources.
More detailed information and OpenFOAM user documentation can be found at the OpenFOAM website.
How to Submit an OpenFOAM Job in the Workspace
- Select the OpenFOAM application from the Simulation tab in the Workspace.
- Select the version of OpenFOAM you want to work with (DesignSafe supports versions 6 and 7).
- Locate the case directory (folder) containing your input files in the Data Depot and follow the onscreen directions to enter this directory in the form. The following figure shows an example case in the Community Data.
- Select your solver from the dropdown menu. The workspace offers 5 OpenFOAM solvers: interFoam, simpleFoam, icoFoam, potentialFoam, and olaFlow. If you need another solver, please submit a help ticket.
- Choose decomposition and mesh generation options from the dropdown menus.
- Enter a maximum runtime.
- Enter a job name.
- Enter an output archive location or use the default provided.
- Select the number of nodes to be used for your job. Larger data files run more efficiently on higher node counts. Follow the instructions in the form description when choosing the number of processors.
- Click Run to submit your job.
- Check the job status by clicking on the arrow in the upper right of the job submission form.
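For reference, an OpenFOAM case directory like the one requested in step 3 follows the standard OpenFOAM layout sketched below; the exact field and dictionary files under 0/ and constant/ depend on the solver you select:

```
case/
├── 0/                    # initial and boundary conditions (e.g. U, p)
├── constant/
│   ├── polyMesh/         # mesh description
│   └── transportProperties
└── system/
    ├── controlDict       # time control and write intervals
    ├── fvSchemes         # discretization schemes
    ├── fvSolution        # linear solvers and tolerances
    └── decomposeParDict  # domain decomposition for parallel runs
```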
OpenSees User Guide
The Open System for Earthquake Engineering Simulation (OpenSees) is a software framework for simulating the static and seismic response of structural and geotechnical systems. It has advanced capabilities for modeling and analyzing the nonlinear response of systems using a wide range of material models, elements, and solution algorithms.
One sequential interpreter (OpenSees-EXPRESS) and two parallel interpreters (OpenSeesSP and OpenSeesMP) are available on DesignSafe. See the sections below for details on each interpreter.
OpenSees-EXPRESS
OpenSees-Express provides users with a sequential OpenSees interpreter. It is ideal for running small sequential scripts on DesignSafe resources, freeing up your own machine.
OpenSeesSP
OpenSeesSP is an OpenSees interpreter intended for performing finite element simulations of very large models on high performance parallel computers. OpenSeesSP is easy to use even with limited knowledge of parallel computing: it requires only minimal changes to input scripts to make them consistent with the parallel process logic.
OpenSeesMP
OpenSeesMP is an OpenSees interpreter intended for performing finite element simulations of parametric studies and very large models on high performance parallel computers. OpenSeesMP requires an understanding of parallel processing and the ability to write parallel scripts.
How to Submit an OpenSees Job in the Workspace
- Select the OpenSees application from the Simulation tab in the Workspace.
- Choose the interpreter that is best suited for your work.
- Locate your OpenSees input files and TCL script in the Data Depot and follow the onscreen directions to provide your input directory and TCL script in the form.
- To test a tutorial case, you can copy and paste the link in the description for the working directory as well as the TCL script name, as shown in the figure below.
- Enter a maximum job runtime in the form. See the guidance on the form for selecting a runtime.
- Enter a job name (optional).
- Enter an output archive location or use the default provided.
- Node Count: number of requested compute nodes for the job.
- Processors per Node: number of cores per node for the job. The total number of cores used is equal to NodeCount x ProcessorsPerNode.
- Click Run to submit your job.
- Check the job status by clicking on the arrow in the upper right of the job submission form.
DesignSafe Tutorial: OpenSees & DesignSafe, October 31, 2018
For detailed explanation of slides below, watch the tutorial above.
Additional Resources
Examples in Community Data
- OpenSees-EXPRESS:
- input directory
- input TCL file: freeFieldEffective.tcl
- OpenSeesSP:
- input directory
- input TCL file: RigidFrame3D.tcl
- resources: 1 Node, 2 Processors
- OpenSeesMP:
- input directory
- input TCL file: parallel_motion.tcl
- resources: 1 Node, 3 Processors
Powerpoint Presentations
- For models with fewer than 1000 elements, parallel ADCIRC may not be necessary, but it can be used depending on the available computational resources. ↩
- For small models, SWAN + ADCIRC might be more resource-intensive than required unless specific wave dynamics need to be resolved. ↩
- For very large models, ADCPREP can take a significant amount of time due to the decomposition of large grids. It is recommended that this data be saved and reused when possible to avoid the need for repeated decomposition. ↩
- For very large models with complex wave-current interactions, SWAN + ADCIRC in parallel is the recommended approach. ↩