Add general coordinate ability to MOC analysis member #10
Any feedback is appreciated. Once we agree on the plan, the alterations should be straightforward. Pinging @xylar, @maltrud, @vanroekel, @dengwirda, @cbegeman, @alicebarthel, @milenaveneziani
@mark-petersen Thanks for your efforts on this!
@mark-petersen, I spent a lot of time working on the overturning streamfunction (OSF) for ISOMIP+, which has a vertical coordinate that is very different from z-level (because of the ice-shelf draft and Haney-number constraints). I did not get correct results when I tried to interpolate the vertical velocity from the z* grid to the z-level reference grid. I believe the problem is that assuming the vertical velocity varies linearly within a cell is not consistent with the actual fluxes in and out of the cell. Computing an accurate vertical transport at z-level interfaces would likely require bisecting cells with z-level interfaces and summing the known transports through all other faces of each bisected cell to determine the transport through the z-level interface. That is likely too computationally intensive to be feasible.

Instead, I found that I needed to do conservative interpolation to get the total horizontal transport across faces (a rectangle defined by cell edges and reference-grid layer thickness) on the z-level grid. I then integrate these vertically to get psi at the reference z-level interfaces and then bin them to get the OSF. Here is the python code I use:

I also helped the ROMS group use the same technique to get their OSF for ISOMIP+. The meshes used for ISOMIP+ are from
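(The code block that followed "Here is the python code I use:" was not preserved in this copy. What follows is a minimal sketch of the overlap-weighted remap and vertical integration described above, with hypothetical array names; it is a reconstruction, not the original code.)

```python
# Sketch only: names (layer_top, layer_bot, transport, ref_z) are hypothetical.
import numpy as np

def conservative_remap_transport(layer_top, layer_bot, transport, ref_z):
    """Remap per-layer horizontal transport through one face onto reference
    z-layers by overlap weighting, conserving the column-total transport.

    layer_top, layer_bot : (nLayers,) native interface heights, top > bot
    transport            : (nLayers,) horizontal transport per native layer
    ref_z                : (nRef+1,) reference interface heights, descending
    """
    remapped = np.zeros(len(ref_z) - 1)
    for k in range(len(transport)):
        thickness = layer_top[k] - layer_bot[k]
        if thickness <= 0.0:
            continue
        for kr in range(len(remapped)):
            # vertical overlap of native layer k with reference layer kr
            overlap = min(layer_top[k], ref_z[kr]) - max(layer_bot[k], ref_z[kr + 1])
            if overlap > 0.0:
                remapped[kr] += transport[k] * overlap / thickness
    return remapped

def psi_at_ref_interfaces(remapped_transport):
    """Integrate the remapped transports vertically (top down) to get psi at
    the reference interfaces for one face; bin over faces afterward to get
    the OSF. Sign and reference conventions are glossed over here."""
    psi = np.zeros(len(remapped_transport) + 1)
    psi[1:] = np.cumsum(remapped_transport)
    return psi
```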
Currently, the MOC analysis member is written for z-level coordinates, i.e. it sums horizontally by k levels. When this MOC is computed for any tilted coordinate (sigma-z, for example) it sums different depths together in zonal bins, which is incorrect.
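A toy illustration of the problem, with made-up depths and transports (these are not MPAS-Ocean variable names):

```python
# Under a tilted coordinate the same k index sits at different depths in
# different columns, so summing transports by k mixes depths in a zonal bin.
import numpy as np

zMid = np.array([[-5.0, -15.0, -30.0],    # column A: near z-level
                 [-8.0, -22.0, -41.0]])   # column B: tilted (e.g. sigma-z)
transport = np.array([[1.0, 0.5, 0.2],
                      [0.8, 0.4, 0.1]])

# current approach: sum across the bin by k level
print(transport.sum(axis=0))  # k=0 adds z = -5 m together with z = -8 m
```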
I am numbering these comments, in case people want to reference them below:

1. Interpolate the horizontal and vertical velocities to a reference z-grid just before the horizontal summations are taken, here:
   - `E3SM/components/mpas-ocean/src/analysis_members/mpas_ocn_moc_streamfunction.F`, lines 506 to 507 at 3a87fa1
   - `E3SM/components/mpas-ocean/src/analysis_members/mpas_ocn_moc_streamfunction.F`, lines 524 to 525 at 3a87fa1
   - `E3SM/components/mpas-ocean/src/analysis_members/mpas_ocn_moc_streamfunction.F`, lines 605 to 606 at 417b03f
   - `E3SM/components/mpas-ocean/src/analysis_members/mpas_ocn_moc_streamfunction.F`, lines 623 to 624 at 3a87fa1
2. The reference z-grid could be `refZMid` and `refBottomDepth`, and then everything would work in MPAS-Analysis. It could also be a higher-resolution z-grid for this specific purpose. I propose to use the first for simplicity.

3. For background, you can compute a streamfunction psi(y,z) by integrating v = -d psi/dz vertically or w = d psi/dy horizontally, where y here is latitude. In the Fortran code we integrate v = -d psi/dz vertically along the southern transect in line 506 to get the boundary condition. Then we integrate w = d psi/dy horizontally by latitude bins in line 524 (this produces smoother statistics than integrating many transects vertically). Then we repeat that process for the GM Bolus velocity in lines 605 and 624 (these two integrations are sketched schematically below). I know I've written that up in LaTeX a few times, but I can't find it right now.

4. I would add the general-coordinate MOC as a separate subroutine, chosen by a flag, so our current standard z-level runs don't spend time on unneeded interpolation, and so I do not alter the z-level code that is trusted.
5. The current subroutine computes for every edge and cell and multiplies by `regionCellMasks`. I would change it to check the mask first, and only interpolate and sum where the mask equals 1.

6. I am seeking advice on the interpolation method. The only one currently in MPAS-Ocean is the linear `ocn_init_interpolation_linear_vert`. We could add @dengwirda's https://github.com/dengwirda/PPR here, or perhaps @cbegeman already has one working on a general vertical coordinate branch, and we could merge that one early to use here as well.

7. Related, we need to decide what order of interpolation is sufficient.
When interpolating horizontal and vertical velocity, is it important to exactly conserve the transport per edge-column and cell-column, respectively? Perhaps exact conservation is not important for a basin-wide diagnostic (as opposed to numerical methods for advection, say). I'm not sure this question even makes sense for interpolating vertical velocity vertically.
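To make that conservation question concrete, here is a toy comparison with made-up numbers: linearly interpolating the velocity profile generally changes the column-total transport, while an overlap-weighted (conservative) remap preserves it exactly.

```python
import numpy as np

z = -np.array([0.0, 10.0, 25.0, 45.0, 70.0])   # native interfaces (tilted column)
zr = -np.linspace(0.0, 70.0, 8)                 # reference z-grid interfaces
T = np.array([1.0, 0.5, -0.2, 0.1])             # per-layer transport

# overlap-weighted remap: conserves sum(T) exactly when zr spans the column
Tr = np.zeros(len(zr) - 1)
for k in range(len(T)):
    for kr in range(len(Tr)):
        overlap = min(z[k], zr[kr]) - max(z[k + 1], zr[kr + 1])
        if overlap > 0.0:
            Tr[kr] += T[k] * overlap / (z[k] - z[k + 1])
print(Tr.sum(), T.sum())                         # identical to roundoff

# linear interpolation of velocity u = T/dz at layer midpoints: not conservative
u = T / -np.diff(z)
zc, zrc = 0.5 * (z[:-1] + z[1:]), 0.5 * (zr[:-1] + zr[1:])
ur = np.interp(zrc[::-1], zc[::-1], u[::-1])[::-1]
print((ur * -np.diff(zr)).sum())                 # generally differs from T.sum()
```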