Update g-w to cycle GDASApp #1067
Comments
Tagging @CoryMartin-NOAA for awareness.
Are you loading conda in your …? The ultimate error is something different but also due to conda issues.
Thank you, @WalterKolczynski-NOAA, for this information. I do not load conda in my …
@RussTreadon-NOAA and @aerorahul had a side discussion on this topic, and it was agreed that, as a temporary measure, an equivalent …
The … One final example is … Tagging @CoryMartin-NOAA and @guillaumevernieres for awareness.
It seems variables need to be added to spack-stack modules to support existing applications.
Agreed. A similar comment applies to some variables and hpc-stack. In addition to adding variables to spack-stack modules, we also need to add some production modules to spack-stack. For example, there is no prod_util module in the spack-stack currently installed on Orion.
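A job could surface a missing production module early instead of failing downstream. Below is a minimal sketch of such a guard; the `check_module` helper and its reliance on `module avail` output are illustrative assumptions, not g-w or spack-stack code:

```shell
# Hypothetical helper (not g-w code): report whether a named module is
# visible to the environment's module system before a job tries to load it.
check_module() {
  # `module avail` prints to stderr on many Lmod/Tmod installs, so search both streams.
  if command -v module >/dev/null 2>&1 && module avail "$1" 2>&1 | grep -q "$1"; then
    echo "found $1"
  else
    echo "missing $1"
  fi
}

check_module prod_util
```

On a machine whose stack lacks prod_util (as reported for Orion above), this prints `missing prod_util`, giving a clear message at job start.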
Since all these variables are defined in the WCOSS2 versions of these modules, the versions on other machines, whether in hpc-stack or spack-stack, really need to have them too. This is an NCEP-LIBS issue.
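A complementary check is to verify at job start that the variables the modules are expected to define actually exist in the environment. A hypothetical sketch follows; `check_vars` and the choice of NDATE and COMROOT as example production-utility variables are assumptions, not part of g-w:

```shell
# Hypothetical sanity check (not g-w code): confirm that variables a job
# expects its modules to define are present. NDATE and COMROOT stand in as
# example production-utility variables; substitute the real list.
check_vars() {
  for v in NDATE COMROOT; do
    if eval "[ -n \"\${${v}:-}\" ]"; then
      echo "present: ${v}"
    else
      echo "missing: ${v}"
    fi
  done
}

check_vars
```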
Changes pertaining to this issue will be committed to g-w branch feature/updates_for_GDASApp. This branch originated from develop at fd771cb.
Changes committed at 1a1004f (new file: …). As noted earlier in this issue, GDASApp j-jobs were loading the modulefiles required by GDASApp executables. Hash 1a1004f removes these loads from the GDASApp j-jobs; GDASApp modules are now loaded in the GDASApp rocoto jobs. This is consistent with GSI-based DA jobs. Attempts to run GDASApp jobs on Orion failed with an error message stating that python module …
Adding … While … Both failures suggest an issue (or feature) in the Orion … resulted in … This failure does not occur when executing GDASApp jobs on Hera. So while 1a1004f adds the …
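The Orion failure mode described above, in which a `module purge` leaves the job unable to find a python module, could in principle be worked around by saving and restoring the affected environment. This is a hedged sketch, not the change made in 1a1004f; the PYTHONPATH value is a placeholder for demonstration only:

```shell
# Illustrative workaround (not from the repo): save and restore an
# environment variable around `module purge`, since the reports above show
# python imports breaking after the purge on Orion.
export PYTHONPATH="/example/pythonpath"   # placeholder value for the demo
saved_pythonpath="${PYTHONPATH:-}"
if command -v module >/dev/null 2>&1; then
  module purge   # on some systems this clears module-provided variables such as PYTHONPATH
fi
export PYTHONPATH="${saved_pythonpath}"
echo "PYTHONPATH restored to: ${PYTHONPATH}"
```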
Note: The g-w changes committed to …
Expected behavior
g-w develop jobs/JGDAS_GLOBAL_ATMOS_ANALYSIS_PREP should run to completion.

Current behavior
JGDAS_GLOBAL_ATMOS_ANALYSIS_PREP fails with the traceback …

Machines affected
Running C96L127 parallel on Orion using GDASApp based (UFS-based) DA
To Reproduce
Set export DO_JEDIVAR="YES" in config.base.
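The reproduce step above, as it would be set in config.base; the echo is only to confirm the value and is not part of config.base:

```shell
# Reproduction setting from this issue (the surrounding config.base content
# is not shown here; only this export comes from the issue text).
export DO_JEDIVAR="YES"
echo "DO_JEDIVAR=${DO_JEDIVAR}"
```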
A log file with the reported error is
/work/noaa/stmp/rtreadon/comrot/prgdasens4/logs/2021122100/gdasatmanalprep.log
Context
Closed issue #1015 reported another case of $BASH_SOURCE errors. #1015 differs from the failure reported in this issue.

Detailed Description
The section of JGDAS_GLOBAL_ATMOS_ANALYSIS_PREP in which the failure occurs is …

According to gdasatmanalprep.log, the failure occurs on the module purge line.

Possible Implementation
As noted in the comments in JGDAS_GLOBAL_ATMOS_ANALYSIS_PREP, the sequence of module purge, module use, and module load should not be in the j-job. This section of the job needs to be refactored. What does the g-w team recommend?
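One possible shape for such a refactor, offered only as a sketch and not as the g-w team's recommendation, is to wrap the purge/use/load sequence in a guarded function owned by the rocoto job rather than the j-job; the modulefile path and module name below are placeholders:

```shell
# Sketch of a guarded module-setup function (placeholder path and module
# name, not g-w values). The rocoto job would call this before invoking the
# j-job, so the j-job itself never runs `module purge`.
load_gdasapp_modules() {
  if ! command -v module >/dev/null 2>&1; then
    echo "module command unavailable; skipping module setup"
    return 0
  fi
  module purge
  module use /path/to/gdasapp/modulefiles   # placeholder path
  module load GDAS                          # placeholder module name
}

load_gdasapp_modules
```

Keeping the sequence out of the j-job also sidesteps the Orion `module purge` behavior reported earlier in this issue, since the environment is settled before the j-job starts.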