rt bug fixes #395
Conversation
Looks ok as far as I can tell. Will check again and approve when it is time for the PR to go in.
One thing I've noticed is that if the RT fails (for example, one test times out) then the "clean after" command does not take effect. All the fv3_X.exe and matching modules.fv3_X are left in the tests directory. Is this a design feature or is it something we'd like to fix? |
Design feature I think - you want to be able to see what happened, use the executable without having to recompile. Same reason why the rt_* directories are not removed if the regression tests fail.
|
I can see where you'd use this---the trouble I have is associating the compile line w/ the test number. Short of counting each COMPILE line, is there a way to know which fv3_XX.exe is used for the failed test? |
I think so. The log_hera.intel/run_001 script, for example, should have a line with "cp ... fv3_N.exe RUNDIR/fv3.exe"
|
That is a good suggestion. Let me look into implementing it. |
That line is there already. From
|
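As a side note for anyone mapping a failed test back to its executable without counting COMPILE lines, a minimal sketch along the lines below could scan the saved run scripts for that cp line. The log_hera.intel directory and run_001 naming come from the comments above; the function name and regex are illustrative only, not part of rt.sh.
```python
# Illustrative sketch only: list which fv3_N.exe each test's run script copies.
# The log_hera.intel/run_NNN layout is taken from the discussion above; the
# regex and function name are assumptions, not part of the actual RT scripts.
import re
from pathlib import Path

def exe_per_test(log_dir="log_hera.intel"):
    cp_line = re.compile(r"\bcp\b.*?(fv3_\d+\.exe)")
    for script in sorted(Path(log_dir).glob("run_*")):
        found = None
        for line in script.read_text(errors="ignore").splitlines():
            m = cp_line.search(line)
            if m:
                found = m.group(1)
                break
        print(f"{script.name}: {found or 'no cp line found'}")

if __name__ == "__main__":
    exe_per_test()
```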
I have another question. I was running a test with a shorter forecast length (24->3), and I noticed that the RT reports ALT OK for atmos_4xdaily.nc compared to the baseline even though the run has fewer forecast times in the file. Is that expected? I guess this could also be true if the diag_table is changed to have fewer fields: as long as the file contains a subset of the baseline data, compare_netcdf will report the comparison as OK.
|
Also, I am wondering if we can have the total running time of each test added to the log file. We can grab "The total amount of wall time" from the out file.
|
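A rough sketch of the wall-time idea, assuming each test writes an out file containing a line with "The total amount of wall time" (the out_* glob and the number format here are guesses, not the actual RT layout):
```python
# Sketch only: collect "The total amount of wall time" from per-test out files
# so it could be appended to the RT log. File naming and line format are
# assumptions for illustration, not the actual rt.sh/rt_utils.sh behavior.
import re
from pathlib import Path

def wall_times(tests_dir="."):
    pattern = re.compile(r"The total amount of wall time\D*([\d.]+)")
    for out_file in sorted(Path(tests_dir).glob("out_*")):
        for line in out_file.read_text(errors="ignore").splitlines():
            m = pattern.search(line)
            if m:
                print(f"{out_file.name}: {m.group(1)} seconds")
                break

if __name__ == "__main__":
    wall_times()
```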
@junwang-noaa, let me look into both your questions. |
If the forecast length is different, ALT CHECK should lead to NOT OK, because the array dimensions differ between the baseline and the rundir. Changes were made just now to add a dimension check to compare_ncfile.py. Similarly, if the number of variables in the file is different, it should lead to ALT CHECK NOT OK. |
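To illustrate the kind of check described above (not the actual compare_ncfile.py change), a comparison could refuse to report OK whenever the baseline and rundir files disagree on dimensions or on the set of variables, assuming the netCDF4 Python module is available:
```python
# Sketch of a structural pre-check before value comparison: report NOT OK if
# dimensions or the variable set differ between the baseline and rundir files.
# This illustrates the idea only; it is not the code added to compare_ncfile.py.
from netCDF4 import Dataset

def same_structure(baseline_path, rundir_path):
    with Dataset(baseline_path) as base, Dataset(rundir_path) as run:
        base_dims = {name: len(dim) for name, dim in base.dimensions.items()}
        run_dims = {name: len(dim) for name, dim in run.dimensions.items()}
        if base_dims != run_dims:
            return False  # e.g. fewer forecast times in atmos_4xdaily.nc
        if set(base.variables) != set(run.variables):
            return False  # e.g. diag_table trimmed to fewer fields
    return True
```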
This will be merged together with @DomHeinzeller 's PR #396. |
This will be merged together with #396. |
@MinsukJi-NOAA I believe this was just merged as part of #396. Can you check and, if so, close the PR please? Thanks. |
Merged via #396 |
Description
Bug fix in the check_results function of rt_utils.sh
Bug fix for the rt.sh -n option flag. This was broken when the MACHINES column of rt.conf was modified.
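For context, a -n style filter boils down to picking one RUN record out of rt.conf regardless of what its MACHINES field contains. The sketch below assumes a pipe-separated layout (keyword | test name | machines | ...); it is an illustration, not the actual fix in rt.sh.
```python
# Illustration only: select a single RUN entry from rt.conf by test name,
# independent of the MACHINES column. The pipe-separated column layout is an
# assumption about rt.conf; the real fix lives in rt.sh, not in Python.
def select_test(conf_path, test_name):
    with open(conf_path) as conf:
        for line in conf:
            fields = [f.strip() for f in line.split("|")]
            if len(fields) >= 3 and fields[0] == "RUN" and fields[1] == test_name:
                return fields  # full record, including its MACHINES field
    return None

# Hypothetical usage:
# print(select_test("rt.conf", "control"))
```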
Issue(s) addressed
#296
Testing
Will run regression tests on supported platforms.