
Creating collisional excitation/deexcitation coefficient matrix for NLTE excitation treatment #2385

Merged

Conversation

@sonachitchyan (Member) commented Aug 9, 2023

📝 Description

Type: 🚀 feature

This PR adds a new function that constructs the portion of the NLTE coefficient matrix containing the collisional excitation and deexcitation coefficients.

Here is an example of how the matrix should look for three levels (0, 1, and 2):
[Screenshot: Screen Shot 2023-08-11 at 11 55 41 AM]
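For readers who cannot see the image, the structure can be sketched as follows. This is an illustrative reconstruction with made-up coefficient values and hypothetical variable names, not the PR's actual implementation: off-diagonal entries hold the rate into a level from another level, and each diagonal entry holds minus the total rate leaving that level.

```python
import numpy as np
import pandas as pd

# Illustrative collisional rate coefficients for levels 0, 1, 2.
# Index: (level_number_lower, level_number_upper); values are made up.
index = pd.MultiIndex.from_tuples(
    [(0, 1), (0, 2), (1, 2)],
    names=["level_number_lower", "level_number_upper"],
)
coll_exc_coefficient = pd.Series([1.0, 2.0, 3.0], index=index)
coll_deexc_coefficient = pd.Series([0.5, 0.25, 0.75], index=index)

n_levels = 3
matrix = np.zeros((n_levels, n_levels))

# Off-diagonal entries: matrix[j, i] collects the rate from level i into level j.
for (lower, upper), rate in coll_exc_coefficient.items():
    matrix[upper, lower] += rate  # excitation: lower -> upper
for (lower, upper), rate in coll_deexc_coefficient.items():
    matrix[lower, upper] += rate  # deexcitation: upper -> lower

# Diagonal entries: minus the total rate leaving each level,
# so that every column sums to zero.
matrix -= np.diag(matrix.sum(axis=0))
```

Each column summing to zero expresses rate conservation: everything that leaves a level arrives at some other level.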


📌 Resources

Examples, notebooks, and links to useful references.

🚦 Testing

How did you test these changes?

  • Testing pipeline
  • Other method (describe)
  • My changes can't be tested (explain why)

☑️ Checklist

  • I requested two reviewers for this pull request
  • I updated the documentation according to my changes
  • I built the documentation by applying the build_docs label

Note: If you are not allowed to perform any of these actions, ping (@) a contributor.

codecov bot commented Aug 9, 2023

Codecov Report

Merging #2385 (2b7936c) into master (a2af876) will increase coverage by 0.05%.
The diff coverage is 100.00%.

❗ Current head 2b7936c differs from pull request most recent head 3aac555. Consider uploading reports for the commit 3aac555 to get more accurate results.

@@            Coverage Diff             @@
##           master    #2385      +/-   ##
==========================================
+ Coverage   68.43%   68.49%   +0.05%     
==========================================
  Files         145      145              
  Lines       13221    13245      +24     
==========================================
+ Hits         9048     9072      +24     
  Misses       4173     4173              
Files Changed                                          Coverage            Δ
...dis/plasma/properties/nlte_rate_equation_solver.py  96.87% <100.00%>    (+0.28%) ⬆️
tardis/plasma/tests/test_nlte_excitation.py            100.00% <100.00%>   (ø)


@sonachitchyan sonachitchyan requested a review from jvshields August 9, 2023 19:21
@andrewfullard previously approved these changes Aug 10, 2023

@chvogl (Contributor) left a comment:

The PR looks good. However, I think all of the for loops can be replaced with standard numpy/pandas indexing operations relatively easily.
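As a hedged illustration of this suggestion (the variable names and values below are made up, not taken from the PR), here is a per-level sum written first as a loop and then as a single groupby over the named index level:

```python
import pandas as pd

# Hypothetical per-transition coefficients indexed by (lower, upper) level.
index = pd.MultiIndex.from_tuples(
    [(0, 1), (0, 2), (1, 2)],
    names=["level_number_lower", "level_number_upper"],
)
rates = pd.Series([1.0, 2.0, 3.0], index=index)

# Loop version: accumulate the outgoing rate for each lower level.
totals_loop = {}
for (lower, _upper), rate in rates.items():
    totals_loop[lower] = totals_loop.get(lower, 0.0) + rate

# Vectorized version: one groupby over the named index level.
totals = rates.groupby("level_number_lower").sum()
```

Both produce the same per-level totals; the groupby form avoids explicit Python-level iteration.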

@chvogl (Contributor) left a comment:

This looks very good. I only have some minor final comments.

@sonachitchyan sonachitchyan requested a review from chvogl August 16, 2023 20:21
Comment on lines 878 to 880
coll_exc_coefficient.index.get_level_values(
"level_number_lower"
).unique()

I would do coll_exc_coefficient.groupby("level_number_lower").sum() before this and assign the result to a variable. You can then use the index of this variable here and the .values of it later.
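A sketch of that refactor (the names below are illustrative, not the PR's code): compute the grouped sum once, then reuse its index for the row positions and its values for the diagonal entries.

```python
import pandas as pd

index = pd.MultiIndex.from_tuples(
    [(0, 1), (0, 2), (1, 2)],
    names=["level_number_lower", "level_number_upper"],
)
coll_exc_coefficient = pd.Series([1.0, 2.0, 3.0], index=index)

# Group once, reuse twice.
exc_coefficient_sum = coll_exc_coefficient.groupby("level_number_lower").sum()

row_positions = exc_coefficient_sum.index          # which diagonal rows to fill
diagonal_values = -1 * exc_coefficient_sum.values  # what to put there
```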

-1
* coll_exc_coefficient.groupby("level_number_lower")
.sum()
.values.flatten()

In principle, coll_exc_coefficient should be a Series since it is one column of a DataFrame (corresponding to a specific shell). The .flatten() should then not be necessary.
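To illustrate the point with a toy DataFrame (not the PR's data): selecting a single column of a DataFrame yields a Series whose .values is already one-dimensional, so no .flatten() is needed; only a one-column DataFrame slice stays 2-D.

```python
import pandas as pd

# Toy "coefficients per shell" table: one column per shell.
coefficients = pd.DataFrame({0: [1.0, 2.0], 1: [3.0, 4.0]})

shell_column = coefficients[0]   # one column -> a Series
values = shell_column.values     # already 1-D

# Compare: a one-column DataFrame slice stays 2-D and would need flattening.
two_d = coefficients[[0]].values
```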

Comment on lines 888 to 890
coll_exc_coefficient.index.get_level_values(
"level_number_upper"
).unique()

See comment above.

-1
* coll_deexc_coefficient.groupby("level_number_upper")
.sum()
.values.flatten()

See previous comment.

"level_number_upper"
),
)
] = coll_deexc_coefficient.values.flatten()

See previous comment.

Comment on lines 162 to 163
exc_coeff = pd.DataFrame(coll_exc_coeff_values, index=index)
deexc_coeff = pd.DataFrame(coll_deexc_coeff_values, index=index)
@chvogl (Contributor) commented Aug 17, 2023:

Same as above. These should probably be Series?
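A minimal sketch of the suggested change (made-up values, reusing the names from the quoted snippet): for one-dimensional data, pd.Series keeps the result 1-D, whereas pd.DataFrame wraps it in a single column.

```python
import pandas as pd

index = pd.MultiIndex.from_tuples(
    [(0, 1), (0, 2)],
    names=["level_number_lower", "level_number_upper"],
)
coll_exc_coeff_values = [1.0, 2.0]

# DataFrame version: a 2-D container with one column.
exc_coeff_df = pd.DataFrame(coll_exc_coeff_values, index=index)

# Series version: a 1-D container, as suggested in the review.
exc_coeff = pd.Series(coll_exc_coeff_values, index=index)
```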

@sonachitchyan sonachitchyan requested a review from chvogl August 17, 2023 17:51
@chvogl (Contributor) left a comment:

Looks great! Happy to see this merged soon.

@andrewfullard andrewfullard enabled auto-merge (squash) August 18, 2023 13:31
@tardis-bot (Contributor) commented:

*beep* *bop*

Hi, human.

The docs workflow has succeeded ✔️


@andrewfullard andrewfullard merged commit e58d504 into tardis-sn:master Aug 18, 2023
5 participants