WIP, ENH: add df->rec converter #886
Conversation
* add a per-record converter from DataFrame to darshan-util C record format that passes tests for single records (rows) with `counters` and `fcounters` (but not `rank` & `id` yet)
* add a full df->multi-record converter; this one still leads to segfaults in the C inject func (probably corrupted data), with errors like:
  ```
  ../darshan-logutils-accumulator.c:145: darshan_accumulator_inject: Assertion `rank < acc->job_nprocs' failed.
  ```
* one of the two added tests passes, the other xfails; feel free to push in fixes
* as discussed in the meeting earlier today (Rob R. seems to agree), this likely shouldn't be a long-term solution--just temporary until we can hoist all analysis infrastructure to the Python level
* some of the accumulator API prototypes are copied in here from other PRs for testing purposes only, since the main purpose of doing something like this would be to allow filtering DataFrames and then passing the raw records to the C accum interface (it also avoids re-reading the log file records)
* `_df_to_rec()` has been simplified and is far more robust/correct now that we use a `recarray`; a separate function for multiple records was no longer needed, so it was purged
* cleanups in `test_df_to_rec()`, and add checks for `id`/`rank` as well now that they are properly re-packed
* `test_reverse_record_array()` now demonstrates the Python filter df -> C accum workflow much more clearly; it still has some kind of memory issue/discrepancy with darshan-hpcgh-867, but it's much closer now...
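The `recarray`-based packing described above can be sketched in miniature. This is not the actual `_df_to_rec()` implementation; the field names and layout below are hypothetical stand-ins for a darshan-style record (`id`/`rank` plus counters and fcounters), just to show the DataFrame -> structured array -> bytes path:

```python
import numpy as np
import pandas as pd

# Hypothetical miniature of a darshan-style record DataFrame: id/rank
# plus one integer counter and one float fcounter per row.
df = pd.DataFrame({
    "id": np.array([101, 202], dtype=np.uint64),
    "rank": np.array([0, 1], dtype=np.int64),
    "COUNTER_A": np.array([3, 7], dtype=np.int64),
    "FCOUNTER_B": np.array([0.5, 1.25], dtype=np.float64),
})

# Structured dtype matching the intended (packed) C record layout; copy
# the DataFrame columns into it field by field, then serialize.
rec_dtype = np.dtype([("id", "u8"), ("rank", "i8"),
                      ("COUNTER_A", "i8"), ("FCOUNTER_B", "f8")])
rec_array = np.zeros(len(df), dtype=rec_dtype)
for name in rec_dtype.names:
    rec_array[name] = df[name].to_numpy()

buf = rec_array.tobytes()  # contiguous bytes, one packed record per row
assert len(buf) == len(df) * rec_dtype.itemsize

# Round-trip sanity check: the bytes reinterpret back to the same values.
back = np.frombuffer(buf, dtype=rec_dtype)
assert back["rank"].tolist() == [0, 1]
```

Packing `id` and `rank` alongside the counters is what keeps the C side's `rank < acc->job_nprocs` assertion happy, since the inject routine reads those leading fields from each record.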
This is much closer now--there's a memory access/
* fix up `test_reverse_record_array` memory issues, parametrize it over scenarios that do/don't match the Perl infra, and some cleanups to that test
* support older `pandas` versions with the slicing index in `_df_to_rec()`
Ok, CI is passing now, so this is getting to be in pretty solid shape. I'll summarize some useful things reviewers could help with:
Another thing I thought of--it may be possible to simplify the complexity in
I still need to review the tests, but the conversion code looks good to me. I don't have any suggestions yet, but maybe I will after I work through some of the issues you mention for reviewers. I'm not sure it's a big deal, but conceptually I would think you could always return a buffer of the underlying record type, as opposed to a
Luckily, all of the modules that would ultimately support accumulation follow the same structure as what you already have implemented for POSIX/STDIO, with only small tweaks. Lustre is one module that won't support that. The only issue I can think of is that the
* modified expected values for the case where filtering is applied, and removed the xfail tag; the expected values from the Perl summary report sum the derived metrics across the POSIX and STDIO modules, which is why the old expected values are larger than the values that are now passing
* make sure to call `accumulator_destroy` to avoid memory leaks
* cache nprocs/modules so we don't need to reopen the log
I just pushed a fix for this and a couple of other smaller things for this test. The short story is that these derived metrics presented in the old Perl report are actually summed across both POSIX and STDIO modules, so we needed to account for that difference in this test since it's POSIX only. I modified the expected values to reflect the POSIX-module-only values -- I started to add STDIO support for the test, but it was getting messy and I'm not sure there's much value in us testing the expected values for combining modules.

For a little more background, in the past we have often combined POSIX/STDIO data when doing some analyses, since they are both found at the lowest layer of the I/O stack (that we can instrument) and they don't layer on top of each other (we don't have to worry about double counting statistics if the APIs don't layer on top of each other). If you recall, we had a similar issue with the "file access by category" plots, where we needed to update them to additionally include STDIO data. That said, I'm fine with our current approach of providing these stats on a per-module basis in the new PyDarshan job summary tool, so I don't think there's much to follow up on here.
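The POSIX-vs-combined discrepancy above is easy to reproduce in a toy pandas frame (module names are real, the metric name and values here are made up purely for illustration):

```python
import pandas as pd

# Hypothetical per-module derived metrics. The old Perl summary report
# effectively presented the sum of both rows, while a POSIX-only test
# should expect just the POSIX row -- hence the old expected values
# being larger than the per-module ones.
metrics = pd.DataFrame({
    "module": ["POSIX", "STDIO"],
    "total_bytes": [1000, 200],
})

combined = metrics["total_bytes"].sum()  # Perl-style, summed across modules
posix_only = metrics.loc[metrics["module"] == "POSIX", "total_bytes"].iloc[0]

assert combined == 1200      # old (combined) expectation
assert posix_only == 1000    # new per-module expectation, strictly smaller
```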
* always return a Python bytes object (the return value from the recarray `tobytes()` method) from `_df_to_rec`
* adjust `test_df_to_rec()` to "cast" from the returned bytes to a record pointer we can use to sanity-check record fields
* fix `test_reverse_record_array()` usage of the `summation_record` argument to the C library `accumulator_emit()` routine:
  - `summation_record` is a pointer to a record structure for the corresponding module, into which an aggregate record (i.e., a record accumulated from all injected records) is written
  - explicitly allocate a `summation_record` and pass it to the emit routine, similar to what is done with the `derived_metrics` structure
  - remove an unnecessary secondary call into `_df_to_rec()`
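The "cast from returned bytes to a record pointer" step can be sketched with `ctypes`. The struct below is a made-up stand-in (the real layouts come from the darshan module headers), but the mechanics of reinterpreting a Python bytes buffer as a typed record pointer for field sanity checks are the same:

```python
import ctypes
import numpy as np

# Stand-in for a darshan record struct; purely illustrative, the real
# field layouts are defined by the darshan-util module headers.
class ToyRecord(ctypes.Structure):
    _fields_ = [("id", ctypes.c_uint64),
                ("rank", ctypes.c_int64),
                ("counter", ctypes.c_int64)]

# Bytes as they would come back from a _df_to_rec()-style helper
# (i.e., recarray.tobytes() with a matching packed layout).
dt = np.dtype([("id", "u8"), ("rank", "i8"), ("counter", "i8")])
src = np.array([(42, 0, 7)], dtype=dt)
buf = src.tobytes()

# "Cast" the bytes to a record pointer and sanity-check the fields.
c_buf = ctypes.create_string_buffer(buf, len(buf))
rec_ptr = ctypes.cast(c_buf, ctypes.POINTER(ToyRecord))
assert rec_ptr.contents.id == 42
assert rec_ptr.contents.rank == 0
assert rec_ptr.contents.counter == 7
```

Note that `c_buf` must stay alive for as long as `rec_ptr` is dereferenced; a mutable buffer like this is also the natural thing to allocate for an output parameter such as `summation_record`.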
I just pushed another commit that, among other things, addresses the warning you mentioned related to an implicit cast. I think it simplifies things. The tests have been updated a bit, too, and are passing for me locally:
I checked all 4 of your commits and they look like clear improvements to me. I suppose you could also update the return value docstring as well. I guess some decision will have to be made on whether you proceed with this PR first, then modify some of the older aggregation PRs to use this machinery instead of the earlier approach of hand-feeding records from Python to the accumulator.
I think it all looks good here. Not an ideal solution, but our hands are tied until we can move more stuff into Python. In the meantime, this will let us use the existing C aggregation machinery while giving us flexibility (i.e., for filtering) via pandas.
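The filter-then-accumulate flexibility described above can be sketched end to end on the Python side (the column names and values below are hypothetical; the final `raw` bytes are what would be handed to the C inject routine):

```python
import numpy as np
import pandas as pd

# Hypothetical record frame with a packed layout shared by the C side.
dt = np.dtype([("rank", "i8"), ("bytes_read", "i8")])
df = pd.DataFrame(np.array([(0, 100), (1, 0), (2, 300)], dtype=dt))

# pandas-side filtering: keep only records that actually read data.
filtered = df[df["bytes_read"] > 0]

# Re-pack just the surviving rows into raw bytes for C-side injection.
packed = np.zeros(len(filtered), dtype=dt)
for name in dt.names:
    packed[name] = filtered[name].to_numpy()
raw = packed.tobytes()

assert len(raw) == 2 * dt.itemsize   # two records survived the filter
```

This is the appeal of the temporary approach: the filtering logic lives entirely in pandas, and the C accumulator only ever sees an opaque, already-filtered record buffer.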
I'll most likely leave backwards compatibility, and support beyond the STDIO/POSIX handling drafted here, to the Argonne folks.