[Perf] Linux/x64: 38 Regressions on 4/9/2023 11:24:15 AM #15699
Run Information
Regressions in System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
git clone https://github.com/dotnet/performance.git
python3 ./performance/scripts/benchmarks_ci.py -f net8.0 --filter 'System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>*'
Payloads
Histogram
System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>.SerializeToWriter(Mode: SourceGen)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>.SerializeToString(Mode: Reflection)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>.SerializeToUtf8Bytes(Mode: SourceGen)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>.SerializeToWriter(Mode: Reflection)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>.SerializeToString(Mode: SourceGen)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>.SerializeToUtf8Bytes(Mode: Reflection)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<Nullable<DateTimeOffset>>.SerializeObjectProperty(Mode: SourceGen)
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Run Information
Regressions in MicroBenchmarks.Serializers.Json_ToString<MyEventsListerViewModel>
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
git clone https://github.com/dotnet/performance.git
python3 ./performance/scripts/benchmarks_ci.py -f net8.0 --filter 'MicroBenchmarks.Serializers.Json_ToString<MyEventsListerViewModel>*'
Payloads
Histogram
MicroBenchmarks.Serializers.Json_ToString<MyEventsListerViewModel>.SystemTextJson_SourceGen_
Description of detection logic
MicroBenchmarks.Serializers.Json_ToString<MyEventsListerViewModel>.SystemTextJson_Reflection_
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Run Information
Regressions in System.Globalization.Tests.Perf_DateTimeCultureInfo
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
git clone https://github.com/dotnet/performance.git
python3 ./performance/scripts/benchmarks_ci.py -f net8.0 --filter 'System.Globalization.Tests.Perf_DateTimeCultureInfo*'
Payloads
Histogram
System.Globalization.Tests.Perf_DateTimeCultureInfo.ToString(culturestring: da)
Description of detection logic
System.Globalization.Tests.Perf_DateTimeCultureInfo.ToString(culturestring: )
Description of detection logic
System.Globalization.Tests.Perf_DateTimeCultureInfo.ToString(culturestring: fr)
Description of detection logic
System.Globalization.Tests.Perf_DateTimeCultureInfo.ToString(culturestring: ja)
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Run Information
Regressions in System.Buffers.Text.Tests.Utf8FormatterTests
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
git clone https://github.com/dotnet/performance.git
python3 ./performance/scripts/benchmarks_ci.py -f net8.0 --filter 'System.Buffers.Text.Tests.Utf8FormatterTests*'
Payloads
Histogram
System.Buffers.Text.Tests.Utf8FormatterTests.FormatterInt64(value: -9223372036854775808)
Description of detection logic
System.Buffers.Text.Tests.Utf8FormatterTests.FormatterInt64(value: 9223372036854775807)
Description of detection logic
System.Buffers.Text.Tests.Utf8FormatterTests.FormatterUInt64(value: 18446744073709551615)
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Run Information
Regressions in System.Text.Json.Serialization.Tests.WriteJson<IndexViewModel>
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
git clone https://github.com/dotnet/performance.git
python3 ./performance/scripts/benchmarks_ci.py -f net8.0 --filter 'System.Text.Json.Serialization.Tests.WriteJson<IndexViewModel>*'
Payloads
Histogram
System.Text.Json.Serialization.Tests.WriteJson<IndexViewModel>.SerializeToWriter(Mode: SourceGen)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<IndexViewModel>.SerializeToUtf8Bytes(Mode: SourceGen)
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Looks like dotnet/runtime#84469. cc @stephentoub
Same comment/question as here:
Looks like the formatting changes knocked interp off a performance cliff. It's not obvious to me from looking over the diff whether there's a specific problematic change. EDIT: I can look into what it would take to make CreateTruncating inline. EDIT 2: Nevermind, this was an AOT run? The filter labels changed, I guess.
For reference, this is all in support of being able to format all of these primitives to either UTF16 or UTF8, sharing all of the code to do so. We then end up at the leaf operations needing to store a value as a TChar (a char or a byte), and we use TChar.CreateTruncating to do so. When everything inlines as expected, it should become a simple cast or possibly evaporate altogether (in the case where the input and output types match).
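To make that pattern concrete, here is a minimal sketch of the shape being described; the helper below is hypothetical (not BCL source), but it shows how a single generic code path can emit either UTF16 chars or UTF8 bytes via TChar.CreateTruncating:

```csharp
using System;
using System.Numerics;

static class GenericFormatter
{
    // Hypothetical helper, not BCL code: one code path writes the decimal digits
    // of 'value' as either UTF-16 chars (TChar = char) or UTF-8 bytes (TChar = byte).
    public static bool TryFormatUInt32<TChar>(uint value, Span<TChar> destination, out int written)
        where TChar : unmanaged, IBinaryInteger<TChar>
    {
        // Count the digits so the value can be written back-to-front.
        int digits = 1;
        for (uint v = value; v >= 10; v /= 10) digits++;

        if (destination.Length < digits)
        {
            written = 0;
            return false;
        }

        for (int i = digits - 1; i >= 0; i--)
        {
            // The leaf operation described above: CreateTruncating narrows each
            // digit to char or byte; when fully inlined it becomes a simple cast.
            destination[i] = TChar.CreateTruncating('0' + (value % 10));
            value /= 10;
        }

        written = digits;
        return true;
    }
}
```

Instantiated as TryFormatUInt32&lt;char&gt;, this is equivalent to the existing UTF16 path; as TryFormatUInt32&lt;byte&gt;, the same IL produces ASCII/UTF8 output. The regressions above are consistent with the interpreter not collapsing the CreateTruncating call the way RyuJIT does.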
Looking at the implementation, I think interp would probably need to intrinsify it, since there are out params and throws in there.
cc: @tannergooding, @EgorBo
Is it actionable on our side, or is it just to make us aware that such changes could regress mono?
We should assume at this point that any change in the Libraries code will affect all runtimes in some fashion. Like the measurements done in the PR for CoreCLR, I suggest that for any future changes, a similar table be created for Mono and NativeAOT prior to merging. Mono can't keep playing a catch-up game each time a new optimization is introduced; otherwise, PR authors need to be willing to revert the commit until the other runtimes update their code bases and the regressions are fixed.
It's worth noting that nothing here is a "new optimization". This is using fairly standard patterns that have been in use throughout the BCL and the general .NET ecosystem for years. The "newest" thing here is utilizing generic math/static abstracts in interfaces; but that was a core .NET 7 feature that previewed in .NET 6, and one that the community is actively using and adopting in various places to simplify their code.

We do need to be mindful of how changes will impact all runtimes, but at the same time we shouldn't find ourselves unable to remove thousands of lines of code, simplify overall maintenance, or expose new functionality because one runtime is missing several widely used pieces of functionality. NAOT and Crossgen do not typically have problems here because they share the same overall underlying codegen and optimizations with RyuJIT.
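For readers unfamiliar with the feature being referenced, here is a minimal, self-contained illustration of static abstracts in interfaces; the interface and types are invented for this example (the BCL analogues are the generic math interfaces such as INumberBase&lt;TSelf&gt;):

```csharp
using System;

// Invented example interface: the member itself is static and abstract.
interface IFromString<TSelf> where TSelf : IFromString<TSelf>
{
    static abstract TSelf FromString(string s);
}

struct Meters : IFromString<Meters>
{
    public double Value;
    public static Meters FromString(string s) => new Meters { Value = double.Parse(s) };
}

static class Demo
{
    // T.FromString binds to the concrete type's static method; on runtimes that
    // specialize generics over value types this involves no boxing or virtual dispatch.
    static T ParseAny<T>(string s) where T : IFromString<T> => T.FromString(s);

    static void Main() => Console.WriteLine(ParseAny<Meters>("42").Value); // 42
}
```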
"We do need to be mindful of how changes will impact all runtimes" : Are all engineers mindful of changes to impact to all runtimes -- my honest opinion I don't feel that is being done. In the past few months, Mono had several instances of both size and microbenchmark regressions identified after PRs are merged and perf lab runs identify the failures. I think this approach needs to change NOW, and impact on all runtimes identified prior to merging a PR. To know regressions will be introduced with a change is better for us, than reverse engineer PRs to figure out the regression after. |
There is certainly more that can be done by the engineers submitting the PRs. Part of this may be due to a lack of integration and documentation on the Mono side. This latter bit is unfortunately something only the Mono team can provide, as they are the ones with the context/knowledge to ensure it is correct/up to date.

For RyuJIT, whether you're working on Windows, Linux, or macOS, you have the same general support and process for getting disassembly, dumps, debugging, running tests, running benchmarks, etc. All of this works in VS or through the remote debuggers. The docs for this are all actively kept up to date, and integration with the other relevant repos regularly happens.

For Mono, there is a workflow on Linux/macOS. However, much of this workflow differs or doesn't exist on Windows. There are no docs or integration for how to get disassembly or how to run tests against Mono. Mono is also missing many pieces of functionality around generics, SIMD, and other core areas, which makes this very difficult. A PR that removes thousands of lines of code while simultaneously not changing or significantly improving perf for RyuJIT, NAOT, and Crossgen can easily regress Mono by just as much. This is likewise something that we need to address longer term, as maintaining split managed implementations for core functionality between Mono and RyuJIT isn't tenable. It also represents areas where Mono will actively be hurting for real-world user code, particularly in several prominent libraries/applications, and where those users will not be willing to spend the time to do significant additional testing.
We have had this same exact discussion many times in past years. We had a good post-mortem discussion after the RC1 regressions last year, and we may need to revisit that with the M2s and SteveC at some point and formulate a plan to avoid this in the future, irrespective of how large/small the changes are. Mobile, WASM, and MAUI are integral pieces of the .NET releases, and there are many developers using Mono in the wild. We can't be regressing their experiences.
Happy to do so :). There should likely be a general call to action and request for testing/input across the libraries/runtime teams. Ideally this would go out on Teams and in e-mail with relevant instructions and a general place to provide feedback.
My main point is that the experience is already "regressed" because Mono is missing many of the core optimizations that .NET developers rely on in the wild. We can work around these in the BCL and some of our own libraries, but that doesn't fix the issue for major community libraries or applications which are heavily utilizing the same optimizations/patterns or which are relying on NYI functionality such as SIMD. I'm happy to share more details of such applications/projects offline.
I go into meetings for a few hours and come back to this fun discussion 😄
Determining impact on all runtimes prior to merging any PR is simply not feasible. What we did decide last fall when we discussed this is that if a developer thought it likely that a change could have a significantly different performance profile on one runtime vs. another, more scouting would be done. And as this particular change involved generics, and generics have been an area of problems for mono in the past, I did run it through various benchmarks, and nothing popped as being too problematic. In case I somehow messed that up, I just did it again; here mono_main is just prior to my PR and mono_pr is with my PR:
Now, it's quite likely that this isn't the exact configuration that's registering all of these regressions (and, btw, there were also perf-autofiling issues opened for this change with improvements); for example, on my machine in this configuration the "o" test is showing as a 10% improvement with my change, whereas in the regression numbers in this issue it's showing as a 40% regression. But it's not feasible to expect a developer to test every PR on every operating system with every combination of mono vs wasm with AOT vs JIT vs interpreter and whatever other axes are available. We agreed that that's what our perf lab is for, and that when such regressions came about, we'd figure out the right course of action for them. The system flagged this change (great!), and now we need to figure out what to do about it.

So, what can we do about it? For context, today our primitive types only support writing themselves out as UTF16. We've had many requests, as well as a requirement from work being done in ASP.NET, to support UTF8 as well, and we've chosen to do that by adding a new IUtf8SpanFormattable interface that we're rolling out across the types. The implementation of its TryFormat has the exact same semantics as ISpanFormattable, except that it writes out UTF8 bytes rather than UTF16 chars. How do we enable that? We can either duplicate literally thousands of lines of formatting code, one whole set dedicated to UTF16 and one whole set dedicated to UTF8, or we can take advantage of generics and have a single code path support both (and in doing so, also eliminate the duplication in the existing but limited Utf8Formatter), resulting in a net decrease of code to maintain while also supporting new scenarios, rather than a net doubling. This PR was the first change to go in employing this: there have been a few more, and there's another big one currently out for PR that addresses the numeric types. Tanner is also working on doing the same for parsing.

I see the following options:

1. Roll back this change and its follow-ups indefinitely, giving up on shared UTF8/UTF16 formatting.
2. Roll back indefinitely and instead duplicate the formatting code so UTF8 and UTF16 each get their own copy.
3. Temporarily roll back the changes while the regressions are root-caused and addressed.
4. Work around the limitation in the Libraries code.
5. Address the underlying issue on the mono side.
Are there other options you had in mind, Sam? Thanks! cc: @jeffhandley, who drove the discussion in the fall about all of this
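For concreteness, here is a usage sketch of the new surface described above, assuming the .NET 8 shape of IUtf8SpanFormattable (same semantics as ISpanFormattable, but writing UTF8 bytes):

```csharp
using System;
using System.Text;

class Utf8FormatDemo
{
    static void Main()
    {
        int value = 12345;

        // Existing UTF-16 path via ISpanFormattable.
        Span<char> utf16 = stackalloc char[32];
        value.TryFormat(utf16, out int charsWritten, "D", null);

        // New UTF-8 path via IUtf8SpanFormattable: same semantics, byte output.
        Span<byte> utf8 = stackalloc byte[32];
        ((IUtf8SpanFormattable)value).TryFormat(utf8, out int bytesWritten, "D", null);

        Console.WriteLine(utf16.Slice(0, charsWritten).ToString());              // 12345
        Console.WriteLine(Encoding.UTF8.GetString(utf8.Slice(0, bytesWritten))); // 12345
    }
}
```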
If the issue is just or primarily with
Echoing @stephentoub's comments and referring back to the playbook we wrote up after last year's RC1 regressions:
Stephen laid out some options for us. I concur that options 1-2 (rolling back the changes indefinitely) are not good. Between options 3-5, it seems the first step is to root-cause how the regression surfaces on the mono side of things. From that root cause, we can choose between options 4 and 5 (work around the limitation in the Libraries code, or address the underlying issue on the mono side). Whether or not we need to temporarily roll back the changes (option 3) depends on knowing that root cause and knowing what options 4 and 5 look like, how long they would take, etc. The other factor for whether or not we need to temporarily roll this back is what it does to .NET 8 Preview 4. I believe the new functionality is valuable to ASP.NET scenarios that are to be highlighted in Preview 4 (and BUILD), but I don't know how adversely this affects mono/wasm scenarios that are under the spotlight for Preview 4 (or BUILD); I'd need @SamMonoRT or @lewing's input on that.
I was able to repro at least some of the mono regressions with AOT, and at least locally it appears to be a bunch of things, presumably (based on their nature) all related to inlining, though I'm not sure how to tell for sure on mono. I've been iterating on a commit that should hopefully address most things; plus, I saw Zoltan put up a few PRs that seem to be related.
…NDIRECT. Variables in the JIT are marked INDIRECT when their address is taken. Using volatile loads/stores prevents LLVM from optimizing away the address taking. Re: dotnet/perf-autofiling-issues#15699.
Adding a fast path like this to DateTimeFormat.FormatDigits() might help:
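A sketch of the kind of fast path being suggested (hypothetical code, not the actual proposed patch; the real DateTimeFormat.FormatDigits has a different surrounding shape):

```csharp
using System.Text;

static class DateTimeFormatSketch
{
    // Hypothetical stand-in for DateTimeFormat.FormatDigits: appends 'value'
    // zero-padded to at least 'len' digits.
    internal static void FormatDigits(StringBuilder outputBuffer, int value, int len)
    {
        // Fast path: 1-2 digit values cover months, days, hours, minutes, and seconds.
        if (len <= 2 && (uint)value < 100)
        {
            if (len == 2 || value >= 10)
            {
                outputBuffer.Append((char)('0' + value / 10));
            }
            outputBuffer.Append((char)('0' + value % 10));
            return;
        }

        // General path: format and pad.
        outputBuffer.Append(value.ToString().PadLeft(len, '0'));
    }
}
```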
@SamMonoRT, @jeffhandley, as for what we could do for the future: I've mentioned this before, but it'd also be really helpful if we could run benchmarks on mono on PRs via the /benchmark CI command or similar. It'd then be trivial to validate a particular change before it's merged against any number of desired targets. Right now, to my knowledge, all flavors of mono and wasm are missing from that.
Thanks. I've been experimenting with a bunch of changes locally, and I'll explore this as well. It's all on top of my pending numerics PR (to try to get a complete picture), so I'll aim to push up an additional commit to that today.
Thank you @stephentoub for running some numbers for Mono; unfortunately, as we learned, that wasn't enough.
Nothing would ever be "enough". That's the whole point of having our perf lab in place: catching and acting quickly upon regressions.
I'm fairly confident the numbers I shared were from the mono interpreter.
On which PRs?
This is not feasible on every PR. The time it takes to get the necessary builds locally and get everything set up to do such comparisons across multiple platforms is prohibitive. For one-offs where there is an expected impact, sure, totally valid. For everything else, I don't believe it's a reasonable request. There are always going to be regressions, and we react. This isn't limited to mono; similar regressions happen with coreclr and nativeaot, and we react.
Can we take a sampling approach and run a small subset of benchmarks? One that would take a few tens of minutes at most, as opposed to the several hours that the full set would take.
It's not just about running the benchmarks. It's also about getting all the relevant builds created to do the before/after comparison, getting the environment set up appropriately for each (on all the relevant operating systems), and then running the tests on each before/after build for each configuration, and doing so enough times to have confidence in the results. If this is something we care about, we need to invest in the automation that enables it to be done automatically and without requiring physical resources or time investment on the part of the developer. This would be trivial if I could issue a command like
…NDIRECT. (#84674) Variables in the JIT are marked INDIRECT when their address is taken. Using volatile loads/stores prevents LLVM from optimizing away the address taking. Re: dotnet/perf-autofiling-issues#15699.
> On which PRs? This is not feasible on every PR. The time it takes to get the necessary builds locally and get everything set up to do such comparisons across multiple platforms is prohibitive. For one-offs where there is an expected impact, sure, totally valid. For everything else, I don't believe it's a reasonable request. There are always going to be regressions, and we react.

I agree it isn't possible on all PRs, but at the same time, for expected-impact PRs we need to start adopting this practice. It will be time consuming and not smooth at first, but we need to start somewhere.

> This isn't limited to mono; similar regressions happen with coreclr and nativeaot, and we react.
This is essentially any libraries change, particularly for types in S.P.Corelib; it comes down to practically every PR in some areas. Those area owners rely heavily on CI and automation to help catch regressions outside the "core" targets that are easily tested locally (which is often just the box being developed on).
This isn't possible to do locally. We simply do not have the hardware, time, or resources per developer. If we want this done, it needs to be part of CI. At the very least, this needs to be available via an explicit trigger, but more ideally it would be automatable. We're never going to end up in a world where every developer on the team can get and test the coverage of
Mono changes, historically, were not part of the normal results seen for perf triage, didn't have documentation covering valid workflows across the range of devices (so it was difficult to even get working locally in the first place), didn't have integration with much of the core tooling used across the team, etc. A lot of this has been actively improved, but there is still a lot missing, improvements needed, and general effort required (particularly on behalf of developers) to ensure that it is treated equivalently. Us doing mono perf triage as part of the normal weekly triage has helped with this a lot. Ensuring that there is first-class support for testing and validating everything in Windows or WSL would be another huge boon, as would ensuring that coreclr/mono can be built side-by-side and trivially tested end-to-end.
We don't really do this for any other runtimes. We revert bugs, but perf regressions get filed, get investigated, and then the area owners make the decision around whether something is impactful enough to be reverted or whether another fix can be done instead. Most often, an alternative fix goes in instead.

The single most impactful thing to help avoid these regressions would be ensuring that generic specialization for value types and SIMD support is on for all core platforms Mono supports. Beyond that, there is high reliance on the RyuJIT inlining heuristics and a few other key patterns (such as box elision and

Having up-for-grabs issues that give some details on how this work can be contributed to or assisted with would be massively helpful. I've done several larger contributions on the Mono SIMD side where possible, but there is still a lot more to be done.
Preview 4 snap is two weeks from now; the regressions were flagged only yesterday and fixes are already in the works. With all due respect, what about the involvement in this issue suggests a lack of "investigation, passion, and collaboration"? I was exploring root causes and possible fixes on this until midnight last night, and it's the only thing other than meetings I've done today. As I write this I'm taking a break from stumbling through creating the various builds to validate the changes I have on various flavors of mono. I really hope your perspective here isn't based on this issue. From my perspective, the right things have already happened here. I'm not sure why this is the one that's causing red flags to be raised.

Frankly, I think we should be celebrating (to some extent) when regressions like this happen, as they help to shine a light on real patterns real developers employ and that we're deficient in handling. We investigate, we evaluate, we make fixes that address more than just the particular code path originally reported. All boats rise.
Too time consuming. I'm sorry, it's simply not practical, as Tanner outlined. If all regressions must be caught prior to merge, what's the point of our regression system at all? Things invariably slip through to being merged; there are too many operating systems and subsystems and backends and whatnot involved for that to not be the case, and we have zero pre-merge perf automation tooling in place. So when an issue arises, we address it. From my perspective, we're following exactly the playbook we all agreed to just a few months ago. If there's an action to be taken, it's on improving our ability to automate all of this.

I'd also like to call out that not all regressions are created equal. One of the worst regressions on mono reported in this set, for example, was a particular use of DateTime.ToString/TryFormat regressing 2x, by ~300ns. For the server workloads that coreclr often enables, serving millions of requests per second and formatting millions of DateTimes per second, that kind of regression could actually make a noticeable difference. For the client workloads that mono is generally used for today, in what scenario is an extra 300ns when calling DateTime.ToString prohibitive? If we really care about that level of throughput on these micro-operations on mono, then we should also care about the gap between coreclr and mono... if you look at the numbers I just posted in dotnet/runtime#84587 (comment), which were all done on the same machine on the same OS, there's a much larger gap between coreclr and mono than any of the regressions here represent. I'm in no way throwing shade at mono (it does some really cool things coreclr doesn't today, like support wasm); rather, I'm highlighting that they're focused on different things, and we need to use that difference as a lens through which we evaluate the severity of these kinds of issues.
The right things are happening in this case, and I agree we should celebrate that the systems in place are catching issues and we are actively investigating. My perspective is not built on this issue but on past occurrences. I will admit the mindset is improving for a certain set of individuals. This issue had 2-3 other follow-up PRs in the pipeline, and we want to avoid a scenario where more regressions slide in and we have no time to fix them prior to the snap. Taking too long to get results isn't an acceptable reason to not attempt those runs. I can check if some engineers outside Mono can get their hands on MacBooks, build on them, and run the microbenchmarks locally to get numbers prior to check-in. The gap between coreclr and mono is huge, we all agree on that. We have been trying to narrow the gap over the past couple of years, but more regressions introduced in favor of other runtimes only make it worse.
At the risk of being the stereotypical manager type that swoops in and simply repeats what others have already said... We are following our playbook here, and that's indeed worthy of celebration. Capturing here the details of that playbook and noting where it's working:
I think we're demonstrating that our learnings from last fall have been valuable. Concrete action items to help us go even further:
@cincuranet and @LoopedBard3 are the right people to start with in exploring our options for that.
It has been another long day, so I'm not going to go point by point through the history, but I think it is clear everyone here wants to improve the state of things and end up in a better place. I also assume we all know it isn't great when you have to wait for the once-a-week report on Tuesday morning to understand if your schedule is still intact, doubly so when there are often long gaps between changes and reports due to infrastructure.

Text serialization is an ongoing concern for Wasm, and due to size constraints we have some very brute-force methods to decide what to AOT. It's not because we don't want to improve things. It would be great if big text serialization changes were tested on Wasm prior to landing, with and without AOT, even just as a heads-up.

I haven't seen any feedback on the wasm docs; if they don't work for you, please file an issue. I'm not at all happy with how hard it is to make those measurements, but that is also not entirely within my control. That said, if the documentation or procedures are incorrect or too onerous, please file an issue so that we can work on that.

I'm optimistic we'll improve the performance of these changes soon, but it doesn't always feel like we're working on solving this as a team.
Run Information
Regressions in System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>
Test Report
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
Payloads
Baseline
Compare
Payloads
Baseline
Compare
Histogram
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeToUtf8Bytes(Mode: Reflection)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeToWriter(Mode: SourceGen)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeToString(Mode: SourceGen)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeObjectProperty(Mode: SourceGen)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeObjectProperty(Mode: Reflection)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeToString(Mode: Reflection)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeToWriter(Mode: Reflection)
Description of detection logic
System.Text.Json.Serialization.Tests.WriteJson<MyEventsListerViewModel>.SerializeToUtf8Bytes(Mode: SourceGen)
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository
Run Information
Regressions in System.Tests.Perf_DateTime
Test Report
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
Payloads
Baseline
Compare
Payloads
Baseline
Compare
Histogram
System.Tests.Perf_DateTime.ToString(format: null)
Description of detection logic
System.Tests.Perf_DateTime.ToString(format: "o")
Description of detection logic
System.Tests.Perf_DateTime.ToString(format: "s")
Description of detection logic
System.Tests.Perf_DateTime.ToString(format: "G")
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository
Run Information
Regressions in System.Text.Json.Tests.Perf_DateTimes
Test Report
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
Payloads
Baseline
Compare
Payloads
Baseline
Compare
Histogram
System.Text.Json.Tests.Perf_DateTimes.WriteDateTimes(Formatted: True, SkipValidation: False)
Description of detection logic
System.Text.Json.Tests.Perf_DateTimes.WriteDateTimes(Formatted: False, SkipValidation: True)
Description of detection logic
System.Text.Json.Tests.Perf_DateTimes.WriteDateTimes(Formatted: True, SkipValidation: True)
Description of detection logic
System.Text.Json.Tests.Perf_DateTimes.WriteDateTimes(Formatted: False, SkipValidation: False)
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository
Run Information
Regressions in System.Text.Tests.Perf_StringBuilder
Test Report
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
Payloads
Baseline
Compare
Payloads
Baseline
Compare
Histogram
System.Text.Tests.Perf_StringBuilder.Append_ValueTypes
Description of detection logic
System.Text.Tests.Perf_StringBuilder.Append_ValueTypes_Interpolated
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository
Run Information
Regressions in MicroBenchmarks.Serializers.Json_ToStream<MyEventsListerViewModel>
Test Report
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
Payloads
Baseline
Compare
Payloads
Baseline
Compare
Histogram
MicroBenchmarks.Serializers.Json_ToStream<MyEventsListerViewModel>.SystemTextJson_SourceGen_
Description of detection logic
MicroBenchmarks.Serializers.Json_ToStream<MyEventsListerViewModel>.SystemTextJson_Reflection_
Description of detection logic
Docs
Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository