Faster get lookup name #16
Conversation
Funny thing: as it is now, docpipeline took 4:48 vs 4:34 in master. But, if symbols.py is cythonized, …
From the above it sounds like master is somewhere between 4:34 and 4:47. The fact that the cython run happened to get the slower number is immaterial, and the spread of 13 seconds may be within the noise range.
@TiagoCavalcante - thanks. I am guessing that these results wobble around less since they are more targeted. It would be good to somehow include in the reports the number of iterations, since the belief is that the larger the number of iterations, the more confidence we should have in the results.
@mmatera is this ready for final review, or are you contemplating more work on this?
@rocky what do you think of putting it in the title: "String_vs_StringQ - n iterations"?
@TiagoCavalcante - that is a simple and low-tech way to address this. (Because it is not automated, it does rely on a person getting the title correct and updating the title should the number of iterations change.) As I look again at both benchmarks, the results seem marginal to me. In the benchmark over all doctests, the improvement seems within the margin of wobble. In the targeted benchmarks, the improvement is greatest in a construct we now know to avoid. @mmatera and @TiagoCavalcante, what are your readings and thoughts on the benchmarks?
@rocky I didn't express myself well. The information would be read from the result file (which already contains the number of iterations) and automatically put in the title.
@rocky I used …
@rocky now I remember that each group can have its own number of iterations, so I guess this would just confuse people. Another idea is to show the time it took to run the benchmarks, since more iterations means more time.
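A minimal sketch of the title idea, under the assumption that the result file is a dict with a per-group `"iterations"` count (the exact format of the real result file is not shown in this thread, so `benchmark_title`, the field names, and the fallback to total time are all hypothetical):

```python
def benchmark_title(results: dict) -> str:
    """Build a chart title from a benchmark results dict.

    Hypothetical result format:
        {"name": ..., "elapsed": seconds,
         "groups": {group_name: {"iterations": n, ...}, ...}}
    If every group used the same iteration count, show it in the title;
    otherwise fall back to the total wall time, since mixed per-group
    counts in a single title would confuse readers.
    """
    counts = {group["iterations"] for group in results["groups"].values()}
    if len(counts) == 1:
        return f'{results["name"]} - {counts.pop()} iterations'
    return f'{results["name"]} - {results["elapsed"]:.0f}s total'


results = {
    "name": "String_vs_StringQ",
    "elapsed": 42.0,
    "groups": {
        "String": {"iterations": 1000},
        "StringQ": {"iterations": 1000},
    },
}
print(benchmark_title(results))  # String_vs_StringQ - 1000 iterations
```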
Yes, but often that is not the case. So in those common situations this could be used.
The number of iterations generally, not the time spent, is the indicator of the standard deviation or variation, which is what we want a sense of. Also, I don't want to have to do mental calculations.

There could be a legend for the number of iterations, like there is for the colors. The color of a test, or perhaps a box after the test, could indicate how many iterations were run for that test or test pair/group. Which reminds me: if the legend were put outside the graph, there would be no chance of it obscuring boxes or running into boundaries.
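A minimal sketch of the color-coded iteration legend idea; the thresholds and color names below are invented for illustration, not taken from the benchmark tool:

```python
def iteration_color(iterations: int) -> str:
    """Map an iteration count to a legend color bucket.

    The thresholds are assumptions; the point is only that readers can
    see at a glance how much to trust each test's numbers.
    """
    if iterations >= 10_000:
        return "green"   # many iterations: high confidence in the result
    if iterations >= 1_000:
        return "yellow"  # moderate confidence
    return "red"         # few iterations: treat differences as possible noise


print(iteration_color(50_000))  # green
```

For the "legend outside the graph" part, matplotlib supports this directly: `ax.legend(loc="upper left", bbox_to_anchor=(1.02, 1))` anchors the legend just outside the axes, so it can never obscure the boxes.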
@@ -170,8 +170,7 @@ def is_machine_precision(self) -> bool:

     def get_lookup_name(self):
         "Returns symbol name of leftmost head"
-        return self.get_name()
+        return ""
In master, BaseExpression returns self.get_name(). This requires at least one function call that in the end is useless: you cannot associate definitions to Atoms that are not Symbols.
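A stripped-down sketch of the idea behind the diff. The class names echo Mathics (`BaseExpression`, `Symbol`, `String`), but the hierarchy here is heavily simplified and is an assumption, not the real implementation:

```python
class BaseExpression:
    def get_name(self):
        # Only symbols have a name; everything else has none.
        return ""

    def get_lookup_name(self):
        # Old behavior: always pay for a get_name() call, even for atoms
        # that can never own definitions.
        return self.get_name()


class Symbol(BaseExpression):
    def __init__(self, name):
        self.name = name

    def get_name(self):
        return self.name

    def get_lookup_name(self):
        # A symbol looks itself up under its own name.
        return self.name


class String(BaseExpression):
    # A non-Symbol atom cannot carry definitions, so its lookup name is
    # constant: returning "" directly skips the useless get_name() call.
    def get_lookup_name(self):
        return ""
```

The saving per call is tiny, but get_lookup_name sits on the pattern-matching hot path, which is why the thread cares about it at all.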
@rocky: I do not understand: is this related to this PR or to ShowSteps? BTW, I call it …
It was related to ShowSteps, not this PR. Sure, I gather it was the first thing that came to mind. The first thing, though, is sometimes not the best thing. PR #80 is now in.
@mmatera what is the status of this? I think the slowdowns are just fluctuations; in Python, function calls are slow.
I think this PR is better implemented now in #86 |
This PR should provide a small improvement in the pattern matching mechanism. @rocky, benchmarks are coming...