Check if the new L-BFGS-B algorithm in scipy 1.15 is generally less precise or our test case was an exception #556


Closed
janosg opened this issue Jan 21, 2025 · 5 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@janosg
Member

janosg commented Jan 21, 2025

In #555 we had to reduce the precision of a test case due to the new L-BFGS-B implementation in scipy 1.15. We should run a full benchmark, or at least create a self-contained small example that shows the difference, before we reach out to the scipy maintainers. The test case there is not small enough because it uses optimagic constraints.
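A self-contained example could be as small as a direct call to scipy.optimize.minimize; the following is only a hypothetical sketch using scipy's built-in Rosenbrock function (not our actual test case), to be run under both scipy versions:

```python
# Hypothetical sketch of a self-contained comparison (not the actual
# failing test case): run under scipy < 1.15 and under scipy 1.15,
# then compare the printed results.
import numpy as np
import scipy
from scipy.optimize import minimize, rosen, rosen_der

res = minimize(
    rosen,                # scipy's built-in Rosenbrock test function
    x0=np.full(5, 2.0),   # arbitrary starting point (assumption)
    jac=rosen_der,        # analytic gradient
    method="L-BFGS-B",
)

# Print with enough digits to reveal small differences between versions.
print(f"scipy {scipy.__version__}: fun = {res.fun:.12e}")
print("x =", np.array2string(res.x, precision=12))
```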

@janosg janosg added the enhancement New feature or request label Jan 21, 2025
timmens added a commit that referenced this issue Jan 21, 2025
@janosg
Member Author

janosg commented Mar 11, 2025

To find out if the new implementation is really different from the old one, use optimagic's benchmarking capabilities. The "More Wild" benchmark set would be a good choice.

The only difference from the how-to guide is that you need to combine the results of multiple invocations of run_benchmark to enable benchmarking across library versions (see the sketch after this list). I.e., the process is:

  • Create a fresh development environment with an older scipy version
  • Do all the steps from the how-to guide up to and including run_benchmark and save the result as a pickle file
  • Create an environment with the latest scipy version
  • Repeat the steps up to and including run_benchmark and save the result as a pickle file
  • Load and combine the result dictionaries
  • Create the profile plots
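A sketch of that workflow, assuming optimagic's benchmarking API as described in the how-to guide (get_benchmark_problems, run_benchmark, profile_plot); the pickle paths and the optimizer label are placeholders:

```python
# Sketch of the workflow above, assuming optimagic's benchmarking API
# from the how-to guide; file names and optimizer labels are placeholders.
import pickle

import optimagic as om

problems = om.get_benchmark_problems("more_wild")

# --- Run once per environment, with a version-specific label ----------
# (assumes optimize_options accepts a dict mapping custom names to
# algorithm names, as in the how-to guide)
results = om.run_benchmark(
    problems,
    optimize_options={"lbfgsb_scipy_1_15": "scipy_lbfgsb"},
)
with open("results_scipy_1_15.pkl", "wb") as f:
    pickle.dump(results, f)

# --- Then, in either environment, combine and plot --------------------
with open("results_scipy_old.pkl", "rb") as f:
    results_old = pickle.load(f)
with open("results_scipy_1_15.pkl", "rb") as f:
    results_new = pickle.load(f)

# run_benchmark returns a dict keyed by (problem_name, optimizer_name),
# so the two result sets can be merged directly.
combined = {**results_old, **results_new}

fig = om.profile_plot(problems=problems, results=combined)
fig.show()
```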

@gauravmanmode
Contributor

Hi, I followed the steps and was able to generate the profile plots.
Here are the notebook and pickle files for reference:
https://github.com/gauravmanmode/sharing/blob/main/test_optimagic.html
This is the profile plot:

[Image: profile plot comparing the two scipy versions]

@janosg
Member Author

janosg commented Mar 15, 2025

Thanks @gauravmanmode for creating the plots.

Ideally, the two lines would lie exactly on top of each other, but at least neither of them is strictly better than the other.

Can you create a few more plots before we close this:

  • profile_plots with y_precision=0.00001 and y_precision=0.000001 so we can see if stricter precision requirements change how many problems are solved by each optimizer
  • a convergence_plot that will show even minor differences in the behavior of the two optimizers.

You don't have to re-run the benchmark; you can just create the plots from your previous results (see the sketch below).
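For reference, a sketch of recreating the plots from saved results, assuming profile_plot accepts y_precision and stopping_criterion arguments and that convergence_plot shares the same problems/results interface; the pickle path is a placeholder:

```python
# Sketch: recreate the plots from saved benchmark results without
# re-running the benchmark. The pickle path is a placeholder; assumes
# profile_plot accepts y_precision and that convergence_plot shares
# the problems/results interface.
import pickle

import optimagic as om

problems = om.get_benchmark_problems("more_wild")

with open("combined_results.pkl", "rb") as f:
    results = pickle.load(f)

# Stricter precision requirements: do fewer problems count as solved?
for y_precision in (1e-5, 1e-6):
    fig = om.profile_plot(
        problems=problems,
        results=results,
        stopping_criterion="y",
        y_precision=y_precision,
    )
    fig.show()

# Convergence plot to surface even small per-problem differences.
fig = om.convergence_plot(problems=problems, results=results)
fig.show()
```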

@gauravmanmode
Contributor

Here is the profile plot with y_precision = 0.00001:
[Image: profile plot, y_precision = 0.00001]
Here is the profile plot with y_precision = 0.000001:
[Image: profile plot, y_precision = 0.000001]
Here is the convergence_plot figure:
[Image: convergence plot]
The convergence_plot curves for both optimizers overlap completely.

@janosg
Member Author

janosg commented Mar 19, 2025

Thank you very much for the new plots. I think we can close this issue then. There are differences between the two implementations, but they are tiny, and it would not be justified to open an issue at scipy.

@janosg janosg closed this as completed Mar 19, 2025