test_random_search_layout_opt is failing locally on my machine but not on github #936

Closed
paulf81 opened this issue Jul 4, 2024 · 5 comments
Labels: bug (Something isn't working)

paulf81 commented Jul 4, 2024

I think this must be due to a difference in package versions or OS, but on the current version of develop (as of July 4), running test_random_search_layout_opt produces the error:

/Users/pfleming/Projects/FLORIS/floris/tests/reg_tests/random_search_layout_opt_regression_test.py::test_random_search_layout_opt failed: sample_inputs_fixture = <tests.conftest.SampleInputs object at 0x7fefe38b8640>

    def test_random_search_layout_opt(sample_inputs_fixture):
        """
        The SciPy optimization method optimizes turbine layout using SciPy's minimize method. This test
        compares the optimization results from the SciPy layout optimization for a simple farm with a
        simple wind rose to stored baseline results.
        """
        sample_inputs_fixture.core["wake"]["model_strings"]["velocity_model"] = VELOCITY_MODEL
        sample_inputs_fixture.core["wake"]["model_strings"]["deflection_model"] = DEFLECTION_MODEL
    
        boundaries = [(0.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0), (1000.0, 0.0), (0.0, 0.0)]
    
        fmodel = FlorisModel(sample_inputs_fixture.core)
        wd_array = np.arange(0, 360.0, 5.0)
        ws_array = 8.0 * np.ones_like(wd_array)
    
        wind_rose = WindRose(
            wind_directions=wd_array,
            wind_speeds=ws_array,
            ti_table=0.1,
        )
        D = 126.0 # Rotor diameter for the NREL 5 MW
        fmodel.set(
            layout_x=[0.0, 5 * D, 10 * D],
            layout_y=[0.0, 0.0, 0.0],
            wind_data=wind_rose
        )
    
        layout_opt = LayoutOptimizationRandomSearch(
            fmodel=fmodel,
            boundaries=boundaries,
            min_dist_D=5,
            seconds_per_iteration=1,
            total_optimization_seconds=1,
            use_dist_based_init=False,
            random_seed=0,
        )
        sol = layout_opt.optimize()
        optimized_aep = sol[0]
        locations_opt = np.array([sol[1], sol[2]])
    
        if DEBUG:
            print(locations_opt)
            print(optimized_aep)
    
>       assert_results_arrays(locations_opt, locations_baseline_aep)

tests/reg_tests/random_search_layout_opt_regression_test.py:79: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test = array([[   0.        ,  440.07873658, 1260.        ],
       [   0.        ,  563.92108396,    0.        ]])
baseline = array([[   0.        ,  619.07183266, 1260.        ],
       [   0.        ,  499.88056089,    0.        ]])

    def assert_results_arrays(test: np.array, baseline: np.array):
        if np.shape(test) != np.shape(baseline):
            raise ValueError("test and baseline results have mismatched shapes.")
    
        for test_dim0, baseline_dim0 in zip(test, baseline):
            for test_dim1, baseline_dim1 in zip(test_dim0, baseline_dim0):
>               assert np.allclose(test_dim1, baseline_dim1)
E               assert False
E                +  where False = <function allclose at 0x7fefe0dd2fb0>(440.07873657855356, 619.07183266)
E                +    where <function allclose at 0x7fefe0dd2fb0> = np.allclose

tests/conftest.py:28: AssertionError

How to reproduce

Run test_random_search_layout_opt on the develop branch on a local machine.

System Information

Python 3.10.4 (main, Mar 31 2022, 03:38:35) [Clang 12.0.0 ] on darwin
numpy==1.26.4
pandas==2.1.4
paulf81 added the bug label on Jul 4, 2024

misi9170 commented Jul 8, 2024

Interesting; presumably this was introduced when I updated the reg test comparison values in #934.

Tests pass for me locally, with the following package versions:

python = 3.10.6
numpy = 1.25.1
pandas = 1.5.0
scipy = 1.11.1

@paulf81, if you switch the DEBUG flag to True at the top of the file, what gets printed out for locations_opt?
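
For context, the block that flag guards is visible in the traceback above; a minimal sketch, assuming DEBUG is a module-level constant defined near the top of the test file:

    # Assumed module-level flag near the top of
    # tests/reg_tests/random_search_layout_opt_regression_test.py
    DEBUG = False  # set to True to print the optimized layout and AEP

    # Inside test_random_search_layout_opt (as shown in the traceback):
    if DEBUG:
        print(locations_opt)
        print(optimized_aep)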

paulf81 commented Jul 8, 2024

Ah, you're right; this appears to be a precision issue. I can only spot it when stepping through assert_results_arrays in the debugger, but, for example, the test now passes if I change the final checks to:

    assert np.allclose(locations_opt, locations_baseline_aep, atol=1e-1)
    assert np.abs((optimized_aep - baseline_aep) / baseline_aep) < 0.01

But as-is, both are just different enough to fail.

misi9170 commented Jul 8, 2024

OK, I think we could switch to using np.allclose instead of assert_results_arrays() to give us the tolerance options (or add optional atol and rtol keyword arguments to assert_results_arrays() for better future-proofing)?
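
A minimal sketch of the second option, extending the assert_results_arrays() implementation shown in the traceback above; the keyword defaults simply mirror np.allclose's defaults (rtol=1e-5, atol=1e-8), so existing call sites would behave exactly as before:

    import numpy as np

    def assert_results_arrays(test: np.array, baseline: np.array, atol=1e-8, rtol=1e-5):
        # Shape check unchanged from the current tests/conftest.py version
        if np.shape(test) != np.shape(baseline):
            raise ValueError("test and baseline results have mismatched shapes.")

        for test_dim0, baseline_dim0 in zip(test, baseline):
            for test_dim1, baseline_dim1 in zip(test_dim0, baseline_dim0):
                # Tolerances are passed through, so a test that needs a looser
                # comparison can call, e.g., assert_results_arrays(a, b, atol=1e-1)
                assert np.allclose(test_dim1, baseline_dim1, atol=atol, rtol=rtol)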

paulf81 commented Jul 8, 2024

Hi @misi9170, it turns out I was somewhat wrong about this: the answers are just quite different if I don't run in debug mode.

I changed the DEBUG output to:

print(locations_baseline_aep)
print(locations_opt)
print('----------------')
print(baseline_aep)
print(optimized_aep)

If I run the test normally, this prints:

[[   0.          619.07183266 1260.        ]
 [   0.          499.88056089    0.        ]]
[[   0.          440.07873658 1260.        ]
 [   0.          563.92108396    0.        ]]
----------------
44798828639.17205
44841782747.77768

But if I run it in debug mode:

[[   0.          619.07183266 1260.        ]
 [   0.          499.88056089    0.        ]]
[[   0.          619.07183266 1260.        ]
 [   0.          499.88056089    0.        ]]
----------------
44798828639.17205
44798828639.17205

So I get matching results when I run in debug mode, but quite different ones if I just run:

pytest -rA tests/
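
Since that invocation runs the entire suite, one way to check whether state carried over from other tests is a factor would be to run just this test by its node ID (taken from the traceback above):

    pytest -rA tests/reg_tests/random_search_layout_opt_regression_test.py::test_random_search_layout_opt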

misi9170

Closed by #940
