Improve error messages when IPU config is not compatible with model #210

Conversation

@payoto payoto commented Nov 24, 2022

What does this PR do?

This PR improves the error message raised when the layers_per_ipu attribute of the config does not provide enough entries for the number of layers in the model.

Before this PR, a raw IndexError was raised, with no suggestion of what the cause might be.

This PR introduces two layers of error handling:

  1. When using a pipeline, if Poplar executor creation fails, additional context is added to the error, suggesting that the IPU config might be incompatible with the model.
  2. The models now call get_layer_ipu with an additional argument used to check whether the model has more layers than the config assigns. This is still not ideal, as it relies on each model calling the helper correctly (although the default value of the argument could be removed to enforce that).
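The check in step 2 can be sketched as follows. This is a simplified reconstruction based on the traceback below: the real get_layer_ipu in optimum/graphcore/modeling_utils.py may handle additional cases, and here the second argument is taken as an integer layer count rather than the module list the models pass.

```python
def get_layer_ipu(layers_per_ipu, target_number_of_layers):
    # Expand per-IPU layer counts into a per-layer IPU assignment:
    # layers_per_ipu=[2, 4] -> [0, 0, 1, 1, 1, 1]
    # (two layers on IPU 0, four layers on IPU 1).
    layer_ipu = []
    for ipu, n_layers in enumerate(layers_per_ipu):
        layer_ipu.extend([ipu] * n_layers)
    # The new check: fail loudly when the config covers fewer layers
    # than the model actually has, instead of a later raw IndexError.
    if len(layer_ipu) < target_number_of_layers:
        raise ValueError(
            "layers_per_ipu does not support enough layers for the current model."
            " The current IPU Config specifies IPU assignments for "
            f"{len(layer_ipu)} but there are {target_number_of_layers}. "
            f"layers_per_ipu={layers_per_ipu}"
        )
    return layer_ipu
```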

The new error message looks like this:

>>> data = ["I love you", "I hate you"]
>>> specific_model = pipelines.pipeline(model="cardiffnlp/twitter-roberta-base-sentiment")
>>> specific_model(data)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File /localdata/alexandrep/paperspace-forks/optimum-graphcore-fork/optimum/graphcore/pipelines/__init__.py:169, in get_poplar_executor(model, ipu_config, fp16)
    168     model = to_pipelined(model, ipu_config, force=False)
--> 169     model.parallelize()
    170 except Exception as error:
File /localdata/alexandrep/paperspace-forks/optimum-graphcore-fork/optimum/graphcore/models/roberta/modeling_roberta.py:250, in PipelinedRobertaForSequenceClassification.parallelize(self)
    249 def parallelize(self):
--> 250     super().parallelize()
    251     last_ipu = self.ipu_config.ipus_per_replica - 1
File /localdata/alexandrep/paperspace-forks/optimum-graphcore-fork/optimum/graphcore/models/roberta/modeling_roberta.py:67, in RobertaPipelineMixin.parallelize(self)
     65 self._hooks.extend(hs)
---> 67 layer_ipu = get_layer_ipu(self.ipu_config.layers_per_ipu, self.roberta.encoder.layer)
     68 for index, layer in enumerate(self.roberta.encoder.layer):
File /localdata/alexandrep/paperspace-forks/optimum-graphcore-fork/optimum/graphcore/modeling_utils.py:226, in get_layer_ipu(layers_per_ipu, target_number_of_layers)
    225     if len(layer_ipu) < target_number_of_layers:
--> 226         raise ValueError(
    227             "layers_per_ipu does not support enough layers for the current model."
    228             " The current IPU Config specifies IPU assignments for "
    229             f"{len(layer_ipu)} but there are {target_number_of_layers}. "
    230             f"layers_per_ipu={layers_per_ipu}"
    231         )
    232 return layer_ipu
ValueError: layers_per_ipu does not support enough layers for the current model. The current IPU Config specifies IPU assignments for 6 but there are 12. layers_per_ipu=[2, 4]
The above exception was the direct cause of the following exception:
IncompatibleIPUConfigError                Traceback (most recent call last)
Cell In [8], line 1
----> 1 specific_model = pipelines.pipeline(
      2     model="cardiffnlp/twitter-roberta-base-sentiment", 
      3     # ipu_config="Graphcore/roberta-base-ipu"
      4 )
      5 specific_model(data)
File /localdata/alexandrep/paperspace-forks/optimum-graphcore-fork/optimum/graphcore/pipelines/__init__.py:274, in pipeline(task, model, ipu_config, tokenizer, feature_extractor, revision, use_fast, use_auth_token, pipeline_class, fp16, **kwargs)
    272     model_id = model
    273     model = SUPPORTED_TASKS[targeted_task]["class"][0].from_pretrained(model_id, revision=revision)
--> 274     model = get_poplar_executor(model, ipu_config, fp16)
    275 elif isinstance(model, PreTrainedModel):
    276     model = get_poplar_executor(model, ipu_config, fp16)
File /localdata/alexandrep/paperspace-forks/optimum-graphcore-fork/optimum/graphcore/pipelines/__init__.py:176, in get_poplar_executor(model, ipu_config, fp16)
    170 except Exception as error:
    171     new_message = (
    172         "The model and ipu_config seem to be incompatible,"
    173         " please try a different IPU config or customizing it for the model."
    174         f" The config provided is '{ipu_config_arg}'"
    175     )
--> 176     raise IncompatibleIPUConfigError(new_message) from error
    177 if fp16:
    178     model.half()
IncompatibleIPUConfigError: The model and ipu_config seem to be incompatible, please try a different IPU config or customizing it for the model. The config provided is 'Graphcore/distilbert-base-ipu'
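The IncompatibleIPUConfigError above is raised with `raise ... from error`, so the original ValueError is preserved as __cause__ and both tracebacks are printed. A minimal standalone sketch of that wrapping pattern (the helper name parallelize_or_explain is hypothetical; the real code does this inside get_poplar_executor):

```python
class IncompatibleIPUConfigError(Exception):
    """Raised when an IPU config cannot be applied to a model."""

def parallelize_or_explain(parallelize, ipu_config_name):
    # `parallelize` stands in for the model.parallelize() call made in
    # get_poplar_executor. Any failure is re-raised with a hint about
    # the config; `raise ... from error` chains the original exception
    # so its traceback is not lost.
    try:
        parallelize()
    except Exception as error:
        raise IncompatibleIPUConfigError(
            "The model and ipu_config seem to be incompatible,"
            " please try a different IPU config or customizing it for the model."
            f" The config provided is '{ipu_config_name}'"
        ) from error
```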

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@payoto
Copy link
Contributor Author

payoto commented Nov 24, 2022

The docs check is failing because this PR comes from a fork with a different name.

@michaelbenayoun michaelbenayoun left a comment


LGTM!
Could you run a few training jobs just to validate that everything works, since you edited the parallelize methods?

payoto commented Nov 24, 2022

Yep, no problem. In your opinion, would it be enough to run the test_examples file? I'm not very familiar with what the test coverage is like.

payoto commented Nov 25, 2022

I ran test_examples.py; all tests passed except one, which failed because the score was 0.004 below the threshold (a margin of less than 1%):

============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /localdata/alexandrep/paperspace-forks/optimum-graphcore-fork, configfile: setup.cfg
plugins: anyio-3.6.1, cov-4.0.0
collected 38 items

tests/test_examples_match_transformers.py ............
tests/test_examples.py .....................................x

=========================================================================
>                   self.assertGreaterEqual(float(results[self.SCORE_NAME]), threshold)
E                   AssertionError: 0.6461915373802185 not greater than or equal to 0.65

tests/test_examples.py:185: AssertionError

=========================== short test summary info ============================
FAILED tests/test_examples.py::MultipleChoiceExampleTester::test_run_swag_distilbert
============ 1 failed, 37 passed, 1 warning in 30425.34s (8:27:05) =============

@michaelbenayoun
Yes that's great, thanks!

payoto commented Nov 25, 2022

@michaelbenayoun, I don't have write access to the repository so I can't merge it, could you either merge or grant me write access?

@michaelbenayoun michaelbenayoun merged commit 13fc11e into huggingface:main Nov 25, 2022