
Add support for Bayesian neural network quantization #23

Merged
merged 71 commits into main on Jun 27, 2023

Conversation

junliang-lin
Contributor

No description provided.

ranganathkrishnan and others added 30 commits November 14, 2022 09:41
Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>
include kl_loss() function in Convolutional flipout layers,
to compute kl when the 'return_kl' flag is set to False; see the usage sketch after this commit batch.
Fix for issue #12.

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>
update usage instructions in README file
Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>
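
The `return_kl` flag and `kl_loss()` changes in this batch alter the layer forward API. A minimal usage sketch, assuming the `Conv2dFlipout` layer from `bayesian_torch.layers` with its standard `(in_channels, out_channels, kernel_size)` constructor; the tensor shapes are illustrative, not taken from this PR:

```python
import torch
from bayesian_torch.layers import Conv2dFlipout

# kernel_size may also be a tuple such as (3, 5); a later commit in this
# PR adds support for arbitrary kernel sizes in the Bayesian Conv layers.
layer = Conv2dFlipout(in_channels=3, out_channels=16, kernel_size=3)
x = torch.randn(8, 3, 32, 32)

out, kl = layer(x)               # default: forward returns (output, kl)
out = layer(x, return_kl=False)  # output only; the kl term is not returned
kl = layer.kl_loss()             # compute the KL term on demand (issue #12 fix)
```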
junliang-lin and others added 27 commits February 6, 2023 12:45
* fix minor typo.

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* Update links in README.md

* update MOPED layer example utility function

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* Update README.md

* feat: add option to skip returning kl and save it as an attribute

* feat: add option to skip returning kl on flipout layers and save it as an attribute

* updates to support dnn to bnn model auto conversion (usage sketch after the commit list below)

* remove duplicate kl_loss definition in Conv1dReparameterization layer

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* include kl_loss() function in Convolutional flipout layers,
to compute kl when the 'return_kl' flag is set to False.
Fix for issue #12.

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* Update README.md

* Update README.md

* update the posterior variational param init value

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* Update release version with dnn_to_bnn() feature

* Update README.md

update usage instructions in README file

* Update requirements.txt

* Include training, testing and uncertainty quantification snippet in README.md

* update version in setup.py

* Update bayesian_torch.layers.md

* Update links in README.md

* Update setup.py

* Update README.md

* include assets folder

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update setup.py

* release to PyPI, update install instructions for the "pip" command

* Switched to permanent URL for the top image.

* change to raw.githubusercontent.com URL for the top image.

* Update README.md

* update links and release number for PyPI documentation

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* Update README.md

add download statistics badge

* update download count badge

* Added support for arbitrary kernel sizes for Bayesian Conv layers

* update version number

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>

* Update README.md

* Add support for output padding in flipout layers (see the transposed-conv sketch after the merge note below)

---------

Signed-off-by: Ranganath Krishnan <ranganath.krishnan@intel.com>
Co-authored-by: Ranganath Krishnan <ranganath.krishnan@intel.com>
Co-authored-by: Pi <piero.skywalker@gmail.com>
Co-authored-by: msubedar <mahesh.subedar@intel.com>
Co-authored-by: Michael Beale <michael.beale@intel.com>
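
For the dnn_to_bnn model auto-conversion referenced in the commit list above, a hedged sketch of the intended workflow, following the usage snippet this PR adds to the README; the ResNet model and the exact prior-parameter keys reflect the library's documented defaults rather than code quoted from this diff:

```python
import torch
import torchvision
from bayesian_torch.models.dnn_to_bnn import dnn_to_bnn, get_kl_loss

model = torchvision.models.resnet18(pretrained=True)

# Prior/posterior settings for the converted Bayesian layers.
# "moped_enable" initializes the variational posterior from the
# pretrained DNN weights (the MOPED approach mentioned above).
const_bnn_prior_parameters = {
    "prior_mu": 0.0,
    "prior_sigma": 1.0,
    "posterior_mu_init": 0.0,
    "posterior_rho_init": -3.0,
    "type": "Flipout",        # or "Reparameterization"
    "moped_enable": False,
    "moped_delta": 0.5,
}
dnn_to_bnn(model, const_bnn_prior_parameters)  # replaces layers in place

x = torch.randn(2, 3, 224, 224)
output = model(x)
kl = get_kl_loss(model)  # sum of the KL terms over all Bayesian layers
# training loss would then be: task loss + kl / batch_size
```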
@ranganathkrishnan ranganathkrishnan self-requested a review April 26, 2023 18:18
@ranganathkrishnan ranganathkrishnan merged commit b0fc3a1 into main Jun 27, 2023
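
The output-padding commit extends the transposed-convolution flipout layers. A small sketch of what that enables, assuming ConvTranspose2dFlipout exposes an output_padding argument mirroring torch.nn.ConvTranspose2d; the shapes are illustrative:

```python
import torch
from bayesian_torch.layers import ConvTranspose2dFlipout

# output_padding resolves the output-size ambiguity when stride > 1,
# matching the semantics of torch.nn.ConvTranspose2d.
layer = ConvTranspose2dFlipout(in_channels=16, out_channels=8,
                               kernel_size=3, stride=2, output_padding=1)
x = torch.randn(4, 16, 14, 14)
out, kl = layer(x)    # flipout forward returns (output, kl) by default
print(out.shape)      # torch.Size([4, 8, 28, 28])
```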