Describe the feature and the current behavior/state.
Following this article, I tried to implement a perceptron with an apical activation function similar to the real one. A Gaussian is a good approximation of the apical activation, so that is what I implemented; you can find it in my fork. With this activation, even a single perceptron can learn the XOR logic function.
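For reference, here is a minimal sketch of what such an activation could look like in Keras, assuming the Gaussian form exp(-x^2); the actual implementation lives in the fork linked above, so the function name and hyperparameters below are illustrative only:

```python
import tensorflow as tf

# Sketch of a Gaussian activation, g(x) = exp(-x^2) (assumed form, not the
# exact code from the fork). The peak at x = 0 lets a single unit place
# (0,1) and (1,0) on the peak while (0,0) and (1,1) fall in the tails,
# which is why XOR becomes representable by one neuron.
def gaussian(x):
    return tf.exp(-tf.square(x))

# Tiny XOR demo with a single Dense unit (illustrative hyperparameters).
x = tf.constant([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = tf.constant([[0.], [1.], [1.], [0.]])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation=gaussian),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(x, y, epochs=500, verbose=0)
print(model.predict(x))  # expected to approach [0, 1, 1, 0]
```

Training from a random initialization may need a few restarts, but a solution exists: for example, weights (3, 3) and bias -3 give exp(-9) ≈ 0 for (0,0) and (1,1), and exp(0) = 1 for (0,1) and (1,0).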
Relevant information
Are you willing to contribute it? (yes/no): yes.
Are you willing to maintain it going forward? (yes/no): no.
Is there a relevant academic paper? (if so, where): I don't know.
Is there already an implementation in another framework? (if so, where): I did not find anything.
Was it part of tf.contrib? (if so, where): no.
Which API type would this fall under (layer, metric, optimizer, etc.)
activations.
Who will benefit with this feature?
researchers, developers.
Any other info.
@damiano9669 Apologies for the delay in response. I think this is an interesting idea, but I do not have much intuition on how useful it would be to the community at large. Without an academic paper describing the results and uses, it is difficult to assess.
I would propose we leave this issue open, and if community interest builds we could accept a PR for this. Alternatively, if you proposed (and then included) a nice tutorial that shows its use and how it accomplishes something improved or novel, then it would be a potential addition IMO.
TensorFlow Addons is transitioning to a minimal maintenance and release mode. New features will not be added to this repository. For more information, please see our public messaging on this decision: TensorFlow Addons Wind Down
Please consider sending feature requests / contributions to other repositories in the TF community with charters similar to TFA: Keras, Keras-CV, Keras-NLP.