
Merge pull request #661 from FarukhS52/master
Fix typo errors
Hananel-Hazan authored Dec 8, 2023
2 parents 9654163 + 12ca274 commit d8e6286
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -100,7 +100,7 @@ We have provided some simple starter scripts for doing unsupervised learning (le
## Benchmarking
We simulated a network with a population of n Poisson input neurons with firing rates (in Hertz) drawn randomly from U(0, 100), connected all-to-all with an equally-sized population of leaky integrate-and-fire (LIF) neurons, with connection weights sampled from N(0, 1). We varied n systematically from 250 to 10,000 in steps of 250, and ran each simulation with every library for 1,000 ms with a time resolution of dt = 1.0. We tested BindsNET (with CPU and GPU computation), BRIAN2, PyNEST (the Python interface to the NEST SLI interface that runs the C++ NEST core simulator), ANNarchy (with CPU and GPU computation), and BRIAN2genn (the BRIAN2 front-end to the GeNN simulator).

- Several packages, including BRIAN and PyNEST, allow the setting of certain global preferences; e.g., the number of CPU threads, the number of OpenMP processes, etc. We chose these settings for our benchmark study in an attempt to maximize each library's speed, but note that BindsNET requires no setting of such options. Our approach, inheriting the computational model of PyTorch, appears to make the best use of the available hardware, and therefore makes it simple for practicioners to get the best performance from their system with the least effort.
+ Several packages, including BRIAN and PyNEST, allow the setting of certain global preferences; e.g., the number of CPU threads, the number of OpenMP processes, etc. We chose these settings for our benchmark study in an attempt to maximize each library's speed, but note that BindsNET requires no setting of such options. Our approach, inheriting the computational model of PyTorch, appears to make the best use of the available hardware, and therefore makes it simple for practitioners to get the best performance from their system with the least effort.

<p align="middle">
<img src="https://github.com/Hananel-Hazan/bindsnet/blob/master/docs/BindsNET%20benchmark.png" alt="BindsNET%20Benchmark" width="503" height="403">
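For reference, one point of the benchmark sweep described above can be reconstructed as a minimal BindsNET sketch. Class and argument names follow recent BindsNET releases (e.g. `run(inputs=...)` rather than the older `inpts`); the actual benchmark script in the repository may differ:

```python
import torch

from bindsnet.encoding import poisson
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

n = 250      # population size; the benchmark sweeps n from 250 to 10,000 in steps of 250
time = 1000  # simulation length in ms
dt = 1.0     # time resolution

network = Network(dt=dt)
network.add_layer(Input(n=n), name="X")
network.add_layer(LIFNodes(n=n), name="Y")

# All-to-all connection with weights sampled from N(0, 1).
network.add_connection(
    Connection(source=network.layers["X"], target=network.layers["Y"], w=torch.randn(n, n)),
    source="X",
    target="Y",
)

# Input firing rates (in Hz) drawn from U(0, 100), encoded as Poisson spike trains.
rates = 100 * torch.rand(n)
spikes = poisson(datum=rates, time=time, dt=dt)

network.run(inputs={"X": spikes}, time=time)
```

Running the same network on the GPU amounts to moving the network and the spike tensor to CUDA (e.g. `network.to("cuda")`), a path BindsNET inherits from PyTorch.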
2 changes: 1 addition & 1 deletion examples/README.md
@@ -126,7 +126,7 @@ parameters|default|description
## Tensorboard example
*/examples/tensorboard/*
Google's Tensorboard is a powerful tool for analyzing Deep Learning models. It helps visualize data flows and any changes that happen during a training process.
- First developped for Google's Tensorflow, it is now available as **TensorboardX** (https://tensorboardx.readthedocs.io/en/latest/index.html) for Py-Torch or other DL fameworks *(under development)*.
+ First developed for Google's Tensorflow, it is now available as **TensorboardX** (https://tensorboardx.readthedocs.io/en/latest/index.html) for Py-Torch or other DL frameworks *(under development)*.

```tensorboard.py``` shows how to use the ```TensorboardAnalyzer``` class, graphically monitoring the weights of a 2D convolutional SNN during its training process.
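As a rough sketch of the underlying logging calls, using TensorboardX directly rather than the ```TensorboardAnalyzer``` wrapper (the weight tensor below is random stand-in data, not a trained connection):

```python
import torch
import torchvision
from tensorboardX import SummaryWriter

writer = SummaryWriter(logdir="./logs")

# Stand-in for a 2D convolutional connection's weights:
# 32 filters, 1 input channel, 5x5 kernels.
weights = torch.randn(32, 1, 5, 5)

for step in range(10):
    # ... a real training step would update `weights` here ...
    weights += 0.01 * torch.randn_like(weights)

    # Tile the filters into one image and log it; TensorBoard then shows
    # how the kernels evolve over training (`tensorboard --logdir ./logs`).
    grid = torchvision.utils.make_grid(weights, nrow=8, normalize=True)
    writer.add_image("conv2d_weights", grid, global_step=step)

writer.close()
```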

