From dea6866f82621ef903b0d0c2569e3bb86907a262 Mon Sep 17 00:00:00 2001
From: 12345407 <72138442+12345407@users.noreply.github.com>
Date: Thu, 1 Oct 2020 08:15:24 +0530
Subject: [PATCH 1/2] Update README.md

---
 README.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/README.md b/README.md
index 70b03a6..cd21a58 100644
--- a/README.md
+++ b/README.md
@@ -12,9 +12,15 @@ We soon plan to add other useful scripts, such as:
 * Our useful modifications over Caffe - the image augmentation layer, and triplet accuracy layer to aid the training of Visnet
 
 ## Visnet Architecture
+The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes
+
 VisNet is a Convolutional Neural Network (CNN) trained using triplet based deep ranking paradigm. It contains a deep CNN modelled after the VGG-16 network, coupled with parallel shallow convolution layers in order to capture both high-level and low-level image details simultaneously.
 
 ![img](https://drive.google.com/uc?export=view&id=0B4toQpysgMLVd09nNEJEVWc4VmM)
+content://com.android.chrome.FileProvider/images/screenshot/16015201831931118090739.jpg
+
+
+
 
 ## Training
 In order to train you need a set of triplets . For compatibility with Caffe's ImageData layer, you need 3 sets of triplet files (one each for q, p and n). The lines in those files should correspond to triplets, i.e. line#i in each file should correspond to the i'th triplet.
From ff69c43f6626805a9427a750b8269f4f5c0ff54b Mon Sep 17 00:00:00 2001
From: 12345407 <72138442+12345407@users.noreply.github.com>
Date: Thu, 1 Oct 2020 08:16:41 +0530
Subject: [PATCH 2/2] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index cd21a58..1da5760 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ The visual development of hand-centered receptive fields in a neural network mod
 VisNet is a Convolutional Neural Network (CNN) trained using triplet based deep ranking paradigm. It contains a deep CNN modelled after the VGG-16 network, coupled with parallel shallow convolution layers in order to capture both high-level and low-level image details simultaneously.
 
 ![img](https://drive.google.com/uc?export=view&id=0B4toQpysgMLVd09nNEJEVWc4VmM)
-content://com.android.chrome.FileProvider/images/screenshot/16015201831931118090739.jpg
+
 
 
 
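The README context in the first patch describes the training-data format: three files (one each for q, p and n) whose i'th lines together form the i'th triplet, in the line format Caffe's ImageData layer expects. A minimal sketch of writing such aligned files follows; the function name `write_triplet_files` and the dummy `0` label column are assumptions not taken from the patch (the ImageData layer requires a `<path> <label>` line, but a triplet ranking loss does not use the label).

```python
# Sketch: write aligned q/p/n triplet files for Caffe's ImageData layer.
# Assumption: each triplet is a (query, positive, negative) tuple of image
# paths; the trailing "0" is a dummy label required by the "<path> <label>"
# line format but unused by the triplet ranking loss.

def write_triplet_files(triplets, q_path="q.txt", p_path="p.txt", n_path="n.txt"):
    """Write three files whose i'th lines together form the i'th triplet."""
    with open(q_path, "w") as fq, open(p_path, "w") as fp, open(n_path, "w") as fn:
        for q, p, n in triplets:
            fq.write(f"{q} 0\n")   # query image
            fp.write(f"{p} 0\n")   # positive (similar) image
            fn.write(f"{n} 0\n")   # negative (dissimilar) image
```

Keeping the three files line-aligned (rather than one combined file) matches the constraint in the README that line #i in each file must refer to the same triplet.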