How to handle multiple data layers? #1381

Closed

zhushun0008 opened this issue Oct 30, 2014 · 3 comments
@zhushun0008

The tutorials say that multiple inputs are useful for non-trivial ground truth: one data layer loads the actual data and the other loads the ground truth in lock-step.
I do not understand what "lock-step" means here. I have two datasets: one of noisy images and one of the corresponding ground-truth images, and I feed them into Caffe with two data layers. The important thing is to make sure the data stays in one-to-one correspondence (each noisy image paired with its ground-truth image). I am worried about this because the training behaves strangely and the loss does not decrease.
I do not know what is wrong with my net.
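As I understand it, "lock-step" means the two DATA layers read their records in the same order, so the pairing depends entirely on both LMDBs being written in matching order. Here is a minimal sketch of how that could be checked, assuming the `lmdb` Python package and that both databases were written with the same record keys (the paths are the ones from my config below):

import lmdb

def check_alignment(low_path, high_path, limit=1000):
    # Walk both databases in parallel; LMDB iterates in key order, so
    # identical key sequences mean the two DATA layers see matching records.
    with lmdb.open(low_path, readonly=True).begin() as low_txn, \
         lmdb.open(high_path, readonly=True).begin() as high_txn:
        for i, ((low_key, _), (high_key, _)) in enumerate(
                zip(low_txn.cursor(), high_txn.cursor())):
            if low_key != high_key:
                print('mismatch at record %d: %r vs %r' % (i, low_key, high_key))
                return False
            if i + 1 >= limit:
                break
    return True

print(check_alignment('examples/low_tohighresolution_net/lowImage_train_lmdb',
                      'examples/low_tohighresolution_net/highImage_train_lmdb'))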
Caffe initializes fine (it prints "Solver scaffolding done."). The solver and network configuration are shown below:

test_iter: 200
test_interval: 1000
base_lr: 0.01
display: 1000
max_iter: 152500
lr_policy: "step"
gamma: 0.1
momentum: 0
weight_decay: 0
stepsize: 30000
snapshot: 10000
snapshot_prefix: "models/low_tohighresolution_net/low_tohighresolution_train"
random_seed: 1701
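
For reference, with lr_policy: "step" Caffe computes the learning rate as base_lr * gamma^floor(iter / stepsize); a quick Python sketch of the schedule the settings above produce:

base_lr, gamma, stepsize = 0.01, 0.1, 30000

def step_lr(iteration):
    # Caffe's "step" policy: lr = base_lr * gamma ^ floor(iter / stepsize)
    return base_lr * gamma ** (iteration // stepsize)

for it in (0, 30000, 60000, 90000, 120000, 152500):
    print(it, step_lr(it))   # decays by 10x every 30000 iterations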

name: "myNet"
layers {
  top: "lowdata"
  name: "lowdata"
  type: DATA
  data_param {
    source: "examples/low_tohighresolution_net/lowImage_train_lmdb"
    batch_size: 128
    backend: LMDB
  }
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
}
layers {
  top: "highdata"
  name: "highdata"
  type: DATA
  data_param {
    source: "examples/low_tohighresolution_net/highImage_train_lmdb"
    batch_size: 128
    backend: LMDB
  }
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
}
layers {
  bottom: "lowdata"
  top: "conv1"
  name: "conv1"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 64
    kernel_size: 9
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "conv1"
  top: "conv1"
  name: "relu1"
  type: RELU
}
layers {
  bottom: "conv1"
  top: "conv2"
  name: "conv2"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 32
    kernel_size: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 1
    }
  }
}
layers {
  bottom: "conv2"
  top: "conv2"
  name: "relu2"
  type: RELU
}
layers {
  bottom: "conv2"
  top: "conv3"
  name: "conv3"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 1
    kernel_size: 5
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "conv3"
  top: "conv3"
  name: "relu3"
  type: RELU
}
layers {
  bottom: "conv3"
  bottom: "highdata"
  top: "loss"
  name: "loss"
  type: EUCLIDEAN_LOSS
}
state {
  phase: TRAIN
}
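
For reference, the EUCLIDEAN_LOSS layer at the end computes loss = 1/(2N) * sum_i ||conv3_i - highdata_i||^2 over the batch; a numpy sketch of the same quantity:

import numpy as np

def euclidean_loss(pred, target):
    # Caffe's EuclideanLoss convention: 1/(2N) * sum of squared
    # differences, where N is the batch size.
    n = pred.shape[0]
    diff = (pred - target).reshape(n, -1)
    return (diff ** 2).sum() / (2.0 * n)

The training log: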
I1030 20:10:26.099817 15224 solver.cpp:160] Solving myNet
I1030 20:10:26.099863 15224 solver.cpp:247] Iteration 0, Testing net (#0)
I1030 20:11:17.045397 15224 solver.cpp:298]     Test net output #0: loss = 6.87695 (* 1 = 6.87695 loss)
I1030 20:11:17.106524 15224 solver.cpp:191] Iteration 0, loss = 6.89616
I1030 20:11:17.106585 15224 solver.cpp:206]     Train net output #0: loss = 6.89616 (* 1 = 6.89616 loss)
I1030 20:11:17.106621 15224 solver.cpp:403] Iteration 0, lr = 0.01
I1030 20:12:59.435927 15224 solver.cpp:247] Iteration 1000, Testing net (#0)
I1030 20:13:50.402863 15224 solver.cpp:298]     Test net output #0: loss = 50.0646 (* 1 = 50.0646 loss)
I1030 20:13:50.463141 15224 solver.cpp:191] Iteration 1000, loss = 51.3084
I1030 20:13:50.463181 15224 solver.cpp:206]     Train net output #0: loss = 51.3084 (* 1 = 51.3084 loss)
I1030 20:13:50.463214 15224 solver.cpp:403] Iteration 1000, lr = 0.01
I1030 20:15:32.789667 15224 solver.cpp:247] Iteration 2000, Testing net (#0)
I1030 20:16:23.757944 15224 solver.cpp:298]     Test net output #0: loss = 50.0636 (* 1 = 50.0636 loss)
I1030 20:16:23.818539 15224 solver.cpp:191] Iteration 2000, loss = 54.8957
I1030 20:16:23.818614 15224 solver.cpp:206]     Train net output #0: loss = 54.8957 (* 1 = 54.8957 loss)
I1030 20:16:23.818640 15224 solver.cpp:403] Iteration 2000, lr = 0.01
I1030 20:18:06.259873 15224 solver.cpp:247] Iteration 3000, Testing net (#0)
I1030 20:18:57.236999 15224 solver.cpp:298]     Test net output #0: loss = 50.0647 (* 1 = 50.0647 loss)
I1030 20:18:57.297591 15224 solver.cpp:191] Iteration 3000, loss = 52.0777
I1030 20:18:57.297629 15224 solver.cpp:206]     Train net output #0: loss = 52.0777 (* 1 = 52.0777 loss)
I1030 20:18:57.297668 15224 solver.cpp:403] Iteration 3000, lr = 0.01
I1030 20:20:39.633867 15224 solver.cpp:247] Iteration 4000, Testing net (#0)
I1030 20:21:30.606236 15224 solver.cpp:298]     Test net output #0: loss = 50.0638 (* 1 = 50.0638 loss)
I1030 20:21:30.667162 15224 solver.cpp:191] Iteration 4000, loss = 51.5993
I1030 20:21:30.667229 15224 solver.cpp:206]     Train net output #0: loss = 51.5993 (* 1 = 51.5993 loss)
I1030 20:21:30.667258 15224 solver.cpp:403] Iteration 4000, lr = 0.01
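
One way to check whether the two data layers really stay in lock-step at training time is to forward a single batch in pycaffe and compare the blobs directly. A sketch, assuming pycaffe is built and the net definition above is saved as train.prototxt (a file name chosen here for illustration):

import caffe

caffe.set_mode_cpu()
net = caffe.Net('train.prototxt', caffe.TRAIN)
net.forward()                      # pulls one batch from both LMDBs

low = net.blobs['lowdata'].data    # shape (128, C, H, W), scaled by 1/256
high = net.blobs['highdata'].data
print(low.shape, high.shape)
# Dump the first pair to disk and inspect them by eye; if they do not
# depict the same scene, the two LMDBs are not in lock-step.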
@dennis2030

Let me guess: did you set the "--shuffle" option when you created the LMDBs?
If you shuffle each database independently, you lose the one-to-one mapping.
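
If you do want shuffled training data, one option (a sketch; the list file names here are hypothetical) is to shuffle the two file lists with the same permutation before building the LMDBs, then run convert_imageset on each list without --shuffle:

import random

with open('low_list.txt') as f:
    low = f.read().splitlines()
with open('high_list.txt') as f:
    high = f.read().splitlines()
assert len(low) == len(high)

pairs = list(zip(low, high))
random.seed(1701)                  # any fixed seed: same order for both lists
random.shuffle(pairs)

# convert_imageset expects "filename label" lines; the label is unused here.
with open('low_shuffled.txt', 'w') as f:
    f.writelines(p[0] + ' 0\n' for p in pairs)
with open('high_shuffled.txt', 'w') as f:
    f.writelines(p[1] + ' 0\n' for p in pairs)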

@shelhamer
Member

Please ask on the caffe-users group.

@csyking

csyking commented Sep 8, 2016

@zhushun0008 Have you solved it?
