Qstairs

Introducing my startup activities and technologies such as Android and image recognition (AI, deep learning, etc.)


[Image Recognition] Using "Caffe" on Windows! MNIST in Practice


Introduction

In the previous post, as a first step toward trying the Windows build of "Caffe" on MNIST,
I covered preparing the MNIST data. (See below.)

qstairs.hatenablog.com

With the preparation complete, this time we actually run it.

How to Run

The executable we run is "caffe.exe", which we built two posts ago.

Note: from here on, the current directory is the folder containing "caffe.exe".

First, copy the examples folder from directly under the caffe-master folder
into the current directory.

Next, copy the following folders, created in the previous post, into "current directory\examples\mnist":

  • mnist_test_lmdb
  • mnist_train_lmdb
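The copy steps above can be sketched as a small helper. This is an illustrative sketch, not part of the original post: the function name `prepare_mnist_dirs` is hypothetical, and it assumes the two LMDB folders sit directly under caffe-master (adjust the source paths to wherever you created them).

```python
import shutil
from pathlib import Path

def prepare_mnist_dirs(caffe_master, workdir):
    """Copy the examples folder and the two MNIST LMDB folders into the
    working directory next to caffe.exe, as described above."""
    caffe_master = Path(caffe_master)
    workdir = Path(workdir)
    # 1) the examples folder from directly under caffe-master
    shutil.copytree(caffe_master / "examples", workdir / "examples",
                    dirs_exist_ok=True)
    # 2) the LMDB folders created in the previous post
    #    (assumed here to be under caffe-master; adjust as needed)
    for name in ("mnist_test_lmdb", "mnist_train_lmdb"):
        src = caffe_master / name
        if src.exists():
            shutil.copytree(src, workdir / "examples" / "mnist" / name,
                            dirs_exist_ok=True)
    return workdir / "examples" / "mnist"
```

Using `dirs_exist_ok=True` (Python 3.8+) lets you rerun the preparation without first deleting the target folders.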

Then open a command prompt, move to the current directory, and run the following; this performs training and (presumably) evaluation as well:

caffe.exe train --solver=examples\mnist\lenet_solver.prototxt
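The solver settings echoed in the log below (base_lr: 0.01, gamma: 0.0001, power: 0.75) use Caffe's "inv" learning-rate policy, lr = base_lr * (1 + gamma * iter)^(-power). As a sanity check, the lr values that caffe prints during training can be reproduced like this (a sketch, using the parameters from this run):

```python
def inv_lr(iteration, base_lr=0.01, gamma=0.0001, power=0.75):
    # Caffe's "inv" policy: base_lr * (1 + gamma * iter)^(-power)
    return base_lr * (1.0 + gamma * iteration) ** (-power)

print(round(inv_lr(0), 8))    # 0.01
print(round(inv_lr(100), 8))  # 0.00992565, matching "Iteration 100, lr = 0.00992565"
```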


The result of the run is shown below.
(I have included the full output log.)
The test accuracy came out to 99.11%.

The accuracy appears four lines from the bottom of the output log:
Test net output #0: accuracy = 0.9911
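If you redirect the log to a file, the test accuracy can also be pulled out programmatically. This is an illustrative sketch (the helper name `last_test_accuracy` is not from the original post); the regex follows the "Test net output #0: accuracy = ..." lines shown in the log below:

```python
import re

def last_test_accuracy(log_text):
    """Return the last 'Test net output #0: accuracy = ...' value, or None."""
    hits = re.findall(r"Test net output #0: accuracy = ([0-9.]+)", log_text)
    return float(hits[-1]) if hits else None

sample = "I0609 ... solver.cpp:404]     Test net output #0: accuracy = 0.9911"
print(last_test_accuracy(sample))  # 0.9911
```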

E:\99_tmp\caffe>caffe.exe train --solver=examples\mnist\lenet_solver.prototxt
I0609 21:58:55.126268  2348 caffe.cpp:186] Using GPUs 0
I0609 21:58:55.404929  2348 caffe.cpp:191] GPU 0: GeForce GTX 745
I0609 21:58:55.596211  2348 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0609 21:58:55.597213  2348 solver.cpp:48] Initializing solver from parameters:
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
device_id: 0
net: "examples/mnist/lenet_train_test.prototxt"
I0609 21:58:55.601205  2348 solver.cpp:91] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
I0609 21:58:55.602207  2348 net.cpp:313] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0609 21:58:55.603210  2348 net.cpp:313] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0609 21:58:55.604213  2348 net.cpp:49] Initializing net from parameters:
name: "LeNet"
state {
  phase: TRAIN
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/data/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0609 21:58:55.616746  2348 layer_factory.hpp:77] Creating layer mnist
I0609 21:58:55.617748  2348 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0609 21:58:55.619688  2348 net.cpp:91] Creating Layer mnist
I0609 21:58:55.620190  2348 net.cpp:399] mnist -> data
I0609 21:58:55.621192  2348 net.cpp:399] mnist -> label
I0609 21:58:55.620694  5676 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0609 21:58:55.623699  5676 db_lmdb.cpp:40] Opened lmdb examples/mnist/data/mnist_train_lmdb
I0609 21:58:58.823786  2348 data_layer.cpp:41] output data size: 64,1,28,28
I0609 21:58:58.826282  2348 net.cpp:141] Setting up mnist
I0609 21:58:58.827285  2348 net.cpp:148] Top shape: 64 1 28 28 (50176)
I0609 21:58:58.828287  2348 net.cpp:148] Top shape: 64 (64)
I0609 21:58:58.827285 13328 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0609 21:58:58.828789  2348 net.cpp:156] Memory required for data: 200960
I0609 21:58:58.831279  2348 layer_factory.hpp:77] Creating layer conv1
I0609 21:58:58.832279  2348 net.cpp:91] Creating Layer conv1
I0609 21:58:58.833281  2348 net.cpp:425] conv1 <- data
I0609 21:58:58.833783  2348 net.cpp:399] conv1 -> conv1
I0609 21:58:59.551329  2348 net.cpp:141] Setting up conv1
I0609 21:58:59.551831  2348 net.cpp:148] Top shape: 64 20 24 24 (737280)
I0609 21:58:59.552834  2348 net.cpp:156] Memory required for data: 3150080
I0609 21:58:59.553334  2348 layer_factory.hpp:77] Creating layer pool1
I0609 21:58:59.554337  2348 net.cpp:91] Creating Layer pool1
I0609 21:58:59.554839  2348 net.cpp:425] pool1 <- conv1
I0609 21:58:59.555344  2348 net.cpp:399] pool1 -> pool1
I0609 21:58:59.555841  2348 net.cpp:141] Setting up pool1
I0609 21:58:59.556844  2348 net.cpp:148] Top shape: 64 20 12 12 (184320)
I0609 21:58:59.557345  2348 net.cpp:156] Memory required for data: 3887360
I0609 21:58:59.558348  2348 layer_factory.hpp:77] Creating layer conv2
I0609 21:58:59.558850  2348 net.cpp:91] Creating Layer conv2
I0609 21:58:59.559833  2348 net.cpp:425] conv2 <- pool1
I0609 21:58:59.560335  2348 net.cpp:399] conv2 -> conv2
I0609 21:58:59.562841  2348 net.cpp:141] Setting up conv2
I0609 21:58:59.563344  2348 net.cpp:148] Top shape: 64 50 8 8 (204800)
I0609 21:58:59.564345  2348 net.cpp:156] Memory required for data: 4706560
I0609 21:58:59.564847  2348 layer_factory.hpp:77] Creating layer pool2
I0609 21:58:59.565850  2348 net.cpp:91] Creating Layer pool2
I0609 21:58:59.566351  2348 net.cpp:425] pool2 <- conv2
I0609 21:58:59.566853  2348 net.cpp:399] pool2 -> pool2
I0609 21:58:59.567862  2348 net.cpp:141] Setting up pool2
I0609 21:58:59.568356  2348 net.cpp:148] Top shape: 64 50 4 4 (51200)
I0609 21:58:59.568356  2348 net.cpp:156] Memory required for data: 4911360
I0609 21:58:59.569713  2348 layer_factory.hpp:77] Creating layer ip1
I0609 21:58:59.570718  2348 net.cpp:91] Creating Layer ip1
I0609 21:58:59.571219  2348 net.cpp:425] ip1 <- pool2
I0609 21:58:59.571720  2348 net.cpp:399] ip1 -> ip1
I0609 21:58:59.575229  2348 net.cpp:141] Setting up ip1
I0609 21:58:59.575731  2348 net.cpp:148] Top shape: 64 500 (32000)
I0609 21:58:59.576232  2348 net.cpp:156] Memory required for data: 5039360
I0609 21:58:59.576733  2348 layer_factory.hpp:77] Creating layer relu1
I0609 21:58:59.577736  2348 net.cpp:91] Creating Layer relu1
I0609 21:58:59.578238  2348 net.cpp:425] relu1 <- ip1
I0609 21:58:59.579241  2348 net.cpp:386] relu1 -> ip1 (in-place)
I0609 21:58:59.580744  2348 net.cpp:141] Setting up relu1
I0609 21:58:59.580744  2348 net.cpp:148] Top shape: 64 500 (32000)
I0609 21:58:59.581746  2348 net.cpp:156] Memory required for data: 5167360
I0609 21:58:59.582752  2348 layer_factory.hpp:77] Creating layer ip2
I0609 21:58:59.583250  2348 net.cpp:91] Creating Layer ip2
I0609 21:58:59.583752  2348 net.cpp:425] ip2 <- ip1
I0609 21:58:59.584264  2348 net.cpp:399] ip2 -> ip2
I0609 21:58:59.585757  2348 net.cpp:141] Setting up ip2
I0609 21:58:59.586258  2348 net.cpp:148] Top shape: 64 10 (640)
I0609 21:58:59.586771  2348 net.cpp:156] Memory required for data: 5169920
I0609 21:58:59.587764  2348 layer_factory.hpp:77] Creating layer loss
I0609 21:58:59.588264  2348 net.cpp:91] Creating Layer loss
I0609 21:58:59.589267  2348 net.cpp:425] loss <- ip2
I0609 21:58:59.589768  2348 net.cpp:425] loss <- label
I0609 21:58:59.590270  2348 net.cpp:399] loss -> loss
I0609 21:58:59.590771  2348 layer_factory.hpp:77] Creating layer loss
I0609 21:58:59.591773  2348 net.cpp:141] Setting up loss
I0609 21:58:59.592274  2348 net.cpp:148] Top shape: (1)
I0609 21:58:59.593277  2348 net.cpp:151]     with loss weight 1
I0609 21:58:59.593780  2348 net.cpp:156] Memory required for data: 5169924
I0609 21:58:59.594781  2348 net.cpp:217] loss needs backward computation.
I0609 21:58:59.595283  2348 net.cpp:217] ip2 needs backward computation.
I0609 21:58:59.595788  2348 net.cpp:217] relu1 needs backward computation.
I0609 21:58:59.596787  2348 net.cpp:217] ip1 needs backward computation.
I0609 21:58:59.597288  2348 net.cpp:217] pool2 needs backward computation.
I0609 21:58:59.598290  2348 net.cpp:217] conv2 needs backward computation.
I0609 21:58:59.598290  2348 net.cpp:217] pool1 needs backward computation.
I0609 21:58:59.600255  2348 net.cpp:217] conv1 needs backward computation.
I0609 21:58:59.600757  2348 net.cpp:219] mnist does not need backward computation.
I0609 21:58:59.601759  2348 net.cpp:261] This network produces output loss
I0609 21:58:59.602260  2348 net.cpp:274] Network initialization done.
I0609 21:58:59.603263  2348 solver.cpp:181] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
I0609 21:58:59.604266  2348 net.cpp:313] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0609 21:58:59.605268  2348 net.cpp:49] Initializing net from parameters:
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/data/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0609 21:58:59.617784  2348 layer_factory.hpp:77] Creating layer mnist
I0609 21:58:59.618788  2348 net.cpp:91] Creating Layer mnist
I0609 21:58:59.621775  2348 net.cpp:399] mnist -> data
I0609 21:58:59.622778  2348 net.cpp:399] mnist -> label
I0609 21:58:59.622287 12064 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0609 21:58:59.624783 12064 db_lmdb.cpp:40] Opened lmdb examples/mnist/data/mnist_test_lmdb
I0609 21:58:59.625785  2348 data_layer.cpp:41] output data size: 100,1,28,28
I0609 21:58:59.627791  2348 net.cpp:141] Setting up mnist
I0609 21:58:59.628293  2348 net.cpp:148] Top shape: 100 1 28 28 (78400)
I0609 21:58:59.628293 16168 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0609 21:58:59.628293  2348 net.cpp:148] Top shape: 100 (100)
I0609 21:58:59.631743  2348 net.cpp:156] Memory required for data: 314000
I0609 21:58:59.633249  2348 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0609 21:58:59.634250  2348 net.cpp:91] Creating Layer label_mnist_1_split
I0609 21:58:59.634752  2348 net.cpp:425] label_mnist_1_split <- label
I0609 21:58:59.635253  2348 net.cpp:399] label_mnist_1_split -> label_mnist_1_split_0
I0609 21:58:59.636255  2348 net.cpp:399] label_mnist_1_split -> label_mnist_1_split_1
I0609 21:58:59.636757  2348 net.cpp:141] Setting up label_mnist_1_split
I0609 21:58:59.637259  2348 net.cpp:148] Top shape: 100 (100)
I0609 21:58:59.637760  2348 net.cpp:148] Top shape: 100 (100)
I0609 21:58:59.638262  2348 net.cpp:156] Memory required for data: 314800
I0609 21:58:59.638262  2348 layer_factory.hpp:77] Creating layer conv1
I0609 21:58:59.639732  2348 net.cpp:91] Creating Layer conv1
I0609 21:58:59.640235  2348 net.cpp:425] conv1 <- data
I0609 21:58:59.640736  2348 net.cpp:399] conv1 -> conv1
I0609 21:58:59.642741  2348 net.cpp:141] Setting up conv1
I0609 21:58:59.643244  2348 net.cpp:148] Top shape: 100 20 24 24 (1152000)
I0609 21:58:59.643744  2348 net.cpp:156] Memory required for data: 4922800
I0609 21:58:59.644747  2348 layer_factory.hpp:77] Creating layer pool1
I0609 21:58:59.645248  2348 net.cpp:91] Creating Layer pool1
I0609 21:58:59.646250  2348 net.cpp:425] pool1 <- conv1
I0609 21:58:59.646752  2348 net.cpp:399] pool1 -> pool1
I0609 21:58:59.647254  2348 net.cpp:141] Setting up pool1
I0609 21:58:59.647755  2348 net.cpp:148] Top shape: 100 20 12 12 (288000)
I0609 21:58:59.649691  2348 net.cpp:156] Memory required for data: 6074800
I0609 21:58:59.650194  2348 layer_factory.hpp:77] Creating layer conv2
I0609 21:58:59.651197  2348 net.cpp:91] Creating Layer conv2
I0609 21:58:59.651698  2348 net.cpp:425] conv2 <- pool1
I0609 21:58:59.652199  2348 net.cpp:399] conv2 -> conv2
I0609 21:58:59.654706  2348 net.cpp:141] Setting up conv2
I0609 21:58:59.655207  2348 net.cpp:148] Top shape: 100 50 8 8 (320000)
I0609 21:58:59.655709  2348 net.cpp:156] Memory required for data: 7354800
I0609 21:58:59.656711  2348 layer_factory.hpp:77] Creating layer pool2
I0609 21:58:59.657213  2348 net.cpp:91] Creating Layer pool2
I0609 21:58:59.657714  2348 net.cpp:425] pool2 <- conv2
I0609 21:58:59.658215  2348 net.cpp:399] pool2 -> pool2
I0609 21:58:59.658717  2348 net.cpp:141] Setting up pool2
I0609 21:58:59.659699  2348 net.cpp:148] Top shape: 100 50 4 4 (80000)
I0609 21:58:59.660202  2348 net.cpp:156] Memory required for data: 7674800
I0609 21:58:59.660717  2348 layer_factory.hpp:77] Creating layer ip1
I0609 21:58:59.661706  2348 net.cpp:91] Creating Layer ip1
I0609 21:58:59.662207  2348 net.cpp:425] ip1 <- pool2
I0609 21:58:59.663209  2348 net.cpp:399] ip1 -> ip1
I0609 21:58:59.666218  2348 net.cpp:141] Setting up ip1
I0609 21:58:59.666719  2348 net.cpp:148] Top shape: 100 500 (50000)
I0609 21:58:59.667722  2348 net.cpp:156] Memory required for data: 7874800
I0609 21:58:59.668725  2348 layer_factory.hpp:77] Creating layer relu1
I0609 21:58:59.668725  2348 net.cpp:91] Creating Layer relu1
I0609 21:58:59.670668  2348 net.cpp:425] relu1 <- ip1
I0609 21:58:59.671670  2348 net.cpp:386] relu1 -> ip1 (in-place)
I0609 21:58:59.673177  2348 net.cpp:141] Setting up relu1
I0609 21:58:59.673676  2348 net.cpp:148] Top shape: 100 500 (50000)
I0609 21:58:59.674679  2348 net.cpp:156] Memory required for data: 8074800
I0609 21:58:59.675180  2348 layer_factory.hpp:77] Creating layer ip2
I0609 21:58:59.676182  2348 net.cpp:91] Creating Layer ip2
I0609 21:58:59.676684  2348 net.cpp:425] ip2 <- ip1
I0609 21:58:59.677197  2348 net.cpp:399] ip2 -> ip2
I0609 21:58:59.678189  2348 net.cpp:141] Setting up ip2
I0609 21:58:59.678689  2348 net.cpp:148] Top shape: 100 10 (1000)
I0609 21:58:59.679190  2348 net.cpp:156] Memory required for data: 8078800
I0609 21:58:59.679692  2348 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0609 21:58:59.680694  2348 net.cpp:91] Creating Layer ip2_ip2_0_split
I0609 21:58:59.681196  2348 net.cpp:425] ip2_ip2_0_split <- ip2
I0609 21:58:59.682199  2348 net.cpp:399] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0609 21:58:59.682700  2348 net.cpp:399] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0609 21:58:59.683702  2348 net.cpp:141] Setting up ip2_ip2_0_split
I0609 21:58:59.684204  2348 net.cpp:148] Top shape: 100 10 (1000)
I0609 21:58:59.684705  2348 net.cpp:148] Top shape: 100 10 (1000)
I0609 21:58:59.685206  2348 net.cpp:156] Memory required for data: 8086800
I0609 21:58:59.686209  2348 layer_factory.hpp:77] Creating layer accuracy
I0609 21:58:59.686710  2348 net.cpp:91] Creating Layer accuracy
I0609 21:58:59.687211  2348 net.cpp:425] accuracy <- ip2_ip2_0_split_0
I0609 21:58:59.688215  2348 net.cpp:425] accuracy <- label_mnist_1_split_0
I0609 21:58:59.688716  2348 net.cpp:399] accuracy -> accuracy
I0609 21:58:59.689719  2348 net.cpp:141] Setting up accuracy
I0609 21:58:59.690220  2348 net.cpp:148] Top shape: (1)
I0609 21:58:59.690721  2348 net.cpp:156] Memory required for data: 8086804
I0609 21:58:59.691725  2348 layer_factory.hpp:77] Creating layer loss
I0609 21:58:59.692225  2348 net.cpp:91] Creating Layer loss
I0609 21:58:59.693228  2348 net.cpp:425] loss <- ip2_ip2_0_split_1
I0609 21:58:59.693730  2348 net.cpp:425] loss <- label_mnist_1_split_1
I0609 21:58:59.694231  2348 net.cpp:399] loss -> loss
I0609 21:58:59.695233  2348 layer_factory.hpp:77] Creating layer loss
I0609 21:58:59.696238  2348 net.cpp:141] Setting up loss
I0609 21:58:59.696738  2348 net.cpp:148] Top shape: (1)
I0609 21:58:59.697239  2348 net.cpp:151]     with loss weight 1
I0609 21:58:59.697741  2348 net.cpp:156] Memory required for data: 8086808
I0609 21:58:59.698745  2348 net.cpp:217] loss needs backward computation.
I0609 21:58:59.698745  2348 net.cpp:219] accuracy does not need backward computation.
I0609 21:58:59.700230  2348 net.cpp:217] ip2_ip2_0_split needs backward computation.
I0609 21:58:59.700731  2348 net.cpp:217] ip2 needs backward computation.
I0609 21:58:59.701735  2348 net.cpp:217] relu1 needs backward computation.
I0609 21:58:59.702235  2348 net.cpp:217] ip1 needs backward computation.
I0609 21:58:59.702741  2348 net.cpp:217] pool2 needs backward computation.
I0609 21:58:59.703739  2348 net.cpp:217] conv2 needs backward computation.
I0609 21:58:59.704241  2348 net.cpp:217] pool1 needs backward computation.
I0609 21:58:59.705242  2348 net.cpp:217] conv1 needs backward computation.
I0609 21:58:59.706245  2348 net.cpp:219] label_mnist_1_split does not need backward computation.
I0609 21:58:59.707249  2348 net.cpp:219] mnist does not need backward computation.
I0609 21:58:59.707751  2348 net.cpp:261] This network produces output accuracy
I0609 21:58:59.708753  2348 net.cpp:261] This network produces output loss
I0609 21:58:59.709735  2348 net.cpp:274] Network initialization done.
I0609 21:58:59.710247  2348 solver.cpp:60] Solver scaffolding done.
I0609 21:58:59.711746  2348 caffe.cpp:220] Starting Optimization
I0609 21:58:59.712244  2348 solver.cpp:279] Solving LeNet
I0609 21:58:59.712746  2348 solver.cpp:280] Learning Rate Policy: inv
I0609 21:58:59.715257  2348 solver.cpp:337] Iteration 0, Testing net (#0)
I0609 21:59:00.121788  2348 solver.cpp:404]     Test net output #0: accuracy = 0.0815
I0609 21:59:00.122314  2348 solver.cpp:404]     Test net output #1: loss = 2.40893 (* 1 = 2.40893 loss)
I0609 21:59:00.132297  2348 solver.cpp:228] Iteration 0, loss = 2.44759
I0609 21:59:00.132798  2348 solver.cpp:244]     Train net output #0: loss = 2.44759 (* 1 = 2.44759 loss)
I0609 21:59:00.133800  2348 sgd_solver.cpp:106] Iteration 0, lr = 0.01
I0609 21:59:01.148155  2348 solver.cpp:228] Iteration 100, loss = 0.251657
I0609 21:59:01.148669  2348 solver.cpp:244]     Train net output #0: loss = 0.251657 (* 1 = 0.251657 loss)
I0609 21:59:01.150401  2348 sgd_solver.cpp:106] Iteration 100, lr = 0.00992565
I0609 21:59:02.165963  2348 solver.cpp:228] Iteration 200, loss = 0.169378
I0609 21:59:02.166465  2348 solver.cpp:244]     Train net output #0: loss = 0.169378 (* 1 = 0.169378 loss)
I0609 21:59:02.167490  2348 sgd_solver.cpp:106] Iteration 200, lr = 0.00985258
I0609 21:59:03.187507  2348 solver.cpp:228] Iteration 300, loss = 0.19771
I0609 21:59:03.188009  2348 solver.cpp:244]     Train net output #0: loss = 0.19771 (* 1 = 0.19771 loss)
I0609 21:59:03.189012  2348 sgd_solver.cpp:106] Iteration 300, lr = 0.00978075
I0609 21:59:04.209780  2348 solver.cpp:228] Iteration 400, loss = 0.0709589
I0609 21:59:04.210305  2348 solver.cpp:244]     Train net output #0: loss = 0.0709588 (* 1 = 0.0709588 loss)
I0609 21:59:04.211309  2348 sgd_solver.cpp:106] Iteration 400, lr = 0.00971013
I0609 21:59:05.221642  2348 solver.cpp:337] Iteration 500, Testing net (#0)
I0609 21:59:05.616386  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9703
I0609 21:59:05.617415  2348 solver.cpp:404]     Test net output #1: loss = 0.0891924 (* 1 = 0.0891924 loss)
I0609 21:59:05.620885  2348 solver.cpp:228] Iteration 500, loss = 0.0975274
I0609 21:59:05.621381  2348 solver.cpp:244]     Train net output #0: loss = 0.0975274 (* 1 = 0.0975274 loss)
I0609 21:59:05.622406  2348 sgd_solver.cpp:106] Iteration 500, lr = 0.00964069
I0609 21:59:06.647480  2348 solver.cpp:228] Iteration 600, loss = 0.0815395
I0609 21:59:06.648496  2348 solver.cpp:244]     Train net output #0: loss = 0.0815395 (* 1 = 0.0815395 loss)
I0609 21:59:06.650513  2348 sgd_solver.cpp:106] Iteration 600, lr = 0.0095724
I0609 21:59:07.672715  2348 solver.cpp:228] Iteration 700, loss = 0.143028
I0609 21:59:07.673744  2348 solver.cpp:244]     Train net output #0: loss = 0.143028 (* 1 = 0.143028 loss)
I0609 21:59:07.674721  2348 sgd_solver.cpp:106] Iteration 700, lr = 0.00950522
I0609 21:59:08.695850  2348 solver.cpp:228] Iteration 800, loss = 0.191974
I0609 21:59:08.696853  2348 solver.cpp:244]     Train net output #0: loss = 0.191974 (* 1 = 0.191974 loss)
I0609 21:59:08.698863  2348 sgd_solver.cpp:106] Iteration 800, lr = 0.00943913
I0609 21:59:09.723435  2348 solver.cpp:228] Iteration 900, loss = 0.164836
I0609 21:59:09.724433  2348 solver.cpp:244]     Train net output #0: loss = 0.164836 (* 1 = 0.164836 loss)
I0609 21:59:09.725436  2348 sgd_solver.cpp:106] Iteration 900, lr = 0.00937411
I0609 21:59:10.738751  2348 solver.cpp:337] Iteration 1000, Testing net (#0)
I0609 21:59:11.137553  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9815
I0609 21:59:11.138049  2348 solver.cpp:404]     Test net output #1: loss = 0.0602186 (* 1 = 0.0602186 loss)
I0609 21:59:11.141587  2348 solver.cpp:228] Iteration 1000, loss = 0.105756
I0609 21:59:11.142087  2348 solver.cpp:244]     Train net output #0: loss = 0.105756 (* 1 = 0.105756 loss)
I0609 21:59:11.143090  2348 sgd_solver.cpp:106] Iteration 1000, lr = 0.00931012
I0609 21:59:12.167500  2348 solver.cpp:228] Iteration 1100, loss = 0.00832521
I0609 21:59:12.168500  2348 solver.cpp:244]     Train net output #0: loss = 0.0083252 (* 1 = 0.0083252 loss)
I0609 21:59:12.169531  2348 sgd_solver.cpp:106] Iteration 1100, lr = 0.00924715
I0609 21:59:13.195940  2348 solver.cpp:228] Iteration 1200, loss = 0.0214419
I0609 21:59:13.196943  2348 solver.cpp:244]     Train net output #0: loss = 0.0214419 (* 1 = 0.0214419 loss)
I0609 21:59:13.197944  2348 sgd_solver.cpp:106] Iteration 1200, lr = 0.00918515
I0609 21:59:14.228776  2348 solver.cpp:228] Iteration 1300, loss = 0.0214616
I0609 21:59:14.229806  2348 solver.cpp:244]     Train net output #0: loss = 0.0214616 (* 1 = 0.0214616 loss)
I0609 21:59:14.231287  2348 sgd_solver.cpp:106] Iteration 1300, lr = 0.00912412
I0609 21:59:15.256608  2348 solver.cpp:228] Iteration 1400, loss = 0.00489997
I0609 21:59:15.257612  2348 solver.cpp:244]     Train net output #0: loss = 0.00489997 (* 1 = 0.00489997 loss)
I0609 21:59:15.258613  2348 sgd_solver.cpp:106] Iteration 1400, lr = 0.00906403
I0609 21:59:16.276813  2348 solver.cpp:337] Iteration 1500, Testing net (#0)
I0609 21:59:16.680711  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9856
I0609 21:59:16.681715  2348 solver.cpp:404]     Test net output #1: loss = 0.0481414 (* 1 = 0.0481414 loss)
I0609 21:59:16.685223  2348 solver.cpp:228] Iteration 1500, loss = 0.0776493
I0609 21:59:16.686233  2348 solver.cpp:244]     Train net output #0: loss = 0.0776493 (* 1 = 0.0776493 loss)
I0609 21:59:16.687228  2348 sgd_solver.cpp:106] Iteration 1500, lr = 0.00900485
I0609 21:59:17.710029  2348 solver.cpp:228] Iteration 1600, loss = 0.104206
I0609 21:59:17.711032  2348 solver.cpp:244]     Train net output #0: loss = 0.104206 (* 1 = 0.104206 loss)
I0609 21:59:17.712563  2348 sgd_solver.cpp:106] Iteration 1600, lr = 0.00894657
I0609 21:59:18.737736  2348 solver.cpp:228] Iteration 1700, loss = 0.027304
I0609 21:59:18.738772  2348 solver.cpp:244]     Train net output #0: loss = 0.027304 (* 1 = 0.027304 loss)
I0609 21:59:18.739742  2348 sgd_solver.cpp:106] Iteration 1700, lr = 0.00888916
I0609 21:59:19.766360  2348 solver.cpp:228] Iteration 1800, loss = 0.023843
I0609 21:59:19.766862  2348 solver.cpp:244]     Train net output #0: loss = 0.023843 (* 1 = 0.023843 loss)
I0609 21:59:19.768401  2348 sgd_solver.cpp:106] Iteration 1800, lr = 0.0088326
I0609 21:59:20.791230  2348 solver.cpp:228] Iteration 1900, loss = 0.137871
I0609 21:59:20.792232  2348 solver.cpp:244]     Train net output #0: loss = 0.137871 (* 1 = 0.137871 loss)
I0609 21:59:20.793262  2348 sgd_solver.cpp:106] Iteration 1900, lr = 0.00877687
I0609 21:59:21.805945  2348 solver.cpp:337] Iteration 2000, Testing net (#0)
I0609 21:59:22.198869  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9858
I0609 21:59:22.198869  2348 solver.cpp:404]     Test net output #1: loss = 0.042713 (* 1 = 0.042713 loss)
I0609 21:59:22.203277  2348 solver.cpp:228] Iteration 2000, loss = 0.0157902
I0609 21:59:22.203778  2348 solver.cpp:244]     Train net output #0: loss = 0.0157902 (* 1 = 0.0157902 loss)
I0609 21:59:22.204782  2348 sgd_solver.cpp:106] Iteration 2000, lr = 0.00872196
I0609 21:59:23.225020  2348 solver.cpp:228] Iteration 2100, loss = 0.0210831
I0609 21:59:23.226011  2348 solver.cpp:244]     Train net output #0: loss = 0.0210831 (* 1 = 0.0210831 loss)
I0609 21:59:23.227027  2348 sgd_solver.cpp:106] Iteration 2100, lr = 0.00866784
I0609 21:59:24.245699  2348 solver.cpp:228] Iteration 2200, loss = 0.0173401
I0609 21:59:24.246728  2348 solver.cpp:244]     Train net output #0: loss = 0.0173401 (* 1 = 0.0173401 loss)
I0609 21:59:24.248229  2348 sgd_solver.cpp:106] Iteration 2200, lr = 0.0086145
I0609 21:59:25.268486  2348 solver.cpp:228] Iteration 2300, loss = 0.092744
I0609 21:59:25.269476  2348 solver.cpp:244]     Train net output #0: loss = 0.092744 (* 1 = 0.092744 loss)
I0609 21:59:25.270503  2348 sgd_solver.cpp:106] Iteration 2300, lr = 0.00856192
I0609 21:59:26.290300  2348 solver.cpp:228] Iteration 2400, loss = 0.0131563
I0609 21:59:26.291321  2348 solver.cpp:244]     Train net output #0: loss = 0.0131563 (* 1 = 0.0131563 loss)
I0609 21:59:26.292809  2348 sgd_solver.cpp:106] Iteration 2400, lr = 0.00851008
I0609 21:59:27.302772  2348 solver.cpp:337] Iteration 2500, Testing net (#0)
I0609 21:59:27.698292  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9863
I0609 21:59:27.698792  2348 solver.cpp:404]     Test net output #1: loss = 0.0437391 (* 1 = 0.0437391 loss)
I0609 21:59:27.702771  2348 solver.cpp:228] Iteration 2500, loss = 0.0611808
I0609 21:59:27.703295  2348 solver.cpp:244]     Train net output #0: loss = 0.0611808 (* 1 = 0.0611808 loss)
I0609 21:59:27.704299  2348 sgd_solver.cpp:106] Iteration 2500, lr = 0.00845897
I0609 21:59:28.729406  2348 solver.cpp:228] Iteration 2600, loss = 0.0616705
I0609 21:59:28.730437  2348 solver.cpp:244]     Train net output #0: loss = 0.0616705 (* 1 = 0.0616705 loss)
I0609 21:59:28.731914  2348 sgd_solver.cpp:106] Iteration 2600, lr = 0.00840857
I0609 21:59:29.756451  2348 solver.cpp:228] Iteration 2700, loss = 0.0355545
I0609 21:59:29.757477  2348 solver.cpp:244]     Train net output #0: loss = 0.0355545 (* 1 = 0.0355545 loss)
I0609 21:59:29.758982  2348 sgd_solver.cpp:106] Iteration 2700, lr = 0.00835886
I0609 21:59:30.784987  2348 solver.cpp:228] Iteration 2800, loss = 0.00188944
I0609 21:59:30.786484  2348 solver.cpp:244]     Train net output #0: loss = 0.0018894 (* 1 = 0.0018894 loss)
I0609 21:59:30.787995  2348 sgd_solver.cpp:106] Iteration 2800, lr = 0.00830984
I0609 21:59:31.807837  2348 solver.cpp:228] Iteration 2900, loss = 0.0199334
I0609 21:59:31.808862  2348 solver.cpp:244]     Train net output #0: loss = 0.0199334 (* 1 = 0.0199334 loss)
I0609 21:59:31.810272  2348 sgd_solver.cpp:106] Iteration 2900, lr = 0.00826148
I0609 21:59:32.850883  2348 solver.cpp:337] Iteration 3000, Testing net (#0)
I0609 21:59:33.250350  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9864
I0609 21:59:33.251353  2348 solver.cpp:404]     Test net output #1: loss = 0.0400717 (* 1 = 0.0400717 loss)
I0609 21:59:33.255363  2348 solver.cpp:228] Iteration 3000, loss = 0.00727182
I0609 21:59:33.255890  2348 solver.cpp:244]     Train net output #0: loss = 0.0072718 (* 1 = 0.0072718 loss)
I0609 21:59:33.256867  2348 sgd_solver.cpp:106] Iteration 3000, lr = 0.00821377
I0609 21:59:34.318763  2348 solver.cpp:228] Iteration 3100, loss = 0.0180837
I0609 21:59:34.319263  2348 solver.cpp:244]     Train net output #0: loss = 0.0180837 (* 1 = 0.0180837 loss)
I0609 21:59:34.321131  2348 sgd_solver.cpp:106] Iteration 3100, lr = 0.0081667
I0609 21:59:35.356736  2348 solver.cpp:228] Iteration 3200, loss = 0.00759276
I0609 21:59:35.357760  2348 solver.cpp:244]     Train net output #0: loss = 0.00759278 (* 1 = 0.00759278 loss)
I0609 21:59:35.358741  2348 sgd_solver.cpp:106] Iteration 3200, lr = 0.00812025
I0609 21:59:36.393894  2348 solver.cpp:228] Iteration 3300, loss = 0.032726
I0609 21:59:36.394395  2348 solver.cpp:244]     Train net output #0: loss = 0.032726 (* 1 = 0.032726 loss)
I0609 21:59:36.395925  2348 sgd_solver.cpp:106] Iteration 3300, lr = 0.00807442
I0609 21:59:37.450358  2348 solver.cpp:228] Iteration 3400, loss = 0.00748132
I0609 21:59:37.450861  2348 solver.cpp:244]     Train net output #0: loss = 0.00748132 (* 1 = 0.00748132 loss)
I0609 21:59:37.451864  2348 sgd_solver.cpp:106] Iteration 3400, lr = 0.00802918
I0609 21:59:38.472447  2348 solver.cpp:337] Iteration 3500, Testing net (#0)
I0609 21:59:38.880427  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9864
I0609 21:59:38.881430  2348 solver.cpp:404]     Test net output #1: loss = 0.0396068 (* 1 = 0.0396068 loss)
I0609 21:59:38.885442  2348 solver.cpp:228] Iteration 3500, loss = 0.00684103
I0609 21:59:38.886445  2348 solver.cpp:244]     Train net output #0: loss = 0.00684103 (* 1 = 0.00684103 loss)
I0609 21:59:38.888450  2348 sgd_solver.cpp:106] Iteration 3500, lr = 0.00798454
I0609 21:59:39.934547  2348 solver.cpp:228] Iteration 3600, loss = 0.0373588
I0609 21:59:39.935590  2348 solver.cpp:244]     Train net output #0: loss = 0.0373588 (* 1 = 0.0373588 loss)
I0609 21:59:39.936553  2348 sgd_solver.cpp:106] Iteration 3600, lr = 0.00794046
I0609 21:59:40.960068  2348 solver.cpp:228] Iteration 3700, loss = 0.0152083
I0609 21:59:40.961071  2348 solver.cpp:244]     Train net output #0: loss = 0.0152083 (* 1 = 0.0152083 loss)
I0609 21:59:40.962097  2348 sgd_solver.cpp:106] Iteration 3700, lr = 0.00789695
I0609 21:59:42.000985  2348 solver.cpp:228] Iteration 3800, loss = 0.00452279
I0609 21:59:42.001987  2348 solver.cpp:244]     Train net output #0: loss = 0.00452277 (* 1 = 0.00452277 loss)
I0609 21:59:42.002990  2348 sgd_solver.cpp:106] Iteration 3800, lr = 0.007854
I0609 21:59:43.033639  2348 solver.cpp:228] Iteration 3900, loss = 0.0299793
I0609 21:59:43.034148  2348 solver.cpp:244]     Train net output #0: loss = 0.0299792 (* 1 = 0.0299792 loss)
I0609 21:59:43.035667  2348 sgd_solver.cpp:106] Iteration 3900, lr = 0.00781158
I0609 21:59:44.058666  2348 solver.cpp:337] Iteration 4000, Testing net (#0)
I0609 21:59:44.474100  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9893
I0609 21:59:44.475113  2348 solver.cpp:404]     Test net output #1: loss = 0.0324524 (* 1 = 0.0324524 loss)
I0609 21:59:44.478615  2348 solver.cpp:228] Iteration 4000, loss = 0.0161738
I0609 21:59:44.479115  2348 solver.cpp:244]     Train net output #0: loss = 0.0161738 (* 1 = 0.0161738 loss)
I0609 21:59:44.479619  2348 sgd_solver.cpp:106] Iteration 4000, lr = 0.00776969
I0609 21:59:45.522294  2348 solver.cpp:228] Iteration 4100, loss = 0.0233402
I0609 21:59:45.523298  2348 solver.cpp:244]     Train net output #0: loss = 0.0233402 (* 1 = 0.0233402 loss)
I0609 21:59:45.524301  2348 sgd_solver.cpp:106] Iteration 4100, lr = 0.00772833
I0609 21:59:46.557157  2348 solver.cpp:228] Iteration 4200, loss = 0.0079118
I0609 21:59:46.558182  2348 solver.cpp:244]     Train net output #0: loss = 0.00791177 (* 1 = 0.00791177 loss)
I0609 21:59:46.559185  2348 sgd_solver.cpp:106] Iteration 4200, lr = 0.00768748
I0609 21:59:47.588677  2348 solver.cpp:228] Iteration 4300, loss = 0.0314846
I0609 21:59:47.589680  2348 solver.cpp:244]     Train net output #0: loss = 0.0314845 (* 1 = 0.0314845 loss)
I0609 21:59:47.591207  2348 sgd_solver.cpp:106] Iteration 4300, lr = 0.00764712
I0609 21:59:48.616732  2348 solver.cpp:228] Iteration 4400, loss = 0.0195216
I0609 21:59:48.617235  2348 solver.cpp:244]     Train net output #0: loss = 0.0195215 (* 1 = 0.0195215 loss)
I0609 21:59:48.618263  2348 sgd_solver.cpp:106] Iteration 4400, lr = 0.00760726
I0609 21:59:49.642638  2348 solver.cpp:337] Iteration 4500, Testing net (#0)
I0609 21:59:50.047365  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9878
I0609 21:59:50.047868  2348 solver.cpp:404]     Test net output #1: loss = 0.0374535 (* 1 = 0.0374535 loss)
I0609 21:59:50.051880  2348 solver.cpp:228] Iteration 4500, loss = 0.00524942
I0609 21:59:50.052379  2348 solver.cpp:244]     Train net output #0: loss = 0.00524939 (* 1 = 0.00524939 loss)
I0609 21:59:50.053405  2348 sgd_solver.cpp:106] Iteration 4500, lr = 0.00756788
I0609 21:59:51.075937  2348 solver.cpp:228] Iteration 4600, loss = 0.0127813
I0609 21:59:51.076992  2348 solver.cpp:244]     Train net output #0: loss = 0.0127812 (* 1 = 0.0127812 loss)
I0609 21:59:51.078007  2348 sgd_solver.cpp:106] Iteration 4600, lr = 0.00752897
I0609 21:59:52.104898  2348 solver.cpp:228] Iteration 4700, loss = 0.00635904
I0609 21:59:52.105911  2348 solver.cpp:244]     Train net output #0: loss = 0.00635898 (* 1 = 0.00635898 loss)
I0609 21:59:52.107391  2348 sgd_solver.cpp:106] Iteration 4700, lr = 0.00749052
I0609 21:59:53.135903  2348 solver.cpp:228] Iteration 4800, loss = 0.0132648
I0609 21:59:53.136395  2348 solver.cpp:244]     Train net output #0: loss = 0.0132648 (* 1 = 0.0132648 loss)
I0609 21:59:53.137398  2348 sgd_solver.cpp:106] Iteration 4800, lr = 0.00745253
I0609 21:59:54.159410  2348 solver.cpp:228] Iteration 4900, loss = 0.0026975
I0609 21:59:54.159912  2348 solver.cpp:244]     Train net output #0: loss = 0.00269743 (* 1 = 0.00269743 loss)
I0609 21:59:54.161918  2348 sgd_solver.cpp:106] Iteration 4900, lr = 0.00741498
I0609 21:59:55.177345  2348 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_5000.caffemodel
I0609 21:59:55.197401  2348 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_5000.solverstate
I0609 21:59:55.203416  2348 solver.cpp:337] Iteration 5000, Testing net (#0)
I0609 21:59:55.590181  2348 solver.cpp:404]     Test net output #0: accuracy = 0.989
I0609 21:59:55.591202  2348 solver.cpp:404]     Test net output #1: loss = 0.0324299 (* 1 = 0.0324299 loss)
I0609 21:59:55.595188  2348 solver.cpp:228] Iteration 5000, loss = 0.0379492
I0609 21:59:55.595721  2348 solver.cpp:244]     Train net output #0: loss = 0.0379491 (* 1 = 0.0379491 loss)
I0609 21:59:55.596724  2348 sgd_solver.cpp:106] Iteration 5000, lr = 0.00737788
I0609 21:59:56.624313  2348 solver.cpp:228] Iteration 5100, loss = 0.018057
I0609 21:59:56.625315  2348 solver.cpp:244]     Train net output #0: loss = 0.018057 (* 1 = 0.018057 loss)
I0609 21:59:56.626318  2348 sgd_solver.cpp:106] Iteration 5100, lr = 0.0073412
I0609 21:59:57.653751  2348 solver.cpp:228] Iteration 5200, loss = 0.00522353
I0609 21:59:57.654777  2348 solver.cpp:244]     Train net output #0: loss = 0.00522349 (* 1 = 0.00522349 loss)
I0609 21:59:57.655779  2348 sgd_solver.cpp:106] Iteration 5200, lr = 0.00730495
I0609 21:59:58.682821  2348 solver.cpp:228] Iteration 5300, loss = 0.00132337
I0609 21:59:58.683823  2348 solver.cpp:244]     Train net output #0: loss = 0.00132333 (* 1 = 0.00132333 loss)
I0609 21:59:58.685328  2348 sgd_solver.cpp:106] Iteration 5300, lr = 0.00726911
I0609 21:59:59.714751  2348 solver.cpp:228] Iteration 5400, loss = 0.00640987
I0609 21:59:59.715754  2348 solver.cpp:244]     Train net output #0: loss = 0.00640985 (* 1 = 0.00640985 loss)
I0609 21:59:59.717280  2348 sgd_solver.cpp:106] Iteration 5400, lr = 0.00723368
I0609 22:00:00.736883  2348 solver.cpp:337] Iteration 5500, Testing net (#0)
I0609 22:00:01.129827  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9888
I0609 22:00:01.130858  2348 solver.cpp:404]     Test net output #1: loss = 0.0355597 (* 1 = 0.0355597 loss)
I0609 22:00:01.134837  2348 solver.cpp:228] Iteration 5500, loss = 0.00897519
I0609 22:00:01.135354  2348 solver.cpp:244]     Train net output #0: loss = 0.00897517 (* 1 = 0.00897517 loss)
I0609 22:00:01.135864  2348 sgd_solver.cpp:106] Iteration 5500, lr = 0.00719865
I0609 22:00:02.165493  2348 solver.cpp:228] Iteration 5600, loss = 0.00140956
I0609 22:00:02.166020  2348 solver.cpp:244]     Train net output #0: loss = 0.00140954 (* 1 = 0.00140954 loss)
I0609 22:00:02.166996  2348 sgd_solver.cpp:106] Iteration 5600, lr = 0.00716402
I0609 22:00:03.193999  2348 solver.cpp:228] Iteration 5700, loss = 0.00917903
I0609 22:00:03.195029  2348 solver.cpp:244]     Train net output #0: loss = 0.00917901 (* 1 = 0.00917901 loss)
I0609 22:00:03.196033  2348 sgd_solver.cpp:106] Iteration 5700, lr = 0.00712977
I0609 22:00:04.224375  2348 solver.cpp:228] Iteration 5800, loss = 0.0258
I0609 22:00:04.225404  2348 solver.cpp:244]     Train net output #0: loss = 0.0258 (* 1 = 0.0258 loss)
I0609 22:00:04.226403  2348 sgd_solver.cpp:106] Iteration 5800, lr = 0.0070959
I0609 22:00:05.249439  2348 solver.cpp:228] Iteration 5900, loss = 0.00583253
I0609 22:00:05.251418  2348 solver.cpp:244]     Train net output #0: loss = 0.00583252 (* 1 = 0.00583252 loss)
I0609 22:00:05.252424  2348 sgd_solver.cpp:106] Iteration 5900, lr = 0.0070624
I0609 22:00:06.271405  2348 solver.cpp:337] Iteration 6000, Testing net (#0)
I0609 22:00:06.664938  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9903
I0609 22:00:06.665462  2348 solver.cpp:404]     Test net output #1: loss = 0.02914 (* 1 = 0.02914 loss)
I0609 22:00:06.668948  2348 solver.cpp:228] Iteration 6000, loss = 0.00317923
I0609 22:00:06.669474  2348 solver.cpp:244]     Train net output #0: loss = 0.00317922 (* 1 = 0.00317922 loss)
I0609 22:00:06.669474  2348 sgd_solver.cpp:106] Iteration 6000, lr = 0.00702927
I0609 22:00:07.726944  2348 solver.cpp:228] Iteration 6100, loss = 0.00174165
I0609 22:00:07.727445  2348 solver.cpp:244]     Train net output #0: loss = 0.00174163 (* 1 = 0.00174163 loss)
I0609 22:00:07.728948  2348 sgd_solver.cpp:106] Iteration 6100, lr = 0.0069965
I0609 22:00:08.808486  2348 solver.cpp:228] Iteration 6200, loss = 0.00775676
I0609 22:00:08.809015  2348 solver.cpp:244]     Train net output #0: loss = 0.00775674 (* 1 = 0.00775674 loss)
I0609 22:00:08.809990  2348 sgd_solver.cpp:106] Iteration 6200, lr = 0.00696408
I0609 22:00:09.867480  2348 solver.cpp:228] Iteration 6300, loss = 0.00732531
I0609 22:00:09.868479  2348 solver.cpp:244]     Train net output #0: loss = 0.00732529 (* 1 = 0.00732529 loss)
I0609 22:00:09.869983  2348 sgd_solver.cpp:106] Iteration 6300, lr = 0.00693201
I0609 22:00:10.910008  2348 solver.cpp:228] Iteration 6400, loss = 0.00703314
I0609 22:00:10.911516  2348 solver.cpp:244]     Train net output #0: loss = 0.00703313 (* 1 = 0.00703313 loss)
I0609 22:00:10.913007  2348 sgd_solver.cpp:106] Iteration 6400, lr = 0.00690029
I0609 22:00:11.943835  2348 solver.cpp:337] Iteration 6500, Testing net (#0)
I0609 22:00:12.348021  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9893
I0609 22:00:12.348515  2348 solver.cpp:404]     Test net output #1: loss = 0.0328561 (* 1 = 0.0328561 loss)
I0609 22:00:12.353009  2348 solver.cpp:228] Iteration 6500, loss = 0.00939411
I0609 22:00:12.353512  2348 solver.cpp:244]     Train net output #0: loss = 0.00939411 (* 1 = 0.00939411 loss)
I0609 22:00:12.355015  2348 sgd_solver.cpp:106] Iteration 6500, lr = 0.0068689
I0609 22:00:13.408545  2348 solver.cpp:228] Iteration 6600, loss = 0.016508
I0609 22:00:13.409046  2348 solver.cpp:244]     Train net output #0: loss = 0.016508 (* 1 = 0.016508 loss)
I0609 22:00:13.409548  2348 sgd_solver.cpp:106] Iteration 6600, lr = 0.00683784
I0609 22:00:14.451058  2348 solver.cpp:228] Iteration 6700, loss = 0.0076011
I0609 22:00:14.452060  2348 solver.cpp:244]     Train net output #0: loss = 0.0076011 (* 1 = 0.0076011 loss)
I0609 22:00:14.452585  2348 sgd_solver.cpp:106] Iteration 6700, lr = 0.00680711
I0609 22:00:15.485703  2348 solver.cpp:228] Iteration 6800, loss = 0.00344433
I0609 22:00:15.486706  2348 solver.cpp:244]     Train net output #0: loss = 0.00344433 (* 1 = 0.00344433 loss)
I0609 22:00:15.488713  2348 sgd_solver.cpp:106] Iteration 6800, lr = 0.0067767
I0609 22:00:16.521100  2348 solver.cpp:228] Iteration 6900, loss = 0.00540986
I0609 22:00:16.522101  2348 solver.cpp:244]     Train net output #0: loss = 0.00540986 (* 1 = 0.00540986 loss)
I0609 22:00:16.523131  2348 sgd_solver.cpp:106] Iteration 6900, lr = 0.0067466
I0609 22:00:17.547116  2348 solver.cpp:337] Iteration 7000, Testing net (#0)
I0609 22:00:17.943941  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9902
I0609 22:00:17.944941  2348 solver.cpp:404]     Test net output #1: loss = 0.0297431 (* 1 = 0.0297431 loss)
I0609 22:00:17.948951  2348 solver.cpp:228] Iteration 7000, loss = 0.00382005
I0609 22:00:17.949452  2348 solver.cpp:244]     Train net output #0: loss = 0.00382005 (* 1 = 0.00382005 loss)
I0609 22:00:17.949954  2348 sgd_solver.cpp:106] Iteration 7000, lr = 0.00671681
I0609 22:00:18.987884  2348 solver.cpp:228] Iteration 7100, loss = 0.0125011
I0609 22:00:18.988886  2348 solver.cpp:244]     Train net output #0: loss = 0.0125011 (* 1 = 0.0125011 loss)
I0609 22:00:18.989917  2348 sgd_solver.cpp:106] Iteration 7100, lr = 0.00668733
I0609 22:00:20.030164  2348 solver.cpp:228] Iteration 7200, loss = 0.00423594
I0609 22:00:20.031647  2348 solver.cpp:244]     Train net output #0: loss = 0.00423592 (* 1 = 0.00423592 loss)
I0609 22:00:20.033653  2348 sgd_solver.cpp:106] Iteration 7200, lr = 0.00665815
I0609 22:00:21.080216  2348 solver.cpp:228] Iteration 7300, loss = 0.016576
I0609 22:00:21.081199  2348 solver.cpp:244]     Train net output #0: loss = 0.016576 (* 1 = 0.016576 loss)
I0609 22:00:21.082736  2348 sgd_solver.cpp:106] Iteration 7300, lr = 0.00662927
I0609 22:00:22.122757  2348 solver.cpp:228] Iteration 7400, loss = 0.00299412
I0609 22:00:22.123258  2348 solver.cpp:244]     Train net output #0: loss = 0.00299412 (* 1 = 0.00299412 loss)
I0609 22:00:22.124785  2348 sgd_solver.cpp:106] Iteration 7400, lr = 0.00660067
I0609 22:00:23.151199  2348 solver.cpp:337] Iteration 7500, Testing net (#0)
I0609 22:00:23.551210  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9894
I0609 22:00:23.552215  2348 solver.cpp:404]     Test net output #1: loss = 0.0335477 (* 1 = 0.0335477 loss)
I0609 22:00:23.555724  2348 solver.cpp:228] Iteration 7500, loss = 0.00158821
I0609 22:00:23.556226  2348 solver.cpp:244]     Train net output #0: loss = 0.00158823 (* 1 = 0.00158823 loss)
I0609 22:00:23.557756  2348 sgd_solver.cpp:106] Iteration 7500, lr = 0.00657236
I0609 22:00:24.594072  2348 solver.cpp:228] Iteration 7600, loss = 0.00663249
I0609 22:00:24.595088  2348 solver.cpp:244]     Train net output #0: loss = 0.00663251 (* 1 = 0.00663251 loss)
I0609 22:00:24.596068  2348 sgd_solver.cpp:106] Iteration 7600, lr = 0.00654433
I0609 22:00:25.632128  2348 solver.cpp:228] Iteration 7700, loss = 0.0273988
I0609 22:00:25.633129  2348 solver.cpp:244]     Train net output #0: loss = 0.0273988 (* 1 = 0.0273988 loss)
I0609 22:00:25.634132  2348 sgd_solver.cpp:106] Iteration 7700, lr = 0.00651658
I0609 22:00:26.674262  2348 solver.cpp:228] Iteration 7800, loss = 0.00437337
I0609 22:00:26.674789  2348 solver.cpp:244]     Train net output #0: loss = 0.0043734 (* 1 = 0.0043734 loss)
I0609 22:00:26.675792  2348 sgd_solver.cpp:106] Iteration 7800, lr = 0.00648911
I0609 22:00:27.714426  2348 solver.cpp:228] Iteration 7900, loss = 0.00459187
I0609 22:00:27.715453  2348 solver.cpp:244]     Train net output #0: loss = 0.0045919 (* 1 = 0.0045919 loss)
I0609 22:00:27.717434  2348 sgd_solver.cpp:106] Iteration 7900, lr = 0.0064619
I0609 22:00:28.744298  2348 solver.cpp:337] Iteration 8000, Testing net (#0)
I0609 22:00:29.134865  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9892
I0609 22:00:29.135869  2348 solver.cpp:404]     Test net output #1: loss = 0.0302072 (* 1 = 0.0302072 loss)
I0609 22:00:29.139376  2348 solver.cpp:228] Iteration 8000, loss = 0.00470957
I0609 22:00:29.139878  2348 solver.cpp:244]     Train net output #0: loss = 0.00470959 (* 1 = 0.00470959 loss)
I0609 22:00:29.139878  2348 sgd_solver.cpp:106] Iteration 8000, lr = 0.00643496
I0609 22:00:30.168918  2348 solver.cpp:228] Iteration 8100, loss = 0.0204394
I0609 22:00:30.169944  2348 solver.cpp:244]     Train net output #0: loss = 0.0204394 (* 1 = 0.0204394 loss)
I0609 22:00:30.169944  2348 sgd_solver.cpp:106] Iteration 8100, lr = 0.00640827
I0609 22:00:31.195981  2348 solver.cpp:228] Iteration 8200, loss = 0.00728066
I0609 22:00:31.196984  2348 solver.cpp:244]     Train net output #0: loss = 0.00728066 (* 1 = 0.00728066 loss)
I0609 22:00:31.198485  2348 sgd_solver.cpp:106] Iteration 8200, lr = 0.00638185
I0609 22:00:32.224184  2348 solver.cpp:228] Iteration 8300, loss = 0.0334625
I0609 22:00:32.225174  2348 solver.cpp:244]     Train net output #0: loss = 0.0334625 (* 1 = 0.0334625 loss)
I0609 22:00:32.226177  2348 sgd_solver.cpp:106] Iteration 8300, lr = 0.00635568
I0609 22:00:33.248881  2348 solver.cpp:228] Iteration 8400, loss = 0.00712529
I0609 22:00:33.249883  2348 solver.cpp:244]     Train net output #0: loss = 0.00712529 (* 1 = 0.00712529 loss)
I0609 22:00:33.251371  2348 sgd_solver.cpp:106] Iteration 8400, lr = 0.00632975
I0609 22:00:34.266192  2348 solver.cpp:337] Iteration 8500, Testing net (#0)
I0609 22:00:34.670419  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9896
I0609 22:00:34.671445  2348 solver.cpp:404]     Test net output #1: loss = 0.0309825 (* 1 = 0.0309825 loss)
I0609 22:00:34.675432  2348 solver.cpp:228] Iteration 8500, loss = 0.00653579
I0609 22:00:34.675935  2348 solver.cpp:244]     Train net output #0: loss = 0.0065358 (* 1 = 0.0065358 loss)
I0609 22:00:34.676944  2348 sgd_solver.cpp:106] Iteration 8500, lr = 0.00630407
I0609 22:00:35.699957  2348 solver.cpp:228] Iteration 8600, loss = 0.00119654
I0609 22:00:35.701396  2348 solver.cpp:244]     Train net output #0: loss = 0.00119654 (* 1 = 0.00119654 loss)
I0609 22:00:35.703403  2348 sgd_solver.cpp:106] Iteration 8600, lr = 0.00627864
I0609 22:00:36.726616  2348 solver.cpp:228] Iteration 8700, loss = 0.00364678
I0609 22:00:36.727633  2348 solver.cpp:244]     Train net output #0: loss = 0.00364679 (* 1 = 0.00364679 loss)
I0609 22:00:36.728613  2348 sgd_solver.cpp:106] Iteration 8700, lr = 0.00625344
I0609 22:00:37.751993  2348 solver.cpp:228] Iteration 8800, loss = 0.0016851
I0609 22:00:37.752996  2348 solver.cpp:244]     Train net output #0: loss = 0.00168511 (* 1 = 0.00168511 loss)
I0609 22:00:37.754510  2348 sgd_solver.cpp:106] Iteration 8800, lr = 0.00622847
I0609 22:00:38.781461  2348 solver.cpp:228] Iteration 8900, loss = 0.000729938
I0609 22:00:38.782488  2348 solver.cpp:244]     Train net output #0: loss = 0.000729947 (* 1 = 0.000729947 loss)
I0609 22:00:38.783972  2348 sgd_solver.cpp:106] Iteration 8900, lr = 0.00620374
I0609 22:00:39.797982  2348 solver.cpp:337] Iteration 9000, Testing net (#0)
I0609 22:00:40.195657  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9899
I0609 22:00:40.196158  2348 solver.cpp:404]     Test net output #1: loss = 0.0301065 (* 1 = 0.0301065 loss)
I0609 22:00:40.201172  2348 solver.cpp:228] Iteration 9000, loss = 0.0112834
I0609 22:00:40.202204  2348 solver.cpp:244]     Train net output #0: loss = 0.0112834 (* 1 = 0.0112834 loss)
I0609 22:00:40.203207  2348 sgd_solver.cpp:106] Iteration 9000, lr = 0.00617924
I0609 22:00:41.237516  2348 solver.cpp:228] Iteration 9100, loss = 0.00836366
I0609 22:00:41.238016  2348 solver.cpp:244]     Train net output #0: loss = 0.00836367 (* 1 = 0.00836367 loss)
I0609 22:00:41.239549  2348 sgd_solver.cpp:106] Iteration 9100, lr = 0.00615496
I0609 22:00:42.279135  2348 solver.cpp:228] Iteration 9200, loss = 0.00204895
I0609 22:00:42.280139  2348 solver.cpp:244]     Train net output #0: loss = 0.00204896 (* 1 = 0.00204896 loss)
I0609 22:00:42.281595  2348 sgd_solver.cpp:106] Iteration 9200, lr = 0.0061309
I0609 22:00:43.314584  2348 solver.cpp:228] Iteration 9300, loss = 0.00582986
I0609 22:00:43.315587  2348 solver.cpp:244]     Train net output #0: loss = 0.00582986 (* 1 = 0.00582986 loss)
I0609 22:00:43.317090  2348 sgd_solver.cpp:106] Iteration 9300, lr = 0.00610706
I0609 22:00:44.349597  2348 solver.cpp:228] Iteration 9400, loss = 0.0217738
I0609 22:00:44.350112  2348 solver.cpp:244]     Train net output #0: loss = 0.0217738 (* 1 = 0.0217738 loss)
I0609 22:00:44.351552  2348 sgd_solver.cpp:106] Iteration 9400, lr = 0.00608343
I0609 22:00:45.373124  2348 solver.cpp:337] Iteration 9500, Testing net (#0)
I0609 22:00:45.770292  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9888
I0609 22:00:45.770292  2348 solver.cpp:404]     Test net output #1: loss = 0.0370887 (* 1 = 0.0370887 loss)
I0609 22:00:45.775274  2348 solver.cpp:228] Iteration 9500, loss = 0.00312861
I0609 22:00:45.775801  2348 solver.cpp:244]     Train net output #0: loss = 0.00312861 (* 1 = 0.00312861 loss)
I0609 22:00:45.776777  2348 sgd_solver.cpp:106] Iteration 9500, lr = 0.00606002
I0609 22:00:46.804632  2348 solver.cpp:228] Iteration 9600, loss = 0.00250086
I0609 22:00:46.805649  2348 solver.cpp:244]     Train net output #0: loss = 0.00250085 (* 1 = 0.00250085 loss)
I0609 22:00:46.806653  2348 sgd_solver.cpp:106] Iteration 9600, lr = 0.00603682
I0609 22:00:47.845692  2348 solver.cpp:228] Iteration 9700, loss = 0.0032868
I0609 22:00:47.846194  2348 solver.cpp:244]     Train net output #0: loss = 0.0032868 (* 1 = 0.0032868 loss)
I0609 22:00:47.848222  2348 sgd_solver.cpp:106] Iteration 9700, lr = 0.00601382
I0609 22:00:48.884227  2348 solver.cpp:228] Iteration 9800, loss = 0.0146238
I0609 22:00:48.885254  2348 solver.cpp:244]     Train net output #0: loss = 0.0146238 (* 1 = 0.0146238 loss)
I0609 22:00:48.887243  2348 sgd_solver.cpp:106] Iteration 9800, lr = 0.00599102
I0609 22:00:49.915205  2348 solver.cpp:228] Iteration 9900, loss = 0.00428488
I0609 22:00:49.916237  2348 solver.cpp:244]     Train net output #0: loss = 0.00428487 (* 1 = 0.00428487 loss)
I0609 22:00:49.917237  2348 sgd_solver.cpp:106] Iteration 9900, lr = 0.00596843
I0609 22:00:50.934748  2348 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0609 22:00:50.965857  2348 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0609 22:00:50.975267  2348 solver.cpp:317] Iteration 10000, loss = 0.0040412
I0609 22:00:50.975769  2348 solver.cpp:337] Iteration 10000, Testing net (#0)
I0609 22:00:51.370457  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9911
I0609 22:00:51.370457  2348 solver.cpp:404]     Test net output #1: loss = 0.0288451 (* 1 = 0.0288451 loss)
I0609 22:00:51.372915  2348 solver.cpp:322] Optimization Done.
I0609 22:00:51.373416  2348 caffe.cpp:223] Optimization Done.

Did it work for you as well?
Next, I plan to try the trained model on other images.
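Since the final accuracy is buried near the end of a long log, it can be handy to pull out every "Test net output #0: accuracy" line programmatically. Below is a minimal sketch in Python, assuming the console output was redirected to a file (Caffe writes its glog messages to stderr, so something like `caffe.exe train --solver=... 2> mnist_train.log` would capture it; the filename and the two sample lines are illustrative):

```python
import re

# Two sample lines copied from the log above; in practice you would read
# the whole captured log file instead.
log = """\
I0609 21:59:55.590181  2348 solver.cpp:404]     Test net output #0: accuracy = 0.989
I0609 22:00:51.370457  2348 solver.cpp:404]     Test net output #0: accuracy = 0.9911
"""

# Each test pass logs exactly one accuracy line in this format.
pattern = re.compile(r"Test net output #0: accuracy = ([\d.]+)")
accuracies = [float(m.group(1)) for m in pattern.finditer(log)]

print(accuracies)       # accuracy at each test interval
print(max(accuracies))  # best accuracy observed (0.9911 in this run)
```

The same pattern works for the loss lines if you want to plot the learning curve.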


Related articles
qstairs.hatenablog.com
qstairs.hatenablog.com
qstairs.hatenablog.com
