Sources
http://jorditores.org/first-contact-with-tensorflow/#cap5 (First Contact with TensorFlow)
https://tensorflow.rstudio.com/ (TensorFlow™ for R)
http://motioninsocial.com/tufte/ (Tufte in R)
Convolutional Neural Network (ver.R)
The MNIST data-set
require(tensorflow)
datasets <- tf$contrib$learn$datasets
mnist <- datasets$mnist$read_data_sets("MNIST-data", one_hot = TRUE)
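As a quick sanity check (an optional addition, not in the original post), you can inspect the loaded arrays; with one_hot = TRUE each label is a length-10 indicator vector.

dim(mnist$train$images)  # 55000 x 784: flattened 28x28 grayscale images
dim(mnist$train$labels)  # 55000 x 10: one-hot encoded digit classes
dim(mnist$test$images)   # 10000 x 784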
A detailed explanation of the model is omitted here.
CNN(convolutional neural network), convolutions, max-pooling, ReLU, softmax, cross entropy, Adam
input -> conv1 -> pool1 -> conv2 -> pool2 -> [inner product -> relu] -> dropout -> [inner product -> softmax] -> output
Loss: cross entropy, Optimizer: Adam
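For reference (derived from the code below; ? denotes the batch size), the tensor shapes through the network are:

input   : ? x 28 x 28 x 1
conv1   : ? x 28 x 28 x 32  (5x5 kernels, SAME padding preserves size)
pool1   : ? x 14 x 14 x 32  (2x2 max-pooling, stride 2)
conv2   : ? x 14 x 14 x 64
pool2   : ? x 7 x 7 x 64
fc1     : ? x 1024          (after flattening 7*7*64 = 3136 features)
dropout : ? x 1024
output  : ? x 10            (softmax over the ten digit classes)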
x <- tf$placeholder("float", shape(NULL, 784L))
y_ <- tf$placeholder("float", shape(NULL, 10L))
x_image <- tf$reshape(x, shape(-1, 28, 28, 1))
print(x_image)
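This should print something like Tensor("Reshape:0", shape=(?, 28, 28, 1), dtype=float32): each flat 784-pixel row becomes a 28x28 single-channel image, and the -1 leaves the batch dimension unspecified.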
Function definition: weight_variable - builds a weight tensor of the given shape, initialized with random draws from a truncated normal distribution.
weight_variable <- function(shape){
  # Draw initial weights from a truncated normal; a small stddev (0.1, as in the
  # standard TensorFlow MNIST tutorial) keeps the initial activations well-behaved.
  initial <- tf$truncated_normal(as.integer(shape), stddev = 0.1)
  return(tf$Variable(initial))
}
Function definition: bias_variable - builds a bias tensor of the given shape.
bias_variable <- function(shape){
  # Constant initial biases; dtype float32 to match the float32 weights.
  initial <- tf$constant(rep(1, shape), dtype = tf$float32)
  return(tf$Variable(initial))
}
Function definition: conv2d - computes a 2-D convolution.
conv2d <- function(x, W){
  # 2-D convolution with stride 1 and SAME padding, so the spatial size is preserved.
  return(tf$nn$conv2d(x, W, strides = c(1L, 1L, 1L, 1L), padding = 'SAME'))
}
Function definition: max_pool_2x2 - performs 2x2 max-pooling.
max_pool_2x2 <- function(x){
  # 2x2 max-pooling with stride 2, halving the height and width.
  return(tf$nn$max_pool(x, ksize = c(1L, 2L, 2L, 1L), strides = c(1L, 2L, 2L, 1L), padding = 'SAME'))
}
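As an optional check (a sketch, not in the original post), printing the result of these helpers on a dummy placeholder confirms the shape arithmetic: SAME-padded convolution preserves the spatial size, while pooling halves it.

probe <- tf$placeholder("float", shape(NULL, 28L, 28L, 1L))
print(conv2d(probe, weight_variable(c(5, 5, 1, 32))))  # shape (?, 28, 28, 32)
print(max_pool_2x2(probe))                             # shape (?, 14, 14, 1)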
Model specification
convolution 1
W_conv1 <- weight_variable(c(5, 5, 1, 32))
b_conv1 <- bias_variable(32)
h_conv1 <- tf$nn$relu(conv2d(x_image, W_conv1) + b_conv1)
max-pooling 1
h_pool1 <- max_pool_2x2(h_conv1)
convolution 2
W_conv2 <- weight_variable(c(5, 5, 32, 64))
b_conv2 <- bias_variable(64)
h_conv2 <- tf$nn$relu(conv2d(h_pool1, W_conv2) + b_conv2)
max-pooling 2
h_pool2 <- max_pool_2x2(h_conv2)
reshaping and [inner product - ReLU (activation function)] 1
W_fc1 <- weight_variable(c(7*7*64, 1024))
b_fc1 <- bias_variable(1024)
h_pool2_flat <- tf$reshape(h_pool2, c(-1L, as.integer(7*7*64)))
h_fc1 <- tf$nn$relu(tf$matmul(h_pool2_flat, W_fc1) + b_fc1)
dropout
keep_prob <- tf$placeholder("float")
h_fc1_drop <- tf$nn$dropout(h_fc1, keep_prob)
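keep_prob is a placeholder so the dropout rate can differ between phases: the training loop below feeds 0.5 while training and 1 (no dropout) when measuring accuracy.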
[inner product - softmax (activation function)] 2
W_fc2 <- weight_variable(c(1024, 10))
b_fc2 <- bias_variable(10)
y_conv <- tf$nn$softmax(tf$matmul(h_fc1_drop, W_fc2) + b_fc2)
Loss and optimizer
cross_entropy <- -tf$reduce_sum(y_ * log(y_conv))
train_step <- tf$train$AdamOptimizer(1e-4)$minimize(cross_entropy)
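Taking log(y_conv) after an explicit softmax can underflow when a predicted probability reaches 0. A more numerically stable variant (a sketch, not what this post uses) keeps the pre-softmax scores and lets TensorFlow fuse the two steps:

logits <- tf$matmul(h_fc1_drop, W_fc2) + b_fc2  # raw scores, before softmax
cross_entropy_stable <- tf$reduce_mean(tf$nn$softmax_cross_entropy_with_logits(labels = y_, logits = logits))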
defining accuracy
correct_prediction <- tf$equal(tf$argmax(y_conv, 1L), tf$argmax(y_, 1L))
accuracy <- tf$reduce_mean(tf$cast(correct_prediction, "float"))
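Here tf$argmax returns the index of the largest entry along axis 1, so correct_prediction is a boolean vector marking which predictions match the labels; casting to float and averaging yields the fraction classified correctly.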
Running the session (training loop)
sess <- tf$Session()
sess$run(tf$global_variables_initializer())
for(i in 0:200){
  batch <- mnist$train$next_batch(50L)
  # Train on the batch with dropout active (keep_prob = 0.5).
  sess$run(train_step, feed_dict = dict(x = batch[[1]], y_ = batch[[2]], keep_prob = 0.5))
  if(i %% 10 == 0){
    # Evaluate on the current batch with dropout disabled (keep_prob = 1).
    train_accuracy <- sess$run(accuracy, feed_dict = dict(x = batch[[1]], y_ = batch[[2]], keep_prob = 1))
    cat("step ", i, "training accuracy ", train_accuracy, "\n")
  }
  if(i %% 50 == 0){
    cat("test accuracy ", sess$run(accuracy, feed_dict = dict(x = mnist$test$images, y_ = mnist$test$labels, keep_prob = 1)), "step ", i, "\n")
  }
}
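Once the loop finishes (a small addition, assuming the session is still open), you can report the final test accuracy and release the session's resources:

cat("final test accuracy ", sess$run(accuracy, feed_dict = dict(x = mnist$test$images, y_ = mnist$test$labels, keep_prob = 1)), "\n")
sess$close()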
See also
http://deepstat.tistory.com/11 (Convolutional Neural Network (ver.Python))