CNN Convolutional Neural Network (ver. Flux in Julia)
deepstat · 2018. 12. 3. 20:48

References
http://jorditorres.org/first-contact-with-tnesorflow/#cap5 (First Contact with tensorflow)
https://deepstat.tistory.com/11 (Convolutional Neural Network (ver. Python))
https://deepstat.tistory.com/12 (Convolutional Neural Network (ver. R))
http://fluxml.ai/ (flux: The Elegant Machine Learning Stack)
The MNIST data-set
In [1]:
using PyCall
@pyimport tensorflow.keras.datasets.mnist as MNIST
In [2]:
mnist_train, mnist_test = MNIST.load_data()
println(typeof(mnist_train))
println(typeof(mnist_test))
Defining a next_batch function
In [3]:
mutable struct My_data
    data::Tuple
    start_n::Int
end
In [4]:
function next_batch(data::My_data, n::Int)
    start_n = data.start_n
    end_n = data.start_n + n - 1
    batch_X = float(data.data[1][start_n:end_n,:,:])
    reshape_batch_X = reshape(batch_X, (:,28,28,1))
    batch_Y = data.data[2][start_n:end_n]
    data.start_n = (end_n+1) % (size(data.data[1])[1])
    return (permutedims(reshape_batch_X, (2,3,4,1)), Flux.onehotbatch(batch_Y, 0:9))
end
In [5]:
train_dat = My_data(mnist_train,1)
test_dat = My_data(mnist_test,1);
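The reshape followed by permutedims in next_batch converts an (N, 28, 28) batch into the (width, height, channels, batch) layout that Flux's Conv layers expect. A minimal shape check of just that transform, with a random array standing in for the MNIST data:

```julia
# Random stand-in for a 16-image MNIST batch stored as (N, 28, 28).
batch_X = rand(16, 28, 28)
reshape_batch_X = reshape(batch_X, (:, 28, 28, 1))   # (16, 28, 28, 1)
X = permutedims(reshape_batch_X, (2, 3, 4, 1))       # WHCN: (28, 28, 1, 16)
println(size(X))
```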
CNN Convolutional Neural Network
In [6]:
using Flux, Statistics
Model setup
input -> conv1 -> pool1 -> conv2 -> pool2 -> [inner product -> relu] -> dropout -> [inner product -> softmax] -> output
In [7]:
m = Chain(
    Conv((5,5), 1=>32, relu, pad = 2),
    x -> maxpool(x, (2,2)),
    Conv((5,5), 32=>64, relu, pad = 2),
    x -> maxpool(x, (2,2)),
    x -> reshape(x, :, size(x, 4)),
    Dense(7*7*64, 1024), Dropout(0.5),
    Dense(1024, 10), softmax);
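The first Dense layer's input size, 7*7*64, follows from the convolution/pooling arithmetic: each 5×5 convolution with pad = 2 preserves the 28×28 spatial size, and each 2×2 max-pool halves it (28 → 14 → 7), leaving 64 channels. A quick sanity check of that arithmetic:

```julia
# out = (in + 2*pad - kernel) / stride + 1 for a convolution
conv_out(sz, k; pad = 0, stride = 1) = div(sz + 2pad - k, stride) + 1

h = conv_out(28, 5, pad = 2)  # conv1: padding 2 keeps 28
h = div(h, 2)                 # pool1: 14
h = conv_out(h, 5, pad = 2)   # conv2: keeps 14
h = div(h, 2)                 # pool2: 7
println(h * h * 64)           # flattened length: 3136 = 7*7*64
```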
Loss function: cross-entropy
In [8]:
loss(x, y) = Flux.crossentropy(m(x),y);
In [9]:
Accuracy(x, y) = mean(Flux.onecold(m(x)) .== Flux.onecold(y));
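Accuracy is the fraction of columns where the model's argmax matches the one-hot label; onecold maps both the probability matrix and the one-hot matrix back to class indices. A tiny hand-made example of the comparison (illustrative numbers only):

```julia
using Flux, Statistics

# Two samples over three classes; columns are samples.
preds = [0.7 0.2;
         0.2 0.1;
         0.1 0.7]
truth = Flux.onehotbatch([1, 3], 1:3)  # true classes 1 and 3
acc = mean(Flux.onecold(preds) .== Flux.onecold(truth))
println(acc)  # 1.0
```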
Optimizer: ADAM
In [10]:
PARS = params(m)
function my_opt(n, lr)
    for i = 0:n
        train_loss_vec = zeros(30)
        test_acc = zeros(2)
        for j in 1:30
            train_X, train_Y = next_batch(train_dat, 2000)
            Flux.train!(loss, [(train_X, train_Y)], ADAM(PARS, lr))
            train_loss_vec[j] = loss(train_X, train_Y).data
        end
        if i % 1 == 0
            Flux.testmode!(m)        # disable dropout for evaluation
            for k in 1:2
                test_X, test_Y = next_batch(test_dat, 5000)
                test_acc[k] = Accuracy(test_X, test_Y)
            end
            Flux.testmode!(m, false) # re-enable dropout for training
            println("step:", i, " train_loss:", mean(train_loss_vec), " test_acc:", mean(test_acc))
        end
    end
end
In [11]:
my_opt(0,0.001)
In [12]:
my_opt(10, 0.0001)