Lecture video

Imports

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.experimental.numpy as tnp
tf.config.experimental.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
import graphviz
def gv(s): return graphviz.Source('digraph G{ rankdir="LR"'+ s + ';}')  # helper: wrap a DOT fragment in a left-to-right digraph and render it
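
A quick check of the helper (the fragment below is illustrative):

gv(' "a" -> "b" ')   # renders a minimal left-to-right digraph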

Midterm debrief

Midterm problem 3

- An unusual model: one in which overfitting cannot occur.

  • Significant coefs: the intercept (bias) and the coefficients of $\cos(t)$, $\cos(2t)$, and $\cos(5t)$.
  • Insignificant coefs: the coefficients of $\cos(3t)$ and $\cos(4t)$.
  • The insignificant coefficients are estimated as 0 as $n$ grows = $\cos(3t)$ and $\cos(4t)$ are effectively dropped from the model even if the user never removes them = overfitting does not occur. In effect, the model picks out only the meaningful variables and fits those.

- Problem 3 cannot overfit. The reason this remarkable thing happens is that all the explanatory variables are orthogonal.

  • Advantage of such a model: since there is no risk of overfitting, there is no reason to split into train/test. (Splitting just throws samples away; pooling the held-out observations back in and estimating $\beta$ more precisely is the better deal.)
  • What remains to do in such a model: just test whether each estimated coefficient is 0 or not. (This is called a significance test.)

- Examples of (orthogonal) bases

  • Mixing red and blue at 255,255 gives purple.
  • Mixing red, blue, and yellow at 255,255,255 each gives black.
  • Any color can be expressed as a combination of red, blue, and yellow: $\text{color}= \text{red}\cdot\beta_1 + \text{blue}\cdot\beta_2 + \text{yellow}\cdot\beta_3$.
  • (red, blue, yellow) is a basis for color. (Find suitable $\beta_1,\beta_2,\beta_3$ and any color can be expressed.)
  • (red, purple, yellow) can also be viewed as a basis for color. (When blue is needed, take purple minus red.)
  • (red, purple, black) can also be viewed as a basis. (When blue is needed, take purple minus red; when yellow is needed, take black minus purple.)
  • Of these, (red, blue, yellow) is the orthogonal basis.

- Take away from problem 3: (1) the concept of an orthogonal basis (revisited later); (2) expressing an arbitrary color requires 3 basis elements. A small numerical sketch of point (1) follows below.
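
A minimal sketch of point (1), assuming an equally spaced grid and made-up true coefficients (none of the numbers below are the exam's): with regressors $1, \cos(t), \dots, \cos(5t)$ the design columns are mutually orthogonal, so each coefficient is estimated independently and unused ones are estimated near 0.

n = 1000
t = np.linspace(0, 2*np.pi, n, endpoint=False)   # equally spaced grid over one period
A = np.stack([np.ones(n)] + [np.cos(k*t) for k in range(1,6)], axis=1)  # 1, cos(t), ..., cos(5t)
y = 1.0 + 2*np.cos(t) - np.cos(2*t) + 0.5*np.cos(5*t) + np.random.randn(n)
np.round(np.linalg.lstsq(A, y, rcond=None)[0], 2)   # roughly [1., 2., -1., 0., 0., 0.5]

The $\cos(3t)$ and $\cos(4t)$ coefficients shrink to 0 as $n$ grows; nothing had to be excluded by hand.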

Midterm problem 1-(3)

- Let's draw a picture.

_x= tf.constant(np.arange(1,10001)/10000)    # 10,000 x-values in (0,1]
_y= tnp.random.randn(10000) + (0.5 + 2*_x)   # y = 0.5 + 2x + standard normal noise
plt.plot(_x,_y,'.',alpha=0.1)
[<matplotlib.lines.Line2D at 0x7efe0046c580>]

- Do we really have to pool all 10,000 of those points to compute the loss?

plt.plot(_x,_y,'.',alpha=0.1)
plt.plot(_x[::10],_y[::10],'.')
[<matplotlib.lines.Line2D at 0x7efe00376850>]

- Wouldn't collecting roughly this many points give a similar answer? $\to$ Let's try! (A quick least-squares check below.)
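
A quick least-squares check of the idea (the helper names here are mine): fit $y=\beta_0+\beta_1x$ on all 10,000 points and on every 10th point, and compare.

x = np.array(_x); Y = np.array(_y)
A = np.stack([np.ones_like(x), x], axis=1)                     # design matrix [1, x]
beta_full, *_ = np.linalg.lstsq(A, Y, rcond=None)              # all 10,000 points
beta_sub, *_ = np.linalg.lstsq(A[::10], Y[::10], rcond=None)   # every 10th point
beta_full, beta_sub   # both estimates should land near (0.5, 2)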

Gradient descent and stochastic gradient descent

- Suppose we have 10 samples: $\{(x_i,y_i)\}_{i=1}^{10}$.

ver1: compute the slope using all the samples

(epoch1) $loss=\sum_{i=1}^{10}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$

(epoch2) $loss=\sum_{i=1}^{10}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$

...

ver2: compute the slope using only one sample at a time

(epoch1)

  • $loss=(y_1-\beta_0-\beta_1x_1)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=(y_2-\beta_0-\beta_1x_2)^2 \quad \to \quad slope \quad \to \quad update$
  • ...
  • $loss=(y_{10}-\beta_0-\beta_1x_{10})^2 \quad \to \quad slope \quad \to \quad update$

(epoch2)

  • $loss=(y_1-\beta_0-\beta_1x_1)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=(y_2-\beta_0-\beta_1x_2)^2 \quad \to \quad slope \quad \to \quad update$
  • ...
  • $loss=(y_{10}-\beta_0-\beta_1x_{10})^2 \quad \to \quad slope \quad \to \quad update$

...

ver3: compute the slope using only $m(\leq n)$ samples

Let $m=3$.

(epoch1)

  • $loss=\sum_{i=1}^{3}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=\sum_{i=4}^{6}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=\sum_{i=7}^{9}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=(y_{10}-\beta_0-\beta_1x_{10})^2 \quad \to \quad slope \quad \to \quad update$

(epoch2)

  • $loss=\sum_{i=1}^{3}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=\sum_{i=4}^{6}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=\sum_{i=7}^{9}(y_i-\beta_0-\beta_1x_i)^2 \quad \to \quad slope \quad \to \quad update$
  • $loss=(y_{10}-\beta_0-\beta_1x_{10})^2 \quad \to \quad slope \quad \to \quad update$

...
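
Before pinning down the terminology, here is a minimal numpy sketch of one epoch of each version, assuming squared-error loss for $y=\beta_0+\beta_1x$ on toy data (the data, learning rate, and names below are illustrative):

rng = np.random.default_rng(0)
x = rng.random(10); y = 0.5 + 2*x + 0.1*rng.standard_normal(10)
b0, b1, lr = 0.0, 0.0, 0.01

def grad(xb, yb, b0, b1):
    r = yb - b0 - b1*xb                  # residuals on this batch
    return -2*r.sum(), -2*(r*xb).sum()   # slope of the loss w.r.t. b0, b1

# ver1: one update per epoch, from all n=10 samples
g0, g1 = grad(x, y, b0, b1); b0, b1 = b0 - lr*g0, b1 - lr*g1

# ver2: n updates per epoch, one sample at a time
for i in range(10):
    g0, g1 = grad(x[i:i+1], y[i:i+1], b0, b1); b0, b1 = b0 - lr*g0, b1 - lr*g1

# ver3: batches of m=3 in order; the last batch holds the single leftover sample
for i in range(0, 10, 3):
    g0, g1 = grad(x[i:i+3], y[i:i+3], b0, b1); b0, b1 = b0 - lr*g0, b1 - lr*g1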

Terminology

In the past (more precise usage)

- ver1: gradient descent, batch gradient descent

- ver2: stochastic gradient descent

- ver3: mini-batch gradient descent, mini-batch stochastic gradient descent

These days

- ver1: gradient descent

- ver2: stochastic gradient descent with batch size = 1

- ver3: stochastic gradient descent

note: why do people talk this way now? Because ver1 and ver2 are, for all practical purposes, methods nobody actually uses.

There are messier variants beyond ver1, 2, 3.

- In ver2 and ver3, the samples may also be shuffled each epoch.

- In ver3, there are variants where some samples never participate in training.

- Personal take: knowing roughly these three is enough; the rest don't look all that meaningful.

Discussion

- Key ideas

  • Memory usage: ver1 > ver3 > ver2
  • Computation speed: ver1 > ver3 > ver2
  • Getting trapped in local minima: ver1 > ver3 > ver2

- The essence: GPU memory is limited, so ver1 often cannot be used at all. ver2 uses the least GPU memory, but it is too unstable.

- Blog write-ups that aren't wrong but are awkwardly phrased

  • "Gradient descent often gets trapped in local minima; stochastic gradient descent was introduced to solve this." --> Stochastic gradient descent just happens to escape them more easily.
  • "Gradient descent is computationally slow; stochastic gradient descent was introduced to speed it up." --> "Fast" here means a single update is fast. Whether it converges faster overall is an open question; you have to run it to find out.

The fashion_mnist module

tf.keras.datasets.fashion_mnist.load_data()

type(tf.keras.datasets.fashion_mnist)
module
type(tf.keras.datasets.fashion_mnist.load_data)
function

Generating and exploring the data

- Generate the data with tf.keras.datasets.fashion_mnist.load_data()

tf.keras.datasets.fashion_mnist.load_data??
Signature: tf.keras.datasets.fashion_mnist.load_data()
Source:   
@keras_export('keras.datasets.fashion_mnist.load_data')
def load_data():
  """Loads the Fashion-MNIST dataset.

  This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories,
  along with a test set of 10,000 images. This dataset can be used as
  a drop-in replacement for MNIST.

  The classes are:

  | Label | Description |
  |:-----:|-------------|
  |   0   | T-shirt/top |
  |   1   | Trouser     |
  |   2   | Pullover    |
  |   3   | Dress       |
  |   4   | Coat        |
  |   5   | Sandal      |
  |   6   | Shirt       |
  |   7   | Sneaker     |
  |   8   | Bag         |
  |   9   | Ankle boot  |

  Returns:
    Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.

  **x_train**: uint8 NumPy array of grayscale image data with shapes
    `(60000, 28, 28)`, containing the training data.

  **y_train**: uint8 NumPy array of labels (integers in range 0-9)
    with shape `(60000,)` for the training data.

  **x_test**: uint8 NumPy array of grayscale image data with shapes
    (10000, 28, 28), containing the test data.

  **y_test**: uint8 NumPy array of labels (integers in range 0-9)
    with shape `(10000,)` for the test data.

  Example:

  ```python
  (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
  assert x_train.shape == (60000, 28, 28)
  assert x_test.shape == (10000, 28, 28)
  assert y_train.shape == (60000,)
  assert y_test.shape == (10000,)
  ```

  License:
    The copyright for Fashion-MNIST is held by Zalando SE.
    Fashion-MNIST is licensed under the [MIT license](
    https://github.com/zalandoresearch/fashion-mnist/blob/master/LICENSE).

  """
  dirname = os.path.join('datasets', 'fashion-mnist')
  base = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/'
  files = [
      'train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz',
      't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz'
  ]

  paths = []
  for fname in files:
    paths.append(get_file(fname, origin=base + fname, cache_subdir=dirname))

  with gzip.open(paths[0], 'rb') as lbpath:
    y_train = np.frombuffer(lbpath.read(), np.uint8, offset=8)

  with gzip.open(paths[1], 'rb') as imgpath:
    x_train = np.frombuffer(
        imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28)

  with gzip.open(paths[2], 'rb') as lbpath:
    y_test = np.frombuffer(lbpath.read(), np.uint8, offset=8)

  with gzip.open(paths[3], 'rb') as imgpath:
    x_test = np.frombuffer(
        imgpath.read(), np.uint8, offset=16).reshape(len(y_test), 28, 28)

  return (x_train, y_train), (x_test, y_test)
File:      ~/anaconda3/envs/tfcpu/lib/python3.9/site-packages/keras/datasets/fashion_mnist.py
Type:      function

- From the function's return value, we can see the code comes out cleanest if we receive the result as (x_train, y_train), (x_test, y_test).

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

- Inspecting the data

x_train.shape, y_train.shape
((60000, 28, 28), (60000,))
  • x appears to contain 60000 observations (the number of images).
  • For a single obs, x has dimension (28,28).
  • For a single obs, y is a scalar.
x_test.shape, y_test.shape
((10000, 28, 28), (10000,))
  • The train:test split ratio is 6:1.

- What does a single observation mean?

plt.imshow(x_train[0])
<matplotlib.image.AxesImage at 0x7efe002ab880>
  • A shoe?
y_train[0]
9
  • x is the shoe image; y is the number 9, the label meaning ankle boot.
np.where(y_train == 9)
(array([    0,    11,    15, ..., 59932, 59970, 59978]),)
  • Inspect other obs with y == 9.
plt.imshow(x_train[11])
<matplotlib.image.AxesImage at 0x7efe0021e3d0>
  • Similar to obs 0.
y_train
array([9, 0, 0, ..., 3, 0, 5], dtype=uint8)
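
For readability, a label-to-name lookup can be built from the class table in the docstring above (the list literal itself is mine):

class_names = ['T-shirt/top','Trouser','Pullover','Dress','Coat',
               'Sandal','Shirt','Sneaker','Bag','Ankle boot']
class_names[y_train[0]]   # -> 'Ankle boot'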

Data structure

x_train.shape
(60000, 28, 28)

- ${\bf X}$: (n,28,28), images of 28$\times$28 pixels

- ${\bf y}$: (n,), the labels for the images (coded as the digits 0-9)

Example 1

Organizing the data

- Keep only the images corresponding to y=0,1, since logistic regression is what we have learned. (A quick class-balance check follows the code below.)

y= y_train[(y_train == 0) | (y_train == 1)].reshape(-1,1)    # labels: 0 = T-shirt/top, 1 = Trouser
X= x_train[(y_train == 0) | (y_train == 1)].reshape(-1,784)  # flatten each 28x28 image to length 784
yy= y_test[(y_test == 0) | (y_test == 1)].reshape(-1,1)
XX= x_test[(y_test == 0) | (y_test== 1)].reshape(-1,784)
X.shape,y.shape, XX.shape,yy.shape
((12000, 784), (12000, 1), (2000, 784), (2000, 1))
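
A quick sanity check (Fashion-MNIST's training split is balanced at 6,000 images per class, so 12,000 observations here is expected):

np.unique(y, return_counts=True)   # -> (array([0, 1], dtype=uint8), array([6000, 6000]))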

Solution 1: a network with one hidden layer // epochs=100

gv('''
splines=line
subgraph cluster_1{
    style=filled;
    color=lightgrey;
    "x1"
    "x2"
    ".."
    "x784"
    label = "Layer 0"
}
subgraph cluster_2{
    style=filled;
    color=lightgrey;
    "x1" -> "node1"
    "x2" -> "node1"
    ".." -> "node1"
    
    "x784" -> "node1"
    "x1" -> "node2"
    "x2" -> "node2"
    ".." -> "node2"
    "x784" -> "node2"
    
    "x1" -> "..."
    "x2" -> "..."
    ".." -> "..."
    "x784" -> "..."

    "x1" -> "node30"
    "x2" -> "node30"
    ".." -> "node30"
    "x784" -> "node30"


    label = "Layer 1: relu"
}
subgraph cluster_3{
    style=filled;
    color=lightgrey;
    "node1" -> "y"
    "node2" -> "y"
    "..." -> "y"
    "node30" -> "y"
    label = "Layer 2: sigmoid"
}
''')
(rendered network diagram: Layer 0 = inputs x1..x784, Layer 1 = 30 relu nodes, Layer 2 = sigmoid output y)
tf.random.set_seed(43052)
net = tf.keras.Sequential() 
net.add(tf.keras.layers.Dense(30,activation='relu'))
net.add(tf.keras.layers.Dense(1,activation='sigmoid'))
net.compile(optimizer='sgd',loss=tf.losses.binary_crossentropy)
net.fit(X,y,epochs=100,batch_size=12000)  # batch_size = n, so this is ver1: one update per epoch

Epoch 1/100
1/1 [==============================] - 0s 150ms/step - loss: 220.9145
Epoch 2/100
1/1 [==============================] - 0s 9ms/step - loss: 6800.3174
Epoch 3/100
1/1 [==============================] - 0s 9ms/step - loss: 0.7045
Epoch 4/100
1/1 [==============================] - 0s 9ms/step - loss: 0.7012
...
Epoch 99/100
1/1 [==============================] - 0s 8ms/step - loss: 0.6932
Epoch 100/100
1/1 [==============================] - 0s 8ms/step - loss: 0.6932
<keras.callbacks.History at 0x7efe760c9d00>
np.mean((net(X)>0.5) == y)
0.5000833333333333
np.mean((net(XX)>0.5) == yy)
0.5
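
- Both train and test accuracy sit at chance level. The loss plateaued near $\ln 2 \approx 0.6931$, which is exactly the binary cross-entropy of predicting 0.5 for every image: plain full-batch sgd made essentially no progress here.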

Solution 2: a better optimizer

tf.random.set_seed(43055)
net = tf.keras.Sequential() 
net.add(tf.keras.layers.Dense(30,activation='relu'))
net.add(tf.keras.layers.Dense(1,activation='sigmoid'))
net.compile(optimizer='adam',loss=tf.losses.binary_crossentropy)  # the only change from solution 1: sgd -> adam
net.fit(X,y,epochs=100,batch_size=12000) 

Epoch 1/100
1/1 [==============================] - 0s 143ms/step - loss: 100.9425
Epoch 2/100
1/1 [==============================] - 0s 9ms/step - loss: 44.4441
Epoch 3/100
1/1 [==============================] - 0s 9ms/step - loss: 29.2322
Epoch 4/100
1/1 [==============================] - 0s 9ms/step - loss: 22.6921
...
Epoch 99/100
1/1 [==============================] - 0s 8ms/step - loss: 0.1132
Epoch 100/100
1/1 [==============================] - 0s 8ms/step - loss: 0.1110
<keras.callbacks.History at 0x7efe756ece50>
np.mean((net(X)>0.5) == y)
0.99175
np.mean((net(XX)>0.5) == yy)
0.98
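
- Merely swapping the optimizer from sgd to adam lifts accuracy from chance level to about 0.992 on train and 0.980 on test.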

Solution 3: add metrics=['accuracy'] at compile time

tf.random.set_seed(43055)
net = tf.keras.Sequential() 
net.add(tf.keras.layers.Dense(30,activation='relu'))
net.add(tf.keras.layers.Dense(1,activation='sigmoid'))
net.compile(optimizer='adam',loss=tf.losses.binary_crossentropy,metrics=['accuracy'])  # accuracy is now reported during fit/evaluate
net.fit(X,y,epochs=100,batch_size=12000) 

Epoch 1/100
1/1 [==============================] - 0s 234ms/step - loss: 100.9425 - accuracy: 0.4988
Epoch 2/100
1/1 [==============================] - 0s 9ms/step - loss: 44.4441 - accuracy: 0.3741
Epoch 3/100
1/1 [==============================] - 0s 8ms/step - loss: 29.2322 - accuracy: 0.4321
Epoch 4/100
1/1 [==============================] - 0s 9ms/step - loss: 22.6921 - accuracy: 0.5399
...
Epoch 99/100
1/1 [==============================] - 0s 8ms/step - loss: 0.1132 - accuracy: 0.9912
Epoch 100/100
1/1 [==============================] - 0s 8ms/step - loss: 0.1110 - accuracy: 0.9917
<keras.callbacks.History at 0x7efe74e40d90>
net.evaluate(X,y)
375/375 [==============================] - 0s 363us/step - loss: 0.1086 - accuracy: 0.9918
[0.10858087986707687, 0.9917500019073486]
net.evaluate(XX,yy)
63/63 [==============================] - 0s 589us/step - loss: 0.2933 - accuracy: 0.9800
[0.29328083992004395, 0.9800000190734863]

Solution 4: stochastic gradient descent // epochs=10

tf.random.set_seed(43055)
net = tf.keras.Sequential() 
net.add(tf.keras.layers.Dense(30,activation='relu'))
net.add(tf.keras.layers.Dense(1,activation='sigmoid'))
net.compile(optimizer='adam',loss=tf.losses.binary_crossentropy,metrics=['accuracy'])
net.fit(X,y,epochs=10,batch_size=120)  # 12000/120 = 100 updates per epoch (ver3)

Epoch 1/10
100/100 [==============================] - 0s 848us/step - loss: 3.5696 - accuracy: 0.9374
Epoch 2/10
100/100 [==============================] - 0s 748us/step - loss: 0.3958 - accuracy: 0.9795
Epoch 3/10
100/100 [==============================] - 0s 830us/step - loss: 0.2030 - accuracy: 0.9862
Epoch 4/10
100/100 [==============================] - 0s 737us/step - loss: 0.1828 - accuracy: 0.9853
Epoch 5/10
100/100 [==============================] - 0s 763us/step - loss: 0.1633 - accuracy: 0.9863
Epoch 6/10
100/100 [==============================] - 0s 737us/step - loss: 0.0990 - accuracy: 0.9891
Epoch 7/10
100/100 [==============================] - 0s 683us/step - loss: 0.0633 - accuracy: 0.9908
Epoch 8/10
100/100 [==============================] - 0s 742us/step - loss: 0.0641 - accuracy: 0.9907
Epoch 9/10
100/100 [==============================] - 0s 731us/step - loss: 0.0693 - accuracy: 0.9894
Epoch 10/10
100/100 [==============================] - 0s 783us/step - loss: 0.0316 - accuracy: 0.9923
<keras.callbacks.History at 0x7efe7581ce50>
net.evaluate(X,y)
375/375 [==============================] - 0s 346us/step - loss: 0.0360 - accuracy: 0.9919
[0.03597424924373627, 0.9919166564941406]
net.evaluate(XX,yy)
63/63 [==============================] - 0s 497us/step - loss: 0.1805 - accuracy: 0.9810
[0.18054614961147308, 0.9810000061988831]
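
- Note that with batch_size=120 (100 updates per epoch, i.e. ver3), 10 epochs already match the 100-epoch full-batch runs above: many cheap updates per epoch beat one expensive one here.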