📖STUDY Machine Learning in the Web Browser with TensorFlow.js | Classification with Logistic Regression

Binary Classification

Classification์€ ์ง€๋„ํ•™์Šต ์ค‘ ํ•œ ์ข…๋ฅ˜. ์ด์ง„ ๋ถ„๋ฅ˜๋Š” linearly seperatable ๋ฌธ์ œ์— ํ•ด๋‹นํ•œ๋‹ค.์‹œ๊ทธ๋ชจ์ด๋“œ ํ•จ์ˆ˜๋Š” (0,1)์˜ ๋ฒ”์œ„๋ฅผ ๊ฐ–๋Š”๋‹ค. ๋ฒ ์ด์ฆˆ ์ •๋ฆฌ์— ๋”ฐ๋ฅด๋ฉด ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜๋œ๋‹ค.

$$p(C_1 \mid x) = \sigma(w^T x) = \frac{1}{1 + e^{-a}} = \sigma(a), \quad a = w^T x$$

a=lnโกฯƒ1โˆ’ฯƒa = \ln\frac{\sigma}{1-\sigma}

  • logit function: the inverse of the sigmoid function; the log of the ratio of the probabilities of belonging to the two classes
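This inverse relationship can be checked in plain JavaScript (no TensorFlow.js needed; the function names here are illustrative):

```javascript
// sigmoid maps a log-odds value a to a probability in (0, 1);
// logit maps the probability back to the log-odds.
const sigmoid = (a) => 1 / (1 + Math.exp(-a));
const logit = (s) => Math.log(s / (1 - s));

const a = 2.0;
const s = sigmoid(a);   // ≈ 0.8808
console.log(logit(s));  // recovers ≈ 2.0
```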

๊ฐ ํด๋ž˜์Šค ๋ณ„ ์กฐ๊ฑด๋ถ€ ํ™•๋ฅ ์„ ๋กœ์ง€์Šคํ‹ฑ ์‹œ๊ทธ๋ชจ์ด๋“œ ํ•จ์ˆ˜๋กœ ํ‘œํ˜„์ด ๊ฐ€๋Šฅํ•จ.

  • ๋ชจ๋ธ์˜ ์ž…๋ ฅ ๋ฒกํ„ฐ์™€ ๊ฐ€์ค‘์น˜ ํŒŒ๋ผ๋ฏธํ„ฐ ์‚ฌ์ด์˜ ์„ ํ˜• ๊ด€๊ณ„ ์ „์ œ ์กฐ๊ฑด:
    • ๊ฐ ํด๋ž˜์Šค์˜ ๊ฐ€์šฐ์‹œ์•ˆ ๋ถ„ํฌ๊ฐ€ ๋™์ผํ•œ ๊ณต๋ถ„์‚ฐ ํ–‰๋ ฌ์„ ๊ฐ€์ ธ์•ผ ํ•จ.
    • ์ด๋Š” ๋ชจ๋“  ์ƒ˜ํ”Œ์ด ๊ฐ™์€ ๊ณต๋ถ„์‚ฐ ํ–‰๋ ฌ์„ ๊ณต์œ ํ•˜๋Š” ๊ฐ€์šฐ์‹œ์•ˆ ๋ถ„ํฌ์—์„œ ์ƒ์„ฑ๋˜์—ˆ๋‹ค๋Š” ๊ฐ€์ •์œผ๋กœ ๋ชฉ์  ํ•จ์ˆ˜๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Œ.
  • loss function: cross-entropy โžก ๋ฐ˜๋ณต ์ตœ์ ํ™” ์ฒ˜๋ฆฌํ•  ๋•Œ ์œ ์šฉ
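For $N$ training samples with targets $t_n \in \{0, 1\}$ and predictions $y_n = \sigma(w^T x_n)$, the cross-entropy loss takes the standard form:

```latex
E(w) = -\sum_{n=1}^{N} \bigl\{ t_n \ln y_n + (1 - t_n) \ln (1 - y_n) \bigr\}
```

Its gradient, $\nabla E(w) = \sum_n (y_n - t_n)\, x_n$, has the same simple form as the least-squares gradient for linear regression, which is why this loss works well with iterative, gradient-based optimization.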

Logistic regression using the Core API

import * as tf from "@tensorflow/tfjs";

// Two Gaussian clusters centered at (2, 1) and (-2, -1)
const N = 100;
const c1 = tf.randomNormal([N, 2]).add([2.0, 1.0]);
const c2 = tf.randomNormal([N, 2]).add([-2.0, -1.0]);

const l1 = tf.ones([N, 1]);  // labels for class 1
const l2 = tf.zeros([N, 1]); // labels for class 2

const xs = c1.concat(c2);
const input = xs.concat(tf.ones([2 * N, 1]), 1); // append a column of 1s as bias
const ys = l1.concat(l2);

const w = tf.randomNormal([3, 1]).variable();

// f_x returns logits; apply tf.sigmoid to get probabilities
const f_x = (x) => {
  return x.matmul(w);
};

// tf.losses.sigmoidCrossEntropy expects (labels, logits) in that order
const loss = (label, logits) => {
  return tf.losses.sigmoidCrossEntropy(label, logits).asScalar();
};

const losses = [];
const optimizer = tf.train.adam(0.07);
for (let i = 0; i < 100; i++) {
  const l = optimizer.minimize(() => loss(ys, f_x(input)), true);
  losses.push(l.dataSync());
}
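After training, the learned weights define a linear decision boundary $w_1 x_1 + w_2 x_2 + b = 0$: a point belongs to class 1 when $\sigma(w \cdot x) > 0.5$, i.e. exactly when the logit is positive. A plain-JavaScript sketch of this decision rule, with hypothetical values standing in for the trained weights:

```javascript
// Hypothetical stand-ins for a trained weight vector [w1, w2, bias].
const w = [1.1, 0.9, 0.2];

const predict = (x1, x2) => {
  const logit = w[0] * x1 + w[1] * x2 + w[2]; // w·x with bias appended
  return logit > 0 ? 1 : 0; // sigmoid(logit) > 0.5 ⟺ logit > 0
};

console.log(predict(2.0, 1.0));   // point near the (2, 1) cluster  → 1
console.log(predict(-2.0, -1.0)); // point near the (-2, -1) cluster → 0
```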

Logistic regression using the Layers API


const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, batchInputShape: [null, 2] }));

// custom loss: tf.losses.sigmoidCrossEntropy expects (labels, logits)
const loss = (label, pred) => {
  return tf.losses.sigmoidCrossEntropy(label, pred).asScalar();
};

model.compile({
    loss: loss,
    optimizer: 'adam',
    metrics: ['accuracy']
});

async function training() {
    const history = await model.fit(xs, ys, {epochs: 100});
    ...
    model.predict(xs);
}

training();

A logistic regression model can also be trained with machinelearning.js.
