Category: MACHINE LEARNING

  • 2019.12.09(pm): Autoencoder

    The autoencoder is a neural network that takes a feature vector x and outputs the same or a similar vector x′. The figure below shows the structure of an autoencoder. Since the output must match the input, the number of nodes in the output layer and the number of nodes in the…
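A minimal sketch of that structure, with hypothetical layer sizes and random (untrained) weights, just to show the input and output layers having the same number of nodes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8-dimensional input compressed to a 3-dimensional code.
n_in, n_hidden = 8, 3
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_in))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_in, n_hidden))  # decoder weights

def autoencoder_forward(x):
    """Encode x to a lower-dimensional code, then decode it back to x'."""
    code = np.tanh(W_enc @ x)   # bottleneck representation
    x_recon = W_dec @ code      # reconstruction x'
    return x_recon

x = rng.normal(size=n_in)
x_prime = autoencoder_forward(x)
# The output has the same dimensionality as the input.
print(x_prime.shape)  # (8,)
```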

  • 2019.12.09(am): Back propagation

    After the Perceptron was unveiled in 1958, the New York Times ran a sensational article on July 8 claiming that a computer would soon be developed that could walk, speak, and be conscious of its own existence. But in 1969, Marvin Minsky showed that solving the XOR problem requires a multilayer perceptron (MLP), and said there is no way…

  • 2019.12.08(pm): Multi-Layer Perceptron

    The perceptron has the limitation that it can only handle linearly separable problems. On the XOR classification problem, the perceptron reaches at most 75% accuracy, because whichever decision line is drawn, one of the four samples is misclassified. In 1969, Marvin Minsky, a founding figure of artificial intelligence, revealed this fact in his book Perceptrons, and…
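The 75% ceiling can be checked directly. A small sketch (grid search is an illustration, not an exhaustive proof) that tries many linear decision rules on the XOR truth table and reports the best accuracy any of them achieves:

```python
import itertools
import numpy as np

# XOR truth table: inputs and labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def accuracy(w1, w2, b):
    """Accuracy on XOR of the linear rule: predict 1 if w1*x1 + w2*x2 + b > 0."""
    pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
    return (pred == y).mean()

# Try many candidate decision lines; none classifies more than 3 of 4 samples.
grid = np.linspace(-2, 2, 21)
best = max(accuracy(w1, w2, b)
           for w1, w2, b in itertools.product(grid, grid, grid))
print(best)  # 0.75
```

Because XOR is not linearly separable, no choice of weights and bias can get all four samples right, so 3/4 = 75% is the best a single-layer perceptron can do.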

  • 2019.12.08(am): Types of Machine Learning

    Since machine learning is applied to many kinds of problems, there are correspondingly many types of machine learning. In the past, machine learning algorithms were largely divided into supervised learning and unsupervised learning. More recently, as reinforcement learning has grown in importance, the field is divided into supervised learning, unsupervised learning, and reinforcement learning. Supervised learning: In…

  • 2019.12.05(pm): Softmax function

    Machine learning problems can be divided into classification and regression. Classification asks which class the input data belongs to, while regression predicts a continuous numeric value from the input data. Accordingly, the output of a neural network can be produced with the identity function or the softmax function. This…
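A short sketch of the two output functions mentioned, using a numerically stable softmax (shifting by the maximum before exponentiating avoids overflow):

```python
import numpy as np

def softmax(z):
    """Softmax output function for classification: exponentiate and normalize
    so the outputs form a probability distribution over classes."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def identity(z):
    """Identity output function, typically used for regression outputs."""
    return z

scores = softmax([2.0, 1.0, 0.1])
print(scores.sum())  # probabilities sum to 1.0
```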

  • 2019.12.04(pm): Monte Carlo method

    The Monte Carlo method is the term for any algorithm that computes the value of a function probabilistically using random numbers. Frequently used in mathematics and physics, it approximates quantities whose exact value has no closed-form expression or is too complex to compute directly. Stanislaw Ulam named the method after Monte Carlo,…
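A classic illustration of the idea, estimating π by random sampling (the fraction of random points in the unit square that land inside the quarter circle approaches π/4):

```python
import numpy as np

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: sample points uniformly in the unit square
    and count the fraction that fall inside the quarter circle of radius 1."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    inside = (x**2 + y**2 <= 1.0).mean()
    return 4.0 * inside

print(estimate_pi(100_000))  # close to 3.14159...
```

The estimate improves as the sample count grows, with error shrinking roughly as 1/√n.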

  • 2019.12.01(pm): Stochastic Gradient Descent method

    Let’s talk about the cost function again. The cost function for learning the weights of Adaline (Adaptive Linear Neuron) is defined with respect to the i-th observation in the training data set as J(w) = ½ (y⁽ⁱ⁾ − ϕ(z⁽ⁱ⁾))², where ϕ is the activation function. To find the weights that minimize our cost function, we can use…
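A minimal sketch of the per-sample update this cost leads to, assuming the identity activation ϕ(z) = z and a small synthetic data set (the data and hyperparameters here are hypothetical):

```python
import numpy as np

def sgd_adaline(X, y, eta=0.01, epochs=50, seed=0):
    """Stochastic gradient descent for Adaline with identity activation:
    after each observation i, update w by eta * (y_i - w.x_i) * x_i."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):  # shuffle samples each epoch
            error = y[i] - X[i] @ w        # y_i - phi(z_i), phi = identity
            w += eta * error * X[i]        # step along the negative gradient
    return w

# Hypothetical noiseless data generated by y = 2*x1 - 1*x2.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
w = sgd_adaline(X, y)
print(w)  # approaches [2, -1]
```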

  • 2019.12.01(pm): Gradient descent method

    This article describes the gradient descent method, one of the basic function-optimization methods. Gradient descent is one of the most common ways to apply the derivative to optimization problems: it finds a local minimum of a function by repeatedly stepping in the direction opposite the gradient. The gradient descent method is also called the steepest descent method. Gradient descent…
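A minimal sketch of the iteration on a one-dimensional example (the function, step size, and step count are illustrative choices):

```python
def gradient_descent(grad, x0, eta=0.1, steps=200):
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        x = x - eta * grad(x)   # move opposite the slope
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges to 3.0
```

Too large a step size eta can overshoot and diverge; too small a value converges slowly, which is why the learning rate is the method's key hyperparameter.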

  • 2019.11.30(pm): Machine Learning – Linear Regression

    Today we will look at the concept of linear regression. The dictionary meaning of regression is “a return to an earlier state.” The term originates with the British geneticist Francis Galton and his study of the laws of inheritance. Studying the relationship between parents’ and children’s heights, Galton surveyed and tabulated the average heights of fathers and…

  • 2019.11.17(pm): CNN (Convolutional Neural Network)

    CNN and the Fully Connected Layer. Image data comes in array form: one color photograph is three-dimensional data, and photographs collected into a batch are 4D data. A fully connected (FC) neural network can be trained on photo data only after the 3D photo data is flattened into 1D. Artificial neural networks can be characterized…
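The flattening step can be sketched with NumPy (the 32×32×3 photo size here is a hypothetical example):

```python
import numpy as np

# Hypothetical color photo: 32x32 pixels with 3 color channels (3D data).
photo = np.zeros((32, 32, 3))

# A fully connected layer needs a 1D vector, so flatten the 3D array.
flat = photo.reshape(-1)
print(flat.shape)  # (3072,)

# A batch of photos is 4D data: (batch, height, width, channels).
batch = np.zeros((10, 32, 32, 3))
batch_flat = batch.reshape(10, -1)  # one flat vector per photo
print(batch_flat.shape)  # (10, 3072)
```

Flattening discards the spatial arrangement of pixels, which is exactly the information a convolutional layer is designed to preserve.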