KECEPATAN JARINGAN BACKPROPAGATION BERDASARKAN LAJU PEMAHAMAN (LEARNING RATE) PADA MODEL NEURON 10-18-1 [The Speed of the Backpropagation Network Based on Learning Rate in the 10-18-1 Neuron Model]

Wulandari, Adesti (2016) KECEPATAN JARINGAN BACKPROPAGATION BERDASARKAN LAJU PEMAHAMAN (LEARNING RATE) PADA MODEL NEURON 10-18-1. Bachelor thesis, Universitas Muhammadiyah Purwokerto.

Files:

ADESTI WULANDARI_COVER.pdf: Download (1MB) | Preview
ADESTI WULANDARI_BAB I.pdf: Download (811kB) | Preview
ADESTI WULANDARI_BAB II.pdf: Download (1MB) | Preview
ADESTI WULANDARI_BAB III.pdf: Restricted to Repository staff only. Download (679kB)
ADESTI WULANDARI_BAB IV.pdf: Restricted to Repository staff only. Download (1MB)
ADESTI WULANDARI_BAB V.pdf: Restricted to Repository staff only. Download (978kB)
ADESTI WULANDARI_BAB VI.pdf: Restricted to Repository staff only. Download (761kB)
ADESTI WULANDARI_DAFTAR PUSTAKA.pdf: Download (764kB) | Preview
ADESTI WULANDARI_LAMPIRAN.pdf: Restricted to Repository staff only. Download (2MB)

Abstract

An artificial neural network (ANN) is a form of artificial intelligence that imitates the way the human brain solves problems. An ANN is built around a learning algorithm that is used to train the network. There are two kinds of learning algorithms in ANNs: supervised learning and unsupervised learning. Backpropagation is one of the supervised learning algorithms. The performance of this algorithm is influenced by several network parameters, such as the number of neurons in the input layer, the number of neurons in the hidden layer, the maximum epoch, and the learning rate. The network measure of interest here is how fast the network completes data training. Previous research had not compared the 12 training algorithms on this problem. Therefore, this research trains the network with each of the 12 algorithms and analyzes the results with one-way ANOVA at alpha (α) = 5% to determine the training speed of each algorithm as a function of the learning rate. This research uses a mixed method. The research data are obtained by running each of the 12 training algorithms 20 times for each learning rate (lr). The number of neurons in the input layer is 10, the number of neurons in the hidden layer (n) is 18, and the number of output neurons is 1. The lr values used are 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0. The result of this study is that the fastest training algorithm is Gradient Descent with Adaptive Learning Rate (traingda) at learning rate (lr) = 0.9, with an average training speed of 0.007485 ± 0.0004782, at the level alpha (α) = 5%.
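The traingda name suggests the thesis used MATLAB's Neural Network Toolbox training algorithms. As a rough illustration of the experimental design only, the sketch below substitutes scikit-learn's MLPRegressor and synthetic data (both assumptions, not from the source): it times 20 training runs of a 10-18-1 network for each learning rate and then applies one-way ANOVA at α = 5% to the timings.

import time
import warnings
import numpy as np
from scipy.stats import f_oneway
from sklearn.neural_network import MLPRegressor

warnings.filterwarnings("ignore")  # silence convergence warnings from short runs

# Hypothetical synthetic data; the thesis's actual dataset is not described here.
rng = np.random.default_rng(0)
X = rng.random((200, 10))          # 10 neurons in the input layer
y = rng.random(200)                # 1 output neuron

LEARNING_RATES = [0.01, 0.05, 0.1, 0.2, 0.3, 0.4,
                  0.5, 0.6, 0.7, 0.8, 0.9, 1.0]   # the 12 lr values from the abstract
REPEATS = 20                        # 20 training runs per learning rate

def train_once(lr: float) -> float:
    """Train a 10-18-1 network once and return the wall-clock training time."""
    net = MLPRegressor(hidden_layer_sizes=(18,),   # 18 neurons in the hidden layer
                       solver="sgd",
                       learning_rate_init=lr,
                       max_iter=200)
    start = time.perf_counter()
    net.fit(X, y)
    return time.perf_counter() - start

# Collect 20 timings per learning rate, then test at alpha = 5% whether the
# mean training time differs across learning rates (one-way ANOVA).
groups = [[train_once(lr) for _ in range(REPEATS)] for lr in LEARNING_RATES]
stat, p = f_oneway(*groups)
print(f"F = {stat:.3f}, p = {p:.4f} -> "
      f"{'significant' if p < 0.05 else 'not significant'} at alpha = 0.05")

The thesis applies the same ANOVA logic across the 12 training algorithms as well; extending the sketch would mean repeating the timing loop per algorithm and comparing the resulting groups in the same way.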

Item Type: Thesis (Bachelor)
Additional Information: Supervisor (Pembimbing): Hindayati Mustafidah, S.Si., M.Kom.
Uncontrolled Keywords: backpropagation; learning rate; neuron; hidden layer; ANOVA
Subjects: T Technology > T Technology (General)
Divisions: Fakultas Teknik > Teknik Informatika
Depositing User: Riski Wismana
Date Deposited: 29 Jun 2019 03:16
Last Modified: 29 Jun 2019 03:16
URI: http://repository.ump.ac.id/id/eprint/8710
