AMARILIS, SYAFRILLA GEZA (2018) PENGUJIAN ALGORITMA PELATIHAN JARINGAN BACKPROPAGATION BERDASARKAN EROR JARINGAN [Testing of Backpropagation Network Training Algorithms Based on Network Error]. Bachelor thesis, Universitas Muhammadiyah Purwokerto.
SYAFRILLA GEZA AMARILIS_COVER.pdf | Download (2MB)
SYAFRILLA GEZA AMARILIS_BAB I.pdf | Download (720kB)
SYAFRILLA GEZA AMARILIS_BAB II.pdf | Download (1MB)
SYAFRILLA GEZA AMARILIS_BAB III.pdf | Download (603kB)
SYAFRILLA GEZA AMARILIS_BAB IV.pdf | Download (1MB) | Restricted to Registered users only
SYAFRILLA GEZA AMARILIS_BAB V.pdf | Download (1MB) | Restricted to Registered users only
SYAFRILLA GEZA AMARILIS_BAB VI.pdf | Download (717kB) | Restricted to Registered users only
SYAFRILLA GEZA AMARILIS_DAFTAR PUSTAKA.pdf | Download (824kB)
SYAFRILLA GEZA AMARILIS_LAMPIRAN.pdf | Download (2MB) | Restricted to Registered users only
Abstract
Backpropagation is a supervised learning algorithm used by a multilayer perceptron to adjust the weights associated with the neurons in its hidden layer. The backpropagation network provides 12 training algorithms. The training algorithm is the most important part of an artificial neural network (ANN), a biologically inspired model consisting of several processing elements (neurons) and the connections between them. Training algorithms in a backpropagation network are governed by several network parameters: the number of neurons in the input layer, the maximum epoch, the learning rate, and the error goal (MSE). The performance of a training algorithm can be judged from the error the network produces: the smaller the error, the more optimal the algorithm's performance. Tests in a previous study analyzed accuracy for varying numbers of hidden-layer neurons (n = 2, 3, 5, 7, 9) using the Levenberg-Marquardt algorithm, with an error target of 0.001 and a maximum epoch of 1000. In this study, all 12 algorithms were tested with 7 neurons in the hidden layer, an error target of 0.001, and a maximum epoch of 10000. The study used a mixed method, namely development research with qualitative analysis and statistical analysis. The data were random, with 5 neurons in the input layer and 1 neuron in the output layer. The configuration with 7 neurons in the hidden layer produced the smallest error, 0.000126175395 ± 0.0001591834121, at a learning rate (lr) of 0.6.
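To make the reported configuration concrete, the following is a minimal NumPy sketch of a backpropagation network with the parameters described above (5 input neurons, 7 hidden neurons, 1 output neuron, learning rate 0.6, error goal 0.001, maximum of 10000 epochs, random data). The thesis compares 12 predefined training algorithms (such as Levenberg-Marquardt), so the plain gradient-descent update, sigmoid activations, sample count, and weight initialization here are illustrative assumptions, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network shape from the abstract: 5 input, 7 hidden, 1 output neuron.
n_in, n_hid, n_out = 5, 7, 1
lr = 0.6            # learning rate reported as best-performing
goal = 1e-3         # MSE error target
max_epoch = 10_000  # maximum epoch

# Random training data, as in the study; 50 samples is an assumption.
X = rng.random((50, n_in))
y = rng.random((50, n_out))

# Small random weight initialization (also an assumption).
W1 = rng.standard_normal((n_in, n_hid)) * 0.5
b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, n_out)) * 0.5
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(max_epoch):
    # Forward pass
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # network output
    err = out - y
    mse = np.mean(err ** 2)
    if mse < goal:              # stop once the error goal is met
        break
    # Backward pass: delta rule through the sigmoid derivatives
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Batch gradient-descent weight updates
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X)
    b1 -= lr * d_hid.mean(axis=0)

print(f"stopped at epoch {epoch}, MSE = {mse:.6f}")
```

On purely random targets the error goal may never be reached; the loop then stops at the epoch limit, mirroring the maximum-epoch stopping criterion used in the study.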
Supervisor: | Mustafidah, Hindayati |
Item Type: | Thesis (Bachelor) |
Uncontrolled Keywords: | Hidden layer, backpropagation, learning rate, training algorithm, ANOVA |
Subjects: | Q Science > QA Mathematics > QA76 Computer software |
Divisions: | Fakultas Teknik dan Sains > Teknik Informatika S1 |
Depositing User: | Amri Hariri |
Date Deposited: | 07 Oct 2021 05:17 |
Last Modified: | 22 May 2024 03:06 |
URI: | http://repository.ump.ac.id/id/eprint/10547 |