DEVI, NADIA CHANDRA (2018) Determining the Accuracy of Artificial Neural Network Training Algorithms Based on the Smallest Error Rate [PENENTUAN AKURASI ALGORITMA PELATIHAN JARINGAN SYARAF TIRUAN BERDASARKAN TINGKAT ERROR YANG PALING KECIL]. Bachelor thesis, Universitas Muhammadiyah Purwokerto.
NADIA CHANDRA DEVI_COVER.pdf — Download (1MB) | Preview
NADIA CHANDRA DEVI_BAB I.pdf — Download (830kB) | Preview
NADIA CHANDRA DEVI_BAB II.pdf — Download (1MB) | Preview
NADIA CHANDRA DEVI_BAB III.pdf — Download (605kB) | Preview
NADIA CHANDRA DEVI_BAB IV.pdf — Download (1MB) | Restricted to Repository staff only
NADIA CHANDRA DEVI_BAB V.pdf — Download (1MB) | Restricted to Repository staff only
NADIA CHANDRA DEVI_BAB VI.pdf — Download (717kB) | Restricted to Repository staff only
NADIA CHANDRA DEVI_DAFTAR PUSTAKA.pdf — Download (824kB) | Preview
NADIA CHANDRA DEVI_LAMPIRAN.pdf — Download (2MB) | Restricted to Repository staff only
Abstract
One of the most popular supervised learning models for artificial neural networks is backpropagation. Because the backpropagation method provides 12 (twelve) training functions, the question arises of which training function, applied to pattern data, produces the best results. This study therefore examined the 12 training algorithms of the backpropagation network to determine which is most optimal in terms of the error produced. A previous study showed that the most optimal training function, judged by its error, was Levenberg-Marquardt, with an average MSE of 0.001001 at significance level α = 5%. This research uses a mixed method: development research combined with quantitative and qualitative testing (using ANOVA). The input data were generated randomly for a network with 5 input neurons. With network control parameters of target error = 0.001, maximum epoch = 10000, 9 neurons in the hidden layer, and learning rates (lr) of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1, the training algorithm with the smallest network error was again Levenberg-Marquardt, with an average error of 0.000156582770 at learning rate lr = 0.2. These results can be used by researchers and educators to build projects or applications in the field of artificial neural networks and to support the development of science and technology.
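The comparison protocol described in the abstract (train a 5-input, 9-hidden-neuron network at each learning rate, then pick the configuration with the smallest final MSE) can be sketched as follows. This is a minimal illustration only: it uses plain gradient-descent backpropagation in NumPy rather than the 12 MATLAB training functions (such as Levenberg-Marquardt) the thesis actually evaluated, and the random data and network sizes are taken from the abstract's stated parameters.

```python
import numpy as np

def train_mse(lr, X, y, hidden=9, epochs=2000, seed=0):
    """Train a 5-9-1 sigmoid network with plain gradient-descent
    backpropagation and return the final mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)        # hidden-layer activations
        out = sig(h @ W2 + b2)      # network output
        err = out - y               # prediction error
        # Backpropagate through the sigmoid output and hidden layers.
        d_out = err * out * (1 - out)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out) / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_hid) / len(X)
        b1 -= lr * d_hid.mean(axis=0)
    return float(np.mean(err ** 2))

# Random pattern data with 5 input neurons, as in the abstract.
rng = np.random.default_rng(42)
X = rng.random((50, 5))
y = (X.mean(axis=1, keepdims=True) > 0.5).astype(float)

# Sweep the learning rates listed in the abstract and keep the best.
rates = [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
results = {lr: train_mse(lr, X, y) for lr in rates}
best_lr = min(results, key=results.get)
print(f"best lr = {best_lr}, MSE = {results[best_lr]:.6f}")
```

With vanilla gradient descent the winning learning rate will generally differ from the thesis's lr = 0.2 result, which was obtained with Levenberg-Marquardt; the sketch only shows the selection procedure, not the thesis's numerical findings.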
Item Type: Thesis (Bachelor)
Uncontrolled Keywords: artificial neural networks, training algorithm, backpropagation, ANOVA, hidden layer, error, epoch, learning rate
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Amri Hariri, SIP.
Date Deposited: 04 Oct 2021 05:49
Last Modified: 04 Oct 2021 05:49
URI: https://repository.ump.ac.id:80/id/eprint/10510