Title
Slotting Learning Rate in Deep Neural Networks to Build Stronger Models
Date Issued
01 January 2021
Access level
metadata only access
Resource Type
conference paper
Author(s)
Publisher(s)
Institute of Electrical and Electronics Engineers Inc.
Abstract
In recent years, deep neural networks have made substantial progress in object recognition. However, one issue with deep learning is that it is often unclear which proposed architecture is best suited to a specific problem. As a result, several configurations must be tried before one that produces satisfactory results is found. This paper describes a distributed supervised learning method for finding a suitable network architecture by dynamically modifying its specifications for a given task. On the MNIST dataset, it is shown that asynchronous supervised learning can converge on a solution within the search space. Setting the many hyperparameters can be time-consuming when constructing neural networks. The paper therefore also provides tips and guidelines for organizing the hyperparameter tuning process, which should help in finding a good hyperparameter setting much faster.
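The abstract centers on tuning the learning rate, the hyperparameter named in the title. As a minimal sketch only (not the paper's actual method), the following Python snippet illustrates the common tip of sampling learning rates log-uniformly and keeping the best-performing one; a tiny logistic-regression model on synthetic data stands in for a deep network on MNIST so the example stays self-contained, and all names and values are illustrative assumptions.

# Minimal sketch (assumed, not the paper's method): random search over the learning rate.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (placeholder for a dataset such as MNIST).
X = rng.normal(size=(500, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + 0.1 * rng.normal(size=500) > 0).astype(float)

def train_and_score(lr, epochs=50):
    """Train logistic regression with plain gradient descent; return training accuracy."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log loss w.r.t. weights
        grad_b = np.mean(p - y)                  # gradient w.r.t. bias
        w -= lr * grad_w
        b -= lr * grad_b
    preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
    return np.mean(preds == y)

# Sample learning rates log-uniformly between 1e-4 and 1, a common tuning heuristic.
candidates = 10 ** rng.uniform(-4, 0, size=10)
results = {lr: train_and_score(lr) for lr in candidates}
best_lr = max(results, key=results.get)
print(f"best learning rate ~ {best_lr:.4g}, accuracy = {results[best_lr]:.3f}")

The log-uniform sampling reflects the general observation that useful learning rates span several orders of magnitude; the specific range and number of trials here are arbitrary choices for illustration.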
Start page
1587
End page
1593
Language
English
OCDE Knowledge area
Special education (for gifted students and those with learning difficulties)
Neurosciences
Subjects
Scopus EID
2-s2.0-85123189981
ISBN of the container
978-166543368-6
Conference
Proceedings - 2nd International Conference on Smart Electronics and Communication, ICOSEC 2021
Sources of information:
Directorio de Producción Científica
Scopus