Deep Learning/Common (3)

[2021.01] Exponential Moving Average Normalization for Self-Supervised and Semi-Supervised Learning
To be updated
Conference: CVPR 2021
URL: https://arxiv.org/abs/2101.08482
Code: https://github.com/amazon-research/exponential-moving-average-normalization

[2020.11] KeepAugment: A Simple Information-Preserving Data Augmentation Approach
To be updated
Conference: CVPR 2021
URL: https://arxiv.org/abs/2011.11778
Code: https://github.com/clovaai/rexnet

Neural tangent kernel (NTK) and beyond
"Meanwhile, we see the evolving development of deep learning theory on neural networks. The NTK (neural tangent kernel) was proposed to characterize the gradient descent training dynamics of infinitely wide (Jacot et al., 2018) or finite-width deep networks (Hanin & Nica, 2019). Wide networks have also been shown to evolve as linear models under gradient descent (Lee et al., 2019). This is further leveraged .."
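Since the preview above cuts off, here is a minimal sketch (an addition for context, not text from the post) of the two objects it cites, assuming a scalar-output network f(x; θ) trained by gradient descent. The empirical NTK of Jacot et al. (2018) is the Gram matrix of parameter gradients,

$$\hat{\Theta}(x, x') = \nabla_\theta f(x; \theta)^\top \, \nabla_\theta f(x'; \theta),$$

and the linearized model of Lee et al. (2019), which sufficiently wide networks provably track during training, is the first-order expansion around the initialization θ₀:

$$f_{\mathrm{lin}}(x; \theta) = f(x; \theta_0) + \nabla_\theta f(x; \theta_0)^\top (\theta - \theta_0).$$

Below is a hedged sketch of computing the empirical NTK for a toy two-layer MLP in JAX; the network, its widths, and all helper names are illustrative assumptions, not code from any of the repositories linked above.

```python
# Sketch: empirical NTK of a toy MLP (illustrative; not from the linked repos).
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Two-layer tanh MLP with scalar output; shapes chosen for illustration.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2 + b2).squeeze(-1)

def empirical_ntk(params, x1, x2):
    # Theta_hat(x1, x2) = J(x1) @ J(x2)^T, where J is the Jacobian of the
    # network outputs with respect to all parameters, flattened per example.
    j1 = jax.jacobian(mlp)(params, x1)  # pytree of (n1, *param_shape) leaves
    j2 = jax.jacobian(mlp)(params, x2)
    def flatten(j):
        leaves = jax.tree_util.tree_leaves(j)
        return jnp.concatenate([l.reshape(l.shape[0], -1) for l in leaves], axis=1)
    return flatten(j1) @ flatten(j2).T  # (n1, n2) kernel matrix

key = jax.random.PRNGKey(0)
k1, k2, kx = jax.random.split(key, 3)
d, width = 3, 64
params = (
    jax.random.normal(k1, (d, width)) / jnp.sqrt(d),     # w1
    jnp.zeros(width),                                    # b1
    jax.random.normal(k2, (width, 1)) / jnp.sqrt(width), # w2
    jnp.zeros(1),                                        # b2
)
x = jax.random.normal(kx, (5, d))
print(empirical_ntk(params, x, x).shape)  # (5, 5)
```

Roughly, in the linear regime, gradient-flow training on squared loss makes the predictions on the training set X with targets Y evolve as $\dot{f}_t(x) = -\hat{\Theta}(x, X)\,(f_t(X) - Y)$, which is why this kernel fully characterizes the training dynamics in the infinite-width limit.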