
ICCV 2021 Workshop on

Neural Architectures: Past, Present and Future

Montreal, Canada
Full day, October 11th, 2021

Gatherly: https://workshopsdayone.event.gatherly.io
YouTube: https://www.youtube.com/watch?app=desktop&v=EdJsrxJaobU


Overview

The surge of deep learning has largely benefited from the success of neural architecture design. Evolving from LeNet to AlexNet, VGG, and ResNet, neural architectures have kept incorporating novel architectural elements and network topologies, leading to significant improvements in representation learning. More recently, the emergence of neural architecture search (NAS) has further advanced the representational capacity of neural networks by shifting architecture design from hand-crafting to automation. Despite remarkable achievements on various benchmark tasks, the development of neural architectures still faces several challenges. On the one hand, current architecture design is not yet fully automatic: even with NAS, tremendous human expertise is still required to design the architecture search space, define the search strategy, and select training hyperparameters. On the other hand, existing neural architectures still suffer from a lack of interpretability, vulnerability to adversarial examples, and an inability to perform abstract reasoning.
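
To make concrete what "designing the search space" and "defining the search strategy" mean in practice, here is a minimal, hypothetical sketch in Python. Everything in it (the operation names, widths, depths, and the proxy score) is an illustrative assumption rather than a method from any paper at this workshop: a hand-crafted search space paired with plain random search, the simplest possible NAS baseline.

    import random

    # Illustrative sketch only: a toy, hand-crafted NAS search space.
    # Every choice below (allowed operations, widths, depths) is a human decision,
    # which is exactly the kind of expert knowledge NAS has not yet automated.
    SEARCH_SPACE = {
        "num_layers": [4, 8, 12],
        "operation": ["conv3x3", "conv5x5", "depthwise_conv", "identity"],
        "width": [32, 64, 128],
    }

    def sample_architecture(space):
        """Randomly sample one architecture from the hand-crafted space."""
        num_layers = random.choice(space["num_layers"])
        return {"layers": [{"op": random.choice(space["operation"]),
                            "width": random.choice(space["width"])}
                           for _ in range(num_layers)]}

    def evaluate(architecture):
        """Hypothetical proxy score; a real NAS pipeline would train and validate a model here."""
        return sum(layer["width"] for layer in architecture["layers"])

    def random_search(space, num_trials=20):
        """The search strategy itself (here, plain random search) is also a human choice."""
        best_arch, best_score = None, float("-inf")
        for _ in range(num_trials):
            arch = sample_architecture(space)
            score = evaluate(arch)
            if score > best_score:
                best_arch, best_score = arch, score
        return best_arch, best_score

    if __name__ == "__main__":
        arch, score = random_search(SEARCH_SPACE)
        print(f"Best proxy score: {score}, depth: {len(arch['layers'])} layers")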

In this workshop, we will focus on recent research and future directions for advancing deep learning systems, particularly from the perspective of neural architectures. We aim to bring together experts from the artificial intelligence, machine learning, deep learning, statistics, computer vision, and cognitive science communities, not only to discuss the current challenges of neural architecture design, but also to chart a blueprint for neural architectures that further bridges the gap between the human brain and neural networks.


Schedule

08:55 - 09:00         Opening Remarks

09:00 - 09:35         Talk 1: Jingdong Wang -- Dense Prediction with Transformers: Semantic Segmentation and High-Resolution Backbone

09:35 - 10:10         Talk 2: Alan Yuille -- Towards Bayesian Generative Architectures

10:10 - 10:45         Talk 3: Frank Hutter -- Neural Architecture Search (NAS) Benchmarks: Successes & Challenges

10:45 - 12:00         Poster Session


12:20 - 14:00         Lunch Break


14:00 - 14:35         Talk 4: David Kristjanson Duvenaud -- Infinitely Deep Bayesian Neural Architecture

14:35 - 15:10         Talk 5: Anima Anandkumar -- Are Transformers the Future of Vision?

15:10 - 15:45         Talk 6: Been Kim -- Interpretability for (somewhat) Philosophical and Skeptical Minds

15:45 - 16:20         Talk 7: Hanxiao Liu -- Towards Automated Design of ML Building Blocks


Instructions

On the workshop day (Monday, October 11th), please go to https://workshopsdayone.event.gatherly.io to present your poster. Authors are expected to be at the poster for their paper (please find the poster ID here). When attendees arrive, authors can share their screen to show the poster details and answer questions.

Optional but highly encouraged: if you would like to try out the system beforehand, please visit https://workshopsdayonetesting.event.gatherly.io.

Note: for the poster and video preparation, please follow the instructions from the ICCV main website.

Accepted Long Papers (Proceedings)

  • SCARLET-NAS: Bridging the Gap between Stability and Scalability in Weight-sharing Neural Architecture Search [Paper]
    Xiangxiang Chu (Meituan), Bo Zhang (Meituan)*, Qingyuan Li, Ruijun Xu, Xudong Li (Chinese Academy of Sciences)
  • CONet: Channel Optimization for Convolutional Neural Networks [Paper] [Poster]
    Mahdi S. Hosseini (University of New Brunswick)*, Jia Shu Zhang (University of Toronto), Zhe M Liu (University of Toronto), Andre Fu (University of Toronto), Jingxuan Su (University of Toronto), Mathieu Tuli (University of Toronto), Konstantinos N Plataniotis (University of Toronto)
  • Russian Doll Network: Learning Nested Networks for Sample-Adaptive Dynamic Inference [Paper]
    Borui Jiang (Peking University)*, Yadong Mu (Peking University)
  • Tiled Squeeze-and-Excite: Channel Attention With Local Spatial Context [Paper] [Video]
    Niv Vosco (Hailo)*, Alon Shenkler (Hailo), Mark Grobman (Hailo)
  • DDUNet: Dense Dense U-Net with Applications in Image Denoising [Paper] [Video] [Poster]
    Fan JIA (The Chinese University of Hong Kong), Wing Hong Wong (The Chinese University of Hong Kong), Tieyong Zeng (The Chinese University of Hong Kong)*
  • PP-NAS: Searching for Plug-and-Play Blocks on Convolutional Neural Network [Paper]
    Biluo Shen (Chinese Academy of Sciences), Anqi Xiao (Chinese Academy of Sciences), Jie Tian (Chinese Academy of Sciences), Zhenhua Hu (Chinese Academy of Sciences)*
  • Single-DARTS: Towards Stable Architecture Search [Paper]
    Pengfei Hou (Alibaba)*, Ying Jin (Tsinghua University), Yukang Chen (The Chinese University of Hong Kong)
  • Convolutional Filter Approximation Using Fractional Calculus [Paper] [Video]
    Julio Zamora (Intel Labs)*, Jesus Adan Cruz Vargas (Intel Labs), Anthony D Rhodes (Intel Labs), Lama Nachman (Intel Labs), Narayan Sundararajan (Intel Labs)
  • Graph-based Neural Architecture Search with Operation Embeddings [Paper] [Video]
    "Michail Chatzianastasis (National Technical University of Athens)*, George Dasoulas (Ecole Polytechnique, France), Georgios Siolas (National Tecnhical University of Athens), Michalis Vazirgiannis (École Polytechnique)"
  • Contextual Convolutional Neural Networks [Paper]
    Ionut Cosmin Duta (University of Bucharest), Mariana-Iuliana Georgescu (University of Bucharest), Radu Tudor Ionescu (University of Bucharest)*
  • Leveraging Batch Normalization for Vision Transformers [Paper]
    Zhuliang Yao (Tsinghua University), Yue Cao (Microsoft Research Asia)*, Yutong Lin (Xi'an Jiaotong University), Ze Liu (USTC), Zheng Zhang (MSRA, Huazhong University of Science and Technology), Han Hu (Microsoft Research Asia)

Accepted Extended Abstracts

  • Searching for Efficient Multi-Stage Vision Transformers [Paper]
    Yi-Lun Liao (Massachusetts Institute of Technology)*, Sertac Karaman (Massachusetts Institute of Technology), Vivienne Sze (Massachusetts Institute of Technology)
  • OSNASLib: One-Shot NAS Library [Paper]
    Sian-Yao Huang (National Cheng Kung University)*, Wei-Ta Chu (National Cheng Kung University)

Please contact us if you have questions.