Publication: NiNformer: a network in network transformer with token mixing generated gating function
| dc.contributor.author | Abdullah, Abdullah Nazhat | |
| dc.contributor.author | Aydin, Tarkan | |
| dc.contributor.institution | Abdullah, Abdullah Nazhat, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey | |
| dc.contributor.institution | Aydin, Tarkan, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey | |
| dc.date.accessioned | 2025-10-05T14:28:51Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | The attention mechanism is the primary component of the transformer architecture, and it has led to significant advancements in deep learning across many domains and tasks. In computer vision, the attention mechanism was first incorporated in the vision transformer (ViT), and its use has since expanded to many tasks in the vision domain, such as classification, segmentation, object detection, and image generation. While the attention mechanism is very expressive and capable, it has the disadvantage of being computationally expensive and requiring datasets of considerable size for effective optimization. To address these shortcomings, many designs have been proposed in the literature to reduce the computational burden and alleviate the data-size requirements; examples in the vision domain include the MLP-Mixer, the Conv-Mixer, and the Perceiver-IO, each with a different set of advantages and disadvantages. This paper introduces a new computational block as an alternative to the standard ViT block. The proposed block reduces the computational requirements by replacing the standard attention layers with a network-in-network structure, thereby enhancing the static approach of the MLP-Mixer with a dynamically learned element-wise gating function generated by a token-mixing process. Extensive experimentation shows that the proposed design outperforms the baseline architectures on multiple datasets for the image classification task in the vision domain. © 2025 Elsevier B.V. All rights reserved. | |
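The abstract describes the core idea only at a high level: a token-mixing sub-network replaces the attention layer of a ViT block and produces a dynamic, element-wise gate over the token representations. Below is a minimal, hypothetical PyTorch sketch of what such a block could look like, assuming sigmoid gating, a two-layer token-mixing MLP, and a conventional pre-norm residual layout; none of these choices are confirmed by the record above, and the class name `NiNGateBlock` is invented for illustration.

```python
# Hypothetical sketch of a NiNformer-style block, reconstructed only from the
# abstract: a token-mixing sub-network (the "network in network") generates an
# element-wise gate that modulates the token representations, replacing the
# attention layer of a standard ViT block. Layer sizes, the sigmoid gating,
# and the norm/MLP placement are assumptions, not the authors' published design.
import torch
import torch.nn as nn


class NiNGateBlock(nn.Module):
    def __init__(self, num_tokens: int, dim: int, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Token-mixing sub-network: operates across the token axis and
        # produces one gate value per (token, channel) position.
        self.token_mix = nn.Sequential(
            nn.Linear(num_tokens, num_tokens),
            nn.GELU(),
            nn.Linear(num_tokens, num_tokens),
        )
        self.norm2 = nn.LayerNorm(dim)
        # Standard channel MLP, as in a ViT / MLP-Mixer block.
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        y = self.norm1(x)
        # Mix information across tokens, then squash to (0, 1) so the result
        # acts as a dynamic, input-dependent element-wise gate.
        gate = torch.sigmoid(self.token_mix(y.transpose(1, 2)).transpose(1, 2))
        x = x + gate * y                          # gated attention replacement
        x = x + self.channel_mlp(self.norm2(x))   # feed-forward branch
        return x


# Usage: 196 tokens (14x14 patches) with 384-dim embeddings.
block = NiNGateBlock(num_tokens=196, dim=384)
out = block(torch.randn(2, 196, 384))
print(out.shape)  # torch.Size([2, 196, 384])
```

The only difference from an MLP-Mixer token-mixing block in this sketch is that the token-mixing output is used as a multiplicative, input-dependent gate rather than being added directly, which is the "dynamic" behavior the abstract contrasts with the Mixer's static mixing.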
| dc.identifier.doi | 10.1007/s00521-025-11226-1 | |
| dc.identifier.endpage | 13428 | |
| dc.identifier.issn | 1433-3058 | |
| dc.identifier.issn | 0941-0643 | |
| dc.identifier.issue | 19 | |
| dc.identifier.scopus | 2-s2.0-105003850350 | |
| dc.identifier.startpage | 13411 | |
| dc.identifier.uri | https://doi.org/10.1007/s00521-025-11226-1 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.14719/6269 | |
| dc.identifier.volume | 37 | |
| dc.language.iso | en | |
| dc.publisher | Springer Science and Business Media Deutschland GmbH | |
| dc.relation.oastatus | All Open Access | |
| dc.relation.oastatus | Hybrid Gold Open Access | |
| dc.relation.source | Neural Computing and Applications | |
| dc.subject.authorkeywords | Computer Vision | |
| dc.subject.authorkeywords | Deep Learning | |
| dc.subject.authorkeywords | Network In Network | |
| dc.subject.authorkeywords | Transformer | |
| dc.subject.indexkeywords | Image enhancement | |
| dc.subject.indexkeywords | Image segmentation | |
| dc.subject.indexkeywords | Network function virtualization | |
| dc.subject.indexkeywords | Object detection | |
| dc.subject.indexkeywords | Object recognition | |
| dc.subject.indexkeywords | Attention mechanisms | |
| dc.subject.indexkeywords | Deep learning | |
| dc.subject.indexkeywords | Gating functions | |
| dc.subject.indexkeywords | Image generations | |
| dc.subject.indexkeywords | In networks | |
| dc.subject.indexkeywords | Multiple tasks | |
| dc.subject.indexkeywords | Network in network | |
| dc.subject.indexkeywords | Objects detection | |
| dc.subject.indexkeywords | Optimisations | |
| dc.subject.indexkeywords | Transformer | |
| dc.subject.indexkeywords | Mixers (machinery) | |
| dc.title | NiNformer: a network in network transformer with token mixing generated gating function | |
| dc.type | Article | |
| dcterms.references | Vaswani, Ashish, Attention is all you need, Advances in Neural Information Processing Systems, 2017-December, pp. 5999-6009 (2017) | |
| dcterms.references | Brown, Tom B., Language models are few-shot learners, Advances in Neural Information Processing Systems, 2020-December (2020) | |
| dcterms.references | Improving Language Understanding by Generative Pre-Training (2018) | |
| dcterms.references | LLaMA: Open and Efficient Foundation Language Models (2023) | |
| dcterms.references | The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only (2023) | |
| dcterms.references | Mistral 7B (2023) | |
| dcterms.references | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020) | |
| dcterms.references | Tolstikhin, Ilya O., MLP-Mixer: An all-MLP Architecture for Vision, Advances in Neural Information Processing Systems, 34, pp. 24261-24272 (2021) | |
| dcterms.references | Transactions on Machine Learning Research (2022) | |
| dcterms.references | Liu, Ze, Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, Proceedings of the IEEE International Conference on Computer Vision, pp. 9992-10002 (2021) | |
| dspace.entity.type | Publication | |
| local.indexed.at | Scopus | |
| person.identifier.scopus-author-id | 58115008900 | |
| person.identifier.scopus-author-id | 35106687700 |
