Involution: a powerful new neural network operator beyond convolution and self-attention
This work was done mainly together with Jie Hu, the author of SENet. Many thanks also to my two advisors at HKUST, Qifeng Chen and Tong Zhang, for their discussions and suggestions.
![](https://filescdn.proginn.com/bfa3c4940c3c625f9073cd92029f45c6/daef5f1590d868421c565ca8997aed28.webp)
Overview
In brief, our contributions are:
(1) We propose a new neural network operator (op) called involution, which is lighter and more efficient than convolution and formally simpler than self-attention. It can be applied to models for a variety of vision tasks to improve both accuracy and efficiency.
(2) The structural design of involution lets us understand the classic convolution operation and the recently popular self-attention operation from a unified perspective.
Paper: https://arxiv.org/abs/2103.06255
Code and models: https://github.com/d-li14/involution
This part mainly comes from Sections 2 and 3 of the paper.
convolution
![](https://filescdn.proginn.com/03407dde4a20ae8fbba5319f16ee7b53/2aeb4ffcd792c98b22f09b6a1e15fba5.webp)
![](https://filescdn.proginn.com/978ff2bf7cd888aaaca5f7837eb5a617/8a9172db5549363f0f1943ae647c8f57.webp)
![](https://filescdn.proginn.com/73359d8db0798fbea4f4e9286037409f/357c1c10b811b3c8128acb10a5b6c983.webp)
![](https://filescdn.proginn.com/59d40ebf757300ca32c7d4826d86c7d4/bc8e2437bb222e3459232a89b5ba8297.webp)
![](https://filescdn.proginn.com/ab26a56f15a5fe26193afd584d17accc/dea33c9610a3917e0811a27d9ab89120.webp)
![](https://filescdn.proginn.com/5e2e05741a146ecac40f7fb126ce50d3/765942848d5d9e77a35c9bc82805cfb3.webp)
![](https://filescdn.proginn.com/142e0b79491b8fdb72335e0812043f85/d50408fdbba962611ef96329fcad79c5.webp)
![](https://filescdn.proginn.com/af001d4aa893a0f9c83359ad9be52e93/a940021468d09fcb8d015a45ff1c92d6.webp)
![](https://filescdn.proginn.com/a9ef04bcbb51c4283879e8a7ed3a4c02/a6f08248513725054f27d2cb33a5ae5a.webp)
![](https://filescdn.proginn.com/89d6d3a582e0c634b058387eb0b0f4e5/98fb015503af044a9c17ed430c25b1ac.webp)
Spatial invariance
![](https://filescdn.proginn.com/1b6170b7eba0072a997517d78300aa45/81f5947945c8df0d351059d33f792f6d.webp)
![](https://filescdn.proginn.com/3f8871d38f4a4cac47fb7b8512e38177/767484bdd0c44d96784ad9b7eb05e7e6.webp)
![](https://filescdn.proginn.com/cbd4305d7dca43abf06e351b9e57b866/80f229041c9f0f6f2cfea092f96a0db6.webp)
Channel specificity
![](https://filescdn.proginn.com/a7395ef02b0b6c82a423d3c65702a16f/dfb0487b5ff577b9e451ecd7001a79ae.webp)
![](https://filescdn.proginn.com/2efb8e2ffcaf1fcf7298251b20938250/416054d09847a99b7a2afd266ee8d7d0.webp)
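The two properties above can be read directly off a naive convolution loop. This is a minimal NumPy sketch for illustration only (no padding, stride 1), not an efficient implementation:

```python
import numpy as np

def conv2d(x, weight):
    """Naive 2D convolution (no padding, stride 1).

    x:      (C_in, H, W) input feature map
    weight: (C_out, C_in, K, K) kernels; the same kernel is reused at
            every spatial position (spatial invariance), while each
            output channel owns its own kernel (channel specificity).
    """
    c_out, c_in, k, _ = weight.shape
    _, h, w = x.shape
    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for co in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # weight[co] does not depend on (i, j): spatially shared
                out[co, i, j] = np.sum(weight[co] * x[:, i:i+k, j:j+k])
    return out

x = np.random.randn(3, 8, 8)
w = np.random.randn(4, 3, 3, 3)
y = conv2d(x, w)
print(y.shape)  # (4, 6, 6)
```

Note that the parameter count `C_out * C_in * K * K` grows quadratically with the channel width but is independent of the spatial resolution, which is exactly the design trade-off that involution inverts.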
involution
![](https://filescdn.proginn.com/7d9e1fb2e1af5fedefb752caa30ab371/b0b77edccf40c16c83efb5f9dd5921b8.webp)
![](https://filescdn.proginn.com/1c46feeaed7c8649400ef1703b29b62a/39cfd5eb53258da318b7dca87a0cbaf3.webp)
![](https://filescdn.proginn.com/ff422d82d2212c595d5c35703fca02b3/f474a29c2f8716633e73e07b3dbf31ac.webp)
![](https://filescdn.proginn.com/380852810718cebaf64b25788862110b/c95b342c83fe7e172575519e31184165.webp)
![](https://filescdn.proginn.com/2f31484e02d670d4ba2b1733fe8ea2ee/10cf8278e7901e11877cb2517df8dc76.webp)
![](https://filescdn.proginn.com/0e4d632d04a9e0e78d507e8644ead9c0/977ab91f09f621779c0dd43cff1f8a2d.webp)
![](https://filescdn.proginn.com/e1948bf6d5c313b5156997d52d406d9b/8826db402a96bbd205359a571b91e283.webp)
![](https://filescdn.proginn.com/86b5cda349660a4b2089c644387d7431/183e5d9d05ab37fde31b83c5b9809af7.webp)
![](https://filescdn.proginn.com/52eb1344545fc121c661b794d02cc26e/a674684b774b44d4556cb8832152e955.webp)
![](https://filescdn.proginn.com/8f5d4252d1c99d24ff9a80dde37e8aab/d98a298ec5c95b2d04f319e351e1ebb1.webp)
![](https://filescdn.proginn.com/e1948bf6d5c313b5156997d52d406d9b/8826db402a96bbd205359a571b91e283.webp)
![](https://filescdn.proginn.com/363a540a0c7f64805fe62952d6b33578/82e8d5d747d25791477199d6c90d84ee.webp)
![](https://filescdn.proginn.com/86b5cda349660a4b2089c644387d7431/183e5d9d05ab37fde31b83c5b9809af7.webp)
![](https://filescdn.proginn.com/f46e6da343b23199d9c9e41c3572559a/832322a4a570d47845f9cdbb8a35981e.webp)
![](https://filescdn.proginn.com/14249d2f09b63f3d525b4026b0a9f5d9/bbc5d02683c95873fa316c1b04faef89.webp)
![](https://filescdn.proginn.com/3a76bc375d5b34ce24e21d4558020b85/fbec93ec6c8a78f72f64f0ffb56a24eb.webp)
![](https://filescdn.proginn.com/94fb1ee56441914e4c48283577e70ceb/654fbb14c223a1293fe25700d890641b.webp)
![](https://filescdn.proginn.com/384365839474a01713108c09afbd446e/f19ace2ce9efc70678fb489c41ec3414.webp)
![](https://filescdn.proginn.com/8f5d4252d1c99d24ff9a80dde37e8aab/d98a298ec5c95b2d04f319e351e1ebb1.webp)
![](https://filescdn.proginn.com/8eaf73f9477e986e30e80c6465622ee2/9add9e2489d1f3ab5100b94f8a45126a.webp)
![](https://filescdn.proginn.com/a7345b00a8d2c76aa2a39663608eca73/e7f100a1126a19297579a28bee6b9ea9.webp)
![](https://filescdn.proginn.com/0ec84766a21e7979941767423076293f/6f1e19acde9550e805c4d4c642ff303a.webp)
![](https://filescdn.proginn.com/30426b480c9db272031046ee41c751c6/487a76b50c5bbcd00550f89f6967c360.webp)
![](https://filescdn.proginn.com/7643e94a65b99300612b90eea0b5ab31/044346000e3842fa4e02621b8285c9bc.webp)
![](https://filescdn.proginn.com/f4e55a30d3a5a8aa6170d4e5e8698f25/caa9e3c87d6328d92449ba5c58c3f6cb.webp)
![](https://filescdn.proginn.com/490a50efa4c4983a41dafa95ea6a5758/626e81887e249f7061c8b1c2b76af15a.webp)
![](https://filescdn.proginn.com/f158d158054f1250b7c3100666e18b34/aaa97209dd9fa6ec2b0c2f6b2f623d08.webp)
Sharing the kernel across channels (only G kernels) allows us to use a large spatial span (a larger K). The spatial-dimension design thus improves performance while the channel-dimension design maintains efficiency (see the ablations in Tab. 6a, 6b): even though weights are not shared across spatial positions, the parameter count and computation do not grow significantly.
Although we do not directly share kernel parameters across pixels in space, we do share meta-weights (the parameters of the kernel-generation function) at a higher level, so knowledge can still be shared and transferred across spatial positions. By contrast, even setting aside the explosion in parameter count, completely removing convolution's constraint of sharing kernels across space and letting every pixel freely learn its own kernel parameters would not achieve this effect.
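The design above can be made concrete with a minimal NumPy sketch. The kernel-generation function follows the paper's bottleneck form H_ij = W1 σ(W0 X_ij), conditioned on the single pixel at (i, j); this illustrative loop is not the efficient implementation from the repo, and the reduction ratio `r` here is an example choice:

```python
import numpy as np

def involution(x, w0, w1, k=3, groups=1):
    """Minimal involution sketch (stride 1, 'same' padding).

    x:  (C, H, W) input feature map
    w0: (C // r, C), w1: (G * k * k, C // r) are the shared
        meta-weights of the kernel-generation function
        H_ij = W1 * sigma(W0 * X_ij).
    """
    c, h, w = x.shape
    g = groups
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            # generate this pixel's kernel from its own feature vector
            hidden = np.maximum(w0 @ x[:, i, j], 0.0)      # ReLU bottleneck
            kernel = (w1 @ hidden).reshape(g, k, k)        # (G, K, K)
            patch = xp[:, i:i+k, j:j+k].reshape(g, c // g, k, k)
            # each group of C/G channels shares one spatial kernel
            out[:, i, j] = np.sum(patch * kernel[:, None],
                                  axis=(2, 3)).reshape(c)
    return out

c, r, g, k = 8, 4, 2, 3
x = np.random.randn(c, 6, 6)
w0 = np.random.randn(c // r, c)                # shared meta-weights
w1 = np.random.randn(g * k * k, c // r)
y = involution(x, w0, w1, k=k, groups=g)
print(y.shape)  # (8, 6, 6)
```

The learned parameters are only `w0` and `w1` (independent of H and W), while the kernels themselves vary per pixel: exactly the spatial-specific, channel-shared inversion of convolution described above.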
[Discussion] Relation to self-attention
This part mainly comes from Section 4.2 of the paper.
self-attention
![](https://filescdn.proginn.com/e5cc780bce31025e066b8c1c45f5f2b5/cfe4f5285265677509c6358055235990.webp)
![](https://filescdn.proginn.com/f382aec723857919cae59cb19d67b564/52b5fc19552ea4d5d4d244181341f39c.webp)
![](https://filescdn.proginn.com/6bb4fb09c1aebe17daa454f7d4b2889d/f25f064dbee503928bdfc55330b77df1.webp)
![](https://filescdn.proginn.com/4e5056b0ddbe6c379704a245a7612276/9627273593ea6e3d902344b92ef97da8.webp)
![](https://filescdn.proginn.com/0ec84766a21e7979941767423076293f/6f1e19acde9550e805c4d4c642ff303a.webp)
![](https://filescdn.proginn.com/5e2e05741a146ecac40f7fb126ce50d3/765942848d5d9e77a35c9bc82805cfb3.webp)
![](https://filescdn.proginn.com/156eca8d7a962ebc7808fd42980fe3df/a4f61406b52232c349b2ce8e2e970829.webp)
The different heads in self-attention correspond to the different groups in involution (a split along the channel dimension). Each pixel's attention map in self-attention corresponds to each pixel's kernel in involution.
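The second correspondence can be seen by computing the per-pixel attention map of a local self-attention and noting that it has exactly the shape of one involution kernel. A hedged NumPy sketch under simplified assumptions (single head, no value projection, local K×K window):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def local_attention_map(x, wq, wk, i, j, k=3):
    """Attention weights of pixel (i, j) over its KxK neighborhood.

    x: (C, H, W); wq, wk: (C, C) projections for a single head.
    The resulting (k*k,) map plays the role of one involution
    kernel H_ij (one head <-> one group), except that it is
    computed from query-key dot products rather than generated
    from the query pixel alone.
    """
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    q = wq @ x[:, i, j]                                  # query at (i, j)
    keys = wk @ xp[:, i:i+k, j:j+k].reshape(x.shape[0], -1)  # (C, k*k)
    return softmax(q @ keys)                             # (k*k,) "kernel"

c = 8
x = np.random.randn(c, 5, 5)
wq, wk = np.random.randn(c, c), np.random.randn(c, c)
a = local_attention_map(x, wq, wk, 2, 2)
print(a.shape)  # (9,)
```

In this view, self-attention is one particular instantiation of the involution kernel-generation function, conditioned on pairwise pixel relations instead of a single pixel.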
![](https://filescdn.proginn.com/28b4eb1b9b0a66a571eee39eb2287c9d/6f1278064cd8584a59a27b20ce26e86e.webp)
![](https://filescdn.proginn.com/bc1f893e96053103499648ea5518f2ab/8473323195c55b15cd2db0b2ac010fcc.webp)
![](https://filescdn.proginn.com/6bb4fb09c1aebe17daa454f7d4b2889d/f25f064dbee503928bdfc55330b77df1.webp)
![](https://filescdn.proginn.com/cbd4305d7dca43abf06e351b9e57b866/80f229041c9f0f6f2cfea092f96a0db6.webp)
![](https://filescdn.proginn.com/ab26a56f15a5fe26193afd584d17accc/dea33c9610a3917e0811a27d9ab89120.webp)
![](https://filescdn.proginn.com/58d0832020556097cc6474ddef6a9a9d/a44811ee7c2661ece1906e8ac51ac85c.webp)
![](https://filescdn.proginn.com/c9d4a4b4e14d37e4c3cafd53c4b39194/e8c88d68b33b7e008df71def9ac259fc.webp)
![](https://filescdn.proginn.com/80b3f901b8eb993dcf36fafb84d88d6a/c011761c5c65133c1e159d65fc4730e9.webp)
![](https://filescdn.proginn.com/0267e893febf88f2cac1310685e01694/510f5a2f4fc13378d51ec55511dd03fd.webp)
![](https://filescdn.proginn.com/ca63d065486768dadcfd4ea6a13bb01a/a5e0afc828f6b68233492e04ea2d73b6.webp)
Vision Transformer
Experimental results
ImageNet image classification
![](https://filescdn.proginn.com/5a9a9b39d4b453d5ea49d56d49553532/f52d5c5703c1807527c1f1da3826ca88.webp)
![](https://filescdn.proginn.com/cbd4305d7dca43abf06e351b9e57b866/80f229041c9f0f6f2cfea092f96a0db6.webp)
COCO object detection and instance segmentation
![](https://filescdn.proginn.com/743a222059410d9cf4d2878b4f167954/ea849f6adf7474127c506c2109c50689.webp)
![](https://filescdn.proginn.com/e07647d8b09594678dc864fe353647c7/8b3a8f24c3adb04549ec161e98cb3a78.webp)
Cityscapes semantic segmentation
![](https://filescdn.proginn.com/dd1d4c27e353160ba084a616f5fc3891/1470626f8bccc1b384c800a99c11a656.webp)
![](https://filescdn.proginn.com/55d8afcc8c8e66268960a77e08a053ab/0682408028a6fdd9df0ead88769d5905.webp)
Directions for future work:
- further exploration of the kernel-generation function space of the generalized involution;
- adding an offset-generation function in the spirit of deformable convolution, to further increase the flexibility of this op's modeling capacity;
- searching for hybrid convolution-involution architectures with NAS techniques (Section 4.3 of the paper);
- we argued above that self-attention is only one form of expression, but we hope the (self-)attention mechanism can keep inspiring better vision model designs; similarly, many recent strong works in detection have benefited greatly from the DETR architecture.