Understanding Dilated Convolution in Depth
小白学视觉 · 2022-03-16
Introduction
Dilated convolution was born out of a need in image segmentation: enlarging the receptive field while keeping the feature-map size unchanged. This article covers the origin of dilated convolution, its principle, how its receptive field is computed, and two potential problems it introduces, so that readers can fully digest the technique.
![](https://filescdn.proginn.com/2782a1d0ccdd853c3a7dbc81062456f0/01429e7712c19803f21ffe5671c26889.webp)
![](https://filescdn.proginn.com/580017906bbfb7f158827e1078bc9429/b80ed086f17cfde8b13f10a4575bae37.webp)
![](https://filescdn.proginn.com/76178adf069a69937b49089caa116f7f/5a3b63a9a09779d96b1c55cd9e25ad1a.webp)
(a) is a standard convolution (dilation rate = 1); the receptive field after convolution is 3. (b) is a dilated convolution with dilation rate = 2; the receptive field after convolution is 5. (c) is a dilated convolution with dilation rate = 3; the receptive field after convolution is 7.
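The relationship between dilation rate and receptive field described above can be sketched in a few lines of Python (a minimal illustration, not from the original article; the function name `effective_kernel_size` is our own):

```python
def effective_kernel_size(k: int, r: int) -> int:
    """Effective kernel size of a k x k convolution with dilation rate r.

    A dilated kernel inserts (r - 1) zeros between adjacent taps, so it
    covers k + (k - 1) * (r - 1) input positions along each axis,
    without adding any parameters.
    """
    return k + (k - 1) * (r - 1)

print(effective_kernel_size(3, 1))  # 3  (standard convolution)
print(effective_kernel_size(3, 2))  # 5
print(effective_kernel_size(3, 3))  # 7
```

This matches panels (a)-(c) above: the same nine weights cover a 3, 5, or 7 pixel-wide window depending on the dilation rate.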
> "dense prediction problems such as semantic segmentation ... to increase the performance of dense prediction architectures by aggregating multi-scale contextual information" (from [1])
III. Computing the Receptive Field
The receptive field of the current layer is computed recursively as

$$RF_{l} = RF_{l-1} + (k_{l} - 1) \times \prod_{i=1}^{l-1} s_{i}$$

where $RF_{l}$ is the receptive field of layer $l$ (with $RF_{0} = 1$ for the input), $k_{l}$ is the kernel size of layer $l$, and $s_{i}$ is the stride of layer $i$. For a dilated convolution with dilation rate $r$, the kernel size $k$ in the formula is replaced by its effective size $k' = k + (k - 1)(r - 1)$: a 3×3 kernel with rate 2 behaves like a 5×5 kernel, and with rate 3 like a 7×7 kernel, with no increase in the number of parameters.
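The layer-by-layer recursion can be sketched as follows (a minimal sketch, not from the original article; `receptive_field` is our own helper, assuming each layer is described by its kernel size, stride, and dilation rate):

```python
def receptive_field(layers):
    """Receptive field of the final output of a stack of conv layers.

    layers: list of (kernel_size, stride, dilation) tuples, ordered
    from input to output.
    """
    rf = 1    # receptive field of a single input pixel
    jump = 1  # product of the strides of all earlier layers
    for k, s, d in layers:
        k_eff = k + (k - 1) * (d - 1)  # effective kernel size under dilation
        rf += (k_eff - 1) * jump
        jump *= s
    return rf

# Three stacked 3x3 convs, stride 1, with dilation rates 1, 2, 4:
print(receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)]))  # 15
```

With exponentially growing dilation rates the receptive field grows exponentially with depth (3, 7, 15, ...), while the parameter count per layer stays constant; this is the core benefit dilated convolution brings to dense prediction.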
[1] Panqu Wang, Pengfei Chen, et al. Understanding Convolution for Semantic Segmentation. WACV 2018.
[2] Fisher Yu, et al. Dilated Residual Networks. CVPR 2017.
[3] Zhengyang Wang, et al. Smoothed Dilated Convolutions for Improved Dense Prediction. KDD 2018.
[4] Liang-Chieh Chen, et al. Rethinking Atrous Convolution for Semantic Image Segmentation. 2017.
[5] Sachin Mehta, et al. ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation. ECCV 2018.
[6] Tianyi Wu, et al. Tree-structured Kronecker Convolutional Networks for Semantic Segmentation. AAAI 2019.
[7] Hyojin Park, et al. Concentrated-Comprehensive Convolutions for Lightweight Semantic Segmentation. 2018.
[8] Efficient Smoothing of Dilated Convolutions for Image Segmentation. 2019.