PyTorch 1.9 Released! torchvision Now Supports SSD Models!

机器学习算法工程师

1,832 words, about a 4-minute read · 2021-06-24 11:24



PyTorch 1.9 was released recently. The main updates are:


  • Major improvements to support scientific computing, including torch.linalg, torch.special, and Complex Autograd

  • Major improvements in on-device binary size with Mobile Interpreter

  • Native support for elastic, fault-tolerant training through the upstreaming of TorchElastic into PyTorch Core

  • Major updates to the PyTorch RPC framework to support large scale distributed training with GPU support

  • New APIs to optimize performance and packaging for model inference deployment

  • Support for distributed training, GPU utilization, and SM efficiency in the PyTorch Profiler
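A minimal sketch of the new scientific-computing APIs named above (`torch.linalg`, `torch.special`, and complex autograd); the tensors here are arbitrary examples, not from the release notes:

```python
import torch

# torch.linalg: NumPy-style linear algebra routines
a = torch.randn(3, 3)
q, r = torch.linalg.qr(a)        # QR decomposition; q @ r reconstructs a

# torch.special: special functions in the spirit of scipy.special
x = torch.tensor([0.0, 1.0])
y = torch.special.expit(x)       # logistic sigmoid, 1 / (1 + exp(-x))

# Complex autograd: backpropagation through complex tensors
z = torch.tensor([1.0 + 1.0j], requires_grad=True)
out = (z * z.conj()).real.sum()  # |z|^2, a real-valued loss
out.backward()                   # populates z.grad
```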


In addition, torchvision has been updated to version 0.10, which adds SSD models:


import torch
import torchvision

# Original SSD variant
x = [torch.rand(3, 300, 300), torch.rand(3, 500, 400)]
m_detector = torchvision.models.detection.ssd300_vgg16(pretrained=True)
m_detector.eval()
predictions = m_detector(x)

# Mobile-friendly SSDlite variant
x = [torch.rand(3, 320, 320), torch.rand(3, 500, 400)]
m_detector = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
m_detector.eval()
predictions = m_detector(x)
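For each input image, torchvision detection models return a dict with `boxes` (N, 4), `labels` (N,), and `scores` (N,) tensors. A minimal sketch of filtering detections by confidence; the values below are made up for illustration:

```python
import torch

# One hypothetical per-image prediction dict, in the format returned by
# torchvision detection models such as ssd300_vgg16.
pred = {
    'boxes': torch.tensor([[10., 20., 100., 200.], [0., 0., 50., 50.]]),
    'labels': torch.tensor([1, 17]),
    'scores': torch.tensor([0.92, 0.31]),
}

# Keep only confident detections, e.g. score > 0.5.
keep = pred['scores'] > 0.5
boxes = pred['boxes'][keep]
labels = pred['labels'][keep]
```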


For more details, see: https://pytorch.org/blog/pytorch-1.9-released/



Recommended Reading

CPVT: A Single Convolution Can Implicitly Encode Positional Information

How the SOTA Model Swin Transformer Was Forged!

Google AI Trained a 2-Billion-Parameter Vision Transformer on 3 Billion Images, Reaching a New SOTA on ImageNet!

A Guide to Avoiding BatchNorm Pitfalls (Part 1)

A Guide to Avoiding BatchNorm Pitfalls (Part 2)

Introduction to Object Tracking: Correlation Filters

MoCo V3: I'm Not What You Think!

Applications of Transformers in Semantic Segmentation

ViT, a "Future" Classic: transformer is all you need!

PVT: A Pyramid Vision Transformer Backbone for Dense Prediction Tasks!

FixRes, a Score-Boosting Trick: Twice Surpassing SOTA on ImageNet

Why Can Transformers Break into Computer Vision and Crush CNNs?

Try MoCo as a Replacement for ImageNet-Pretrained Models!

