
CV Paper Reading Collection


| Year | Name | Area | Description | Drawbacks |
| --- | --- | --- | --- | --- |
| 2021 ICML | CLIP (Contrastive Language-Image Pre-training) | contrastive learning, zero-shot learning, multimodal | Uses text as the supervision signal to train transferable vision models (see the zero-shot sketch below the table). | CLIP's zero-shot performance, although comparable to a supervised ResNet-50, is not yet SOTA; the authors estimate that reaching SOTA would require roughly 1000x more compute, which is impractical. Zero-shot CLIP performs poorly on certain datasets, such as fine-grained classification and abstract tasks. CLIP is robust to natural distribution shift but still suffers on out-of-domain generalization: if the test distribution differs significantly from the training distribution, CLIP performs poorly. CLIP does not solve the data inefficiency of deep learning; training it requires a huge amount of data. |
| 2021 ICLR | ViT (Vision Transformer) | | Applies the Transformer to vision: simple, efficient, scalable. Given enough pre-training data, ViT surpasses CNNs, overcoming the Transformer's lack of inductive bias and transferring well to downstream tasks. | |
| 2022 | DALL-E | | Generates images from text. | |
| 2021 ICCV | Swin Transformer | | Uses shifted windows and a hierarchical structure to tame the Transformer's computational cost; a CNN in Transformer's clothing. | |
| 2021 | MAE (Masked Autoencoders) | self-supervised | A CV counterpart of BERT (see the masking sketch below the table); scalable; very high-capacity models that generalize well. | |
| | TransMed: Transformers Advance Multi-modal Medical Image Classification | | | |
| | I3D | | | |
| 2021 | Pathway | | | |
| 2021 ICML | ViLT | vision-and-language multimodal Transformer | | Performance is not high; inference is fast, but training is extremely slow. |
| 2021 NeurIPS | ALBEF | | "Align before fuse"; to clean noisy data, proposes a momentum model that generates pseudo-targets. | |
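
To make the CLIP entry concrete, here is a minimal sketch of CLIP-style zero-shot classification, assuming the openai/CLIP package (`pip install git+https://github.com/openai/CLIP`); the image path `dog.jpg` and the class list are hypothetical placeholders, not from the original post.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # pretrained CLIP

# Hypothetical example image; replace with your own file.
image = preprocess(Image.open("dog.jpg")).unsqueeze(0).to(device)

# The class names themselves act as labels, phrased as natural-language prompts.
classes = ["dog", "cat", "car"]
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and every prompt embedding.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(classes, probs[0].tolist())))  # per-class probabilities, no fine-tuning
```

Because the labels are just text, swapping in a new label set requires no retraining, which is exactly the transferability the table row describes.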
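Similarly, for the MAE row, here is a minimal sketch of the random patch masking that drives MAE's pre-training, written from the paper's description rather than the authors' code; the function name, default mask ratio, and tensor shapes are illustrative assumptions.

```python
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
    """Keep a random subset of patch tokens; only these are fed to the encoder."""
    B, N, D = patches.shape
    keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=patches.device)   # one random score per patch
    ids_shuffle = noise.argsort(dim=1)                # random permutation of patches
    ids_keep = ids_shuffle[:, :keep]                  # indices of visible patches
    index = ids_keep.unsqueeze(-1).expand(-1, -1, D)
    return torch.gather(patches, 1, index)            # (B, keep, D) visible tokens

tokens = torch.randn(2, 196, 768)   # e.g. 14x14 patches of a 224x224 image, ViT-Base width
visible = random_masking(tokens)    # (2, 49, 768): only 25% of patches reach the encoder
```

Encoding only the visible 25% of tokens is what makes the approach scalable: the heavy encoder processes a quarter of the sequence, while a lightweight decoder reconstructs the masked pixels.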