CoBERL architecture

Andrea_banino Coberl Contrastive Bert for Reinforcement Learning 2022

[TOC] Title: CoBERL: Contrastive BERT for Reinforcement Learning
Author: Andrea Banino et al., DeepMind
Publish Year: Feb 2022
Review Date: Wed, Oct 5, 2022
Summary of paper: https://arxiv.org/pdf/2107.05431.pdf

Some key terms: representation learning in reinforcement learning. Motivation: if state information could be effectively extracted from raw observations, it may then be possible to learn from those representations as fast as from true states. However, given the often sparse reward signal coming from the environment, learning representations in RL has to be achieved with little to no supervision. Two classes of approach:
- Class 1: add auxiliary self-supervised losses to accelerate the learning speed of model-free RL algorithms.
- Class 2: learn a world model and use it to collect imagined rollouts, which then act as extra data to train the RL algorithm, reducing the samples required from the environment.
CoBERL is in class 1: it uses both masked language modelling and contrastive learning (RL with a BERT-style architecture plus RELIC); see the sketch below. ...
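The contrastive ingredient mentioned above can be illustrated with a minimal InfoNCE-style loss. This is a sketch only: the function name and tensor shapes are assumptions, and RELIC itself adds a KL regulariser across augmentations that this omits.

```python
import torch
import torch.nn.functional as F

def info_nce(query: torch.Tensor, key: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss: the positive for query i is key i; all other keys are negatives."""
    q = F.normalize(query, dim=-1)
    k = F.normalize(key, dim=-1)
    logits = q @ k.t() / temperature                    # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# toy usage: two "views" of the same batch of agent representations
q = torch.randn(32, 128)
k = q + 0.05 * torch.randn(32, 128)   # stand-in for a second augmented view
print(info_nce(q, k).item())
```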

October 5, 2022 · 2 min · 258 words · Sukai Huang
Different architectures for image and text retrieval

Gregor_geigle Retrieve Fast Rerank Smart Cooperative and Joint Approaches for Improved Cross Modal Retrieval 2022

[TOC] Title: Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval
Author: Gregor Geigle et al.
Publish Year: 19 Feb 2022
Review Date: Sat, Aug 27, 2022
Summary of paper
Motivation: they want to combine the advantages of the cross-encoder (CE) and the bi-encoder (BE) to make cross-modal search and retrieval more efficient:
- the efficiency and simplicity of the BE approach, based on a twin network;
- the expressiveness and cutting-edge performance of CE methods.
Contribution: "We propose a novel joint Cross Encoding and Binary Encoding model (Joint-Coop), which is trained to simultaneously cross-encode and embed multi-modal input; it achieves the highest scores overall while maintaining retrieval efficiency." ...
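The cooperative "retrieve fast, rerank smart" pattern can be sketched as a two-stage pipeline: a BE shortlist over precomputed embeddings, followed by CE rescoring of only that shortlist. The `cross_scorer` below is a hypothetical stand-in for a full cross-attention model, not the paper's architecture.

```python
import torch

def retrieve_then_rerank(query_emb, item_embs, cross_scorer, k=5):
    """Stage 1: fast bi-encoder shortlist; stage 2: cross-encoder rerank."""
    # bi-encoder: cosine similarity against precomputed item embeddings
    sims = torch.nn.functional.cosine_similarity(query_emb.unsqueeze(0), item_embs)
    shortlist = sims.topk(k).indices
    # cross-encoder: jointly score each (query, item) pair on the shortlist only
    scores = torch.stack([cross_scorer(query_emb, item_embs[i]) for i in shortlist])
    return shortlist[scores.argsort(descending=True)]

cross_scorer = lambda q, d: (q * d).sum()   # hypothetical stand-in scorer
query, items = torch.randn(64), torch.randn(1000, 64)
print(retrieve_then_rerank(query, items, cross_scorer, k=10))
```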

August 27, 2022 · 3 min · 453 words · Sukai Huang
MP-Net structure

Kaitao_song Mpnet Masked and Permuted Pretrain for Language Understanding 2020

[TOC] Title: MPNet: Masked and Permuted Pre-training for Language Understanding
Author: Kaitao Song et al.
Publish Year: 2020
Review Date: Thu, Aug 25, 2022
Summary of paper
Motivation: BERT adopts masked language modelling (MLM) for pre-training and is one of the most successful pre-training models. However, since BERT consists entirely of attention blocks, with positional embeddings as the only signal encoding token order, its MLM objective neglects the dependency among the predicted (masked) tokens; a sketch of MPNet's permuted alternative follows. ...
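To make the "dependency among predicted tokens" point concrete: MPNet permutes the sequence and predicts the tail of the permutation, so each predicted token can condition on the predicted tokens before it, unlike MLM, which predicts masked positions independently. A minimal sketch of that split, with illustrative names and a hypothetical 15% prediction ratio:

```python
import torch

def permuted_split(tokens: torch.Tensor, pred_ratio: float = 0.15):
    """Permute the sequence and split it into a context part and a predicted part.

    The predicted part is ordered, so a token may condition on predicted
    tokens earlier in the permutation (the dependency MLM ignores).
    """
    n = tokens.size(0)
    n_pred = max(1, int(n * pred_ratio))
    perm = torch.randperm(n)
    return perm[:-n_pred], perm[-n_pred:]   # context positions, predicted positions

context, predicted = permuted_split(torch.arange(12))
print(context.tolist(), predicted.tolist())
```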

August 25, 2022 · 2 min · 378 words · Sukai Huang

Qinqing_zheng Online Decision Transformer 2022

[TOC] Title: Online Decision Transformer
Author: Qinqing Zheng
Publish Year: Feb 2022
Review Date: Mar 2022
Summary of paper
Motivation: the authors propose the Online Decision Transformer (ODT), an RL algorithm based on sequence modelling that blends offline pretraining with online fine-tuning in a unified framework. ODT builds on the Decision Transformer architecture previously introduced for offline RL.
Quantifying exploration: compared to DT, they shift from deterministic to stochastic policies to define exploration objectives during the online phase, quantifying exploration via the entropy of the policy, similar to max-ent RL frameworks (see the sketch below). ...
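A simplified sketch of the entropy-based exploration idea, assuming a Gaussian action head; note that ODT constrains the policy entropy to stay above a target rather than using the fixed bonus weight shown here.

```python
import torch
from torch.distributions import Normal

# Stochastic (Gaussian) action head; exploration is measured by policy entropy,
# as in max-ent RL. Dimensions and the entropy weight are illustrative.
mean = torch.zeros(4, requires_grad=True)
log_std = torch.zeros(4, requires_grad=True)

policy = Normal(mean, log_std.exp())
action = policy.rsample()              # reparameterised sample keeps gradients
nll = -policy.log_prob(action).sum()   # action-prediction term of the sequence loss
entropy = policy.entropy().sum()       # exploration signal during online fine-tuning

loss = nll - 0.1 * entropy             # hypothetical entropy bonus weight
loss.backward()
```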

March 21, 2022 · 4 min · Sukai Huang

Junyang_lin M6 a Chinese Multimodal Pretrainer 2021

[TOC] Title: M6: A Chinese Multimodal Pretrainer
Author: Junyang Lin et al.
Publish Year: May 2021
Review Date: Jan 2022
Summary of paper
This paper re-emphasises that large models trained on big data have extremely large capacity and can outperform the SOTA on downstream tasks, especially in the zero-shot setting. So the authors trained a big multi-modal model. They also proposed an innovative way to tackle downstream tasks: using masks to block cross-attention between tokens so that one pretrained model fits different types of downstream tasks.
Key idea: mask tokens during cross-attention so as to solve certain tasks (a sketch of such a mask follows).
Overview ...
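The masking trick can be illustrated with the kind of boolean attention mask that turns a single transformer into a seq2seq model: source tokens attend bidirectionally among themselves, while target tokens attend to the source and causally to earlier targets. The function name and the True-means-attend convention below are assumptions for illustration, not details from the paper.

```python
import torch

def seq2seq_attention_mask(n_src: int, n_tgt: int) -> torch.Tensor:
    """Boolean mask (True = may attend) for a generation-style downstream task."""
    n = n_src + n_tgt
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:n_src, :n_src] = True    # source attends bidirectionally to source
    mask[n_src:, :n_src] = True    # targets attend to all source tokens
    mask[n_src:, n_src:] = torch.tril(torch.ones(n_tgt, n_tgt)).bool()  # causal among targets
    return mask

print(seq2seq_attention_mask(3, 4).int())
```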

January 12, 2022 · 1 min · Sukai Huang