11/27/2023

Praat download

Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers.

List of implemented papers:
- Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021)
- Unsupervised Speech Recognition (Baevski et al., 2021)
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training (Hsu et al., 2021)
- Self-training and Pre-training are Complementary for Speech Recognition (Xu et al., 2020)
- Unsupervised Cross-lingual Representation Learning for Speech Recognition (Conneau et al., 2020)
- Deep Transformers with Latent Depth (Li et al., 2020)
- Cross-lingual Retrieval for Iterative Self-Supervised Training (Tran et al., 2020)
- Linformer: Self-Attention with Linear Complexity (Wang et al., 2020)
- Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models (Enarvi et al., 2020)
- wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (Baevski et al., 2020)
- Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020)
- Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)
- Multilingual Denoising Pre-training for Neural Machine Translation (Liu et al., 2020)
- Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)
- Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)
- RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)
- Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)
- Adaptive Attention Span in Transformers (Sukhbaatar et al., 2019)
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context (Dai et al., 2019)
- Lexically Constrained Decoding with Dynamic Beam Allocation (Post & Vilar, 2018)
- Adaptive Input Representations for Neural Language Modeling (Baevski and Auli, 2018)
- Understanding Back-Translation at Scale (Edunov et al., 2018)
- Scaling Neural Machine Translation (Ott et al., 2018)
- Attention Is All You Need (Vaswani et al., 2017)
- Effective Approaches to Attention-based Neural Machine Translation (Luong et al., 2015)
- Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)
- wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)
- Hierarchical Neural Story Generation (Fan et al., 2018)
- Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)
- Convolutional Sequence to Sequence Learning (Gehring et al., 2017)
- Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)
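To make the list concrete, here is a minimal sketch of using fairseq through its torch.hub integration to load one of the pretrained translation models above (Facebook FAIR's WMT19 submission, Ng et al., 2019) and translate a sentence. The model identifier, tokenizer, and BPE settings follow fairseq's published hub examples, but exact names can differ between releases, so treat this as illustrative rather than definitive.

    import torch

    # Load a pretrained En-De transformer from fairseq's model zoo
    # (single-model checkpoint from the WMT'19 news translation submission).
    # Identifier and keyword arguments follow fairseq's documented examples.
    en2de = torch.hub.load(
        'pytorch/fairseq',
        'transformer.wmt19.en-de.single_model',
        tokenizer='moses',
        bpe='fastbpe',
    )
    en2de.eval()  # disable dropout for inference

    # Translate with beam search (beam size 5).
    print(en2de.translate('Hello world!', beam=5))  # e.g. 'Hallo Welt!'

The same pattern works for other hub-exposed checkpoints from the papers listed above, swapping in the appropriate model name.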