pretraining
Here are 190 public repositories matching this topic...
Official repository of OFA (ICML 2022). Paper: "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework" (Updated Apr 24, 2024 · Python)
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family (Updated Apr 2, 2025 · Python)
Papers about pretraining and self-supervised learning on Graph Neural Networks (GNNs). (Updated Feb 2, 2024 · Python)
[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling" (Updated Jan 23, 2024 · Python)
Recent Advances in Vision and Language PreTrained Models (VL-PTMs) (Updated Aug 19, 2022)
EntitySeg Toolbox: Towards Open-World and High-Quality Image Segmentation (Updated Nov 30, 2023 · Jupyter Notebook)
X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval). (Updated Feb 27, 2023 · Python)
Official Repository for the Uni-Mol Series Methods (Updated Apr 7, 2025 · Python)
[ICLR 2024🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment (Updated Mar 25, 2024 · Python)
Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper (Updated Jan 11, 2023 · Python)
Pretraining code for a large-scale depth-recurrent language model (Updated Mar 14, 2025 · Python)
A curated list of 3D Vision papers relating to the Robotics domain in the era of large models (i.e., LLMs/VLMs), inspired by awesome-computer-vision, including papers, code, and related websites (Updated Nov 4, 2024)
Best practice for training LLaMA models in Megatron-LM (Updated Jan 2, 2024 · Python)
Official repository for "Craw4LLM: Efficient Web Crawling for LLM Pretraining" (Updated Feb 24, 2025 · Python)
The official implementation of "MARS: Unleashing the Power of Variance Reduction for Training Large Models" (Updated Feb 11, 2025 · Python)
PITI: Pretraining is All You Need for Image-to-Image Translation (Updated Jun 2, 2024 · Python)
PaddlePaddle (飞桨) large model development suite, providing a full-pipeline development toolchain for large language models, cross-modal large models, biocomputing large models, and other domains. (Updated May 24, 2024 · Python)
[ACL 2022] LinkBERT: A Knowledgeable Language Model 😎 Pretrained with Document Links (Updated Apr 5, 2022 · Python)