
LLaMA2-Accessory: An Open-source Toolkit for LLM Development 🚀


🤗 HF Repo • 👋 Join our WeChat • 🚀 Demo

🚀 LLaMA2-Accessory is an open-source toolkit for the pre-training, fine-tuning, and deployment of Large Language Models (LLMs) and multimodal LLMs. This repo is mainly inherited from LLaMA-Adapter, with more advanced features. 🧠

✨ Within this toolkit, we present SPHINX, a versatile multimodal large language model (MLLM) that combines a diverse array of training tasks, data domains, and visual embeddings.

News

  • [2023.11.17] We release SPHINX-V2, featuring the same architecture but with enhanced and broader capabilities! 🔥🔥🔥
  • [2023.10.17] We release the demo, code, and model of SPHINX! 🔥🔥🔥
  • [2023.09.15] We now support Falcon 180B! 🔥🔥🔥
  • [2023.09.14] WeMix-LLaMA2-70B shows excellent performance on the OpenCompass benchmark! 🔥🔥🔥
  • [2023.09.02] We now support InternLM. 🔥🔥🔥
  • [2023.08.28] We release quantized LLMs with OmniQuant, an efficient, accurate, and omnibearing (even extremely low-bit) quantization algorithm. A multimodal version is coming soon. 🔥🔥
  • [2023.08.27] We now support CodeLLaMA and instruction fine-tuning on evol-code-alpaca. 🔥🔥
  • [2023.08.27] We release our documentation in a web-book format. 🔗 Check it out here
  • [2023.08.21] We release the quantization code and evaluation results. 🔥
  • [2023.08.05] We release the multimodal fine-tuning code and checkpoints. 🔥
  • [2023.07.23] Initial release 📌

Features

Setup

โš™๏ธ For environment installation, please refer to Environment Setup.

Model Usage

🤖 Instructions for model pre-training, fine-tuning, inference, and other related topics are all available in the documentation.

Frequently Asked Questions (FAQ)

โ“ Encountering issues or have further questions? Find answers to common inquiries here. We're here to assist you!

Demos

💡 Our model SPHINX can now generate high-quality bounding boxes and then present masks created by SAM for all objects within an image, driven by input prompts. Give it a try here! 🚀 A minimal sketch of the box-to-mask stage is shown below.
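As a rough illustration of the second stage of that pipeline (a minimal sketch, not the demo's actual code): assuming the MLLM has already returned per-object boxes in pixel coordinates as [x_min, y_min, x_max, y_max], the official `segment_anything` package can turn each box prompt into a mask. The image path, checkpoint file, and box values here are illustrative placeholders.

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone; the checkpoint path is a placeholder for a downloaded weight file.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM's predictor expects an RGB uint8 image.
image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

# Hypothetical boxes as an MLLM such as SPHINX might return them,
# one [x_min, y_min, x_max, y_max] box per detected object.
boxes = [
    [50, 40, 320, 300],
    [360, 80, 600, 410],
]

masks = []
for box in boxes:
    # Prompt SAM with the box and keep its single best mask, an (H, W) boolean array.
    mask, _, _ = predictor.predict(box=np.array(box), multimask_output=False)
    masks.append(mask[0])
```

For many boxes at once, `SamPredictor.predict_torch` accepts a batch of boxes (after `predictor.transform.apply_boxes_torch`), which avoids the Python loop.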

Core Contributors

Chris Liu, Ziyi Lin, Guian Fang, Jiaming Han, Yijiang Liu, Renrui Zhang

Project Leader

Peng Gao, Wenqi Shao, Shanghang Zhang

Hiring Announcement

🔥 We are hiring interns, postdocs, and full-time researchers at the General Vision Group, Shanghai AI Lab, with a focus on multi-modality and vision foundation models. If you are interested, please contact gaopengcuhk@gmail.com.

Citation

If you find our code and paper useful, please cite:

@article{zhang2023llamaadapter,
  title = {LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention},
  author={Zhang, Renrui and Han, Jiaming and Liu, Chris and Gao, Peng and Zhou, Aojun and Hu, Xiangfei and Yan, Shilin and Lu, Pan and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2303.16199},
  year={2023}
}
@article{gao2023llamaadapterv2,
  title = {LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model},
  author={Gao, Peng and Han, Jiaming and Zhang, Renrui and Lin, Ziyi and Geng, Shijie and Zhou, Aojun and Zhang, Wei and Lu, Pan and He, Conghui and Yue, Xiangyu and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2304.15010},
  year={2023}
}

Acknowledgement


License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
