diff --git a/README.md b/README.md
index 793691d..41119bd 100644
--- a/README.md
+++ b/README.md
@@ -5,18 +5,17 @@ Awesome

-A curated (still actively updated) list of practical guide resources of LLMs. It's based on our survey paper: [Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond](https://arxiv.org/abs/2304.13712). The survey is partially based on the second half of this [Blog](https://jingfengyang.github.io/gpt).
+A curated (still actively updated) list of practical guide resources for LLMs. It is based on our survey paper, [Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond](https://arxiv.org/abs/2304.13712), and on efforts from [@xinyadu](https://github.com/xinyadu). The survey is partially based on the second half of this [Blog](https://jingfengyang.github.io/gpt). We also build an evolutionary tree of modern Large Language Models (LLMs) to trace the development of language models in recent years and highlight some of the most well-known models.
-These sources aim to help practitioners navigate the vast landscape of large language models (LLMs) and their applications in natural language processing (NLP) applications. If you find any resources in our repository helpful, please feel free to use them (and don't forget to cite our paper!)
+These sources aim to help practitioners navigate the vast landscape of large language models (LLMs) and their applications in natural language processing (NLP). We also include their usage restrictions based on model and data licensing information.
+If you find any resources in our repository helpful, please feel free to use them (and don't forget to cite our paper! 😃). We welcome pull requests to refine this figure!
-## Latest News💥
-- We used PowerPoint to plot the figure and released the source file [pptx](./source/figure_gif.pptx) for our GIF figure. [4/27/2023]
-- We released the source file for the still version [pptx](./source/figure_still.pptx), and replaced the figure in this repo with the still version. [4/29/2023]
-- Add AlexaTM, UniLM, UniLMv2 to the figure, and correct the logo for Tk. [4/29/2023]
+

+ +

-We welcome pull requests to refine this figure, and if you find the source helpful, please cite our paper. - ```bibtex +```bibtex @article{yang2023harnessing, title={Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond}, author={Jingfeng Yang and Hongye Jin and Ruixiang Tang and Xiaotian Han and Qizhang Feng and Haoming Jiang and Bing Yin and Xia Hu}, @@ -25,31 +24,64 @@ We welcome pull requests to refine this figure, and if you find the source helpf archivePrefix={arXiv}, primaryClass={cs.CL} } - ``` - -## Practical Guide for Models - -We build an evolutionary tree of modern Large Language Models (LLMs) to trace the development of language models in recent years and highlights some of the most well-known models, in the following figure: +``` -

- -

+## Latest News💥
+- We added the usage and restrictions section.
+- We used PowerPoint to plot the figure and released the source file [pptx](./source/figure_gif.pptx) for our GIF figure. [4/27/2023]
+- We released the source file for the still version [pptx](./source/figure_still.pptx), and replaced the figure in this repo with the still version. [4/29/2023]
+- Added AlexaTM, UniLM, and UniLMv2 to the figure, and corrected the logo for Tk. [4/29/2023]
+- Added the usage and restrictions (for commercial and research purposes) section. Credits to [Dr. Du](https://github.com/xinyadu). [5/8/2023]
+
+
+
+
+## Other Practical Guides for LLMs
+
+- **Why did all of the public reproduction of GPT-3 fail? In which tasks should we use GPT-3.5/ChatGPT?** 2023, [Blog](https://jingfengyang.github.io/gpt)
+- **Building LLM applications for production**, 2023, [Blog](https://huyenchip.com/2023/04/11/llm-engineering.html)
+- **Data-centric Artificial Intelligence**, 2023, [Repo](https://github.com/daochenzha/data-centric-AI)/[Blog](https://towardsdatascience.com/what-are-the-data-centric-ai-concepts-behind-gpt-models-a590071bb727)/[Paper](https://arxiv.org/abs/2303.10158)
+
+
+## Catalog
+* [The Practical Guides for Large Language Models ](#the-practical-guides-for-large-language-models-)
+  * [Practical Guide for Models](#practical-guide-for-models)
+    * [BERT-style Language Models: Encoder-Decoder or Encoder-only](#bert-style-language-models-encoder-decoder-or-encoder-only)
+    * [GPT-style Language Models: Decoder-only](#gpt-style-language-models-decoder-only)
+  * [Practical Guide for Data](#practical-guide-for-data)
+    * [Pretraining data](#pretraining-data)
+    * [Finetuning data](#finetuning-data)
+    * [Test data/user data](#test-datauser-data)
+  * [Practical Guide for NLP Tasks](#practical-guide-for-nlp-tasks)
+    * [Traditional NLU tasks](#traditional-nlu-tasks)
+    * [Generation tasks](#generation-tasks)
+    * [Knowledge-intensive tasks](#knowledge-intensive-tasks)
+    * [Abilities with Scaling](#abilities-with-scaling)
+    * [Specific tasks](#specific-tasks)
+    * [Real-World ''Tasks''](#real-world-tasks)
+    * [Efficiency](#efficiency)
+    * [Trustworthiness](#trustworthiness)
+    * [Benchmark Instruction Tuning](#benchmark-instruction-tuning)
+    * [Alignment](#alignment)
+      * [Safety Alignment (Harmless)](#safety-alignment-harmless)
+      * [Truthfulness Alignment (Honest)](#truthfulness-alignment-honest)
+      * [Practical Guides for Prompting (Helpful)](#practical-guides-for-prompting-helpful)
+      * [Alignment Efforts of Open-source Community](#alignment-efforts-of-open-source-communtity)
+  * [Usage and Restrictions (Models and Data)](#usage-and-restrictions)
-### Other Practical Guides for LLMs
-- Why did all of the public reproduction of GPT-3 fail? In which tasks should we use GPT-3.5/ChatGPT? [Blog](https://jingfengyang.github.io/gpt)
-- Building LLM applications for production, 2023.
[Blog](https://huyenchip.com/2023/04/11/llm-engineering.html) +## Practical Guide for Models ### BERT-style Language Models: Encoder-Decoder or Encoder-only - BERT **BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding**, 2018, [Paper](https://aclanthology.org/N19-1423.pdf) -- RoBERTa **ALBERT: A Lite BERT for Self-supervised Learning of Language Representations**, 2019, [Paper](https://arxiv.org/abs/1909.11942) +- RoBERTa **RoBERTa: A Robustly Optimized BERT Pretraining Approach**, 2019, [Paper](https://arxiv.org/abs/1907.11692) - DistilBERT **DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter**, 2019, [Paper](https://arxiv.org/abs/1910.01108) - ALBERT **ALBERT: A Lite BERT for Self-supervised Learning of Language Representations**, 2019, [Paper](https://arxiv.org/abs/1909.11942) - UniLM **Unified Language Model Pre-training for Natural Language Understanding and Generation**, 2019 [Paper](https://arxiv.org/abs/1905.03197) - ELECTRA **ELECTRA: PRE-TRAINING TEXT ENCODERS AS DISCRIMINATORS RATHER THAN GENERATORS**, 2020, [Paper](https://openreview.net/pdf?id=r1xMH1BtvB) -- T5 **"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"**. *Colin Raffel et al.* JMLR 2019. [Paper](https://arxiv.org/abs/1910.10683)] -- GLM **"GLM-130B: An Open Bilingual Pre-trained Model"**. 2022. [Paper](https://arxiv.org/abs/2210.02414)] -- AlexaTM **"AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model"**. *Saleh Soltan et al.* arXiv 2022. [Paper](https://arxiv.org/abs/2208.01448)] +- T5 **"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"**. *Colin Raffel et al.* JMLR 2019. [Paper](https://arxiv.org/abs/1910.10683) +- GLM **"GLM-130B: An Open Bilingual Pre-trained Model"**. 2022. [Paper](https://arxiv.org/abs/2210.02414) +- AlexaTM **"AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model"**. *Saleh Soltan et al.* arXiv 2022. [Paper](https://arxiv.org/abs/2208.01448) - ST-MoE **ST-MoE: Designing Stable and Transferable Sparse Expert Models**. 2022 [Paper](https://arxiv.org/abs/2202.08906) @@ -60,7 +92,7 @@ We build an evolutionary tree of modern Large Language Models (LLMs) to trace th - GPT-3 **"Language Models are Few-Shot Learners"**. NeurIPS 2020. [Paper](https://arxiv.org/abs/2005.14165) - OPT **"OPT: Open Pre-trained Transformer Language Models"**. 2022. [Paper](https://arxiv.org/abs/2205.01068) - PaLM **"PaLM: Scaling Language Modeling with Pathways"**. *Aakanksha Chowdhery et al.* arXiv 2022. [Paper](https://arxiv.org/abs/2204.02311) -- BLOOM **"BLOOM: A 176B-Parameter Open-Access Multilingual Language Model"**. 2022. [Paper](https://arxiv.org/abs/2211.05100)] +- BLOOM **"BLOOM: A 176B-Parameter Open-Access Multilingual Language Model"**. 2022. [Paper](https://arxiv.org/abs/2211.05100) - MT-NLG **"Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model"**. 2021. [Paper](https://arxiv.org/abs/2201.11990) - GLaM **"GLaM: Efficient Scaling of Language Models with Mixture-of-Experts"**. ICML 2022. [Paper](https://arxiv.org/abs/2112.06905) - Gopher **"Scaling Language Models: Methods, Analysis & Insights from Training Gopher"**. 2021. [Paper](http://arxiv.org/abs/2112.11446v2) @@ -70,7 +102,9 @@ We build an evolutionary tree of modern Large Language Models (LLMs) to trace th - GPT-4 **"GPT-4 Technical Report"**. 2023. 
[Paper](http://arxiv.org/abs/2303.08774v2) - BloombergGPT **BloombergGPT: A Large Language Model for Finance**, 2023, [Paper](https://arxiv.org/abs/2303.17564) - GPT-NeoX-20B: **"GPT-NeoX-20B: An Open-Source Autoregressive Language Model"**. 2022. [Paper](https://arxiv.org/abs/2204.06745) - +- PaLM 2: **"PaLM 2 Technical Report"**. 2023. [Tech.Report](https://arxiv.org/abs/2305.10403) +- LLaMA 2: **"Llama 2: Open foundation and fine-tuned chat models"**. 2023. [Paper](https://arxiv.org/pdf/2307.09288) +- Claude 2: **"Model Card and Evaluations for Claude Models"**. 2023. [Model Card](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf) @@ -78,6 +112,7 @@ We build an evolutionary tree of modern Large Language Models (LLMs) to trace th ### Pretraining data +- **RedPajama**, 2023. [Repo](https://github.com/togethercomputer/RedPajama-Data) - **The Pile: An 800GB Dataset of Diverse Text for Language Modeling**, Arxiv 2020. [Paper](https://arxiv.org/abs/2101.00027) - **How does the pre-training objective affect what large language models learn about linguistic properties?**, ACL 2022. [Paper](https://aclanthology.org/2022.acl-short.16/) - **Scaling laws for neural language models**, 2020. [Paper](https://arxiv.org/abs/2001.08361) @@ -179,6 +214,7 @@ We build a decision flow for choosing LLMs or fine-tuned models~\protect\footnot - **SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization**, Arxiv 2023. [Paper](https://arxiv.org/abs/2303.13035) 2. Spurious biases +- **Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning**, Findings of ACL 2023 [Paper](https://aclanthology.org/2023.findings-acl.284/) - **Shortcut learning of large language models in natural language understanding: A survey**, 2023 [Paper](https://arxiv.org/abs/2208.11857) - **Mitigating gender bias in captioning system**, WWW 2020 [Paper](https://dl.acm.org/doi/abs/10.1145/3442381.3449950) - **Calibrate Before Use: Improving Few-Shot Performance of Language Models**, ICML 2021 [Paper](https://arxiv.org/abs/2102.09690) @@ -199,7 +235,7 @@ We build a decision flow for choosing LLMs or fine-tuned models~\protect\footnot - **Cross-task generalization via natural language crowdsourcing instructions**, ACL 2022 [Paper](https://aclanthology.org/2022.acl-long.244.pdf) - Tk-INSTRUCT: **Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks**, EMNLP 2022 [Paper](https://aclanthology.org/2022.emnlp-main.340/) - FLAN-T5/PaLM: **Scaling Instruction-Finetuned Language Models**, Arxiv 2022 [Paper](https://arxiv.org/abs/2210.11416) -- **The Flan Collection: Designing Data and Methods for Effective Instruction Tuning**, Arxiv 2023 [Paper](https://arxiv.org/abs/2208.03299) +- **The Flan Collection: Designing Data and Methods for Effective Instruction Tuning**, Arxiv 2023 [Paper](https://arxiv.org/abs/2301.13688) - **OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization**, Arxiv 2023 [Paper](https://arxiv.org/abs/2212.12017) ### Alignment @@ -227,20 +263,321 @@ We build a decision flow for choosing LLMs or fine-tuned models~\protect\footnot #### Practical Guides for Prompting (Helpful) -- OpenAI Cookbook. [Blog](https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md) -- Prompt Engineering. [Blog](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/) -- ChatGPT Prompt Engineering for Developers! 
[Course](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/) +- **OpenAI Cookbook**. [Blog](https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md) +- **Prompt Engineering**. [Blog](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/) +- **ChatGPT Prompt Engineering for Developers!** [Course](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/) #### Alignment Efforts of Open-source Communtity - **Self-Instruct: Aligning Language Model with Self Generated Instructions**, Arxiv 2022 [Paper](https://arxiv.org/abs/2212.10560) -- Alpaca. [Repo](https://github.com/tatsu-lab/stanford_alpaca) -- Vicuna. [Repo](https://github.com/lm-sys/FastChat) -- Dolly. [Blog](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm) -- DeepSpeed-Chat. [Blog](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat) -- GPT4All. [Repo](https://github.com/nomic-ai/gpt4all) -- OpenAssitant. [Repo](https://github.com/LAION-AI/Open-Assistant) -- ChatGLM. [Repo](https://github.com/THUDM/ChatGLM-6B) -- MOSS. [Repo](https://github.com/OpenLMLab/MOSS) - +- **Alpaca**. [Repo](https://github.com/tatsu-lab/stanford_alpaca) +- **Vicuna**. [Repo](https://github.com/lm-sys/FastChat) +- **Dolly**. [Blog](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm) +- **DeepSpeed-Chat**. [Blog](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat) +- **GPT4All**. [Repo](https://github.com/nomic-ai/gpt4all) +- **OpenAssitant**. [Repo](https://github.com/LAION-AI/Open-Assistant) +- **ChatGLM**. [Repo](https://github.com/THUDM/ChatGLM-6B) +- **MOSS**. [Repo](https://github.com/OpenLMLab/MOSS) +- **Lamini**. [Repo](https://github.com/lamini-ai/lamini/)/[Blog](https://lamini.ai/blog/introducing-lamini) + +## Usage and Restrictions + + + + +We build a table summarizing the LLMs usage restrictions (e.g. for commercial and research purposes). In particular, we provide the information from the models and their pretraining data's perspective. +We urge the users in the community to refer to the licensing information for public models and data and use them in a responsible manner. +We urge the developers to pay special attention to licensing, make them transparent and comprehensive, to prevent any unwanted and unforeseen usage. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+(Rows prefixed with `--->` denote models derived from the base model listed directly above them.)
+
+| LLMs | Model License | Commercial Use | Other notable restrictions | Data License | Corpus |
+| --- | --- | --- | --- | --- | --- |
+| **Encoder-only** | | | | | |
+| BERT series of models (general domain) | Apache 2.0 | ✅ | | Public | BooksCorpus, English Wikipedia |
+| RoBERTa | MIT license | ✅ | | Public | BookCorpus, CC-News, OpenWebText, STORIES |
+| ERNIE | Apache 2.0 | ✅ | | Public | English Wikipedia |
+| SciBERT | Apache 2.0 | ✅ | | Public | BERT corpus, 1.14M papers from Semantic Scholar |
+| LegalBERT | CC BY-SA 4.0 | ❌ | | Public (except data from the Case Law Access Project) | EU legislation, US court cases, etc. |
+| BioBERT | Apache 2.0 | ✅ | | PubMed | PubMed, PMC |
+| **Encoder-Decoder** | | | | | |
+| T5 | Apache 2.0 | ✅ | | Public | C4 |
+| Flan-T5 | Apache 2.0 | ✅ | | Public | C4, mixture of tasks (Fig. 2 in paper) |
+| BART | Apache 2.0 | ✅ | | Public | RoBERTa corpus |
+| GLM | Apache 2.0 | ✅ | | Public | BooksCorpus and English Wikipedia |
+| ChatGLM | ChatGLM License | ❌ | No use for illegal purposes or military research; no harming the public interest of society | N/A | 1T tokens of Chinese and English corpus |
+| **Decoder-only** | | | | | |
+| GPT-2 | Modified MIT License | ✅ | Use GPT-2 responsibly and clearly indicate your content was created using GPT-2 | Public | WebText |
+| GPT-Neo | MIT license | ✅ | | Public | Pile |
+| GPT-J | Apache 2.0 | ✅ | | Public | Pile |
+| ---> Dolly | CC BY-NC 4.0 | ❌ | | CC BY-NC 4.0; subject to the terms of use of the data generated by OpenAI | Pile, Self-Instruct |
+| ---> GPT4All-J | Apache 2.0 | ✅ | | Public | GPT4All-J dataset |
+| Pythia | Apache 2.0 | ✅ | | Public | Pile |
+| ---> Dolly v2 | MIT license | ✅ | | Public | Pile, databricks-dolly-15k |
+| OPT | OPT-175B LICENSE AGREEMENT | ❌ | No development relating to surveillance research and military; no harming the public interest of society | Public | RoBERTa corpus, the Pile, PushShift.io Reddit |
+| ---> OPT-IML | OPT-175B LICENSE AGREEMENT | ❌ | Same as OPT | Public | OPT corpus, extended version of Super-NaturalInstructions |
+| YaLM | Apache 2.0 | ✅ | | Unspecified | Pile, texts in Russian collected by the team |
+| BLOOM | The BigScience RAIL License | ✅ | No generating verifiably false information with the purpose of harming others; no generating content without expressly disclaiming that the text is machine generated | Public | ROOTS corpus (Laurençon et al., 2022) |
+| ---> BLOOMZ | The BigScience RAIL License | ✅ | Same as BLOOM | Public | ROOTS corpus, xP3 |
+| Galactica | CC BY-NC 4.0 | ❌ | | N/A | The Galactica Corpus |
+| LLaMA | Non-commercial bespoke license | ❌ | No development relating to surveillance research and military; no harming the public interest of society | Public | CommonCrawl, C4, GitHub, Wikipedia, etc. |
+| ---> Alpaca | CC BY-NC 4.0 | ❌ | | CC BY-NC 4.0; subject to the terms of use of the data generated by OpenAI | LLaMA corpus, Self-Instruct |
+| ---> Vicuna | CC BY-NC 4.0 | ❌ | | Subject to the terms of use of the data generated by OpenAI; privacy practices of ShareGPT | LLaMA corpus, 70K conversations from ShareGPT.com |
+| ---> GPT4All | GPL Licensed LLaMA | ❌ | | Public | GPT4All dataset |
+| OpenLLaMA | Apache 2.0 | ✅ | | Public | RedPajama |
+| CodeGeeX | The CodeGeeX License | ❌ | No use for illegal purposes or military research | Public | Pile, CodeParrot, etc. |
+| StarCoder | BigCode OpenRAIL-M v1 license | ✅ | No generating verifiably false information with the purpose of harming others; no generating content without expressly disclaiming that the text is machine generated | Public | The Stack |
+| MPT-7B | Apache 2.0 | ✅ | | Public | mC4 (English), The Stack, RedPajama, S2ORC |
+| Falcon | TII Falcon LLM License | ✅/❌ | Available under a license allowing commercial use | Public | RefinedWeb |
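The table above is a point-in-time summary, and model authors occasionally relicense their checkpoints. As a quick cross-check against the licensing information summarized here, below is a minimal sketch (not part of this repository) that reads the license a model repo declares on the Hugging Face Hub. The repo ids are only examples, and it assumes the declared license is exposed through the `license:` tags returned by the `huggingface_hub` client.

```python
# Minimal sketch (illustrative, not from this repo): look up the license a model
# repository declares on the Hugging Face Hub, so entries in the table above can
# be double-checked against the current metadata.
from huggingface_hub import model_info


def declared_license(repo_id: str) -> str:
    """Return the license id declared in a model repo's Hub metadata, if any."""
    info = model_info(repo_id)
    # The Hub surfaces the declared license as a "license:<id>" tag.
    for tag in info.tags:
        if tag.startswith("license:"):
            return tag.split(":", 1)[1]
    return "unspecified"


if __name__ == "__main__":
    # Example repo ids; substitute the models you actually plan to use.
    for repo_id in ["bert-base-uncased", "bigscience/bloom", "tiiuae/falcon-7b"]:
        print(f"{repo_id}: {declared_license(repo_id)}")
```

Note that a Hub tag only reflects what the uploader declared; the license text in the model card and the data licenses listed in the table remain the authoritative sources.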
+
+## Star History
+
+[![Star History Chart](https://api.star-history.com/svg?repos=Mooler0410/LLMsPracticalGuide&type=Date)](https://star-history.com/#Mooler0410/LLMsPracticalGuide&Date)
diff --git a/awesome_examples/tableQA.md b/awesome_examples/tableQA.md
index 2a98982..27294ae 100644
--- a/awesome_examples/tableQA.md
+++ b/awesome_examples/tableQA.md
@@ -4,6 +4,8 @@ In this lesson, we ask the model to answer questions based on a table. The table
 
 Comparing the following two examples, ChatGPT is vulnerable to table row order perturbation, while GPT4 is robust to table row order perturbation. Such robustness could probably be due to two reasons. The first reason is larger model size and more pretraining data of GPT4. Secondly, better truthfulness stemming from better RLHF alignment could help GPT4 follow different formats of the same instructions better.
 
+Note that smaller finetuned models suffer heavily from this robustness issue, according to the paper: [TableFormer: Robust Transformer Modeling for Table-Text Encoding](https://arxiv.org/pdf/2203.00274.pdf)
+
 # Example 1 (2022/04/29)
 
 ## ChatGPT
diff --git a/imgs/qr_version.jpg b/imgs/qr_version.jpg
new file mode 100644
index 0000000..01b8da7
Binary files /dev/null and b/imgs/qr_version.jpg differ
diff --git a/imgs/tree.jpg b/imgs/tree.jpg
new file mode 100644
index 0000000..99486bc
Binary files /dev/null and b/imgs/tree.jpg differ
diff --git a/imgs/tree.png b/imgs/tree.png
index 24d20af..681b42c 100644
Binary files a/imgs/tree.png and b/imgs/tree.png differ
diff --git a/source/README.md b/source/README.md
index ed00f88..da034db 100644
--- a/source/README.md
+++ b/source/README.md
@@ -1,4 +1,5 @@
 ### change log
 - V1 (04/07/2023): First version of the figure.
-- V2 (04/29/2023): Second version of the figure. (The gif version is not updated)
+- V2 (04/29/2023): Second version of the figure. (The gif version is not updated)
+- V3 (08/06/2023): Added Claude 2 and Llama-2-Chat
diff --git a/source/figure_gif.pptx b/source/figure_gif.pptx
index 1f5e771..5c3b2ab 100644
Binary files a/source/figure_gif.pptx and b/source/figure_gif.pptx differ
diff --git a/source/figure_still.pptx b/source/figure_still.pptx
index e0b0c78..f693241 100644
Binary files a/source/figure_still.pptx and b/source/figure_still.pptx differ
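As a companion to the `awesome_examples/tableQA.md` change above, the row-order perturbation it describes can be reproduced with a few lines of code. The sketch below is illustrative rather than part of this repository; it assumes the legacy `openai` Python client (pre-1.0 `ChatCompletion` interface) and an `OPENAI_API_KEY` set in the environment, and the table contents are made up for the example.

```python
# Illustrative sketch: ask the same table question twice, once with the original
# row order and once with the rows shuffled, and check whether the answer changes.
# Assumes openai<1.0 and OPENAI_API_KEY in the environment.
import random
import openai

HEADER = ["Country", "Capital", "Population (millions)"]
ROWS = [
    ["France", "Paris", "68"],
    ["Japan", "Tokyo", "125"],
    ["Brazil", "Brasilia", "203"],
]
QUESTION = "Which country in the table has the largest population?"


def to_markdown(header, rows):
    """Render a simple markdown table."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in rows]
    return "\n".join(lines)


def ask(model, table_md, question):
    """Query a chat model with the table and question, deterministically."""
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,
        messages=[{
            "role": "user",
            "content": f"{table_md}\n\nAnswer based only on the table: {question}",
        }],
    )
    return response["choices"][0]["message"]["content"].strip()


if __name__ == "__main__":
    shuffled = ROWS[:]
    random.shuffle(shuffled)
    for model in ["gpt-3.5-turbo", "gpt-4"]:
        original = ask(model, to_markdown(HEADER, ROWS), QUESTION)
        perturbed = ask(model, to_markdown(HEADER, shuffled), QUESTION)
        status = "consistent" if original == perturbed else "changed"
        print(f"{model}: {status}\n  original:  {original}\n  perturbed: {perturbed}")
```

Exact string comparison is a crude consistency check; in practice one would normalize the answers or grade them against the gold label, as the examples in `tableQA.md` do manually.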