- NeuralChat was showcased at the Intel Innovation’23 Keynote and Google Cloud Next'23 to demonstrate GenAI/LLM capabilities on Intel Xeon Scalable Processors.
- NeuralChat supports custom chatbot development and deployment on a broad range of Intel hardware, such as Xeon Scalable Processors, Gaudi2, Xeon CPU Max Series, Data Center GPU Max Series, Arc Series, and Core Processors. Check out the Notebooks and the sample code below.
# pip install intel-extension-for-transformers
from intel_extension_for_transformers.neural_chat import build_chatbot
chatbot = build_chatbot()
response = chatbot.predict("Tell me about Intel Xeon Scalable Processors.")
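By default, build_chatbot() loads a predefined chat model. To point it at a different base model, you can pass a pipeline configuration; the following is a minimal sketch that assumes the PipelineConfig class and its model_name_or_path argument behave as described in the NeuralChat documentation:

from intel_extension_for_transformers.neural_chat import PipelineConfig, build_chatbot

# Assumed configuration entry point; any Hugging Face model id or local path should work here.
config = PipelineConfig(model_name_or_path="Intel/neural-chat-7b-v1-1")
chatbot = build_chatbot(config)
print(chatbot.predict("Tell me about Intel Xeon Scalable Processors."))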
- LLM Runtime extends the Hugging Face Transformers API to provide seamless low-precision inference for popular LLMs, supporting mainstream low-precision data types such as INT8/FP8/INT4/FP4/NF4.
pip install intel-extension-for-transformers
For more installation methods, please refer to the Installation Page.
Intel® Extension for Transformers is an innovative toolkit to accelerate Transformer-based models on Intel platforms, and is particularly effective on 4th Gen Intel Xeon Scalable processors (codenamed Sapphire Rapids). The toolkit provides the following key features and examples:
- Seamless user experience of model compression on Transformer-based models by extending Hugging Face transformers APIs and leveraging Intel® Neural Compressor
- Advanced software optimizations and unique compression-aware runtime (released with NeurIPS 2022's papers Fast DistilBERT on CPUs and QuaLA-MiniLM: a Quantized Length Adaptive MiniLM, and NeurIPS 2021's paper Prune Once for All: Sparse Pre-Trained Language Models)
- Optimized Transformer-based model packages such as Stable Diffusion, GPT-J-6B, GPT-NEOX, BLOOM-176B, T5, and Flan-T5, and end-to-end workflows such as SetFit-based text classification and document level sentiment analysis (DLSA)
- NeuralChat, a customizable chatbot framework to create your own chatbot within minutes by leveraging a rich set of plugins such as Knowledge Retrieval, Speech Interaction, Query Caching, and Security Guardrail
- Inference of Large Language Models (LLMs) in pure C/C++ with weight-only quantization kernels, supporting GPT-NEOX, LLAMA, MPT, FALCON, BLOOM-7B, OPT, ChatGLM2-6B, GPT-J-6B, and Dolly-v2-3B
Below is sample code to enable weight-only low-precision inference. See more examples.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig
model_name = "Intel/neural-chat-7b-v1-1" # Hugging Face model_id or local model
config = WeightOnlyQuantConfig(compute_dtype="int8", weight_dtype="int4")  # INT4 weight-only quantization with int8 compute
prompt = "Once upon a time, a little girl"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=config)
gen_tokens = model.generate(inputs, max_new_tokens=300)
outputs = tokenizer.batch_decode(gen_tokens)
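The outputs list from the snippet above already holds decoded strings, so the generated continuation can be printed directly:

print(outputs[0])  # decoded text: the prompt followed by the model's continuation

The next snippet repeats the same pipeline with an INT8 weight-only configuration (bf16 compute dtype, int8 weights).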
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig
model_name = "Intel/neural-chat-7b-v1-1" # Hugging Face model_id or local model
config = WeightOnlyQuantConfig(compute_dtype="bf16", weight_dtype="int8")  # INT8 weight-only quantization with bf16 compute
prompt = "Once upon a time, a little girl"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=config)
gen_tokens = model.generate(inputs, max_new_tokens=300)
outputs = tokenizer.batch_decode(gen_tokens)
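To get a rough feel for per-token latency on your own machine, you can continue from either snippet above and time a short greedy generation. This is only an illustrative sketch; the warm-up run, token count, and timing method are assumptions and not the exact methodology behind the table below:

import time

new_tokens = 32
model.generate(inputs, max_new_tokens=4)            # warm-up run so one-time setup cost is excluded
start = time.perf_counter()
model.generate(inputs, max_new_tokens=new_tokens)   # with no sampling flags this typically runs greedy decoding
elapsed = time.perf_counter() - start
# Rough average per generated token; prompt prefill time is amortized into the figure.
print(f"approximate per-token latency: {elapsed / new_tokens * 1000:.2f} ms")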
Here is the average accuracy of validated models on Lambada (OpenAI), HellaSwag, Winogrande, PIQA, and WikiText. The next-token latency is measured with 32 input tokens and greedy search on Intel's 4th Gen Xeon Scalable processors (Sapphire Rapids).
| Model | FP32 Accuracy | INT4 Accuracy (Group size 32) | INT4 Accuracy (Group size 128) | Next Token Latency |
|---|---|---|---|---|
| EleutherAI/gpt-j-6B | 0.643 | 0.644 | 0.64 | 21.98 ms |
| meta-llama/Llama-2-7b-hf | 0.69 | 0.69 | 0.685 | 24.55 ms |
| decapoda-research/llama-7b-hf | 0.689 | 0.682 | 0.68 | 24.84 ms |
| EleutherAI/gpt-neox-20b | 0.674 | 0.672 | 0.669 | 80.16 ms |
| mosaicml/mpt-7b-chat | 0.672 | 0.67 | 0.666 | 35.84 ms |
| tiiuae/falcon-7b | 0.698 | 0.694 | 0.693 | 36.1 ms |
| baichuan-inc/baichuan-7B | 0.474 | 0.471 | 0.47 | Coming Soon |
| facebook/opt-6.7b | 0.65 | 0.647 | 0.643 | Coming Soon |
| databricks/dolly-v2-3b | 0.613 | 0.609 | 0.609 | 22.02 ms |
| tiiuae/falcon-40b-instruct | 0.756 | 0.757 | 0.755 | Coming Soon |
Find other models such as ChatGLM, ChatGLM2, and StarCoder in LLM Runtime.
| Section | Topics |
|---|---|
| OVERVIEW | NeuralChat, LLM Runtime |
| NEURALCHAT | Chatbot on Intel CPU, Chatbot on Intel GPU, Chatbot on Gaudi, Chatbot on Client, More Notebooks |
| LLM RUNTIME | LLM Runtime, Low Precision Kernels, Tensor Parallelism |
| LLM COMPRESSION | SmoothQuant (INT8), Weight-only Quantization (INT4/FP4/NF4/INT8), QLoRA on CPU |
| GENERAL COMPRESSION | Quantization, Pruning, Distillation, Orchestration, Neural Architecture Search, Export, Metrics, Objectives, Pipeline, Length Adaptive, Early Exit, Data Augmentation |
| TUTORIALS & RESULTS | Tutorials, LLM List, General Model List, Model Performance |
- Blog published on Medium: NeuralChat: Simplifying Supervised Instruction Fine-tuning and Reinforcement Aligning for Chatbots (Sep 2023)
- Intel Innovation'23 Keynote: Intel Innovation 2023 Keynote by Greg Lavender (Sep 2023)
- Blog on Intel Community: NeuralChat: A Customizable Chatbot Framework (Sep 2023)
- Blog published on Medium: NeuralChat: A Customizable Chatbot Framework (Sep 2023)
- Blog published on Medium: Faster Stable Diffusion Inference with Intel Extension for Transformers (July 2023)
- Blog on Intel Developer News: The Moat Is Trust, Or Maybe Just Responsible AI (July 2023)
- Blog on Intel Developer News: Create Your Own Custom Chatbot (July 2023)
- Blog on Intel Developer News: Accelerate Llama 2 with Intel AI Hardware and Software Optimizations (July 2023)
- arXiv: An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs (June 2023)
- Blog published on Medium: Simplify Your Custom Chatbot Deployment (June 2023)
View Full Publication List.
- Excellent open-source projects: bitsandbytes, FastChat, fastRAG, ggml, gptq, llama.cpp, lm-evaluation-harness, peft, trl, and many others.
- Thanks to all the contributors, including Ikko Eltociear Ashimine, Hardik Kamboj, Sangjune Park, Kevin Ta, Huiyan Cao, Xigui Wang, Jiafu Zhang, Tyler Titsworth, Yi Wang, Samanway Sadhu, Jiqing Feng, Jonathan Mamou, and Niroop Ammbashankar.
We welcome any interesting ideas on model compression techniques and LLM-based chatbot development! Feel free to reach out to us; we look forward to collaborating with you on Intel Extension for Transformers!