Support weight only quantization with intel-extension-for-transformers. #14504

Merged

Commits (53 total; the diff below shows changes from 23 commits)
ad38444
Support weight only quantization with intel-extension-for-transformers
PenghuiCheng Dec 11, 2023
c769358
merge code from master branch
PenghuiCheng Jan 30, 2024
203ff0d
Update document
PenghuiCheng Feb 1, 2024
4fe596c
Support weight only quantization with intel-extension-for-transformers
PenghuiCheng Dec 11, 2023
156cfe9
Update document
PenghuiCheng Feb 1, 2024
1c81b14
format code style
PenghuiCheng Feb 20, 2024
0aa3c8b
merge branch
PenghuiCheng Feb 20, 2024
cfb932a
Format code style
PenghuiCheng Feb 20, 2024
8af4ba1
Update code
PenghuiCheng Feb 21, 2024
e2f1559
format code style
PenghuiCheng Feb 21, 2024
71133b4
move weight_only_quantization.mdx to intel.mdx
PenghuiCheng Feb 21, 2024
50fda10
Update code
PenghuiCheng Feb 21, 2024
814abd8
Merge remote-tracking branch 'upstream/master' into penghuic/itrex_we…
PenghuiCheng Feb 23, 2024
5f24db3
Fixed UT error
PenghuiCheng Feb 26, 2024
5f97bd5
Merge from master branch
PenghuiCheng Feb 26, 2024
e990b5f
update code
PenghuiCheng Feb 26, 2024
35f1829
Update code
PenghuiCheng Feb 26, 2024
587a55a
Merge remote-tracking branch 'upstream/master' into penghuic/itrex_we…
PenghuiCheng Feb 26, 2024
8bdcc79
merge from master branch
PenghuiCheng Feb 29, 2024
da1a6e4
Merge from master branch
PenghuiCheng Mar 4, 2024
94482ac
Update code
PenghuiCheng Mar 4, 2024
8abc44e
Update code
PenghuiCheng Mar 4, 2024
5856472
Merge remote-tracking branch 'upstream/master' into penghuic/itrex_we…
PenghuiCheng Mar 5, 2024
39a759a
Update poetry.lock
PenghuiCheng Mar 6, 2024
4a239ca
Merge from master branch
PenghuiCheng Mar 6, 2024
00f433f
Fixed pylint error
PenghuiCheng Mar 7, 2024
b8830b8
Update poetry file
PenghuiCheng Mar 8, 2024
d1ae253
Merge branch 'master' into penghuic/itrex_weight_only
PenghuiCheng Mar 8, 2024
5e777ee
Fixed pylint error
PenghuiCheng Mar 11, 2024
461efb6
Merge remote-tracking branch 'upstream/master' into penghuic/itrex_we…
PenghuiCheng Mar 11, 2024
a45d207
Merge branch 'master' into penghuic/itrex_weight_only
baskaryan Mar 12, 2024
89f611a
Merge remote-tracking branch 'upstream/master' into penghuic/itrex_we…
PenghuiCheng Mar 12, 2024
e912835
Update poetry lock
PenghuiCheng Mar 13, 2024
93b35e8
Merge from master branch
PenghuiCheng Mar 13, 2024
34ad951
Merge remote-tracking branch 'upstream/master' into penghuic/itrex_we…
PenghuiCheng Mar 17, 2024
e4655b3
Update code
PenghuiCheng Mar 17, 2024
279cc63
Merge from master branch
PenghuiCheng Mar 25, 2024
80c6793
poetry
baskaryan Mar 27, 2024
a5846ef
Merge from master branch
PenghuiCheng Mar 28, 2024
2402219
poetry
PenghuiCheng Mar 28, 2024
fa9c724
Merge branch 'master' into penghuic/itrex_weight_only
PenghuiCheng Mar 28, 2024
fd831b3
Merge branch 'master' into penghuic/itrex_weight_only
PenghuiCheng Mar 28, 2024
abe6b02
Merge branch 'master' into penghuic/itrex_weight_only
baskaryan Mar 28, 2024
a1c710b
fmt
baskaryan Mar 28, 2024
d3329c7
fmt
baskaryan Mar 28, 2024
fcc6b16
Merge branch 'master' into penghuic/itrex_weight_only
baskaryan Mar 29, 2024
a26ac3b
Merge branch 'master' into penghuic/itrex_weight_only
PenghuiCheng Mar 30, 2024
614ee14
Update peotry file
PenghuiCheng Apr 3, 2024
1139982
merge from master branch
PenghuiCheng Apr 3, 2024
9e4f87f
Update poetry file
PenghuiCheng Apr 3, 2024
f2ccc46
Merge branch 'master' into penghuic/itrex_weight_only
baskaryan Apr 3, 2024
a9ab6f0
fmt
baskaryan Apr 3, 2024
37b3d33
fmt
baskaryan Apr 3, 2024
323 changes: 323 additions & 0 deletions docs/docs/integrations/llms/weight_only_quantization.ipynb

Large diffs are not rendered by default.

62 changes: 62 additions & 0 deletions docs/docs/integrations/providers/intel.mdx
@@ -0,0 +1,62 @@
# Intel® Extension for Transformers

>[Intel® Extension for Transformers](https://github.com/intel/intel-extension-for-transformers)
>(ITREX) is an innovative toolkit to accelerate Transformer-based models on Intel platforms, particularly effective on 4th generation Intel Xeon Scalable processors (codenamed Sapphire Rapids).
>
>Here we introduce weight-only quantization for Transformers large language models with ITREX. Weight-only quantization is a technique used in deep learning to reduce the memory and computational requirements of neural networks. In a deep neural network, the model parameters, also known as weights, are typically represented using floating-point numbers, which can consume a significant amount of memory and require intensive computational resources.

Quantization is a process that involves reducing the precision of these weights by representing them using a smaller number of bits. Weight-only quantization specifically focuses on quantizing the weights of the neural network while keeping other components, such as activations, in their original precision.

## Introduction

As large language models (LLMs) become more prevalent, there is a growing need for quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy. Compared to [normal quantization](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/quantization.md) such as W8A8, weight-only quantization is often a better trade-off between performance and accuracy, since the bottleneck of deploying LLMs is memory bandwidth and weight-only quantization usually leads to better accuracy.
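To see why memory bandwidth dominates, here is a back-of-the-envelope sketch; the 7B parameter count and the 32-element group size are illustrative assumptions, not measurements:

```python
# Rough weight footprint of a hypothetical 7B-parameter model.
params = 7e9

fp32_gb = params * 4 / 1e9    # 4 bytes per weight
int4_gb = params * 0.5 / 1e9  # 4 bits per weight

# 4-bit schemes also store one scale (assumed fp32 here) per group of weights.
group_size = 32
scales_gb = (params / group_size) * 4 / 1e9

print(f"fp32 weights: {fp32_gb:.1f} GB")                       # ~28.0 GB
print(f"int4 weights + scales: {int4_gb + scales_gb:.1f} GB")  # ~4.4 GB
```

Every generated token has to stream the weights through memory, so shrinking them by roughly a factor of six directly relieves the bandwidth bottleneck.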

## Installation and Setup

We need to install the `intel-extension-for-transformers` Python package (the `transformers` package is also required).

```bash
pip install intel-extension-for-transformers
```

## Examples

See a [usage example](../docs/integrations/llms/weight_only_quantization.ipynb).
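Below is a minimal sketch of the pipeline API; the model, task, and configuration values are illustrative, and any `WeightOnlyQuantConfig` field left unset falls back to the defaults documented in the next section:

```python
from intel_extension_for_transformers.transformers import WeightOnlyQuantConfig
from langchain_community.llms import WeightOnlyQuantPipeline

# Store the weights as nf4 while still computing in fp32.
conf = WeightOnlyQuantConfig(weight_dtype="nf4")
llm = WeightOnlyQuantPipeline.from_model_id(
    model_id="google/flan-t5-large",
    task="text2text-generation",
    quantization_config=conf,
    pipeline_kwargs={"max_new_tokens": 10},
)
print(llm("What is weight-only quantization?"))
```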

## Details of the Configuration Parameters

Here are the details of the `WeightOnlyQuantConfig` class.

#### weight_dtype (string): Weight Data Type, default is "nf4".
The weights can be quantized to the following data types for storage (`weight_dtype` in `WeightOnlyQuantConfig`):
* **int8**: Uses 8-bit data type.
* **int4_fullrange**: Uses the full int4 range including -8, compared with the normal int4 range of [-7,7].
* **int4_clip**: Clips and retains the values within the int4 range, setting others to zero.
* **nf4**: Uses the normalized float 4-bit data type.
* **fp4_e2m1**: Uses the regular float 4-bit data type. "e2" means that 2 bits are used for the exponent, and "m1" means that 1 bit is used for the mantissa.

#### compute_dtype (string): Computing Data Type, Default is "fp32".
While these techniques store weights in 4 or 8 bits, the computation still happens in float32, bfloat16, or int8 (`compute_dtype` in `WeightOnlyQuantConfig`):
* **fp32**: Uses the float32 data type to compute.
* **bf16**: Uses the bfloat16 data type to compute.
* **int8**: Uses 8-bit data type to compute.

#### llm_int8_skip_modules (list of module names): Modules to Skip Quantization, Default is None.
A list of modules for which quantization is skipped.

#### scale_dtype (string): The Scale Data Type, Default is "fp32".
Currently only "fp32" (float32) is supported.

#### mse_range (boolean): Whether to Search for the Best Clip Ratio in the Range [0.805, 1.0] with Step 0.005, Default is False.
#### use_double_quant (boolean): Whether to Quantize the Scales, Default is False.
Not supported yet.
#### double_quant_dtype (string): Reserved for Double Quantization.
#### double_quant_scale_dtype (string): Reserved for Double Quantization.
#### group_size (int): Group Size Used for Quantization.
#### scheme (string): The Format the Weights Are Quantized to, Default is "sym".
* **sym**: Symmetric.
* **asym**: Asymmetric.
#### algorithm (string): The Algorithm Used to Improve Accuracy, Default is "RTN".
* **RTN**: Round-to-nearest (RTN) is the most intuitive quantization method: each weight is simply rounded to the nearest representable value.
* **AWQ**: Protecting only 1% of the salient weights can greatly reduce quantization error. The salient weight channels are selected by observing the distribution of activations and weights per channel. To preserve them, the salient weights are multiplied by a large scale factor before quantization.
* **TEQ**: A trainable equivalent transformation that preserves the FP32 precision in weight-only quantization.
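Putting these parameters together, here is a hedged configuration sketch; the exact keyword names and accepted values should be checked against the ITREX release you install:

```python
from intel_extension_for_transformers.transformers import WeightOnlyQuantConfig

config = WeightOnlyQuantConfig(
    weight_dtype="int4_clip",  # storage precision for the weights
    compute_dtype="fp32",      # precision used during computation
    group_size=32,             # weights are quantized in groups of 32
    scheme="asym",             # asymmetric quantization
    algorithm="RTN",           # round-to-nearest
)
```

The resulting object is passed as `quantization_config` to `WeightOnlyQuantPipeline.from_model_id`, as in the example above.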
Binary file modified docs/static/img/extraction_trace_few_shot.png
Binary file modified docs/static/img/extraction_trace_parsing.png
Binary file modified docs/static/img/extraction_trace_tool.png
12 changes: 12 additions & 0 deletions libs/community/langchain_community/llms/__init__.py
@@ -570,6 +570,14 @@ def _import_watsonxllm() -> Type[BaseLLM]:
return WatsonxLLM


def _import_weight_only_quantization() -> Any:
from langchain_community.llms.weight_only_quantization import (
WeightOnlyQuantPipeline,
)

return WeightOnlyQuantPipeline


def _import_writer() -> Type[BaseLLM]:
from langchain_community.llms.writer import Writer

@@ -777,6 +785,8 @@ def __getattr__(name: str) -> Any:
return _import_vllm_openai()
elif name == "WatsonxLLM":
return _import_watsonxllm()
elif name == "WeightOnlyQuantPipeline":
return _import_weight_only_quantization()
elif name == "Writer":
return _import_writer()
elif name == "Xinference":
@@ -879,6 +889,7 @@ def __getattr__(name: str) -> Any:
"VLLM",
"VLLMOpenAI",
"WatsonxLLM",
"WeightOnlyQuantPipeline",
"Writer",
"OctoAIEndpoint",
"Xinference",
@@ -970,6 +981,7 @@ def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
"vllm": _import_vllm,
"vllm_openai": _import_vllm_openai,
"watsonxllm": _import_watsonxllm,
"weight_only_quantization": _import_weight_only_quantization,
"writer": _import_writer,
"xinference": _import_xinference,
"javelin-ai-gateway": _import_javelin_ai_gateway,
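These hooks wire the new class into the community package's lazy-import machinery. A quick sanity check, sketched under the assumption that the diff above is applied:

```python
from langchain_community import llms

# Resolved lazily through the __getattr__ branch added above.
pipeline_cls = llms.WeightOnlyQuantPipeline

# The type registry maps the "weight_only_quantization" key to the same importer.
loader = llms.get_type_to_cls_dict()["weight_only_quantization"]
assert loader() is pipeline_cls
```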
244 changes: 244 additions & 0 deletions libs/community/langchain_community/llms/weight_only_quantization.py
@@ -0,0 +1,244 @@
import importlib
from typing import Any, List, Mapping, Optional

from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import Extra

from langchain_community.llms.utils import enforce_stop_tokens

DEFAULT_MODEL_ID = "google/flan-t5-large"
DEFAULT_TASK = "text2text-generation"
VALID_TASKS = ("text2text-generation", "text-generation", "summarization")


class WeightOnlyQuantPipeline(LLM):
"""Weight only quantized model.

    To use, you should have the `intel-extension-for-transformers` package and
`transformers` package installed.
intel-extension-for-transformers:
https://github.com/intel/intel-extension-for-transformers

Example using from_model_id:
.. code-block:: python

from langchain_community.llms import WeightOnlyQuantPipeline
from intel_extension_for_transformers.transformers import (
WeightOnlyQuantConfig
)
            config = WeightOnlyQuantConfig()
hf = WeightOnlyQuantPipeline.from_model_id(
model_id="google/flan-t5-large",
task="text2text-generation"
pipeline_kwargs={"max_new_tokens": 10},
quantization_config=config,
)
Example passing pipeline in directly:
.. code-block:: python

from langchain_community.llms import WeightOnlyQuantPipeline
from intel_extension_for_transformers.transformers import (
AutoModelForSeq2SeqLM
)
from intel_extension_for_transformers.transformers import (
WeightOnlyQuantConfig
)
from transformers import AutoTokenizer, pipeline

model_id = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
            config = WeightOnlyQuantConfig()
model = AutoModelForSeq2SeqLM.from_pretrained(
model_id,
quantization_config=config,
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=10,
)
hf = WeightOnlyQuantPipeline(pipeline=pipe)
"""

pipeline: Any #: :meta private:
model_id: str = DEFAULT_MODEL_ID
"""Model name or local path to use."""

model_kwargs: Optional[dict] = None
"""Key word arguments passed to the model."""

pipeline_kwargs: Optional[dict] = None
"""Key word arguments passed to the pipeline."""

class Config:
"""Configuration for this pydantic object."""

extra = Extra.allow

@classmethod
def from_model_id(
cls,
model_id: str,
task: str,
device: Optional[int] = -1,
device_map: Optional[str] = None,
model_kwargs: Optional[dict] = None,
pipeline_kwargs: Optional[dict] = None,
load_in_4bit: Optional[bool] = False,
load_in_8bit: Optional[bool] = False,
quantization_config=None,
**kwargs: Any,
) -> LLM:
"""Construct the pipeline object from model_id and task."""
if device_map is not None and device is not None:
raise ValueError("`Device` and `device_map` cannot be set simultaneously!")
if importlib.util.find_spec("torch") is None:
raise ValueError(
"Weight only quantization pipeline only support PyTorch now!"
)

try:
from intel_extension_for_transformers.transformers import (
AutoModelForCausalLM,
AutoModelForSeq2SeqLM,
)
from intel_extension_for_transformers.utils.utils import is_ipex_available
from transformers import AutoTokenizer
from transformers import pipeline as hf_pipeline
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"Please install it with `pip install transformers` "
"and `pip install intel-extension-for-transformers`."
)
if device is not None and device >= 0:
if not is_ipex_available():
raise ValueError("Don't find out Intel GPU on this machine!")
device_map = "xpu:" + str(device)
        elif device is not None and device < 0:
device = None

if device is None:
if device_map is None:
device_map = "cpu"

_model_kwargs = model_kwargs or {}
tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)

try:
if task == "text-generation":
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=load_in_4bit,
load_in_8bit=load_in_8bit,
quantization_config=quantization_config,
use_llm_runtime=False,
device_map=device_map,
**_model_kwargs,
)
elif task in ("text2text-generation", "summarization"):
model = AutoModelForSeq2SeqLM.from_pretrained(
model_id,
load_in_4bit=load_in_4bit,
load_in_8bit=load_in_8bit,
quantization_config=quantization_config,
use_llm_runtime=False,
device_map=device_map,
**_model_kwargs,
)
else:
raise ValueError(
f"Got invalid task {task}, "
f"currently only {VALID_TASKS} are supported"
)
except ImportError as e:
raise ValueError(
f"Could not load the {task} model due to missing dependencies."
) from e

if "trust_remote_code" in _model_kwargs:
_model_kwargs = {
k: v for k, v in _model_kwargs.items() if k != "trust_remote_code"
}
_pipeline_kwargs = pipeline_kwargs or {}
pipeline = hf_pipeline(
task=task,
model=model,
tokenizer=tokenizer,
device=device,
model_kwargs=_model_kwargs,
**_pipeline_kwargs,
)
if pipeline.task not in VALID_TASKS:
raise ValueError(
f"Got invalid task {pipeline.task}, "
f"currently only {VALID_TASKS} are supported"
)
return cls(
pipeline=pipeline,
model_id=model_id,
model_kwargs=_model_kwargs,
pipeline_kwargs=_pipeline_kwargs,
**kwargs,
)

@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {
"model_id": self.model_id,
"model_kwargs": self.model_kwargs,
"pipeline_kwargs": self.pipeline_kwargs,
}

@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "weight_only_quantization"

def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
"""Call the HuggingFace model and return the output.

Args:
prompt: The prompt to use for generation.
stop: A list of strings to stop generation when encountered.

Returns:
The generated text.

Example:
.. code-block:: python

from langchain_community.llms import WeightOnlyQuantPipeline
llm = WeightOnlyQuantPipeline.from_model_id(
model_id="google/flan-t5-large",
task="text2text-generation",
)
llm("This is a prompt.")
"""
response = self.pipeline(prompt)
if self.pipeline.task == "text-generation":
# Text generation return includes the starter text.
text = response[0]["generated_text"][len(prompt) :]
elif self.pipeline.task == "text2text-generation":
text = response[0]["generated_text"]
elif self.pipeline.task == "summarization":
text = response[0]["summary_text"]
else:
raise ValueError(
f"Got invalid task {self.pipeline.task}, "
f"currently only {VALID_TASKS} are supported"
)
if stop:
# This is a bit hacky, but I can't figure out a better way to enforce
# stop tokens when making calls to huggingface_hub.
text = enforce_stop_tokens(text, stop)
return text