Various improvements (#104)
* Make rwkv_gpu_offload_layers return true only if layers were actually offloaded

* Validate device of tensors

* Offload all layers during test

* Consistently use FP16 and FP32 instead of float16/fp16/F16/etc.

* Use spaces for indentation

* Remove spaces between type name and []

* Add cuBLAS on Windows guide, refactor docs structure

* Insert replacement characters when decoding invalid UTF-8 sequences

* Fix compatibility

* Fix formatting

* Fix copy-pasted tensor validation
saharNooby committed Jun 21, 2023
1 parent 6b26e0d commit 9cbb9d9
Showing 13 changed files with 182 additions and 113 deletions.
44 changes: 19 additions & 25 deletions README.md
@@ -28,16 +28,19 @@ Below table is for reference only. Measurements were made on 4C/8T x86 CPU with

#### With cuBLAS

Measurements were made on Intel i7 13700K & NVIDIA 3060 Ti 8G. Latency per token shown.
Measurements were made on Intel i7 13700K & NVIDIA 3060 Ti 8 GB. Latency per token in ms shown.

| Model | Layers on GPU | Format | 24 Threads | 8 Threads | 4 Threads | 2 Threads | 1 Threads |
|-----------------------|---------------|--------|-------------|------------|------------|------------|------------|
| `RWKV-4-Pile-169M` | 12 | `Q4_0` | 20.6 ms | 8.6 ms | 6.9 ms | 6.2 ms | 7.9 ms |
| `RWKV-4-Pile-169M` | 12 | `Q4_1` | 21.4 ms | 8.6 ms | 6.9 ms | 6.7 ms | 7.8 ms |
| `RWKV-4-Pile-169M` | 12 | `Q5_1` | 22.2 ms | 9.0 ms | 6.9 ms | 6.7 ms | 8.1 ms |
| `RWKV-4-Raven-7B-v11` | 32 | `Q4_0` | 94.9 ms | 54.3 ms | 50.2 ms | 51.6 ms | 59.2 ms |
| `RWKV-4-Raven-7B-v11` | 32 | `Q4_1` | 94.5 ms | 54.3 ms | 49.7 ms | 51.8 ms | 59.2 ms |
| `RWKV-4-Raven-7B-v11` | 32 | `Q5_1` | 101.6 ms | 72.3 ms | 67.2 ms | 69.3 ms | 77.0 ms |
| Model | Layers on GPU | Format | 1 thread | 2 threads | 4 threads | 8 threads | 24 threads |
|-----------------------|---------------|--------|----------|-----------|-----------|-----------|------------|
| `RWKV-4-Pile-169M` | 12 | `Q4_0` | 7.9 | 6.2 | 6.9 | 8.6 | 20 |
| `RWKV-4-Pile-169M` | 12 | `Q4_1` | 7.8 | 6.7 | 6.9 | 8.6 | 21 |
| `RWKV-4-Pile-169M` | 12 | `Q5_1` | 8.1 | 6.7 | 6.9 | 9.0 | 22 |

| Model | Layers on GPU | Format | 1 thread | 2 threads | 4 threads | 8 threads | 24 threads |
|-----------------------|---------------|--------|----------|-----------|-----------|-----------|------------|
| `RWKV-4-Raven-7B-v11` | 32 | `Q4_0` | 59 | 51 | 50 | 54 | 94 |
| `RWKV-4-Raven-7B-v11` | 32 | `Q4_1` | 59 | 51 | 49 | 54 | 94 |
| `RWKV-4-Raven-7B-v11` | 32 | `Q5_1` | 77 | 69 | 67 | 72 | 101 |

Note: since cuBLAS is supported only for `ggml_mul_mat()`, we still need to use a few CPU resources to execute the remaining operations.

@@ -68,7 +71,7 @@ This option is recommended for maximum performance, because the library would be

##### Windows

**Requirements**: [CMake](https://cmake.org/download/) or [CMake from anaconda](https://anaconda.org/conda-forge/cmake), MSVC compiler.
**Requirements**: [CMake](https://cmake.org/download/) or [CMake from anaconda](https://anaconda.org/conda-forge/cmake), [Build Tools for Visual Studio 2019](https://visualstudio.microsoft.com/vs/older-downloads/).

```commandline
cmake .
@@ -79,14 +82,7 @@ If everything went OK, `bin\Release\rwkv.dll` file should appear.

##### Windows + cuBLAS

**Important**: Since there are no cuBLAS static libraries for Windows, after compiling with dynamic libraries following DLLs should be copied from `{CUDA}/bin` into `build/bin/Release`: `cudart64_12.dll`, `cublas64_12.dll`, `cublasLt64_12.dll`.

```commandline
mkdir build
cd build
cmake .. -DRWKV_CUBLAS=ON
cmake --build . --config Release
```
Refer to [docs/cuBLAS_on_Windows.md](docs/cuBLAS_on_Windows.md) for a comprehensive guide.

##### Linux / MacOS

@@ -104,9 +100,7 @@ If everything went OK, `librwkv.so` (Linux) or `librwkv.dylib` (MacOS) file shou
##### Linux / MacOS + cuBLAS

```commandline
mkdir build
cd build
cmake .. -DRWKV_CUBLAS=ON
cmake . -DRWKV_CUBLAS=ON
cmake --build . --config Release
```

@@ -130,10 +124,10 @@ This option would require a little more manual work, but you can use it with any

```commandline
# Windows
python rwkv\convert_pytorch_to_ggml.py C:\RWKV-4-Pile-169M-20220807-8023.pth C:\rwkv.cpp-169M.bin float16
python rwkv\convert_pytorch_to_ggml.py C:\RWKV-4-Pile-169M-20220807-8023.pth C:\rwkv.cpp-169M.bin FP16
# Linux / MacOS
python rwkv/convert_pytorch_to_ggml.py ~/Downloads/RWKV-4-Pile-169M-20220807-8023.pth ~/Downloads/rwkv.cpp-169M.bin float16
python rwkv/convert_pytorch_to_ggml.py ~/Downloads/RWKV-4-Pile-169M-20220807-8023.pth ~/Downloads/rwkv.cpp-169M.bin FP16
```

**Optionally**, quantize the model into one of quantized formats from the table above:
@@ -218,8 +212,8 @@ For reference only, here is a list of latest versions of `rwkv.cpp` that have su
- `Q4_3`, `Q4_1_O`
- [commit c736ef5](https://github.com/saharNooby/rwkv.cpp/commit/c736ef5411606b529d3a74c139ee111ef1a28bb9), [release with prebuilt binaries](https://github.com/saharNooby/rwkv.cpp/releases/tag/master-1c363e6)

See also [FILE_FORMAT.md](FILE_FORMAT.md) for version numbers of `rwkv.cpp` model files and their changelog.
See also [docs/FILE_FORMAT.md](docs/FILE_FORMAT.md) for version numbers of `rwkv.cpp` model files and their changelog.

## Contributing

Please follow the code style described in [CODE_STYLE.md](CODE_STYLE.md).
Please follow the code style described in [docs/CODE_STYLE.md](docs/CODE_STYLE.md).
File renamed without changes.
File renamed without changes.
68 changes: 68 additions & 0 deletions docs/cuBLAS_on_Windows.md
@@ -0,0 +1,68 @@
# Using cuBLAS on Windows

To get cuBLAS in `rwkv.cpp` working on Windows, go through this guide section by section.

## Build Tools for Visual Studio 2019

Skip this step if you already have Build Tools installed.

To install Build Tools, go to [Visual Studio Older Downloads](https://visualstudio.microsoft.com/vs/older-downloads/), download `Visual Studio 2019 and other Products` and run the installer.

## CMake

Skip this step if you already have CMake installed: running `cmake --version` should output `cmake version x.y.z`.

Download the latest `Windows x64 Installer` from [Download | CMake](https://cmake.org/download/) and run it.

## CUDA Toolkit

Skip this step if you already have CUDA Toolkit installed: running `nvcc --version` should output `nvcc: NVIDIA (R) Cuda compiler driver`.

The CUDA Toolkit must be installed **after** CMake, otherwise CMake will not be able to see it and you will get the error [No CUDA toolset found](https://stackoverflow.com/questions/56636714/cuda-compile-problems-on-windows-cmake-error-no-cuda-toolset-found).

Download an installer from [CUDA Toolkit Archive](https://developer.nvidia.com/cuda-toolkit-archive) and run it.

When installing:

- check `Visual Studio Integration`, or else CMake will not be able to see the toolkit
- optionally, uncheck driver installation: depending on the downloaded version of the toolkit, you may get an unwanted driver downgrade

## Building rwkv.cpp

The only difference from the regular CPU build is the `-DRWKV_CUBLAS=ON` option:

```commandline
cmake . -DRWKV_CUBLAS=ON
cmake --build . --config Release
```

If everything went OK, `bin\Release\rwkv.dll` file should appear.

## Using the GPU

You need to choose the number of layers to offload onto the GPU. In general, the more layers are offloaded, the better the performance; but you may be constrained by the VRAM size of your GPU. Increase the offloaded layer count until you get "CUDA out of memory" errors.

If most of the computation is performed on the GPU, you will not need a high thread count. The optimal value may be as low as 1, since any additional threads would just eat CPU cycles while waiting for GPU operations to complete.

To offload layers to the GPU:

- if using the Python model: pass a non-zero number as `gpu_layer_count` to the constructor of `rwkv.rwkv_cpp_model.RWKVModel` (see the sketch after this list)
- if using the Python wrapper for the C library: call `rwkv.rwkv_cpp_shared_library.RWKVSharedLibrary.rwkv_gpu_offload_layers`
- if using the C library directly: call `bool rwkv_gpu_offload_layers(struct rwkv_context * ctx, const uint32_t n_layers)`
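
For illustration, here is a minimal Python sketch of the first option. The module paths, the `load_rwkv_shared_library` helper and the constructor arguments other than `gpu_layer_count` are assumptions based on this repository's `rwkv` directory; the model path is a placeholder.

```python
# Sketch only: offload layers while constructing the Python model wrapper.
# Import paths and load_rwkv_shared_library are assumptions; adjust them
# to match your checkout. The model path below is a placeholder.
from rwkv.rwkv_cpp_shared_library import load_rwkv_shared_library
from rwkv.rwkv_cpp_model import RWKVModel

library = load_rwkv_shared_library()  # locates rwkv.dll / librwkv.so / librwkv.dylib

model = RWKVModel(
    library,
    'C:\\rwkv.cpp-169M.bin',  # converted or quantized model file
    thread_count=1,           # few CPU threads are needed when the GPU does the heavy work
    gpu_layer_count=12        # number of layers to offload to the GPU
)
```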

## Fixing issues

You may get a `FileNotFoundError: Could not find module '...\rwkv.dll' (or one of its dependencies). Try using the full path with constructor syntax.` error.

This means that the application could not find the CUDA libraries that `rwkv.dll` depends on.

To fix this:

- navigate to the folder where the CUDA Toolkit is installed
  - usually, it looks like `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin`
- find three DLLs in the `bin` folder:
  - `cudart64_110.dll`
  - `cublas64_11.dll`
  - `cublasLt64_11.dll`
- copy these DLLs to the folder containing `rwkv.dll` (a copy-command sketch follows this list)
  - usually, the folder is `rwkv.cpp/bin/Release`
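
As a sketch, assuming the default CUDA 11.7 install path and that the build output folder is `bin\Release` (both paths are placeholders; adjust them to your setup):

```commandline
copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cudart64_110.dll" bin\Release\
copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cublas64_11.dll" bin\Release\
copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cublasLt64_11.dll" bin\Release\
```
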
18 changes: 9 additions & 9 deletions extras/CMakeLists.txt
@@ -1,15 +1,15 @@
function(rwkv_add_extra source)
get_filename_component(EXTRA_TARGET ${source} NAME_WE)
add_executable(rwkv_${EXTRA_TARGET} ${source})
target_link_libraries(rwkv_${EXTRA_TARGET} PRIVATE ggml rwkv)
if (RWKV_STATIC)
get_target_property(target_LINK_OPTIONS rwkv_${EXTRA_TARGET} LINK_OPTIONS)
list(REMOVE_ITEM target_LINK_OPTIONS "-static")
set_target_properties(rwkv_${EXTRA_TARGET} PROPERTIES LINK_OPTIONS "${target_LINK_OPTIONS}")
endif()
get_filename_component(EXTRA_TARGET ${source} NAME_WE)
add_executable(rwkv_${EXTRA_TARGET} ${source})
target_link_libraries(rwkv_${EXTRA_TARGET} PRIVATE ggml rwkv)
if (RWKV_STATIC)
get_target_property(target_LINK_OPTIONS rwkv_${EXTRA_TARGET} LINK_OPTIONS)
list(REMOVE_ITEM target_LINK_OPTIONS "-static")
set_target_properties(rwkv_${EXTRA_TARGET} PROPERTIES LINK_OPTIONS "${target_LINK_OPTIONS}")
endif()
endfunction()

file(GLOB extras *.c)
foreach (extra ${extras})
rwkv_add_extra(${extra})
rwkv_add_extra(${extra})
endforeach()
77 changes: 37 additions & 40 deletions rwkv.cpp
@@ -174,8 +174,8 @@ bool rwkv_fwrite_data(FILE * file, const void * data, const size_t length) {
#define TYPE_UNKNOWN TYPE_COUNT

enum rwkv_type {
TYPE_F32,
TYPE_F16,
TYPE_FP32,
TYPE_FP16,
TYPE_Q4_0,
TYPE_Q4_1,
TYPE_Q4_1_O, // Unsupported
@@ -190,8 +190,8 @@ enum rwkv_type {
#define GGML_TYPE_UNKNOWN GGML_TYPE_COUNT

extern const enum ggml_type rwkv_type_to_ggml[TYPE_COUNT + 1] = {
GGML_TYPE_F32, /* F32 */
GGML_TYPE_F16, /* F16 */
GGML_TYPE_F32, /* FP32 */
GGML_TYPE_F16, /* FP16 */
GGML_TYPE_Q4_0, /* Q4_0 */
GGML_TYPE_Q4_1, /* Q4_1 */
GGML_TYPE_UNKNOWN, /* Q4_1_O */
@@ -204,8 +204,8 @@ extern const enum ggml_type rwkv_type_to_ggml[TYPE_COUNT + 1] = {
};

extern const enum rwkv_type rwkv_type_from_ggml[GGML_TYPE_COUNT + 1] = {
TYPE_F32, /* F32 */
TYPE_F16, /* F16 */
TYPE_FP32, /* FP32 */
TYPE_FP16, /* FP16 */
TYPE_Q4_0, /* Q4_0 */
TYPE_Q4_1, /* Q4_1 */
TYPE_Q4_2, /* Q4_2 */
@@ -220,7 +220,7 @@ extern const enum rwkv_type rwkv_type_from_ggml[GGML_TYPE_COUNT + 1] = {
TYPE_COUNT, /* COUNT */
};

extern const char * rwkv_type_to_string[TYPE_COUNT + 1] = {"float32", "float16", "Q4_0", "Q4_1", "Q4_1_O", "Q4_2", "Q4_3", "Q5_0", "Q5_1", "Q8_0", "unknown"};
extern const char * rwkv_type_to_string[TYPE_COUNT + 1] = {"FP32", "FP16", "Q4_0", "Q4_1", "Q4_1_O", "Q4_2", "Q4_3", "Q5_0", "Q5_1", "Q8_0", "unknown"};

enum rwkv_type rwkv_type_from_string(const char * str) {
for (int ord = 0; ord < TYPE_COUNT; ord++) {
@@ -429,7 +429,7 @@ struct rwkv_model {
struct ggml_tensor * ln0_weight;
struct ggml_tensor * ln0_bias;

std::unique_ptr<struct rwkv_layer []> layers;
std::unique_ptr<struct rwkv_layer[]> layers;

struct ggml_tensor * ln_out_weight;
struct ggml_tensor * ln_out_bias;
@@ -580,9 +580,9 @@ struct rwkv_context {
// Reused by all graphs.
struct rwkv_ggml_context ctx;
struct ggml_tensor * input_state;
std::unique_ptr<struct rwkv_layer_state []> input_layers;
std::unique_ptr<struct rwkv_layer_state[]> input_layers;
struct ggml_tensor * output_state;
std::unique_ptr<struct rwkv_layer_state []> output_layers;
std::unique_ptr<struct rwkv_layer_state[]> output_layers;
struct ggml_tensor * logits;

uint32_t n_threads;
@@ -598,8 +598,7 @@ struct rwkv_context {
enum rwkv_error_flags last_error;
bool print_errors;

size_t gpu_layers;
size_t vram_total;
uint32_t gpu_layers;
};

// https://stackoverflow.com/a/6458689
@@ -610,7 +609,7 @@ bool rwkv_set_params(struct rwkv_model & model, F callback) {
RWKV_ENSURE_OR_FALSE(callback("blocks.0.ln0.bias", model.ln0_bias));

uint32_t n_layer = model.header.n_layer;
std::unique_ptr<struct rwkv_layer []> layers(new(std::nothrow) struct rwkv_layer [n_layer]);
std::unique_ptr<struct rwkv_layer[]> layers(new(std::nothrow) struct rwkv_layer[n_layer]);
RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_ALLOC, layers.get(), "Failed to allocate model layers");
model.layers = std::move(layers);

@@ -1203,11 +1202,11 @@ struct rwkv_context * rwkv_new_context_impl(std::shared_ptr<struct rwkv_instance
struct ggml_tensor * output = ggml_new_tensor_1d(ctx.ctx, GGML_TYPE_F32, n_embed * 5 * n_layer);

// We collect parts of input state here. Each part is (n_embed) vector.
std::unique_ptr<struct rwkv_layer_state []> inputs(new(std::nothrow) struct rwkv_layer_state [n_layer]);
std::unique_ptr<struct rwkv_layer_state[]> inputs(new(std::nothrow) struct rwkv_layer_state[n_layer]);
RWKV_ASSERT_NULL_MSG(RWKV_ERROR_ALLOC, inputs.get(), "Failed to allocate input state parts");

// We collect parts of output state here. Each part is (n_embed) vector.
std::unique_ptr<struct rwkv_layer_state []> outputs(new(std::nothrow) struct rwkv_layer_state [n_layer]);
std::unique_ptr<struct rwkv_layer_state[]> outputs(new(std::nothrow) struct rwkv_layer_state[n_layer]);
RWKV_ASSERT_NULL_MSG(RWKV_ERROR_ALLOC, outputs.get(), "Failed to allocate output state parts");

for (size_t i = 0; i < n_layer; i++) {
@@ -1277,31 +1276,29 @@ struct rwkv_context * rwkv_clone_context(struct rwkv_context * ctx, const uint32
return clone;
}

bool rwkv_gpu_offload_layers(const struct rwkv_context * ctx, const uint32_t n_gpu_layers) {
bool rwkv_gpu_offload_layers(struct rwkv_context * ctx, const uint32_t n_layers) {
#ifdef GGML_USE_CUBLAS
size_t n_gpu = std::min(n_gpu_layers, ctx->instance->model.header.n_layer);
uint32_t layers_to_offload = std::min(n_layers, ctx->instance->model.header.n_layer - ctx->gpu_layers);

size_t gpu_layers = ctx->gpu_layers;
size_t vram_total = ctx->vram_total;
for (uint32_t i = 0; i < layers_to_offload; i++) {
const struct rwkv_layer & layer = ctx->instance->model.layers[ctx->gpu_layers + i];

for (size_t i = 0; i < n_gpu; i++) {
const struct rwkv_layer & layer = ctx->instance->model.layers[i];
// Use cuBLAS only for heavy matrices; other operations are not supported for the GPU at the moment
ggml_cuda_transform_tensor(layer.att_key);
ggml_cuda_transform_tensor(layer.att_value);
ggml_cuda_transform_tensor(layer.att_receptance);
ggml_cuda_transform_tensor(layer.att_output);

// Use cuBLAS only for heavy matrices; other operations are not supported for GPU at the moment
ggml_cuda_transform_tensor(layer.att_key); vram_total += ggml_nbytes(layer.att_key);
ggml_cuda_transform_tensor(layer.att_value); vram_total += ggml_nbytes(layer.att_value);
ggml_cuda_transform_tensor(layer.att_receptance); vram_total += ggml_nbytes(layer.att_receptance);
ggml_cuda_transform_tensor(layer.att_output); vram_total += ggml_nbytes(layer.att_output);
ggml_cuda_transform_tensor(layer.ffn_key);
ggml_cuda_transform_tensor(layer.ffn_value);
ggml_cuda_transform_tensor(layer.ffn_receptance);
}

ggml_cuda_transform_tensor(layer.ffn_key); vram_total += ggml_nbytes(layer.ffn_key);
ggml_cuda_transform_tensor(layer.ffn_value); vram_total += ggml_nbytes(layer.ffn_value);
ggml_cuda_transform_tensor(layer.ffn_receptance); vram_total += ggml_nbytes(layer.ffn_receptance);
ctx->gpu_layers += layers_to_offload;

gpu_layers++;
}
return layers_to_offload > 0;
#endif

return true;
return false;
}

void rwkv_set_inputs(const struct rwkv_context * ctx, const float * state_in) {
@@ -1464,7 +1461,7 @@ bool rwkv_quantize_model_file(const char * in_path, const char * out_path, const
RWKV_ASSERT_FALSE_MSG(
RWKV_ERROR_FILE,
in_type == GGML_TYPE_F32 || in_type == GGML_TYPE_F16,
"Unsupported input data type (%s); needs to be F32 or F16",
"Unsupported input data type (%s); needs to be FP32 or FP16",
rwkv_type_to_string[rwkv_type_from_ggml[in_type]]
);

@@ -1477,7 +1474,7 @@ bool rwkv_quantize_model_file(const char * in_path, const char * out_path, const
size_t orig_total_size = 0;
size_t new_total_size = 0;

// Required to init the fp16 tables
// Required to init the F16 tables
// Doesn't crash if ggml_init fails
ggml_free(ggml_init({ 0, NULL, true }));

@@ -1496,7 +1493,7 @@ bool rwkv_quantize_model_file(const char * in_path, const char * out_path, const
}

// f16 type tensors get relocated to out and then converted into f32 at in
if (header.data_type == TYPE_F16) {
if (header.data_type == TYPE_FP16) {
if (in_size > max_out_size) {
max_out_size = in_size;
}
@@ -1524,7 +1521,7 @@ bool rwkv_quantize_model_file(const char * in_path, const char * out_path, const
// This is a histogram of quantized values. If it shows single 1.0, then all 0.0, something went very wrong!
int64_t hist_all[16] {};

std::unique_ptr<uint8_t []> scratch(new(std::nothrow) uint8_t [max_in_size + max_out_size]);
std::unique_ptr<uint8_t[]> scratch(new(std::nothrow) uint8_t[max_in_size + max_out_size]);
RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_ALLOC, scratch.get(), "Failed to allocate buffer");

uint8_t * in_buf = scratch.get();
@@ -1542,19 +1539,19 @@ bool rwkv_quantize_model_file(const char * in_path, const char * out_path, const
const char * name_str = name.c_str();
RWKV_MSG("%*s - [%5" PRId32 ", %5" PRId32 "], type = %6s ", (int) max_key_length, name_str, header.width, header.height, rwkv_type_to_string[header.data_type]);

data = header.data_type == TYPE_F16 ? out_buf : in_buf;
data = header.data_type == TYPE_FP16 ? out_buf : in_buf;
size_t orig_size = rwkv_tensor_size(header), new_size = orig_size;
RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_MODEL_PARAMS, rwkv_fread_data(in_file.file, orig_size, data), "\nFailed to read tensor data of %s", name_str);

// Quantize only 2D tensors, except embedding and head matrices.
// Embedding and head take not too much space, especially in bigger models;
// but they significantly increase perplexity when quantized.
if ((header.data_type == TYPE_F32 || header.data_type == TYPE_F16) && header.dim_count == 2 && name != "emb.weight" && name != "head.weight") {
if ((header.data_type == TYPE_FP32 || header.data_type == TYPE_FP16) && header.dim_count == 2 && name != "emb.weight" && name != "head.weight") {
RWKV_MSG("quantizing... ");

size_t nelements = (size_t) header.width * (size_t) header.height;

if (header.data_type == TYPE_F16) {
if (header.data_type == TYPE_FP16) {
ggml_fp16_to_fp32_row((const ggml_fp16_t *) out_buf, (float *) in_buf, nelements);
}

