Commit 4e559d3

Refactor evaluation (#96)
1 parent d3fb898 commit 4e559d3

30 files changed (+668, -352 lines)

.flake8

Lines changed: 4 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -2,8 +2,9 @@
22
max-line-length = 120
33
per-file-ignores =
44
__init__.py:F401
5-
evaluation/infinite_bench/create_huggingface_dataset.py:E501
6-
evaluation/longbench/create_huggingface_dataset.py:E501
7-
evaluation/longbenchv2/create_huggingface_dataset.py:E501
5+
evaluation/benchmarks/infinite_bench/create_huggingface_dataset.py:E501
6+
evaluation/benchmarks/longbench/create_huggingface_dataset.py:E501
7+
evaluation/benchmarks/longbenchv2/create_huggingface_dataset.py:E501
8+
evaluation/evaluate.py:E501
89
# E203, W503 - black-compatible config
910
extend-ignore = E203, W503

evaluation/README.md

Lines changed: 30 additions & 105 deletions
Original file line numberDiff line numberDiff line change
@@ -1,112 +1,57 @@
11
# Evaluation
22

3-
This directory contains a set of scripts to evaluate the performance of different presses on different datasets. We currently support the following datasets:
4-
- [Loogle](loogle/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/loogle))
5-
- [RULER](ruler/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/ruler))
6-
- [Zero Scrolls](zero_scrolls/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/zero_scrolls))
7-
- [Infinitebench](infinite_bench/README.md) ([hf link](https://huggingface.co/datasets/MaxJeblick/InfiniteBench))
8-
- [longbench](longbench/README.md)([hf link](https://huggingface.co/datasets/Xnhyacinth/LongBench))
9-
- [longbench-v2](longbenchv2/README.md)([hf link](https://huggingface.co/datasets/Xnhyacinth/LongBench-v2))
3+
We support evaluation for all the presses implemented in the library, on a variety of popular benchmarks.
104

5+
### Quick Start 🚀
116

12-
Please refer to the README of each dataset for more information on how the Hugging Face dataset was generated.
7+
Running evaluation is straightforward! Make sure you are in the `evaluation` directory, then:
138

14-
## Usage
9+
1. **Configure your evaluation** - Edit `evaluate_config.yaml` to specify your *method*, *press*, and *dataset*
10+
2. **Run the evaluation** - Execute the script: `python evaluate.py`
1511

16-
To evaluate a press on a dataset, you can run the following command:
17-
18-
```bash
19-
python evaluate.py --dataset <dataset_name> --data_dir <data_dir> --model <model_name> --press_name <press_name> --compression_ratio <ratio>
20-
```
12+
The script will read from `evaluate_config.yaml` and run inference accordingly.
13+
You can also override these settings from the command line, for instance:
2114

22-
For instance,
2315
```bash
2416
python evaluate.py --dataset loogle --data_dir shortdep_qa --model meta-llama/Meta-Llama-3.1-8B-Instruct --press_name expected_attention --compression_ratio 0.5
2517
```
2618

27-
- Results (predictions & metrics) are saved in the `results` directory.
28-
- All available presses are listed in the `PRESS_DICT` variable in `evaluate.py`.
29-
- Additional arguments are --device, --fraction, --max_new_tokens, --max_context_length and --compress_questions. For more information, run `python evaluate.py --help`
30-
- Finally we also provide a bash script `evaluate.sh` to facilitate the evaluation of multiple presses (1 per GPU) with different compression ratios.
31-
32-
33-
## Benchmarks
34-
35-
We provide benchmark results from 7 presses and 3 models. We include a variant of SnapKV where we include the question in the compression process as in the original paper (snapkv w/ question). All performance curves can be found in the [assets](assets) directory, and predictions are available [here](https://drive.google.com/drive/folders/14BilGw07v8tOUUct-5nDhQlN3zIX9BUf?usp=drive_link).
36-
37-
<details><summary>
38-
39-
### RULER
40-
</summary>
41-
42-
Average performance the 13 tasks of the RULER dataset with 4k context length (per task results [here](../evaluation/assets/)):
43-
44-
![RULER](../evaluation/assets/ruler_4096_average%20score.png)
19+
or pass a custom configuration file:
4520

46-
Observations:
47-
- snapkv w/ question consistently outperforms other methods. However this method can't be use for use cases such as prompt caching as it requires the question to be known beforehand.
48-
- All presses show degradation in performance even for small compression ratios.
49-
- llama3.1-8b-instruct is more robust to compression than other models and expected attention performs better than others.
50-
- mistral-nemo-instruct-2407 is more robust to random pruning than other models.
51-
- For phi-3.5-mini and mistral-nemo-instruct-2407, all presses perform poorly compared to baseline presses such as random (remove KV pairs randomly) or streaming llm (remove the middle KV pairs). This is especially true for the subtask [niah_single_3](assets/ruler_4096_niah_single_3.png) where most presses fail to perform a proper copy-paste of a long needle in a haystack. This might be related to [induction heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)
52-
- For phi-3.5-mini, we ran an additional experiment with a different compression ratio per layer (as in [this notebook](../notebooks/per_layer_compression_demo.ipynb)) which largely outperformed it's uniform compression counterpart (see purple cross on 2nd plot). The ratios where determined by grid search on 20/6500 samples from RULER (so results can be questionable).
53-
54-
</details>
55-
56-
<details><summary>
21+
```bash
22+
python evaluate.py --config_path <your_config.yaml>
23+
```
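The config-plus-CLI-override flow described above can be sketched in plain Python. This is an illustrative sketch, not the actual `evaluate.py` implementation: `load_config` stands in for a YAML load, and the field names mirror the CLI flags shown in this README.

```python
# Hypothetical sketch of "read config file, then apply CLI overrides".
# load_config stands in for yaml.safe_load(open(path)); the real
# evaluate.py may structure this differently.
import argparse

def load_config(path=None):
    # Defaults shown inline instead of reading a YAML file.
    return {"dataset": "loogle", "press_name": "expected_attention", "compression_ratio": 0.5}

def merge_cli_overrides(config, argv):
    # One optional flag per config key; anything the user passes wins.
    parser = argparse.ArgumentParser()
    for key in config:
        parser.add_argument(f"--{key}")
    args = parser.parse_args(argv)
    for key, value in vars(args).items():
        if value is not None:
            config[key] = type(config[key])(value)  # cast to the config value's type
    return config

cfg = merge_cli_overrides(load_config(), ["--compression_ratio", "0.25"])
# cfg["compression_ratio"] is now 0.25; unspecified keys keep their config values.
```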
5724

58-
### Loogle
59-
</summary>
25+
💡 Results (predictions & metrics) are automatically saved to the `output_dir` directory.
6026

61-
Shortdep_qa
62-
![shortdep_qa](../evaluation/assets/loogle_shortdep_qa.png)
63-
Shortdep_cloze
64-
![shortdep_cloze](../evaluation/assets/loogle_shortdep_cloze.png)
65-
Longdep_qa
66-
![longdep_qa](../evaluation/assets/loogle_longdep_qa.png)
6727

68-
Observations:
69-
- Metrics are adapted from loogle benchmark, see [here](../evaluation/loogle/calculate_metrics.py). The plot show the average score (mean over all submetrics) for each task.
70-
- The metrics are not always correlated with the quality of the answer, especially for longdep_qa task. LLM-as-a-judge may better suited for a more refined evaluation.
71-
- Again, snapkv w/ question consistently outperforms other methods.
72-
- In longdep_qa, the model looses track on counting (e.g. answer to "How many times is person x mentioned?" gets lower with increased compression ratio). This is not necessarily reflected in the metrics.
73-
- Llama3.1-8b-instruct seems to be more robust to compression.
74-
- Observed attention context had to be truncated at 10 000 tokens to prevent OOM issues, as the attention matrix needs to be materialized.
75-
- For shortdep_cloze task, the output formatting is often ignored leading to performance degradation even for low compression ratios. Interestingly, the model may still be able to answer the question correctly.
76-
- mistral-nemo-instruct-2407 fails to perform well on the shortdep_cloze task, even without compression, and is thus not reported.
77-
- shortdep_cloze task runs OOM for phi-3.5-mini at compression ratio 0.0 and is thus missing.
28+
### Configuration
7829

79-
</details>
30+
Customize your evaluation by editing `evaluate_config.yaml`. It lets you configure a variety of settings, such as the `fraction` of the dataset to use (for quick testing) and the model arguments (e.g. for scaling RoPE). For complete parameter details, see `evaluate_config.yaml`.
8031

81-
<details><summary>
8232

83-
### Infinitebench
84-
</summary>
33+
### Available Presses and Datasets
34+
We support evaluation with all the presses implemented in the library (and their possible combinations).
8535

86-
kv_retrieval
87-
![kv_retrieval](../evaluation/assets/infinitebench_kv_retrieval.png)
88-
longbook_choice_eng
89-
![longbook_choice_eng](../evaluation/assets/infinitebench_longbook_choice_eng.png)
90-
longbook_qa_eng
91-
![longbook_qa_eng](../evaluation/assets/infinitebench_longbook_qa_eng.png)
92-
longdialogue_qa_eng
93-
![longdialogue_qa_eng](../evaluation/assets/infinitebench_longdialogue_qa_eng.png)
36+
- All implemented presses are listed in the `PRESS_REGISTRY` variable in `evaluate_registry.py`.
37+
- All implemented datasets are listed in the `DATASET_REGISTRY` variable in `evaluate_registry.py`.
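The registry pattern named in the bullets above can be sketched as a plain dict mapping names to constructors, so the config's string values select an implementation. This is a minimal illustration; the real `PRESS_REGISTRY` and `DATASET_REGISTRY` in `evaluate_registry.py` contain the library's actual press classes and dataset loaders.

```python
# Minimal sketch of a name -> constructor registry; the press class here
# is a stand-in, not the library's implementation.
class ExpectedAttentionPress:
    def __init__(self, compression_ratio=0.5):
        self.compression_ratio = compression_ratio

PRESS_REGISTRY = {
    "expected_attention": ExpectedAttentionPress,
}

def build_press(name, **kwargs):
    # Fail loudly with the list of valid names on a typo.
    if name not in PRESS_REGISTRY:
        raise KeyError(f"Unknown press {name!r}; choose from {sorted(PRESS_REGISTRY)}")
    return PRESS_REGISTRY[name](**kwargs)

press = build_press("expected_attention", compression_ratio=0.25)
```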
9438

39+
At the moment, we support the following popular benchmarks:
9540

96-
Observations:
97-
- All task where run with max_len=70_000 tokens, except for observed attention which used 10_000 tokens.
98-
- For kv-retrieval subtask, streaming LLM (keep last N tokens) performs better than other methods. While this may be surprising at first, respecting the format of the task `(Extract the value corresponding to the specified key in the JSON object below. JSON data: {"7de93460-b65f-404e-9a7d-af2da2c8abb5": "2d9ab7c8-394a-4062-9928-310e39201a2f", ...}. Key: "70d1b207-d1e8-4591-95b8-9c85aceb8956"`
99-
helps to understand this behavior. The information is homogeneously distributed in the context, and any token could potentially be relevant for answering the question. Streaming LLM will have access to all last tokens, while other methods will potentially create "holes".
100-
- Mistral-nemo-instruct-2407 performs poorly on kv-retrieval subtask compared to other models and is thus excluded from the plots.
101-
- For longbook-choice-eng, many compression methods are able to obtain good compression ratios. Thus, longbook-choice-eng is an example of a task that can be compressed effectively.
102-
- For longbook-qa-eng, expected attention and snapkv perform better than other methods (note the performance difference of llama3.1-8b-instruct and phi3.5/mistral-nemo).
103-
- For longdialogue-qa-eng, there's an interesting crossover between different compression methods. For higher compression, snapkv performs relatively well across models.
41+
- [Loogle](loogle/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/loogle))
42+
- [RULER](ruler/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/ruler))
43+
- [Zero Scrolls](zero_scrolls/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/zero_scrolls))
44+
- [Infinitebench](infinite_bench/README.md) ([hf link](https://huggingface.co/datasets/MaxJeblick/InfiniteBench))
45+
- [longbench](longbench/README.md) ([hf link](https://huggingface.co/datasets/Xnhyacinth/LongBench))
46+
- [longbench-v2](longbenchv2/README.md) ([hf link](https://huggingface.co/datasets/Xnhyacinth/LongBench-v2))
10447

105-
</details>
48+
📚 **For detailed information** about each dataset or implementing custom benchmarks, see the individual README files in the benchmarks directory.
10649

10750

108-
### Conclusions
51+
### Multi GPU Evaluation
52+
Use the provided `evaluate.sh` script to run multiple presses simultaneously across different GPUs with varying compression ratios.
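The scheme `evaluate.sh` implements can be illustrated with a dry-run command builder: one `evaluate.py` invocation per (GPU, press) pair, sweeping compression ratios. This sketch only builds the command strings (it does not launch anything), and the press names and ratios are illustrative.

```python
# Dry-run sketch of one-press-per-GPU scheduling with a compression-ratio
# sweep; the actual evaluate.sh may differ.
def build_commands(presses, ratios, n_gpus):
    commands = []
    for i, press in enumerate(presses):
        gpu = i % n_gpus  # round-robin presses over available GPUs
        for ratio in ratios:
            commands.append(
                f"CUDA_VISIBLE_DEVICES={gpu} python evaluate.py "
                f"--press_name {press} --compression_ratio {ratio}"
            )
    return commands

cmds = build_commands(["expected_attention", "snapkv"], [0.25, 0.5], n_gpus=2)
# Each press gets its own GPU; each GPU runs its press at every ratio.
```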
10953

54+
### Discussion
11055
The methods benchmarked so far are not able to efficiently compress the KV cache while maintaining performance on several long-context datasets and models.
11156
In particular, exact information retrieval tasks such as kv-retrieval are challenging for the current methods.
11257
Further methods could be explored:
@@ -116,24 +61,4 @@ Further methods could be explored:
11661
- Move beyond pruning, as this method is fundamentally limited (see last figure in [this notebook](../notebooks/expected_attention.ipynb))
11762
- Fine-tuning LLMs to deal with compressed KV caches
11863

119-
We encourage contributions to explore these ideas and improve the performance of long-context LLMs with compressed caches.
120-
121-
## How to add a dataset
122-
123-
Each dataset directory is structured as follows:
124-
125-
```bash
126-
$dataset
127-
├── README.md
128-
├── calculate_metrics.py
129-
├── create_huggingface_dataset.py
130-
```
131-
132-
Where:
133-
- `create_huggingface_dataset.py` is a script that generates the Hugging Face dataset from the original dataset. Each dataset is associated with a set of parquet files with the following structure:
134-
- `context`: ...
135-
- `question`: ...
136-
- `answer_prefix`: ...
137-
- `answer`: ...
138-
- `max_new_tokens`: ...
139-
- `calculate_metrics.py` is a script that calculates the metrics based on the output of `evaluate.py`
64+
We encourage contributions to explore these ideas and improve the performance of long-context LLMs with compressed caches. We provide benchmark results from 7 presses and 3 models, including a variant of SnapKV that includes the question in the compression process, as in the original paper (snapkv w/ question). All performance curves can be found in the [assets](assets) directory, and predictions are available [here](https://drive.google.com/drive/folders/14BilGw07v8tOUUct-5nDhQlN3zIX9BUf?usp=drive_link).

evaluation/benchmarks/README.md

Lines changed: 102 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,102 @@
1+
# Benchmarks
2+
3+
We currently support the following datasets:
4+
- [Loogle](loogle/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/loogle))
5+
- [RULER](ruler/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/ruler))
6+
- [Zero Scrolls](zero_scrolls/README.md) ([hf link](https://huggingface.co/datasets/simonjegou/zero_scrolls))
7+
- [Infinitebench](infinite_bench/README.md) ([hf link](https://huggingface.co/datasets/MaxJeblick/InfiniteBench))
8+
- [longbench](longbench/README.md) ([hf link](https://huggingface.co/datasets/Xnhyacinth/LongBench))
9+
- [longbench-v2](longbenchv2/README.md) ([hf link](https://huggingface.co/datasets/Xnhyacinth/LongBench-v2))
10+
11+
Please refer to the README of each dataset for more information on how the Hugging Face dataset was generated.
12+
13+
<details><summary>
14+
15+
### RULER
16+
</summary>
17+
18+
Average performance on the 13 tasks of the RULER dataset with 4k context length (per-task results [here](../evaluation/assets/)):
19+
20+
![RULER](../evaluation/assets/ruler_4096_average%20score.png)
21+
22+
Observations:
23+
- snapkv w/ question consistently outperforms other methods. However, this method can't be used for use cases such as prompt caching, as it requires the question to be known beforehand.
24+
- All presses show degradation in performance even for small compression ratios.
25+
- llama3.1-8b-instruct is more robust to compression than the other models, and expected attention performs better than the other presses.
26+
- mistral-nemo-instruct-2407 is more robust to random pruning than other models.
27+
- For phi-3.5-mini and mistral-nemo-instruct-2407, all presses perform poorly compared to baseline presses such as random (remove KV pairs randomly) or streaming llm (remove the middle KV pairs). This is especially true for the subtask [niah_single_3](assets/ruler_4096_niah_single_3.png), where most presses fail to perform a proper copy-paste of a long needle in a haystack. This might be related to [induction heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html).
28+
- For phi-3.5-mini, we ran an additional experiment with a different compression ratio per layer (as in [this notebook](../notebooks/per_layer_compression_demo.ipynb)), which largely outperformed its uniform-compression counterpart (see the purple cross on the 2nd plot). The ratios were determined by grid search on 20 of the 6500 RULER samples, so the results should be taken with caution.
29+
30+
</details>
31+
32+
<details><summary>
33+
34+
### Loogle
35+
</summary>
36+
37+
Shortdep_qa
38+
![shortdep_qa](../evaluation/assets/loogle_shortdep_qa.png)
39+
Shortdep_cloze
40+
![shortdep_cloze](../evaluation/assets/loogle_shortdep_cloze.png)
41+
Longdep_qa
42+
![longdep_qa](../evaluation/assets/loogle_longdep_qa.png)
43+
44+
Observations:
45+
- Metrics are adapted from the Loogle benchmark, see [here](../evaluation/loogle/calculate_metrics.py). The plots show the average score (mean over all submetrics) for each task.
46+
- The metrics are not always correlated with the quality of the answer, especially for the longdep_qa task. LLM-as-a-judge may be better suited for a more refined evaluation.
47+
- Again, snapkv w/ question consistently outperforms other methods.
48+
- In longdep_qa, the model loses track of counts (e.g. the answer to "How many times is person x mentioned?" decreases as the compression ratio increases). This is not necessarily reflected in the metrics.
49+
- Llama3.1-8b-instruct seems to be more robust to compression.
50+
- For observed attention, the context had to be truncated to 10,000 tokens to prevent OOM issues, as the attention matrix needs to be materialized.
51+
- For the shortdep_cloze task, the output formatting is often ignored, leading to performance degradation even at low compression ratios. Interestingly, the model may still be able to answer the question correctly.
52+
- mistral-nemo-instruct-2407 fails to perform well on the shortdep_cloze task, even without compression, and is thus not reported.
53+
- shortdep_cloze task runs OOM for phi-3.5-mini at compression ratio 0.0 and is thus missing.
54+
55+
</details>
56+
57+
<details><summary>
58+
59+
### Infinitebench
60+
</summary>
61+
62+
kv_retrieval
63+
![kv_retrieval](../evaluation/assets/infinitebench_kv_retrieval.png)
64+
longbook_choice_eng
65+
![longbook_choice_eng](../evaluation/assets/infinitebench_longbook_choice_eng.png)
66+
longbook_qa_eng
67+
![longbook_qa_eng](../evaluation/assets/infinitebench_longbook_qa_eng.png)
68+
longdialogue_qa_eng
69+
![longdialogue_qa_eng](../evaluation/assets/infinitebench_longdialogue_qa_eng.png)
70+
71+
72+
Observations:
73+
- All tasks were run with max_len=70_000 tokens, except for observed attention, which used 10_000 tokens.
74+
- For the kv-retrieval subtask, streaming LLM (keep the last N tokens) performs better than the other methods. While this may be surprising at first, the format of the task `Extract the value corresponding to the specified key in the JSON object below. JSON data: {"7de93460-b65f-404e-9a7d-af2da2c8abb5": "2d9ab7c8-394a-4062-9928-310e39201a2f", ...}. Key: "70d1b207-d1e8-4591-95b8-9c85aceb8956"`
75+
helps to explain this behavior. The information is distributed homogeneously across the context, and any token could be relevant to the answer. Streaming LLM keeps all of the last tokens, while the other methods can create "holes".
76+
- Mistral-nemo-instruct-2407 performs poorly on kv-retrieval subtask compared to other models and is thus excluded from the plots.
77+
- For longbook-choice-eng, many compression methods maintain performance at high compression ratios; longbook-choice-eng is thus an example of a task that can be compressed effectively.
78+
- For longbook-qa-eng, expected attention and snapkv perform better than other methods (note the performance difference of llama3.1-8b-instruct and phi3.5/mistral-nemo).
79+
- For longdialogue-qa-eng, there's an interesting crossover between different compression methods. For higher compression, snapkv performs relatively well across models.
80+
81+
</details>
82+
83+
84+
## How to add a dataset
85+
86+
Each dataset directory is structured as follows:
87+
88+
```bash
89+
$dataset
90+
├── README.md
91+
├── calculate_metrics.py
92+
├── create_huggingface_dataset.py
93+
```
94+
95+
Where:
96+
- `create_huggingface_dataset.py` is a script that generates the Hugging Face dataset from the original dataset. Each dataset is associated with a set of parquet files with the following structure:
97+
- `context`: ...
98+
- `question`: ...
99+
- `answer_prefix`: ...
100+
- `answer`: ...
101+
- `max_new_tokens`: ...
102+
- `calculate_metrics.py` is a script that calculates the metrics based on the output of `evaluate.py`
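The per-row schema listed above (the five parquet fields) can be captured in a small validation sketch. `validate_row` is illustrative and not part of the library; it only checks that a row carries the documented fields.

```python
# Hedged sketch of the dataset row schema described above; validate_row
# is a hypothetical helper, not library code.
REQUIRED_FIELDS = {"context", "question", "answer_prefix", "answer", "max_new_tokens"}

def validate_row(row):
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        raise ValueError(f"row is missing fields: {sorted(missing)}")
    if not isinstance(row["max_new_tokens"], int):
        raise TypeError("max_new_tokens must be an int")
    return True

row = {
    "context": "Long document text ...",
    "question": "What does the document say about X?",
    "answer_prefix": "Answer:",
    "answer": "...",
    "max_new_tokens": 128,
}
assert validate_row(row)
```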
File renamed without changes.
File renamed without changes.

evaluation/infinite_bench/create_huggingface_dataset.py renamed to evaluation/benchmarks/infinite_bench/create_huggingface_dataset.py

File renamed without changes.
File renamed without changes.
File renamed without changes.

evaluation/longbench/create_huggingface_dataset.py renamed to evaluation/benchmarks/longbench/create_huggingface_dataset.py

File renamed without changes.
