Fix kernel cache miss and add RDNA configs #874
Annotations
11 errors
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L213
vllm/attention/ops/triton_flash_attention.py:213:81: E501 Line too long (104 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L231
vllm/attention/ops/triton_flash_attention.py:231:81: E501 Line too long (112 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L232
vllm/attention/ops/triton_flash_attention.py:232:81: E501 Line too long (102 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L236
vllm/attention/ops/triton_flash_attention.py:236:81: E501 Line too long (115 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L237
vllm/attention/ops/triton_flash_attention.py:237:81: E501 Line too long (115 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L242
vllm/attention/ops/triton_flash_attention.py:242:81: E501 Line too long (109 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L244
vllm/attention/ops/triton_flash_attention.py:244:81: E501 Line too long (108 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L246
vllm/attention/ops/triton_flash_attention.py:246:81: E501 Line too long (108 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L248
vllm/attention/ops/triton_flash_attention.py:248:81: E501 Line too long (108 > 80)
Analysing the code with ruff:
vllm/attention/ops/triton_flash_attention.py#L250
vllm/attention/ops/triton_flash_attention.py:250:81: E501 Line too long (108 > 80)
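All of the E501 hits point at the same region of vllm/attention/ops/triton_flash_attention.py, so they can typically be cleared by wrapping the long lines rather than suppressing the rule. A minimal sketch, assuming the offending lines are single-line triton.Config entries (the specific block sizes and keyword values below are hypothetical, not taken from the PR):

```python
import triton

# Hypothetical illustration: a one-line autotune config easily exceeds
# 80 characters; spreading the kwargs dict over several lines satisfies
# ruff's E501 without changing behaviour.
config = triton.Config(
    {
        "BLOCK_M": 128,
        "BLOCK_N": 64,
        "waves_per_eu": 2,
        "PRE_LOAD_V": False,
    },
    num_stages=1,
    num_warps=4,
)
```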