vllm.model_executor.kernels.linear ¶
This module re-exports linear kernel implementations to provide a stable import interface during an ongoing reorganization. Upcoming PRs will remove the scaled_mm and mixed_precision subdirectories and reorganize kernels by provider (aiter, cutlass, flashinfer, etc.) rather than by precision type. Centralizing exports here minimizes the need to update imports across other modules when the internal structure changes. If you are adding a new kernel selector or kernel implementation, add it to this `__init__.py` to maintain import stability.
Modules:
| Name | Description |
|---|---|
| Mxfp8LinearKernel | |
| base | |
| mixed_precision | |
| mxfp8 | |
| nvfp4 | |
| scaled_mm | |
AiterInt8ScaledMMLinearKernel ¶
Bases: CutlassInt8ScaledMMLinearKernel
Source code in vllm/model_executor/kernels/linear/scaled_mm/aiter.py
apply_weights ¶
AiterInt8ScaledMMLinearKernel implements a fused version of output = torch.mm((scale_a * a), (scale_b * b)).to(out_dtype), where scale_a * a and scale_b * b use numpy-style broadcasting. It currently supports only per-tensor-per-tensor GEMM and per-token-per-channel GEMM through the AITER w8a8 scaled GEMM. AiterInt8ScaledMMLinearKernel does not support AITER block-scaled GEMM or mixed-precision GEMM.
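The fused formula above can be written as a plain numpy reference, which also shows the broadcasting shapes for the per-token-per-channel case (a sketch of the semantics only, not the AITER kernel):

```python
import numpy as np

def scaled_mm_reference(a, b, scale_a, scale_b, out_dtype=np.float16):
    # Per-tensor case: scale_a and scale_b are scalars.
    # Per-token-per-channel case: scale_a is [M, 1] (one scale per
    # row/token) and scale_b is [1, N] (one scale per output channel);
    # both broadcast against the int8 operands before the matmul.
    return ((scale_a * a.astype(np.float32))
            @ (scale_b * b.astype(np.float32))).astype(out_dtype)

a = np.array([[1, 2], [3, 4]], dtype=np.int8)         # [M=2, K=2]
b = np.array([[1, 0], [0, 1]], dtype=np.int8)         # [K=2, N=2]
scale_a = np.array([[0.5], [2.0]], dtype=np.float32)  # per-token scales
scale_b = np.array([[1.0, 0.25]], dtype=np.float32)   # per-channel scales
out = scaled_mm_reference(a, b, scale_a, scale_b)
# out == [[0.5, 0.25], [6.0, 2.0]] in float16
```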
Source code in vllm/model_executor/kernels/linear/scaled_mm/aiter.py
CutlassNvFp4LinearKernel ¶
Bases: NvFp4LinearKernel
NVFP4 GEMM via the vLLM CUTLASS kernel.
Source code in vllm/model_executor/kernels/linear/nvfp4/cutlass.py
EmulationMxfp8LinearKernel ¶
Bases: Mxfp8LinearKernel
Software emulation fallback for MXFP8 (dequant to BF16).
Source code in vllm/model_executor/kernels/linear/mxfp8/emulation.py
EmulationNvFp4LinearKernel ¶
Bases: NvFp4LinearKernel
Software emulation fallback for NVFP4 (dequant → BF16 matmul).
Source code in vllm/model_executor/kernels/linear/nvfp4/emulation.py
FbgemmNvFp4LinearKernel ¶
Bases: NvFp4LinearKernel
NVFP4 GEMM via FBGEMM.
Source code in vllm/model_executor/kernels/linear/nvfp4/fbgemm.py
FlashInferCudnnNvFp4LinearKernel ¶
Bases: NvFp4LinearKernel
NVFP4 GEMM via FlashInfer's cuDNN wrapper.
Source code in vllm/model_executor/kernels/linear/nvfp4/flashinfer.py
FlashInferCutlassMxfp8LinearKernel ¶
Bases: Mxfp8LinearKernel
MXFP8 W8A8 GEMM via FlashInfer CUTLASS (SM100+).
Source code in vllm/model_executor/kernels/linear/mxfp8/flashinfer.py
FlashInferCutlassNvFp4LinearKernel ¶
Bases: NvFp4LinearKernel
NVFP4 GEMM via FlashInfer's CUTLASS wrapper.
Source code in vllm/model_executor/kernels/linear/nvfp4/flashinfer.py
FlashInferFp8DeepGEMMDynamicBlockScaledKernel ¶
Bases: Fp8BlockScaledDynamicMMLinearKernel
Conditional FlashInfer / DeepGEMM FP8 block-scaled GEMM.
Dispatches between two kernels based on input batch size:

- Small batches (M < 32): FlashInfer's swapAB trick for better utilisation.
- Large batches (M >= 32): DeepGEMM for peak throughput.
apply_input_quant is False because FlashInfer accepts BF16 input and handles FP8 conversion internally. The DeepGEMM branch therefore quantises BF16→FP8 inside apply_mm via a closure before dispatching to the DeepGEMM kernel, keeping both branches compatible with the single BF16 tensor operand list passed by torch.cond.
Source code in vllm/model_executor/kernels/linear/scaled_mm/flashinfer.py
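The M-based dispatch can be sketched in plain Python. The two branch functions below are illustrative stand-ins for the FlashInfer and DeepGEMM paths (the swapAB trick is shown only as a transposed matmul; the real kernels operate on FP8 tiles):

```python
import numpy as np

M_THRESHOLD = 32  # batch-size cutoff stated in the docstring above

def flashinfer_branch(x, w):
    # swapAB: compute (w.T @ x.T).T so the large dimension lands on the
    # side the GEMM tiles best for tiny M (illustrative only).
    return (w.T @ x.T).T

def deepgemm_branch(x, w):
    return x @ w

def dispatch_mm(x, w):
    # Mirror of the torch.cond predicate: branch on the batch dim M.
    if x.shape[0] < M_THRESHOLD:
        return flashinfer_branch(x, w)
    return deepgemm_branch(x, w)

w = np.ones((8, 3))
x_small = np.ones((4, 8))    # takes the FlashInfer branch
x_large = np.ones((64, 8))   # takes the DeepGEMM branch
```

Both branches compute the same mathematical result; only the execution strategy differs, which is why torch.cond can pick between them at runtime.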
FlashInferTrtllmNvFp4LinearKernel ¶
Bases: NvFp4LinearKernel
NVFP4 GEMM via FlashInfer's TensorRT-LLM wrapper.
Source code in vllm/model_executor/kernels/linear/nvfp4/flashinfer.py
MarlinMxfp8LinearKernel ¶
Bases: Mxfp8LinearKernel
MXFP8 W8A16 GEMM via Marlin (SM80+).
Source code in vllm/model_executor/kernels/linear/mxfp8/marlin.py
MarlinNvFp4LinearKernel ¶
Bases: NvFp4LinearKernel
NVFP4 weight-only GEMM via Marlin (W4A16).
Source code in vllm/model_executor/kernels/linear/nvfp4/marlin.py
Mxfp8LinearLayerConfig dataclass ¶
Configuration for an MXFP8 linear layer.
All MXFP8 layers share the same structure: FP8-E4M3 weights with uint8 (E8M0) per-block scales at block size 32.
Source code in vllm/model_executor/kernels/linear/mxfp8/Mxfp8LinearKernel.py
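The E8M0 per-block scale layout can be illustrated with a small numpy sketch. E8M0 stores a biased power-of-two exponent in one uint8; the recipe below (absmax per 32-element block mapped into FP8-E4M3 range) is one common derivation and is an assumption of this sketch, not necessarily the exact vLLM quantizer:

```python
import numpy as np

BLOCK = 32        # block size from the config above
E8M0_BIAS = 127   # E8M0: one uint8 holding a biased power-of-two exponent
FP8_E4M3_MAX = 448.0

def mxfp8_block_scales(w_row):
    # Split a row into blocks of 32 and derive one E8M0 scale per block:
    # the power of two that maps the block's absmax into E4M3 range.
    blocks = w_row.reshape(-1, BLOCK)
    absmax = np.abs(blocks).max(axis=1)
    exp = np.floor(np.log2(absmax / FP8_E4M3_MAX))  # unbiased exponent
    return (exp + E8M0_BIAS).astype(np.uint8)

row = np.full(64, 448.0, dtype=np.float32)  # two blocks, absmax = E4M3 max
scales = mxfp8_block_scales(row)
# exponent 0 for both blocks -> stored as the bias value 127
```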
NvFp4LinearKernel ¶
Bases: ABC
Base class for NVFP4 quantized linear kernels.
Each subclass implements a specific GEMM backend (CUTLASS, Marlin, etc.). The kernel selection mechanism iterates over registered subclasses in priority order, calling is_supported and can_implement to find the best match for the current hardware.
Source code in vllm/model_executor/kernels/linear/nvfp4/base.py
apply_weights abstractmethod ¶
Run the quantized GEMM.
can_implement abstractmethod classmethod ¶
can_implement(
config: NvFp4LinearLayerConfig,
) -> tuple[bool, str | None]
Return whether this kernel can handle config.
is_supported abstractmethod classmethod ¶
Return whether this kernel can run on the current platform.
process_weights_after_loading abstractmethod ¶
process_weights_after_loading(layer: Module) -> None
Transform weights into the format required by this kernel.
Called once after checkpoint weights have been loaded onto the device. Implementations should repack / swizzle / pad weights and scales in-place on layer.
Source code in vllm/model_executor/kernels/linear/nvfp4/base.py
NvFp4LinearLayerConfig dataclass ¶
Configuration for an NVFP4 linear layer.
All NVFP4 layers share the same structure: packed uint8 weights (2 FP4 values per byte), FP8-E4M3 per-block weight scales (group size 16), and scalar global scales for both weights and activations.
Source code in vllm/model_executor/kernels/linear/nvfp4/base.py
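The "2 FP4 values per byte" packing can be demonstrated in numpy. The low-nibble-first ordering here is an assumption of this sketch; the actual kernel layouts may pack and swizzle differently:

```python
import numpy as np

def pack_fp4(codes):
    # Pack pairs of 4-bit codes into uint8, low nibble first
    # (nibble order is an assumption of this sketch).
    codes = np.asarray(codes, dtype=np.uint8)
    return (codes[0::2] | (codes[1::2] << 4)).astype(np.uint8)

def unpack_fp4(packed):
    lo = packed & 0xF
    hi = packed >> 4
    out = np.empty(packed.size * 2, dtype=np.uint8)
    out[0::2], out[1::2] = lo, hi
    return out

codes = np.array([0x1, 0x2, 0xF, 0x0], dtype=np.uint8)
packed = pack_fp4(codes)  # two bytes hold four FP4 codes
```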
TritonW4A16LinearKernel ¶
Bases: MPLinearKernel
Triton-based W4A16 GEMM kernel for ROCm (MI300 and newer).
Supports GPTQ-format int4 weights (uint4b8 symmetric, uint4 asymmetric) with grouped quantization. Weight tensors are transposed from the compressed-tensors checkpoint layout to the kernel's [K, N//8] layout.
Source code in vllm/model_executor/kernels/linear/mixed_precision/triton_w4a16.py
process_weights_after_loading ¶
process_weights_after_loading(layer: Module) -> None
Convert compressed-tensors checkpoint layout to kernel layout.
Checkpoint (from compressed_tensors_wNa16.create_weights):

- weight_packed: [N, K//8] int32 (input_dim=1, output_dim=0, packed_dim=1)
- weight_scale: [N, K//G] fp16 (input_dim=1, output_dim=0)
- weight_zero_point: [N//8, K//G] int32 (output_dim=0, packed_dim=0)

Kernel needs:

- qweight: [K, N//8] int32 (transpose weight_packed)
- scales: [K//G, N] fp16 (transpose weight_scale)
- qzeros: [K//G, N//8] int32 (transpose weight_zero_point)
Source code in vllm/model_executor/kernels/linear/mixed_precision/triton_w4a16.py
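The shape bookkeeping for the scales can be checked with a short numpy sketch (sizes are illustrative; the real method also moves the results to the device and makes them contiguous):

```python
import numpy as np

N, K, G = 16, 64, 32  # illustrative sizes; G is the quant group size

# Checkpoint layout, per the docstring above: one fp16 scale per
# (output channel, input group).
weight_scale = np.arange(N * (K // G), dtype=np.float16).reshape(N, K // G)

# Kernel layout: scales become [K//G, N] via a plain transpose.
scales = np.ascontiguousarray(weight_scale.T)

# weight_packed ([N, K//8]) and weight_zero_point ([N//8, K//G]) also
# change orientation, but because the packed dimension moves, they must
# be unpacked to int4 values and repacked, not merely transposed.
```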
XPUMxFp8LinearKernel ¶
Bases: Mxfp8LinearKernel
MXFP8 W8A8 GEMM on XPU.
Source code in vllm/model_executor/kernels/linear/mxfp8/xpu.py
XPUW4A8IntLinearKernel ¶
Bases: MPLinearKernel
XPU kernel for W4A8 integer quantization using oneDNN int4_gemm_w4a8.
Weights are symmetric group-quantized int4 packed as uint4. Activations are dynamically quantized per-token to symmetric int8.
Source code in vllm/model_executor/kernels/linear/mixed_precision/xpu.py
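The dynamic per-token symmetric int8 activation quantization described above can be written as a numpy reference (a sketch of the math, not the oneDNN path):

```python
import numpy as np

def quantize_per_token_int8(x):
    # Symmetric per-token quantization: one scale per row, mapping each
    # row's absmax onto the int8 limit 127. Assumes no all-zero rows;
    # a real implementation would clamp the scale away from zero.
    absmax = np.abs(x).max(axis=1, keepdims=True)
    scale = absmax / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

x = np.array([[0.0, 2.0], [-4.0, 1.0]], dtype=np.float32)
q, scale = quantize_per_token_int8(x)
# q == [[0, 127], [-127, 32]]; one scale per token (row)
```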
choose_mp_linear_kernel ¶
choose_mp_linear_kernel(
config: MPLinearLayerConfig,
compute_capability: int | None = None,
) -> type[MPLinearKernel]
Choose an MPLinearKernel that can implement the given config for the given compute capability. Attempts to choose the best kernel in terms of performance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
config | MPLinearLayerConfig | Description of the linear layer to be implemented. | required |
compute_capability | Optional[int] | The compute capability of the target device; if None, the current platform's compute capability is used. | None |
Raises:
| Type | Description |
|---|---|
ValueError | If no kernel can implement the given config. |
Returns:
| Type | Description |
|---|---|
type[MPLinearKernel] | Chosen kernel. |
Source code in vllm/model_executor/kernels/linear/__init__.py
choose_scaled_mm_linear_kernel ¶
choose_scaled_mm_linear_kernel(
config: _KernelConfigT,
possible_kernels: dict[
PlatformEnum, list[type[_KernelT]]
],
compute_capability: int | None = None,
force_kernel: type[_KernelT] | None = None,
) -> type[_KernelT]
Choose a _KernelT that can implement the given config for the given compute capability. Attempts to choose the best kernel in terms of performance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
config | _KernelConfigT | Description of the linear layer to be implemented. | required |
possible_kernels | dict[PlatformEnum, list[type[_KernelT]]] | A dictionary mapping platforms to their lists of possible kernels. | required |
compute_capability | Optional[int] | The compute capability of the target device; if None, the current platform's compute capability is used. | None |
force_kernel | Optional[type[_KernelT]] | An optional kernel that overrides possible_kernels if it can be implemented. If None, only the possible kernels are tried. | None |
Raises:
| Type | Description |
|---|---|
ValueError | If no kernel can implement the given config. |
Returns:
| Name | Type | Description |
|---|---|---|
_KernelT | type[_KernelT] | Chosen kernel. |
Source code in vllm/model_executor/kernels/linear/__init__.py
init_mxfp8_linear_kernel ¶
init_mxfp8_linear_kernel() -> Mxfp8LinearKernel
Select and instantiate the best MXFP8 linear kernel for the current platform.
Source code in vllm/model_executor/kernels/linear/__init__.py
init_nvfp4_linear_kernel ¶
init_nvfp4_linear_kernel() -> NvFp4LinearKernel
Select and instantiate the best NVFP4 linear kernel for the current platform.
Source code in vllm/model_executor/kernels/linear/__init__.py
register_linear_kernel ¶
register_linear_kernel(
kernel_class: type,
platform: PlatformEnum,
kernel_type: str = "mp",
) -> None
Register a new linear kernel class to be considered in kernel selection.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
kernel_class | type | The kernel class to register. | required |
platform | PlatformEnum | The platform for which this kernel is applicable. | required |
kernel_type | str | The type of the kernel, either "mp", "int8", or "fp8". Defaults to "mp". | 'mp' |
Raises:
| Type | Description |
|---|---|
ValueError | If the kernel_type is not recognized. |