mlc-llm: Phi-2 q4f16_1 runs faster when the tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR() passes are not applied

xj3cbfub · asked 2 months ago

🐛 Bug

When I comment out the tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR() passes in the compilation pipeline (https://github.com/mlc-ai/mlc-llm/blob/main/python/mlc_llm/compiler_pass/pipeline.py#L128) when compiling Phi-2 (https://huggingface.co/microsoft/phi-2), I get better prefill and decode speed on CUDA (see the sketch below).
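
A minimal sketch of the change, assuming the fusion passes sit in a flat tvm.transform.Sequential pass list as in pipeline.py; the surrounding passes are illustrative, and only the two commented-out lines reflect the actual edit:

import tvm
import tvm.relax  # makes tvm.relax.transform available

# Illustrative pass list; commenting out the two fusion passes disables
# operator fusion for the whole module.
seq = tvm.transform.Sequential(
    [
        tvm.relax.transform.LegalizeOps(),
        # tvm.relax.transform.FuseOps(),  # disabled for this experiment
        # tvm.relax.transform.FuseTIR(),  # disabled for this experiment
        tvm.relax.transform.DeadCodeElimination(),
    ]
)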

To Reproduce

  • To compile, I used the following command:

mlc_llm compile Phi2/phi-2-q4f16_1-MLC/mlc-chat-config.json --device cuda -o Phi2/phi-2-q4f16_1-MLC/phi-2-q4f16_1-cuda.so
With tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR():

Statistics: ----------- prefill -----------
throughput: 283.674 tok/s
total tokens: 12 tok
total time: 0.042 s
------------ decode ------------
throughput: 101.508 tok/s
total tokens: 31 tok
total time: 0.305 s

Without tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR():

Statistics: ----------- prefill -----------
throughput: 291.720 tok/s
total tokens: 12 tok
total time: 0.041 s
------------ decode ------------
throughput: 129.715 tok/s
total tokens: 31 tok
total time: 0.239 s

Expected behavior

I believe performance should be faster, not slower, when the FuseOps transformation is applied to the IR.

Environment

  • Platform: CUDA
  • Operating system: Ubuntu
  • Device: PC + RTX 2080
  • Python version: 3.10
  • GPU driver version: 530.41.03
  • CUDA/cuDNN version: 12.1
  • TVM Unity Hash Tag (applicable if you compiled the model):
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU: OFF
CUDA_VERSION: 12.1
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: ON
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: ON
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM: OFF
USE_OPENCL_GTEST: /path/to/opencl/gtest
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_VITIS_AI: OFF
USE_MLIR: ON
USE_RCCL: OFF
USE_LLVM: ON
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: d694451c580a931116a2c93571f21f7d791c7fa0
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: ON
USE_CPP_RPC: OFF
USE_HEXAGON: ON
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-04-18 10:05:07 -0400
USE_HEXAGON_SDK: /shared-volume/Qualcomm/Hexagon_SDK/3.5.4/
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: ON
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: ON
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 15.0.7
USE_MRVL: OFF
USE_OPENCL: ON
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: none
USE_BNNS: OFF
USE_FLASHINFER: OFF
USE_CUBLAS: ON
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION: 
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /usr/bin/c++
HIDE_PRIVATE_SYMBOLS: OFF

aelbi1ox 1#

I can't reproduce this result on Intel hardware, though I was hopeful.
I had to write my own benchmark because the mlc_llm bench command was removed.
I averaged the results of 5 consecutive benchmark runs.
I set max_tokens=1000 to try to smooth out the results a bit.
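
A minimal sketch of the kind of benchmark I mean, assuming an OpenAI-compatible endpoint started with mlc_llm serve at http://127.0.0.1:8000 (the model id and prompt are placeholders):

import time
from openai import OpenAI  # client for the local "mlc_llm serve" endpoint

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="none")

def bench_once(prompt, max_tokens=1000):
    # Time one completion and report completion tokens per second.
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="phi-2-q4f16_1-MLC",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    elapsed = time.perf_counter() - start
    tokens = resp.usage.completion_tokens
    print(f"(completion tokens: {tokens}) Completion tokens/sec = {tokens / elapsed:6.2f}")
    return tokens, tokens / elapsed

results = [bench_once("Tell me a long story.") for _ in range(5)]
mean_tokens = sum(t for t, _ in results) / len(results)
mean_speed = sum(s for _, s in results) / len(results)
print(f"mean tokens = {mean_tokens}, mean speed = {mean_speed:.3f}")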
Hardware: Intel Arc A770 16GB
Model tested: https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final
Quantization: q4f16_1
Results with a standard compile using the nightly wheel (0.1.dev1287):

(completion tokens: 694) Completion tokens/sec =  29.19
(completion tokens: 558) Completion tokens/sec =  28.68
(completion tokens: 462) Completion tokens/sec =  30.96
(completion tokens: 576) Completion tokens/sec =  30.12
(completion tokens: 717) Completion tokens/sec =  29.36
mean tokens = 601.4, mean speed = 29.662

Without tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR():

(completion tokens: 475) Completion tokens/sec =  26.87
(completion tokens: 691) Completion tokens/sec =  27.98
(completion tokens: 568) Completion tokens/sec =  29.80
(completion tokens: 504) Completion tokens/sec =  30.13
(completion tokens: 434) Completion tokens/sec =  30.68
mean tokens = 534.4, mean speed = 29.092

ibps3vxo 2#

Hi @0xDEADFED5. I created this issue for the Phi-2 model (https://huggingface.co/microsoft/phi-2); I'm not sure how Llama-3 behaves.


yizd12fk 3#

Yes, I know; I'm just adding more data. I'm using the Hive network as my only internet connection, so I can't test that model.
I bet your numbers would settle down if you ran more benchmark runs with more tokens.


57hvy0tb 4#

I'll run the benchmarks to check. But @0xDEADFED5, at the very least the decode speed shouldn't be affected by the prompt input, right?
