CoNST: Code Generator for Sparse Tensor Networks
ACM Transactions on Architecture and Code Optimization, 2024
Sparse tensor networks represent contractions over multiple sparse tensors. Tensor contractions are higher-order analogs of matrix multiplication. Tensor networks arise commonly in many domains of scientific computing and data science. Such networks are typically computed using a tree of binary contractions. Several critical inter-dependent aspects must be considered in the generation of efficient code for a contraction tree, including sparse tensor layout mode order, loop fusion to reduce intermediate tensors, and the mutual dependence of loop order, mode order, and contraction order.
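To make the contraction-tree idea concrete, here is a minimal sketch (illustrative, not taken from the paper) of a hypothetical three-tensor network evaluated two ways, as two different trees of binary contractions. For sparse tensors, the choice of tree changes the shape and density of the intermediate tensor, which is why contraction order matters:

```python
import numpy as np

# Hypothetical 3-tensor network: X[i,l] = sum over j,k of A[i,j] * B[j,k] * C[k,l]
rng = np.random.default_rng(0)
A = rng.random((4, 5))
B = rng.random((5, 6))
C = rng.random((6, 3))

# One contraction tree: contract A with B first, then the result with C.
T = np.einsum("ij,jk->ik", A, B)   # binary contraction 1 (intermediate tensor T)
X = np.einsum("ik,kl->il", T, C)   # binary contraction 2

# A different tree (B with C first) gives the same result but a different
# intermediate; for sparse tensors the two trees can differ greatly in cost.
X2 = np.einsum("ij,jl->il", A, np.einsum("jk,kl->jl", B, C))
assert np.allclose(X, X2)
```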
We propose CoNST, a novel approach that considers these factors in an integrated manner using a single formulation. Our approach creates a constraint system that encodes these decisions and their interdependence, while aiming to produce reduced-order intermediate tensors via fusion. The constraint system is solved by the Z3 SMT solver and the result is used to create the desired fused loop structure and tensor mode layouts for the entire contraction tree. This structure is lowered to the IR of the TACO compiler, which is then used to generate executable code. Our experimental evaluation demonstrates significant performance improvements over current state-of-the-art sparse tensor compiler/library alternatives.
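The paper's actual constraint formulation and its Z3 encoding are not reproduced here; as a toy stand-in, the sketch below brute-forces over loop orders to illustrate the same constraint-satisfaction idea, with hypothetical "index a must be ordered outside index b" constraints of the kind a mode-order/loop-order formulation might impose:

```python
from itertools import permutations

# Toy stand-in for the SMT step (CoNST uses the Z3 solver; brute force over
# permutations illustrates the same constraint-satisfaction idea on 4 indices).
indices = ("i", "j", "k", "l")

# Hypothetical constraints: each pair (a, b) means "loop a is outside loop b".
outside = [("i", "j"), ("k", "l")]

def satisfies(order):
    pos = {v: n for n, v in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in outside)

solutions = [order for order in permutations(indices) if satisfies(order)]
print("feasible loop orders:", len(solutions))
print("example (outer to inner):", solutions[0])
```

A real formulation must additionally couple these ordering variables with tensor mode layouts and fusion decisions across the whole contraction tree, which is what makes an integrated SMT encoding attractive over solving each aspect in isolation.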