
Conversation

@kimm240 (Contributor) commented Nov 26, 2025

The FuseReductionEpilogue primitive currently supports fusing bias addition epilogues into reduction blocks. This commit extends the primitive to also support ReLU activation functions in epilogue blocks, enabling fusion of patterns like max(temp + bias, 0) into the reduction computation.
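
For concreteness, here is a hypothetical TVMScript sketch of the kind of pattern the primitive targets; shapes, buffer names, and block names are illustrative assumptions, not taken from this PR:

from tvm.script import tir as T

@T.prim_func
def before_fusion(A: T.Buffer((8, 16), "float32"),
                  B: T.Buffer((16, 8), "float32"),
                  Bias: T.Buffer((8, 8), "float32"),
                  Out: T.Buffer((8, 8), "float32")):
    # Reduction block writing into an intermediate buffer.
    Temp = T.alloc_buffer((8, 8), "float32")
    for i, j, k in T.grid(8, 8, 16):
        with T.block("matmul"):
            vi, vj, vk = T.axis.remap("SSR", [i, j, k])
            with T.init():
                Temp[vi, vj] = T.float32(0)
            Temp[vi, vj] = Temp[vi, vj] + A[vi, vk] * B[vk, vj]
    # Epilogue block: the max(temp + bias, 0) shape this PR can now fuse.
    for i, j in T.grid(8, 8):
        with T.block("epilogue"):
            vi, vj = T.axis.remap("SS", [i, j])
            Out[vi, vj] = T.max(Temp[vi, vj] + Bias[vi, vj], T.float32(0))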

The implementation adds an EpilogueType enumeration to distinguish between Bias and BiasReLU patterns. The AnalyzeEpiloguePattern method is extended to detect ReLU patterns by checking for MaxNode expressions with zero constants.

This commit also adds comprehensive tests in
test_tir_schedule_fuse_reduction_epilogue_relu.py, following the same patterns as the existing bias tests. The tests verify structural equality, numerical correctness with per-iteration ReLU semantics, and multiple epilogue block scenarios. All tests pass successfully.
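
To make the "per-iteration ReLU semantics" wording concrete, here is a minimal numpy sketch, assuming the fused block clamps the accumulator at the init step and after every update step; the exact placement is defined by the primitive and its tests, not by this sketch:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16)).astype("float32")
B = rng.standard_normal((16, 8)).astype("float32")
bias = rng.standard_normal((8, 8)).astype("float32")

# Unfused reference: ReLU applied once, after the full reduction.
post_reduction = np.maximum(A @ B + bias, 0.0)

# Per-iteration semantics: the accumulator is clamped at init and at each
# reduction step, so the result can differ from the post-reduction form
# whenever a partial sum dips below zero mid-reduction.
acc = np.maximum(bias, 0.0)  # init step: max(0 + bias, 0)
for k in range(A.shape[1]):
    acc = np.maximum(acc + np.outer(A[:, k], B[k, :]), 0.0)  # update step

This difference is why the tests verify numerical correctness against the per-iteration definition rather than against ReLU applied after the full reduction.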

@gemini-code-assist (Contributor)

Summary of Changes

Hello @kimm240, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the FuseReductionEpilogue primitive by adding support for ReLU activation functions. This allows for more aggressive fusion of common patterns like max(temp + bias, 0) directly into reduction computations, which can lead to improved performance by reducing memory traffic and increasing computational density.

Highlights

  • ReLU Support: The FuseReductionEpilogue primitive now supports fusing ReLU activation functions (max(temp + bias, 0)) into reduction blocks.
  • Epilogue Type Enumeration: A new EpilogueType enum (Bias, BiasReLU) has been introduced to differentiate between epilogue patterns.
  • Pattern Analysis Enhancement: The AnalyzeEpiloguePattern method is extended to detect ReLU patterns by looking for MaxNode expressions with a zero constant (a rough Python sketch of this check follows this list).
  • Per-Iteration ReLU Semantics: The fusion logic correctly applies ReLU per-iteration during both the initialization and update steps of the fused reduction block.
  • Comprehensive Testing: New tests have been added to verify structural equality, numerical correctness, and scenarios involving multiple epilogue blocks.
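
As a rough illustration of the detection idea, here is a hypothetical Python helper built on TVM's expression nodes; the actual implementation lives in the C++ AnalyzeEpiloguePattern method:

import tvm
from tvm import tir

def looks_like_relu(expr: tir.PrimExpr) -> bool:
    # Hypothetical helper mirroring the MaxNode-with-zero-constant check.
    if not isinstance(expr, tir.Max):
        return False
    rhs = expr.b
    return isinstance(rhs, (tir.IntImm, tir.FloatImm)) and rhs.value == 0

temp = tir.Var("temp", "float32")
bias = tir.Var("bias", "float32")
print(looks_like_relu(tir.max(temp + bias, tir.FloatImm("float32", 0.0))))  # True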

@gemini-code-assist (bot) left a comment


Code Review

This pull request extends the FuseReductionEpilogue primitive to support ReLU activation functions, which is a great enhancement. The implementation is clean and follows existing patterns. The addition of EpilogueType to differentiate between Bias and BiasReLU is a good approach. The new tests are comprehensive, covering structural equality, numerical correctness with per-iteration ReLU semantics, and multiple epilogue scenarios. I have one minor suggestion to simplify the code.

Comment on lines +1124 to +1129
bool is_zero_const = false;
if (tir::is_zero(max_node->b)) {
  is_zero_const = true;
} else if (const auto* float_imm = max_node->b.as<FloatImmNode>()) {
  is_zero_const = (float_imm->value == 0.0);
}

Severity: medium

The tir::is_zero function handles both integer and floating-point zero constants, so the else if condition checking for FloatImmNode is redundant. This check can be simplified.

Suggested change
bool is_zero_const = false;
if (tir::is_zero(max_node->b)) {
  is_zero_const = true;
} else if (const auto* float_imm = max_node->b.as<FloatImmNode>()) {
  is_zero_const = (float_imm->value == 0.0);
}
bool is_zero_const = tir::is_zero(max_node->b);

@kimm240 (Contributor, Author) commented Nov 26, 2025

@wrongtest-intellif
This pull request extends the primitive's activation function support, following https://github.com/apache/tvm/pull/18418, which added the original FuseReductionEpilogue primitive (bias fusion) to the Apache TVM repository.

Furthermore, this enhancement allows TVM to accurately recognize complex but standardized patterns, such as max(temp + bias, 0), at the IR (Intermediate Representation) level, and to reconstruct them using an optimized approach: per-iteration ReLU semantics.
