
Conversation

@JoonPark1

…cordingly once engine submit timeout is reached - prevent subsequent kyuubi OOM

Why are the changes needed?

This PR addresses bug #7226. It updates the metadata store accordingly for batch jobs that have timed out while waiting for an available Spark driver engine. This prevents a subsequently restarted Kyuubi server from repeatedly polling the Spark application status of every such batch job, which can cause consecutive OOM errors when Kyuubi is deployed in Kubernetes cluster mode.
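The intended effect can be sketched roughly as follows. This is a hedged illustration, not the actual patch: the class and method names (`SubmitTimeoutHandler`, `onSubmitTimeout`) are assumptions, and the `Metadata` fields are modeled on the snippet quoted later in this thread.

```scala
// Hedged sketch; names are assumptions, simplified from the real Kyuubi classes.
case class Metadata(
    identifier: String,
    state: String,
    engineState: String,
    engineError: Option[String],
    endTime: Long)

trait MetadataStore {
  def updateMetadata(metadata: Metadata): Unit
}

object SubmitTimeoutHandler {
  // Once the engine submit timeout elapses with no driver pod found,
  // persist a terminal TIMEOUT state so a restarted Kyuubi server
  // does not keep polling this batch for a Spark application status.
  def onSubmitTimeout(batchId: String, store: MetadataStore, error: String): Unit = {
    store.updateMetadata(Metadata(
      identifier = batchId,
      state = "TIMEOUT",         // terminal batch state
      engineState = "NOT_FOUND", // no driver pod ever appeared
      engineError = Some(error),
      endTime = System.currentTimeMillis()))
  }
}
```

The key point is that the state written on timeout is terminal, so nothing downstream has a reason to poll the cluster for it again.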

How was this patch tested?

This patch was tested through an integration test added to the test suite SparkOnKubernetesTestsSuite.scala.

Was this patch authored or co-authored using generative AI tooling?

No!

@codecov-commenter

codecov-commenter commented Oct 21, 2025

Codecov Report

❌ Patch coverage is 0% with 14 lines in your changes missing coverage. Please review.
✅ Project coverage is 0.00%. Comparing base (3b205a3) to head (de98cf8).
⚠️ Report is 1 commits behind head on master.

Files with missing lines Patch % Lines
...kyuubi/engine/KubernetesApplicationOperation.scala 0.00% 14 Missing ⚠️
Additional details and impacted files
@@          Coverage Diff           @@
##           master   #7227   +/-   ##
======================================
  Coverage    0.00%   0.00%           
======================================
  Files         696     696           
  Lines       43530   43543   +13     
  Branches     5883    5884    +1     
======================================
- Misses      43530   43543   +13     


@pan3793 pan3793 requested a review from turboFei October 24, 2025 08:56
@turboFei turboFei requested a review from Copilot October 24, 2025 19:56

Copilot AI left a comment


Pull Request Overview

This PR addresses issue #7226 by preventing Kyuubi OOM errors when multiple batch jobs time out waiting for Spark driver engines. When a batch job reaches the engine submit timeout, the metadata store is now properly updated with TIMEOUT state and NOT_FOUND engine state, preventing the restarted Kyuubi server from repeatedly polling these timed-out jobs.

Key Changes:

  • Updated timeout handling to persist batch job state when engine submission times out
  • Added metadata store update with proper error state and message on timeout
  • Added integration test to verify timeout behavior updates metadata correctly

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
KubernetesApplicationOperation.scala Added metadata store update logic when driver pod is not found after submit timeout
SparkOnKubernetesTestsSuite.scala Added integration test verifying timeout state is properly persisted to metadata store


assert(!failKillResponse._1)
}
test(
"If spark batch reach timeout, it should have associated Kyuubi Application Operation be " +

Copilot AI Oct 24, 2025


Grammatical error in test description. Should be 'reaches timeout' instead of 'reach timeout', and 'should have the associated' instead of 'should have associated'.

Suggested change
"If spark batch reach timeout, it should have associated Kyuubi Application Operation be " +
"If spark batch reaches timeout, it should have the associated Kyuubi Application Operation be " +

@turboFei
Member

turboFei commented Oct 24, 2025

Hi @JoonPark1
Thanks for the contribution.

For this issue, is there a chance to update the metadata in BatchJobSubmission?

private def updateBatchMetadata(): Unit = {
  val endTime = if (isTerminalState(state)) lastAccessTime else 0L
  if (isTerminalState(state) && _applicationInfo.isEmpty) {
    _applicationInfo = Some(ApplicationInfo.NOT_FOUND)
  }
  _applicationInfo.foreach { appInfo =>
    val metadataToUpdate = Metadata(
      identifier = batchId,
      state = state.toString,
      engineOpenTime = appStartTime,
      engineId = appInfo.id,
      engineName = appInfo.name,
      engineUrl = appInfo.url.orNull,
      engineState = getAppState(state, appInfo.state).toString,
      engineError = appInfo.error,
      endTime = endTime)
    session.sessionManager.updateMetadata(metadataToUpdate)
  }
}

@JoonPark1
Author

Hey @turboFei. I believe the Spark driver engine state and the Spark app state will be updated in the metadata store...

@turboFei
Member

turboFei commented Oct 24, 2025

Hi @JoonPark1
Before this PR, could it not update the metadata with BatchJobSubmission::updateBatchMetadata?

Could you provide more details?

@JoonPark1
Author

@turboFei Sure! Once a Kyuubi batch job times out because the elapsed time exceeds the configured submitTimeout value (no Spark driver has been instantiated and reached the running state to handle the submitted batch job), the metadata about the Spark application and the Spark driver engine state is updated via the updateMetadata method of org.apache.kyuubi.server.metadata.MetadataManager, which takes the new up-to-date Metadata object (an instance of org.apache.kyuubi.server.metadata.api.Metadata). Internally, the manager then calls the updateMetadata method of org.apache.kyuubi.server.metadata.MetadataStore, which keeps the state of each submitted Kyuubi batch job using the Spark compute engine in sync with Kyuubi's metadata store in the relational DB. As you can see, this flow does not need to invoke BatchJobSubmission::updateBatchMetadata to update Kyuubi's metadata store.
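A minimal sketch of that delegation, under the assumption that only the forwarding shape matters here (the real Kyuubi classes carry many more fields and responsibilities):

```scala
// Simplified illustration of MetadataManager forwarding to MetadataStore;
// class shapes are assumptions based on the description above.
case class Metadata(identifier: String, state: String, engineState: String)

trait MetadataStore {
  // Persists the batch state, e.g. into a relational DB table.
  def updateMetadata(metadata: Metadata): Unit
}

// The manager forwards the up-to-date Metadata to the store, keeping the
// DB-backed state in sync with the server's view of the batch.
class MetadataManager(store: MetadataStore) {
  def updateMetadata(metadata: Metadata): Unit = store.updateMetadata(metadata)
}
```

With this shape, the timeout path can hand a terminal Metadata record straight to the manager, without routing through BatchJobSubmission::updateBatchMetadata.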
