widgets/ScalaSparkCompute-sparkcompute.json
8 lines changed: 8 additions & 0 deletions
@@ -15,6 +15,14 @@
"default": "/**\n * Transforms the provided input Apache Spark RDD or DataFrame into another RDD or DataFrame.\n *\n * The input DataFrame has the same schema as the input schema to this stage and the transform method should return a DataFrame that has the same schema as the output schema setup for this stage.\n * To emit logs, use: \n * import org.slf4j.LoggerFactory\n * val logger = LoggerFactory.getLogger('mylogger')\n * logger.info('Logging')\n *\n *\n * @param input the input DataFrame which has the same schema as the input schema to this stage.\n * @param context a SparkExecutionPluginContext object that can be used to emit zero or more records (using the emitter.emit() method) or errors (using the emitter.emitError() method) \n * @param context an object that provides access to:\n * 1. CDAP Datasets and Streams - context.fromDataset('counts'); or context.fromStream('input');\n * 2. Original Spark Context - context.getSparkContext();\n * 3. Runtime Arguments - context.getArguments.get('priceThreshold')\n */\ndef transform(df: DataFrame, context: SparkExecutionPluginContext) : DataFrame = {\n df\n}"