34 commits
78196a8
Added missing saving functions for ReLU and ELU activation layers (Je…
dosier Jun 14, 2021
6347d65
Reverted changes to the imports
dosier Jun 14, 2021
208cd56
Merge branch 'master' of https://github.com/JetBrains/KotlinDL
dosier Jun 16, 2021
0dd75a8
Merge branch 'master' of https://github.com/JetBrains/KotlinDL
dosier Jun 18, 2021
4c88bb8
Merge branch 'master' of https://github.com/JetBrains/KotlinDL
dosier Jul 8, 2021
a904dc8
Merge remote-tracking branch 'JetBrains/master'
dosier Nov 15, 2021
42cabce
Merge remote-tracking branch 'JetBrains/master'
dosier Jun 8, 2022
71a87e8
Merge remote-tracking branch 'JetBrains/master'
dosier Sep 29, 2022
a175f9b
Updated shape documentation in GlobalAvgPool2D (#211)
dosier Sep 29, 2022
e41ac23
Added shape docs for ELU layer from Keras (#211)
dosier Sep 29, 2022
c672d06
Added shape docs for LeakyReLU layer from Keras (#211)
dosier Sep 29, 2022
f5c4d53
Added shape docs for PReLU layer from Keras (#211)
dosier Sep 29, 2022
e044a7f
Added shape docs for Softmax layer from Keras (#211)
dosier Sep 29, 2022
aa8f4f3
Added shape docs for ThresholdedReLU layer from Keras (#211)
dosier Sep 29, 2022
7521fdc
Added shape docs for DepthwiseConv2D layer from Keras (#211)
dosier Sep 29, 2022
b1e48be
Added shape docs for SeparableConv2D layer from Keras (#211)
dosier Sep 29, 2022
8f7fef8
Added shape docs for Dense layer from Keras (#211)
dosier Sep 29, 2022
02b42aa
Added shape docs for BatchNorm layer from Keras (#211)
dosier Sep 29, 2022
6e57581
Added shape docs for AvgPool1D layer from Keras (#211)
dosier Sep 29, 2022
138a215
Added shape docs for AvgPool2D layer from Keras (#211)
dosier Sep 29, 2022
b858b06
Added shape docs for AvgPool3D layer from Keras (#211)
dosier Sep 29, 2022
ab90b54
Added shape docs for GlobalMaxPool1D layer from Keras (#211)
dosier Sep 29, 2022
e1dca8d
Added shape docs for GlobalMaxPool2D layer from Keras (#211)
dosier Sep 29, 2022
b66ac7f
Added shape docs for GlobalMaxPool3D layer from Keras (#211)
dosier Sep 29, 2022
4ac2bcb
Added shape docs for MaxPool1D layer from Keras (#211)
dosier Sep 29, 2022
dfe7f61
Added shape docs for MaxPool2D layer from Keras (#211)
dosier Sep 29, 2022
27ad318
Added shape docs for MaxPool3D layer from Keras (#211)
dosier Sep 29, 2022
df6b4b1
Added shape docs for ZeroPadding1D layer from Keras (#211)
dosier Sep 29, 2022
e6d5118
Added shape docs for ZeroPadding2D layer from Keras (#211)
dosier Sep 29, 2022
d5c8d88
Added shape docs for ZeroPadding3D layer from Keras (#211)
dosier Sep 29, 2022
dc372ff
Added missing line breaks in shape docs (#211)
dosier Sep 29, 2022
627db27
Added shape docs for ReLU layer from Keras (#211)
dosier Sep 29, 2022
c8a990c
Merge remote-tracking branch 'JetBrains/master' into missing-layer-do…
dosier Mar 2, 2023
34c394f
Updated Dense input/output shape documentation
dosier Mar 2, 2023
@@ -21,6 +21,12 @@ import org.tensorflow.op.Ops
* the activation closer to zero which enable faster learning as they
* bring the gradient to the natural gradient.
*
* __Input shape__: Arbitrary. Use the keyword argument `input_shape`
* (tuple of integers, does not include the samples axis)
* when using this layer as the first layer in a model.
*
* __Output shape__: Same shape as the input.
*
* @property [alpha] Hyperparameter that controls the value to which
* an ELU saturates for negative net inputs. Should be > 0.
* @constructor Creates [ELU] object.
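For a concrete (if simplified) picture of the "same shape as the input" contract, here is a minimal standalone Kotlin sketch of the element-wise ELU formula; it is illustrative only and is not the layer's actual TensorFlow-based implementation:

```kotlin
import kotlin.math.exp

// Element-wise ELU: f(x) = x for x >= 0, alpha * (exp(x) - 1) for x < 0.
fun elu(input: FloatArray, alpha: Float = 1.0f): FloatArray =
    FloatArray(input.size) { i ->
        val x = input[i]
        if (x >= 0f) x else alpha * (exp(x) - 1f)
    }

fun main() {
    val x = floatArrayOf(-2f, -0.5f, 0f, 1.5f)
    val y = elu(x)
    // The output has exactly as many elements as the input: the shape is unchanged.
    println("input size = ${x.size}, output size = ${y.size}")
}
```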
@@ -16,6 +16,13 @@ import org.tensorflow.op.Ops
* f(x) = x, if x >= 0
* f(x) = alpha * x if x < 0
* ```
*
* __Input shape:__ Arbitrary. Use the keyword argument `input_shape`
* (tuple of integers, does not include the batch axis)
* when using this layer as the first layer in a model.
*
* __Output shape:__ Same shape as the input.
*
* @property [alpha] Negative slope coefficient. Should be >= 0.
* @constructor Creates [LeakyReLU] object.
* @since 0.3
@@ -26,6 +26,12 @@ import org.tensorflow.op.Ops
* ```
* where `alpha` is a learnable weight and has the same shape as `x` (i.e. input).
*
* __Input shape:__ Arbitrary. Use the keyword argument `input_shape`
* (tuple of integers, does not include the samples axis)
* when using this layer as the first layer in a model.
*
* __Output shape:__ Same shape as the input.
*
* @property [alphaInitializer] Initializer instance for the weights.
* @property [alphaRegularizer] Regularizer instance for the weights.
* @property [sharedAxes] The axes along which to share learnable parameters.
@@ -19,6 +19,13 @@ import org.tensorflow.op.Ops
* f(x) = x, if threshold <= x < maxValue
* f(x) = negativeSlope * (x - threshold), if x < threshold
* ```
*
* __Input shape:__ Arbitrary. Use the keyword argument `input_shape`
* (tuple of integers, does not include the batch axis)
* when using this layer as the first layer in a model.
*
* __Output shape:__ Same shape as the input.
*
* @property [maxValue] Maximum activation value. Should be >= 0.
* @property [negativeSlope] Negative slope coefficient. Should be >= 0.
* @property [threshold] Threshold value for threshold activation.
@@ -21,6 +21,12 @@ import org.tensorflow.op.core.ReduceSum
* softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))
* ```
*
* __Input shape:__ Arbitrary. Use the keyword argument `input_shape`
* (tuple of integers, does not include the samples axis)
* when using this layer as the first layer in a model.
*
* __Output shape:__ Same shape as the input.
*
* @property [axis] along which the softmax normalization is applied.
* @constructor Creates [Softmax] object
* @since 0.3
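As a rough, standalone illustration of the formula above (not the layer's TensorFlow implementation), a row-wise softmax over a small 2D array; it shows that the output keeps the input shape and that each row sums to 1:

```kotlin
import kotlin.math.exp

// Row-wise softmax: softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j])).
fun softmax(logits: Array<FloatArray>): Array<FloatArray> =
    Array(logits.size) { i ->
        val max = logits[i].maxOrNull() ?: 0f                          // stabilizes exp()
        val exps = FloatArray(logits[i].size) { j -> exp(logits[i][j] - max) }
        val sum = exps.sum()
        FloatArray(exps.size) { j -> exps[j] / sum }
    }

fun main() {
    val logits = arrayOf(floatArrayOf(1f, 2f, 3f), floatArrayOf(0f, 0f, 0f))
    val probs = softmax(logits)
    // Same 2 x 3 shape as the input; every row sums to 1.
    probs.forEach { row -> println(row.joinToString() + "  (sum = ${row.sum()})") }
}
```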
@@ -16,6 +16,13 @@ import org.tensorflow.op.Ops
* f(x) = x, if x > theta
* f(x) = 0 otherwise
* ```
*
* __Input shape:__ Arbitrary. Use the keyword argument `input_shape`
* (tuple of integers, does not include the samples axis)
* when using this layer as the first layer in a model.
*
* __Output shape:__ Same shape as the input.
*
* @property [theta] Threshold value for activation.
* @constructor Creates [ThresholdedReLU] object.
*
@@ -30,6 +30,11 @@ import org.tensorflow.op.nn.DepthwiseConv2dNative
* The `depthMultiplier` argument controls how many
* output channels are generated per input channel in the depthwise step.
*
* __Input shape:__ 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* __Output shape:__ 4D tensor with shape `(batch_size, new_rows, new_cols, channels * depth_multiplier)`.
* `rows` and `cols` values might have changed due to padding.
*
* @property [kernelSize] Two long numbers, specifying the height and width of the 2D convolution window.
* @property [strides] Strides of the pooling operation for each dimension of input tensor.
* NOTE: Specifying any stride value != 1 is incompatible with specifying any `dilation_rate` value != 1.
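A back-of-the-envelope Kotlin sketch of the shape note above, assuming VALID padding and stride 1; it only mirrors the documented arithmetic, not the layer's real shape-inference code:

```kotlin
fun main() {
    val batch = 32; val rows = 28; val cols = 28; val channels = 3
    val kernel = 3; val stride = 1
    val depthMultiplier = 2

    // VALID padding: each spatial dimension shrinks by (kernel - 1) when stride == 1.
    val newRows = (rows - kernel) / stride + 1
    val newCols = (cols - kernel) / stride + 1

    // The depthwise step produces depthMultiplier output channels per input channel.
    val outChannels = channels * depthMultiplier

    println("output shape = ($batch, $newRows, $newCols, $outChannels)") // (32, 26, 26, 6)
}
```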
@@ -35,6 +35,11 @@ import kotlin.math.roundToInt
* Intuitively, separable convolutions can be understood as
* a way to factorize a convolution kernel into two smaller kernels, or as an extreme version of an Inception block.
*
* __Input shape:__ 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* __Output shape:__ 4D tensor with shape `(batch_size, new_rows, new_cols, filters)`.
* `rows` and `cols` values might have changed due to padding.
*
* @property [filters] The dimensionality of the output space (i.e. the number of filters in the convolution).
* @property [kernelSize] Two long numbers, specifying the height and width of the 2D convolution window.
* @property [strides] Strides of the pooling operation for each dimension of input tensor.
@@ -31,6 +31,10 @@ import org.tensorflow.op.Ops
* created by the layer, and `bias` is a bias vector created by the layer
* (only applicable if `use_bias` is `True`).
*
* __Input shape:__ 2D tensor with shape `(batch_size, input_dim)`.
*
* __Output shape:__ 2D tensor with shape `(batch_size, units)`.
*
* @property [outputSize] Dimensionality of the output space.
* @property [activation] Activation function.
* @property [kernelInitializer] Initializer function for the 'kernel' weights matrix.
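A usage-style sketch of the Dense shape contract. The Sequential/Input/Dense names and package paths below are assumed from recent KotlinDL releases rather than taken from this diff:

```kotlin
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.layer.core.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.core.Input

// A (batch_size, 784) input passed through Dense(outputSize = 64) gives (batch_size, 64);
// the final Dense(outputSize = 10) gives (batch_size, 10).
val model = Sequential.of(
    Input(784),
    Dense(outputSize = 64),
    Dense(outputSize = 10)
)
```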
@@ -22,6 +22,12 @@ import org.tensorflow.op.core.Variable
/**
* NOTE: This layer is not trainable and does not update its weights. It's frozen by default.
*
* __Input shape:__ Arbitrary. Use the keyword argument `input_shape`
* (tuple of integers, does not include the samples axis)
* when using this layer as the first layer in a model.
*
* __Output shape:__ Same shape as input.
*
* @property [axis] Integer or a list of integers, the axis that should be normalized (typically the features' axis).
* @property [momentum] Momentum for the moving average.
* @property [center] If True, add offset of beta to normalized tensor. If False, beta is ignored.
@@ -17,6 +17,10 @@ import org.tensorflow.op.core.Squeeze
*
* Downsamples the input by taking the average over a temporal window of size [poolSize].
*
* __Input shape:__ 3D tensor with shape `(batch_size, steps, features)`.
*
* __Output shape:__ 3D tensor with shape `(batch_size, downsampled_steps, features)`.
*
* @property [poolSize] Size of the temporal pooling window for each dimension of input.
* @property [strides] The amount of shift for pooling window per each input dimension in each pooling step.
* @property [padding] Padding strategy; can be either of [ConvPadding.VALID] which means no
@@ -14,7 +14,9 @@ import org.tensorflow.op.Ops
/**
* Average pooling layer for 2D inputs (e.g. images).
*
* NOTE: Works with tensors which must have rank 4 (batch, height, width, channels).
* __Input shape:__ 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* __Output shape:__ 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.
*
* @property [poolSize] The size of the sliding window for each dimension of input tensor (pool batch, pool height, pool width, pool channels).
* Usually, pool batch and pool channels are equal to 1.
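For reference, the relation between the input and the pooled spatial dimensions can be sketched in a few lines of Kotlin (the same arithmetic applies to the 1D and 3D pooling layers in this change set); this mirrors the documented behaviour only, not the layer's internal code:

```kotlin
// Pooled output size for VALID (no padding) and SAME padding.
fun pooledSize(inputSize: Int, poolSize: Int, stride: Int, samePadding: Boolean): Int =
    if (samePadding) (inputSize + stride - 1) / stride       // ceil(inputSize / stride)
    else (inputSize - poolSize) / stride + 1                 // floor((inputSize - poolSize) / stride) + 1

fun main() {
    // A (32, 28, 28, 3) input with a 2x2 window, stride 2 and VALID padding:
    val pooledRows = pooledSize(28, poolSize = 2, stride = 2, samePadding = false)
    val pooledCols = pooledSize(28, poolSize = 2, stride = 2, samePadding = false)
    println("output shape = (32, $pooledRows, $pooledCols, 3)") // (32, 14, 14, 3)
}
```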
@@ -17,6 +17,10 @@ import org.tensorflow.op.Ops
*
* Downsamples the input by taking the average over a window of size [poolSize].
*
* __Input shape:__ 5D tensor with shape `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`.
*
* __Output shape:__ 5D tensor with shape `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`.
*
* @property [poolSize] Size of the pooling window for each dimension of input.
* @property [strides] The amount of shift for pooling window per each input dimension in each pooling step.
* @property [padding] Padding strategy; can be either of [ConvPadding.VALID] which means no
@@ -13,11 +13,9 @@ import org.tensorflow.op.Ops
/**
* Global average pooling operation for 2D data (images and so on).
*
* NOTE: Works with tensors which must have rank 4 (batch, height, width, channels).
* __Input shape:__ 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* Input shape: 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* Output shape: 2D tensor with shape `(batch_size, channels)`.
* __Output shape:__ 2D tensor with shape `(batch_size, channels)`.
*
* @property [name] Custom layer name.
* @constructor Creates [GlobalAvgPool2D] object.
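A minimal pure-Kotlin sketch of that reduction, averaging over the two spatial axes only, to show how the rank-4 input collapses to a rank-2 output (the actual layer does this with TensorFlow ops):

```kotlin
// input: [batch][rows][cols][channels]  ->  output: [batch][channels]
fun globalAvgPool2d(input: Array<Array<Array<FloatArray>>>): Array<FloatArray> =
    Array(input.size) { b ->
        val rows = input[b].size
        val cols = input[b][0].size
        val channels = input[b][0][0].size
        FloatArray(channels) { c ->
            var sum = 0f
            for (r in 0 until rows) for (col in 0 until cols) sum += input[b][r][col][c]
            sum / (rows * cols)
        }
    }

fun main() {
    // A (2, 4, 4, 3) input produces a (2, 3) output.
    val input = Array(2) { Array(4) { Array(4) { FloatArray(3) { 1f } } } }
    val output = globalAvgPool2d(input)
    println("output shape = (${output.size}, ${output[0].size})") // (2, 3)
}
```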
@@ -14,6 +14,10 @@ import org.tensorflow.op.Ops
*
* Downsamples the input by taking the maximum value over time dimension.
*
* __Input shape:__ 3D tensor with shape `(batch_size, steps, features)`.
*
* __Output shape:__ 2D tensor with shape `(batch_size, features)`.
*
* @since 0.3
*/
public class GlobalMaxPool1D(
@@ -14,6 +14,10 @@ import org.tensorflow.op.Ops
*
* Downsamples the input by taking the maximum value over spatial dimensions.
*
* __Input shape:__ 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* __Output shape:__ 2D tensor with shape `(batch_size, channels)`.
*
* @since 0.3
*/
public class GlobalMaxPool2D(
@@ -14,6 +14,10 @@ import org.tensorflow.op.Ops
*
* Downsamples the input by taking the maximum value over spatio-temporal dimensions.
*
* __Input shape:__ 5D tensor with shape `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`.
*
* __Output shape:__ 2D tensor with shape `(batch_size, channels)`.
*
* @since 0.3
*/
public class GlobalMaxPool3D(
@@ -17,6 +17,10 @@ import org.tensorflow.op.core.Squeeze
*
* Downsamples the input by taking maximum value over a temporal window of size [poolSize].
*
* __Input shape:__ 3D tensor with shape `(batch_size, steps, features)`.
*
* __Output shape:__ 3D tensor with shape `(batch_size, downsampled_steps, features)`.
*
* @property [poolSize] Size of the temporal pooling window for each dimension of input.
* @property [strides] The amount of shift for pooling window per each input dimension in each pooling step.
* @property [padding] Padding strategy; can be either of [ConvPadding.VALID] which means no padding, or
@@ -13,7 +13,9 @@ import org.tensorflow.op.Ops
/**
* Max pooling layer for 2D inputs (e.g. images).
*
* NOTE: Works with tensors which must have rank 4 (batch, height, width, channels).
* __Input shape:__ 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* __Output shape:__ 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.
*
* @property [poolSize] The size of the sliding window for each dimension of input tensor (pool batch, pool height, pool width, pool channels).
* Usually, pool batch and pool channels are equal to 1.
@@ -13,7 +13,11 @@ import java.util.*

/**
* Max pooling operation for 3D data (spatial or spatio-temporal).
* NOTE: Works with tensors which must have rank 5 (batch, depth, height, width, channels).
*
* __Input shape:__ 5D tensor with shape `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`.
*
* __Output shape:__ 5D tensor with shape `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`.
*
* @property [poolSize] The size of the sliding window for each dimension of input tensor (pool batch, pool depth, pool height, pool width, pool channels).
* Usually, pool batch and pool channels are equal to 1.
* @property [strides] Strides of the pooling operation for each dimension of input tensor.
@@ -10,6 +10,11 @@ import org.tensorflow.Shape
/**
* Zero-padding layer for 1D input (e.g. audio).
* This layer can add zeros in the rows of the audio tensor.
*
* __Input shape:__ 3D tensor with shape `(batch_size, axis_to_pad, features)`.
Collaborator:
axis_to_pad is a somewhat misleading name because an axis usually refers to the dimension's index, but here it refers to the size of the dimension. Just dim would be ok.
*
* __Output shape:__ 3D tensor with shape `(batch_size, padded_axis, features)`.
*
* @property [padding] 2 numbers interpreted as `(left_pad, right_pad)`.
*
* @since 0.3
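To make the shape note concrete (and to echo the reviewer's point that the padded dimension simply grows by `left_pad + right_pad`), a standalone Kotlin sketch for a single feature row; the real layer pads a rank-3 tensor via TensorFlow:

```kotlin
// Zero-pads a 1D sequence on both sides: output length = input length + leftPad + rightPad.
fun zeroPad1d(input: FloatArray, leftPad: Int, rightPad: Int): FloatArray {
    val output = FloatArray(leftPad + input.size + rightPad) // zero-initialized by default
    input.copyInto(output, destinationOffset = leftPad)
    return output
}

fun main() {
    val steps = floatArrayOf(1f, 2f, 3f)
    val padded = zeroPad1d(steps, leftPad = 2, rightPad = 1)
    println("input length = ${steps.size}, padded length = ${padded.size}") // 3 -> 6
    println(padded.joinToString()) // 0.0, 0.0, 1.0, 2.0, 3.0, 0.0
}
```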
@@ -12,6 +12,11 @@ import org.tensorflow.Shape
/**
* Zero-padding layer for 2D input (e.g. picture).
* This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor.
*
* __Input shape:__ 4D tensor with shape `(batch_size, rows, cols, channels)`.
*
* __Output shape:__ 4D tensor with shape `(batch_size, padded_rows, padded_cols, channels)`.
*
* @property [padding] 4 numbers interpreted as `(top_pad, bottom_pad, left_pad, right_pad)`.
*/
public class ZeroPadding2D : AbstractZeroPadding {
@@ -10,6 +10,11 @@ import org.tensorflow.Shape
/**
* Zero-padding layer for 3D input (e.g. video).
* This layer can add zeros in the rows, cols and depth of a video tensor.
*
* __Input shape:__ 5D tensor with shape `(batch_size, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad, depth)`.
Collaborator:
Again, maybe it would be better to replace axis with dim
*
* __Output shape:__ 5D tensor with shape `(batch_size, first_padded_axis, second_padded_axis, third_padded_axis, depth)`.
*
* @property [padding] 6 numbers interpreted as `(left_dim1_pad, right_dim1_pad, left_dim2_pad, right_dim2_pad, left_dim3_pad, right_dim3_pad)`.
*
* @since 0.3