So, I quickly built one for tf.layers.conv2d and one for tf.layers.flatten, which I will share in this post. I have kept them as close to the function definitions in TensorFlow as possible.
1. conv2d - Functional interface for the 2D convolution layer.
function conv2d(
    inputs: Tensor,
    filters: number,
    kernel_size: number,
    graph: Graph,
    strides: number = 1,
    padding = "valid",
    data_format = "channels_last",
    activation?,
    kernel_initializer: Initializer = new VarianceScalingInitializer(),
    bias_initializer: Initializer = new ZerosInitializer(),
    name: string = "")
- inputs Tensor input.
- filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
- kernel_size Number specifying the height and width of the 2D convolution window.
- graph Graph object.
- strides Number specifying the stride of the convolution.
- padding One of "valid" or "same" (case-insensitive).
- data_format One of "channels_last" or "channels_first".
- activation Optional. Activation function applied to the output of the layer. The function should accept a Tensor and a Graph as parameters.
- kernel_initializer An initializer object for the convolution kernel.
- bias_initializer An initializer object for the bias.
- name String which represents the name of the layer.

Returns: Tensor output.
Usage:
// 32 5x5 filters
var network = conv2d(tensor, 32, 5, graph);
// 32 5x5 filters, stride 2, "same" padding with relu activation
var network = conv2d(tensor, 32, 5, graph, 2, "SAME", undefined, (layer, graph) => {return graph.relu(layer)});
// applying some kernel_initializer
var network = conv2d(x, 32, 5, g, undefined, undefined, undefined, undefined, new RandomUniformInitializer(0, 0.5));
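Since all the optional arguments are positional, it can be clearer to spell every parameter out once you need more than one of them. A small sketch, assuming `g` is a Graph and `x` is an input tensor as in the examples above:

// every parameter passed explicitly: stride 1, "same" padding, channels_last layout,
// relu activation, explicit initializers and a layer name
var network = conv2d(x, 64, 3, g, 1, "same", "channels_last",
    (layer, graph) => graph.relu(layer),
    new RandomUniformInitializer(0, 0.5),
    new ZerosInitializer(),
    "conv1");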
function conv2d(
    inputs: Tensor,
    filters: number,
    kernel_size: number,
    graph: Graph,
    strides: number = 1,
    padding = "valid",
    data_format = "channels_last",
    activation?,
    kernel_initializer: Initializer = new VarianceScalingInitializer(),
    bias_initializer: Initializer = new ZerosInitializer(),
    name: string = "") {
    // number of input channels, read from the input shape according to data_format
    const channel_axis = data_format === "channels_last" ? inputs.shape[2] : inputs.shape[0];
    // shape of the kernel used to create the filters
    const depthwise_kernel_shape = [kernel_size, kernel_size, channel_axis, filters];
    // create a new variable for the filter weights and apply the kernel initializer
    var weights = graph.variable(name + "w",
        kernel_initializer.initialize(depthwise_kernel_shape, kernel_size * kernel_size * channel_axis * filters,
            filters));
    // create a new variable for the bias and apply the bias initializer
    var bias = graph.variable(name + "b", bias_initializer.initialize([filters], kernel_size, filters));
    // call the underlying graph.conv2d; "valid" padding maps to zero padding,
    // otherwise graph.conv2d computes the default padding itself
    const layer = graph.conv2d(inputs, weights, bias, kernel_size, filters, strides,
        padding === "valid" || padding === "VALID" ? 0 : undefined);
    // return the output tensor, applying the activation if one was provided
    return activation === undefined ? layer : activation(layer, graph);
}
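Because the implementation registers its variables as name + "w" and name + "b", it helps to give each layer a distinct name when stacking several of them. A sketch, assuming `g` is a Graph and `x` is an input image tensor:

// two stacked conv2d layers; distinct names keep the graph variables
// ("conv1w", "conv1b", "conv2w", "conv2b") from sharing the same label
var network = conv2d(x, 32, 5, g, 1, "same", "channels_last",
    (layer, graph) => graph.relu(layer), undefined, undefined, "conv1");
network = conv2d(network, 64, 5, g, 2, "same", "channels_last",
    (layer, graph) => graph.relu(layer), undefined, undefined, "conv2");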
2. flatten - Flattens an input tensor.
/**
 * Flattens an input tensor.
 * @param inputs Tensor input
 * @param graph Graph object
 */
function flatten(inputs: Tensor, graph: Graph) {
    return graph.reshape(inputs, (() => {
        // total number of elements = product of all input dimensions
        let i = 1;
        inputs.shape.forEach((val) => { i *= val });
        return [i];
    })());
}
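For example, the output of a conv2d layer can be flattened before it is fed into a fully connected layer. A minimal sketch, assuming `g` is the Graph and `x` is a hypothetical 28x28x1 image tensor:

// conv layer with 32 5x5 filters and "same" padding, so the spatial size stays 28x28
var network = conv2d(x, 32, 5, g, 1, "same", "channels_last",
    (layer, graph) => graph.relu(layer), undefined, undefined, "conv1");
// flatten [28, 28, 32] into a single dimension of 28 * 28 * 32 = 25088 values
network = flatten(network, g);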