# TVM Operator Inventory¶


TOPI is the operator collection library for TVM. It provides syntactic sugar for constructing compute declarations, as well as optimized schedules for them.

Some of the schedule functions have been specially optimized for specific workloads.

## Index¶

List of operators

- topi.exp(x) – Take exponential of input x.
- topi.tanh(x) – Take hyperbolic tanh of input x.
- topi.log(x) – Take logarithm of input x.
- topi.sqrt(x) – Take square root of input x.
- topi.sigmoid(x) – Take sigmoid of input x.
- topi.transpose(a[, axes]) – Permute the dimensions of an array.
- topi.expand_dims(a, axis[, num_newaxis]) – Expand the shape of an array.
- topi.nn.relu(x) – Take relu of input x.
- topi.nn.leaky_relu(x, alpha) – Take leaky relu of input x.
- topi.nn.dilate(data, strides[, name]) – Dilate data with zeros.
- topi.nn.conv2d_nchw(Input, Filter, stride, …) – Convolution operator in NCHW layout.
- topi.nn.conv2d_hwcn(Input, Filter, stride, …) – Convolution operator in HWCN layout.
- topi.nn.depthwise_conv2d_nchw(Input, Filter, …) – Depthwise convolution nchw forward operator.
- topi.nn.depthwise_conv2d_nhwc(Input, Filter, …) – Depthwise convolution nhwc forward operator.
- topi.max(data[, axis, keepdims]) – Maximum of array elements over a given axis or a list of axes.
- topi.sum(data[, axis, keepdims]) – Sum of array elements over a given axis or a list of axes.
- topi.min(data[, axis, keepdims]) – Minimum of array elements over a given axis or a list of axes.
- topi.broadcast_to(data, shape) – Broadcast the src to the target shape.
- topi.broadcast_add(lhs, rhs) – Binary addition with auto-broadcasting.
- topi.broadcast_sub(lhs, rhs) – Binary subtraction with auto-broadcasting.
- topi.broadcast_mul(lhs, rhs) – Binary multiplication with auto-broadcasting.
- topi.broadcast_div(lhs, rhs) – Binary division with auto-broadcasting.

List of schedules

- topi.generic.schedule_conv2d_nchw(outs) – Schedule for conv2d_nchw.
- topi.generic.schedule_depthwise_conv2d_nchw(outs) – Schedule for depthwise_conv2d_nchw.
- topi.generic.schedule_reduce(outs) – Schedule for reduction.
- topi.generic.schedule_broadcast(outs) – Schedule for injective op.
- topi.generic.schedule_injective(outs) – Schedule for injective op.

## topi¶

topi.exp(x)

Take exponential of input x.

Parameters: x (tvm.Tensor) – Input argument.

Returns: y (tvm.Tensor) – The result.
topi.tanh(x)

Take hyperbolic tanh of input x.

Parameters: x (tvm.Tensor) – Input argument.

Returns: y (tvm.Tensor) – The result.
topi.log(x)

Take logarithm of input x.

Parameters: x (tvm.Tensor) – Input argument.

Returns: y (tvm.Tensor) – The result.
topi.sqrt(x)

Take square root of input x.

Parameters: x (tvm.Tensor) – Input argument.

Returns: y (tvm.Tensor) – The result.
topi.sigmoid(x)

Take sigmoid of input x.

Parameters: x (tvm.Tensor) – Input argument.

Returns: y (tvm.Tensor) – The result.
topi.transpose(a, axes=None)

Permute the dimensions of an array.

Parameters:
- a (tvm.Tensor) – The tensor to be transposed.
- axes (tuple of ints, optional) – By default, reverse the dimensions.

Returns: ret (tvm.Tensor)
topi.expand_dims(a, axis, num_newaxis=1)

Expand the shape of an array.

Parameters:
- a (tvm.Tensor) – The tensor to be expanded.
- axis (int) – The axis at which the new axes are inserted.
- num_newaxis (int, optional) – Number of new axes to be inserted on axis.

Returns: ret (tvm.Tensor)
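The shape effects of transpose and expand_dims can be sketched in plain Python. These helpers are illustrative only and are not part of TOPI:

```python
def transpose_shape(shape, axes=None):
    """Output shape of transpose(a, axes); axes=None reverses the dimensions."""
    if axes is None:
        return tuple(reversed(shape))
    return tuple(shape[i] for i in axes)

def expand_dims_shape(shape, axis, num_newaxis=1):
    """Output shape of expand_dims(a, axis, num_newaxis): insert
    num_newaxis size-1 dimensions starting at position axis."""
    return tuple(shape[:axis]) + (1,) * num_newaxis + tuple(shape[axis:])

print(transpose_shape((2, 3, 4)))             # (4, 3, 2)
print(transpose_shape((2, 3, 4), (1, 0, 2)))  # (3, 2, 4)
print(expand_dims_shape((2, 3), 1, 2))        # (2, 1, 1, 3)
```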
topi.max(data, axis=None, keepdims=False)

Maximum of array elements over a given axis or a list of axes

Parameters:
- data (tvm.Tensor) – The input tvm tensor.
- axis (None or int or tuple of int) – Axis or axes along which the maximum is computed. The default, axis=None, reduces over all of the elements of the input array. If axis is negative it counts from the last to the first axis.
- keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns: ret (tvm.Tensor)
topi.sum(data, axis=None, keepdims=False)

Sum of array elements over a given axis or a list of axes

Parameters:
- data (tvm.Tensor) – The input tvm tensor.
- axis (None or int or tuple of int) – Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.
- keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns: ret (tvm.Tensor)
topi.min(data, axis=None, keepdims=False)

Minimum of array elements over a given axis or a list of axes

Parameters:
- data (tvm.Tensor) – The input tvm tensor.
- axis (None or int or tuple of int) – Axis or axes along which the minimum is computed. The default, axis=None, reduces over all of the elements of the input array. If axis is negative it counts from the last to the first axis.
- keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns: ret (tvm.Tensor)
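The axis and keepdims semantics shared by these reductions can be illustrated with a small pure-Python sketch over a 2-D nested list (a reference model of the behavior, not TOPI code):

```python
def sum_2d(data, axis=None, keepdims=False):
    """Reference semantics of a sum reduction over a 2-D nested list."""
    if axis is None:
        total = sum(sum(row) for row in data)
        # keepdims preserves a size-1 dimension for every reduced axis
        return [[total]] if keepdims else total
    if axis == 0:  # reduce over rows, one result per column
        out = [sum(col) for col in zip(*data)]
        return [out] if keepdims else out
    if axis == 1:  # reduce over columns, one result per row
        out = [sum(row) for row in data]
        return [[v] for v in out] if keepdims else out

data = [[1, 2, 3], [4, 5, 6]]
print(sum_2d(data))                         # 21
print(sum_2d(data, axis=0))                 # [5, 7, 9]
print(sum_2d(data, axis=1, keepdims=True))  # [[6], [15]]
```

With keepdims=True the result keeps a dimension of size one where the reduction happened, which is what lets it broadcast correctly against the original input.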
topi.broadcast_to(data, shape)

Broadcast the src to the target shape

Parameters:
- data (tvm.Tensor) – The input tensor.
- shape (list or tuple) – The target shape to broadcast to.

Returns: ret (tvm.Tensor)
topi.broadcast_add(lhs, rhs)

Binary addition with auto-broadcasting.

Parameters:
- lhs (tvm.Tensor) – The left operand.
- rhs (tvm.Tensor) – The right operand.

Returns: ret (tvm.Tensor)
topi.broadcast_sub(lhs, rhs)

Binary subtraction with auto-broadcasting.

Parameters:
- lhs (tvm.Tensor) – The left operand.
- rhs (tvm.Tensor) – The right operand.

Returns: ret (tvm.Tensor)
topi.broadcast_mul(lhs, rhs)

Binary multiplication with auto-broadcasting.

Parameters:
- lhs (tvm.Tensor) – The left operand.
- rhs (tvm.Tensor) – The right operand.

Returns: ret (tvm.Tensor)
topi.broadcast_div(lhs, rhs)

Binary division with auto-broadcasting.

Parameters:
- lhs (tvm.Tensor) – The left operand.
- rhs (tvm.Tensor) – The right operand.

Returns: ret (tvm.Tensor)
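Auto-broadcasting here follows the usual NumPy-style rule: align the two shapes from the right, and each pair of dimensions must either match or one of them must be 1. A small sketch of the shape rule (a hypothetical helper, not a TOPI function):

```python
def broadcast_shape(s1, s2):
    """Result shape of a broadcast binary op over shapes s1 and s2.
    Shorter shape is padded with leading 1s; then each dim pair must be
    equal or contain a 1, and the output takes the larger of the two."""
    s1, s2 = tuple(s1), tuple(s2)
    s1 = (1,) * (len(s2) - len(s1)) + s1  # pad from the left
    s2 = (1,) * (len(s1) - len(s2)) + s2
    out = []
    for a, b in zip(s1, s2):
        if a != b and 1 not in (a, b):
            raise ValueError(f"incompatible dims {a} and {b}")
        out.append(max(a, b))
    return tuple(out)

print(broadcast_shape((4, 1, 3), (2, 3)))  # (4, 2, 3)
print(broadcast_shape((5,), (3, 1)))       # (3, 5)
```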

## topi.nn¶

topi.nn.relu(x)

Take relu of input x.

Parameters: x (tvm.Tensor) – Input argument.

Returns: y (tvm.Tensor) – The result.
topi.nn.leaky_relu(x, alpha)

Take leaky relu of input x.

Parameters:
- x (tvm.Tensor) – Input argument.
- alpha (float) – The slope for the small gradient when x < 0.

Returns: y (tvm.Tensor) – The result.
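The elementwise formulas behind these two ops are simple enough to state as plain-Python scalar functions (illustrative only):

```python
def relu(x):
    """Elementwise relu: max(x, 0)."""
    return max(x, 0.0)

def leaky_relu(x, alpha):
    """Elementwise leaky relu: x when x > 0, otherwise alpha * x,
    so negative inputs keep a small gradient instead of going flat."""
    return x if x > 0 else alpha * x

print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(leaky_relu(3.0, 0.1))    # 3.0
print(leaky_relu(-2.0, 0.1))   # about -0.2
```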
topi.nn.dilate(data, strides, name='DilatedInput')

Dilate data with zeros.

Parameters:
- data (tvm.Tensor) – n-D, can be any layout.
- strides (list/tuple of n ints) – Dilation stride on each dimension, 1 means no dilation.
- name (str, optional) – The name prefix for generated operators.

Returns: Output (tvm.Tensor) – n-D, the same layout as data.
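Dilation inserts stride−1 zeros between neighboring elements along a dimension, growing a size-n axis to (n−1)·stride+1. A one-dimensional sketch in plain Python (illustrative, not TOPI code):

```python
def dilate_1d(data, stride):
    """Insert stride-1 zeros between elements.
    A length-n input becomes length (n-1)*stride + 1."""
    if stride == 1:
        return list(data)
    out = [0] * ((len(data) - 1) * stride + 1)
    for i, v in enumerate(data):
        out[i * stride] = v
    return out

print(dilate_1d([1, 2, 3], 2))  # [1, 0, 2, 0, 3]
print(dilate_1d([1, 2], 3))     # [1, 0, 0, 2]
```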
topi.nn.conv2d_nchw(Input, Filter, stride, padding, out_dtype='float32')

Convolution operator in NCHW layout.

Parameters:
- Input (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
- Filter (tvm.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]
- stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
- padding (int or str) – Padding size, or ['VALID', 'SAME']

Returns: Output (tvm.Tensor) – 4-D with shape [batch, out_channel, out_height, out_width]
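The output spatial size can be computed per dimension from the input size, kernel size, stride, and padding. The sketch below uses the common conventions: floor division for an explicit integer pad, and the TensorFlow-style meanings of 'SAME' (output = ceil(input / stride)) and 'VALID' (no padding); it assumes TOPI follows these standard formulas:

```python
import math

def conv2d_out_size(in_size, kernel, stride, padding):
    """Output size along one spatial dimension of a 2-D convolution.
    padding is an int (explicit pad on both sides), 'SAME', or 'VALID'."""
    if padding == "SAME":
        return math.ceil(in_size / stride)
    if padding == "VALID":
        return (in_size - kernel) // stride + 1
    return (in_size + 2 * padding - kernel) // stride + 1

# a 3x3 kernel with stride 1 and pad 1 preserves the spatial size
print(conv2d_out_size(224, 3, 1, 1))       # 224
print(conv2d_out_size(224, 7, 2, "SAME"))  # 112
print(conv2d_out_size(224, 7, 2, "VALID")) # 109
```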
topi.nn.conv2d_hwcn(Input, Filter, stride, padding, out_dtype='float32')

Convolution operator in HWCN layout.

Parameters:
- Input (tvm.Tensor) – 4-D with shape [in_height, in_width, in_channel, batch]
- Filter (tvm.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
- stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
- padding (int or str) – Padding size, or ['VALID', 'SAME']

Returns: output (tvm.Tensor) – 4-D with shape [out_height, out_width, out_channel, batch]
topi.nn.depthwise_conv2d_nchw(Input, Filter, stride, padding, out_dtype='float32')

Depthwise convolution nchw forward operator.

Parameters:
- Input (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
- Filter (tvm.Tensor) – 4-D with shape [in_channel, channel_multiplier, filter_height, filter_width]
- stride (tuple of two ints) – The spatial stride along height and width
- padding (int or str) – Padding size, or ['VALID', 'SAME']

Returns: Output (tvm.Tensor) – 4-D with shape [batch, out_channel, out_height, out_width]
topi.nn.depthwise_conv2d_nhwc(Input, Filter, stride, padding)

Depthwise convolution nhwc forward operator.

Parameters:
- Input (tvm.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
- Filter (tvm.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]
- stride (tuple of two ints) – The spatial stride along height and width
- padding (int or str) – Padding size, or ['VALID', 'SAME']

Returns: Output (tvm.Tensor) – 4-D with shape [batch, out_height, out_width, out_channel]
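In a depthwise convolution each input channel is convolved with its own channel_multiplier filters, so out_channel = in_channel * channel_multiplier. The shape arithmetic for the NCHW variant with an explicit integer pad can be sketched as follows (a hypothetical helper, assuming the standard floor-division output-size formula):

```python
def depthwise_conv2d_nchw_shape(input_shape, filter_shape, stride, pad):
    """Output shape for a depthwise conv2d in NCHW layout with int padding."""
    batch, in_c, in_h, in_w = input_shape
    f_c, mult, f_h, f_w = filter_shape
    assert f_c == in_c, "filter channel count must match input channels"
    s_h, s_w = stride
    out_h = (in_h + 2 * pad - f_h) // s_h + 1
    out_w = (in_w + 2 * pad - f_w) // s_w + 1
    # every input channel produces `mult` output channels
    return (batch, in_c * mult, out_h, out_w)

print(depthwise_conv2d_nchw_shape((1, 32, 112, 112), (32, 1, 3, 3), (1, 1), 1))
# (1, 32, 112, 112)
print(depthwise_conv2d_nchw_shape((1, 32, 112, 112), (32, 2, 3, 3), (2, 2), 1))
# (1, 64, 56, 56)
```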

## topi.generic¶

Generic declaration and schedules.

This is the recommended way of using the TOPI API. To use the generic schedule functions, the user must set the current target scope using a with block. See also tvm.target.

Example

```python
# create schedule that dispatches to topi.cuda.schedule_injective
with tvm.target.create("cuda"):
    s = topi.generic.schedule_injective(outs)
```
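The idea behind this target-scoped dispatch can be mimicked in plain Python. The sketch below illustrates the pattern only; the names (target_scope, register, the string-returning schedules) are invented for the example and this is not TVM's actual implementation:

```python
from contextlib import contextmanager

_target_stack = ["generic"]   # innermost entry is the active target
_registry = {}                # (op_name, target) -> schedule function

@contextmanager
def target_scope(name):
    """Stand-in for entering `with tvm.target.create(name):`."""
    _target_stack.append(name)
    try:
        yield
    finally:
        _target_stack.pop()

def register(op_name, target):
    """Register a target-specific schedule for an op."""
    def deco(fn):
        _registry[(op_name, target)] = fn
        return fn
    return deco

def schedule_injective(outs):
    """Dispatch on the innermost active target, falling back to generic."""
    target = _target_stack[-1]
    fn = _registry.get(("injective", target)) or _registry[("injective", "generic")]
    return fn(outs)

@register("injective", "generic")
def _schedule_injective_generic(outs):
    return "generic schedule"

@register("injective", "cuda")
def _schedule_injective_cuda(outs):
    return "cuda schedule"

with target_scope("cuda"):
    print(schedule_injective([]))  # cuda schedule
print(schedule_injective([]))      # generic schedule
```

Inside the with block the call resolves to the cuda-specific schedule; outside it, the generic fallback is used, which is why code written against topi.generic stays portable across targets.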

topi.generic.schedule_conv2d_nchw(outs)

Schedule for conv2d_nchw

Parameters: outs (Array of Tensor) – The computation graph description of conv2d_nchw in the format of an array of tensors.

Returns: sch (Schedule) – The computation schedule for the op.
topi.generic.schedule_depthwise_conv2d_nchw(outs)

Schedule for depthwise_conv2d_nchw

Parameters: outs (Array of Tensor) – The computation graph description of depthwise_conv2d_nchw in the format of an array of tensors.

Returns: sch (Schedule) – The computation schedule for the op.
topi.generic.schedule_reduce(outs)

Schedule for reduction

Parameters: outs (Array of Tensor) – The computation graph description of reduce in the format of an array of tensors.

Returns: sch (Schedule) – The computation schedule for the op.
topi.generic.schedule_broadcast(outs)

Schedule for injective op.

Parameters: outs (Array of Tensor) – The computation graph description of broadcast in the format of an array of tensors.

Returns: sch (Schedule) – The computation schedule for the op.
topi.generic.schedule_injective(outs)

Schedule for injective op.

Parameters: outs (Array of Tensor) – The computation graph description of injective in the format of an array of tensors.

Returns: sch (Schedule) – The computation schedule for the op.