
### Interpreter

An interpreter takes a tensorflow or pytorch model and figures out how to map its various ops. Their attributes and op names are mapped to libnd4j using information from the op descriptor described above.

An interpreter can take in an individual op from tensorflow, onnx or another framework and translate it to the equivalent op in libnd4j, represented as an op descriptor.

The usage is as follows:

**Op def files**

For each supported framework (tensorflow and onnx), we have built-in definition files.

For onnx, we have an onnx.pbtxt generated by the dl4j-dev-tools submodule onnx-defs; this definition file has each op serialized as an onnx NodeProto. For tensorflow, we have an ops.proto pulled from tensorflow's official repo.

We use these files to map the operation attributes serialized by nd4j's op definition generation tool (found in dl4j-dev-tools) to their equivalents in tensorflow and onnx.
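For illustration only, a minimal sketch of how such a definition file could be loaded is shown below. It assumes the tensorflow proto Java bindings (`org.tensorflow.framework.OpList`) and protobuf's `TextFormat`; the actual tooling in dl4j-dev-tools may differ.

```java
import com.google.protobuf.TextFormat;
import org.tensorflow.framework.OpDef;
import org.tensorflow.framework.OpList;

import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class OpDefLoader {

    // Parse tensorflow's ops.proto (protobuf text format) into an index of op name -> OpDef.
    public static Map<String, OpDef> loadTensorflowOps(String path) throws Exception {
        OpList.Builder builder = OpList.newBuilder();
        try (Reader reader = Files.newBufferedReader(Paths.get(path))) {
            TextFormat.merge(reader, builder);
        }
        Map<String, OpDef> byName = new HashMap<>();
        for (OpDef op : builder.getOpList()) {
            byName.put(op.getName(), op);
        }
        return byName;
    }
}
```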

An interpreter has 2 methods:

```java
Interpreter interpreter = ...;

OpDescriptor tfDescriptor = interpreter.interpretTensorflow(nodeFromOtherFramework);
OpDescriptor onnxDescriptor = interpreter.interpretOnnx(nodeFromOtherFramework);

// proceed to use the descriptor to map for model import...
```
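To make that contract concrete, here is a minimal sketch of what the interface behind the snippet could look like. The generic parameters stand in for the frameworks' node protobuf types (tensorflow's NodeDef, onnx's NodeProto); the exact signatures are assumptions, not the actual interface.

```java
// Minimal sketch, assuming OpDescriptor from the snippet above.
// TF_NODE and ONNX_NODE stand in for the frameworks' node protobuf types.
public interface Interpreter<TF_NODE, ONNX_NODE> {

    // Translate a single tensorflow node into the equivalent libnd4j op descriptor.
    OpDescriptor interpretTensorflow(TF_NODE node);

    // Translate a single onnx node into the equivalent libnd4j op descriptor.
    OpDescriptor interpretOnnx(ONNX_NODE node);
}
```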

## Interpreter file format

An interpreter is language neutral. We have a mini syntax for mapping attributes from one format to another.

By indexing every attribute and input/output in libnd4j, we can maintain an index of operation names and attributes that the mapping syntax refers to. To map a trivial operation such as Floor, let's compare tensorflow, onnx and the descriptor in nd4j.

Tensorflow:

```
op {
  name: "Floor"
  input_arg {
    name: "x"
    type_attr: "T"
  }
  output_arg {
    name: "y"
    type_attr: "T"
  }
  attr {
    name: "T"
    type: "type"
    allowed_values {
      list {
        type: DT_BFLOAT16
        type: DT_HALF
        type: DT_FLOAT
        type: DT_DOUBLE
      }
    }
  }
}
```

Onnx:

input: "X"
output: "Y"
name: "Floor"
op_type: "Floor"
attribute {
  name: "X-types"
  strings: "float"
  strings: "float16"
  strings: "double"
  type: STRINGS
}
doc_string: "\nFloor takes one input data (Tensor<T>) and produces one output data\n(Tensor<T>) where the floor is, y = floor(x), is applied to\nthe tensor elementwise.\n"

The op descriptor for libnd4j is:

```
OpDeclarationDescriptor(name=Floor, nIn=1, nOut=1, tArgs=0, iArgs=0, inplaceAble=true, inArgNames=[first], outArgNames=[z], tArgNames=[], iArgNames=[], bArgNames=[], opDeclarationType=OP_IMPL)
```

Floor is a fairly simple op with 1 input and 1 output. Inputs and outputs are implicitly tensors. This is true for both onnx and tensorflow.

Tensorflow declares an attribute listing the valid types. In the onnx schema proto we generated, there is an equivalent attribute that presents the list of allowed types as strings.
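As a sketch of how the tensorflow side of this can be read back out of the parsed op definition (assuming the `org.tensorflow.framework` proto bindings), the helper below returns the allowed types of a type attr such as "T"; for Floor above that would be DT_BFLOAT16, DT_HALF, DT_FLOAT and DT_DOUBLE.

```java
import org.tensorflow.framework.DataType;
import org.tensorflow.framework.OpDef;

import java.util.Collections;
import java.util.List;

public class AllowedTypes {

    // Read the allowed types declared on a tensorflow type attr such as "T".
    public static List<DataType> tensorflowAllowedTypes(OpDef opDef, String attrName) {
        for (OpDef.AttrDef attr : opDef.getAttrList()) {
            if (attr.getName().equals(attrName) && attr.hasAllowedValues()) {
                // corresponds to allowed_values { list { type: ... } } in the op def text above
                return attr.getAllowedValues().getList().getTypeList();
            }
        }
        return Collections.emptyList();
    }
}
```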

Mapping a descriptor happens attribute by attribute. An example for Floor is shown below:

```
floor {
  tensorflow_mapping {
    input_mappings {
      input_mapping {
        first: "x"
      }
    }
    output_mappings {
      z: "y"
    }
    attribute_mapping_functions {

    }
  }
  onnx_mapping {
    input_mappings {
      first: "X"
    }
    output_mappings {
      z: "Y"
    }
    attribute_mapping_functions {

    }
  }
}
```
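To spell out what the floor mapping above does, the sketch below applies its input mapping by hand. The maps are hypothetical stand-ins for the parsed mapping declaration, not the real import code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FloorMappingExample {
    public static void main(String[] args) {
        // Hypothetical, hand-built view of the tensorflow_mapping block for floor:
        // libnd4j argument name -> tensorflow input/output name.
        Map<String, String> inputMappings = new LinkedHashMap<>();
        inputMappings.put("first", "x");
        Map<String, String> outputMappings = new LinkedHashMap<>();
        outputMappings.put("z", "y");

        // Inputs of an incoming tensorflow Floor node, keyed by the names from its op def.
        Map<String, String> tfInputs = Map.of("x", "my_graph/input:0");

        // Rename the tensorflow inputs to the argument names the libnd4j descriptor expects.
        Map<String, String> libnd4jInputs = new LinkedHashMap<>();
        inputMappings.forEach((nd4jName, tfName) -> libnd4jInputs.put(nd4jName, tfInputs.get(tfName)));

        System.out.println(libnd4jInputs);   // {first=my_graph/input:0}
        System.out.println(outputMappings);  // {z=y}: tensorflow output "y" feeds libnd4j output arg "z"
    }
}
```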

Now we can compare this to Convolution. In tensorflow, the convolution op is represented as:

```
op {
  name: "Conv2D"
  input_arg {
    name: "input"
    type_attr: "T"
  }
  input_arg {
    name: "filter"
    type_attr: "T"
  }
  output_arg {
    name: "output"
    type_attr: "T"
  }
  attr {
    name: "T"
    type: "type"
    allowed_values {
      list {
        type: DT_HALF
        type: DT_BFLOAT16
        type: DT_FLOAT
        type: DT_DOUBLE
      }
    }
  }
  attr {
    name: "strides"
    type: "list(int)"
  }
  attr {
    name: "use_cudnn_on_gpu"
    type: "bool"
    default_value {
      b: true
    }
  }
  attr {
    name: "padding"
    type: "string"
    allowed_values {
      list {
        s: "SAME"
        s: "VALID"
      }
    }
  }
  attr {
    name: "data_format"
    type: "string"
    default_value {
      s: "NHWC"
    }
    allowed_values {
      list {
        s: "NHWC"
        s: "NCHW"
      }
    }
  }
  attr {
    name: "dilations"
    type: "list(int)"
    default_value {
      list {
        i: 1
        i: 1
        i: 1
        i: 1
      }
    }
  }
}
```

In onnx, it's represented as:

input: "X"
input: "W"
input: "B"
output: "Y"
name: "Conv"
op_type: "Conv"
attribute {
  name: "auto_pad"
  s: "NOTSET"
  type: STRING
}
attribute {
  name: "dilations"
  s: ""
  type: INTS
}
attribute {
  name: "group"
  i: 1
  type: INT
}
attribute {
  name: "kernel_shape"
  s: ""
  type: INTS
}
attribute {
  name: "pads"
  s: ""
  type: INTS
}
attribute {
  name: "strides"
  s: ""
  type: INTS
}
attribute {
  name: "X-types"
  strings: "double"
  strings: "float"
  strings: "float16"
  type: STRINGS
}
attribute {
  name: "W-types"
  strings: "double"
  strings: "float"
  strings: "float16"
  type: STRINGS
}
attribute {
  name: "B-types"
  strings: "double"
  strings: "float"
  strings: "float16"
  type: STRINGS
}
doc_string: "\nThe convolution operator consumes an input tensor and a filter, and\ncomputes the output."
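Unlike Floor, Conv has real attributes (strides, dilations, padding, group, and so on) that the attribute_mapping_functions block has to convert into the arguments a libnd4j convolution descriptor expects, such as its integer arguments (iArgs). As an illustration only, one such conversion on the tensorflow side could look like the sketch below; the proto bindings and the shape of a "mapping function" here are assumptions, while the real mapping is declared in the mini syntax shown earlier.

```java
import org.tensorflow.framework.AttrValue;
import org.tensorflow.framework.NodeDef;

import java.util.ArrayList;
import java.util.List;

public class ConvAttributeMapping {

    // Illustrative attribute mapping function: extract the "strides" list(int) attribute
    // from a tensorflow Conv2D node so it can be forwarded as integer args (iArgs)
    // of the corresponding libnd4j convolution op descriptor.
    public static List<Long> strides(NodeDef conv2dNode) {
        AttrValue strides = conv2dNode.getAttrMap().get("strides");
        if (strides == null) {
            return new ArrayList<>(); // attribute not present on this node
        }
        return new ArrayList<>(strides.getList().getIList());
    }
}
```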