Type inference is implemented through the two classes
`ForwardTypeInferenceAnalysis` and `BackwardTypeInferenceAnalysis`,
which can be used as forward and backward dataflow analyses with the
MLIR sparse dataflow analysis framework.
Both classes rely on a type resolver: a class inheriting from
`TypeResolver` that specifies which types are to be considered
unresolved and that resolves the actual types of the values related
to an operation based on the previous state of type inference.
The type inference state for an operation is represented by an
instance of the class `LocalInferenceState`, which maps the values
related to an operation to instances of `InferredType` (either
indicating the inferred type as an `mlir::Type` or indicating that no
type has been inferred yet).
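As a rough illustration, the sketch below shows how a resolver and the two analyses might be wired into MLIR's dataflow solver. The resolver subclass, its method names, and the constructor arguments of `ForwardTypeInferenceAnalysis` and `BackwardTypeInferenceAnalysis` are assumptions for the purpose of the example, not the actual interface.

```cpp
#include "mlir/Analysis/DataFlowFramework.h"
#include "mlir/Analysis/DataFlow/DeadCodeAnalysis.h"

// Hypothetical resolver: the exact TypeResolver interface is an assumption.
// It declares which types count as unresolved and recomputes the types of
// the values related to `op` from the previous inference state.
class MyTypeResolver : public TypeResolver {
public:
  // Assumed predicate; `MyDialect::UnresolvedType` is a placeholder.
  bool isUnresolvedType(mlir::Type t) const {
    return mlir::isa<MyDialect::UnresolvedType>(t);
  }

  // Assumed resolution hook: maps the values related to `op` to
  // `InferredType` instances based on the previous state.
  LocalInferenceState resolve(mlir::Operation *op,
                              const LocalInferenceState &prevState) {
    LocalInferenceState newState = prevState;
    // ... apply the type constraints relevant for `op` here ...
    return newState;
  }
};

// Running both analyses; the load arguments are assumptions. A backward
// sparse analysis in upstream MLIR typically also needs a
// SymbolTableCollection, omitted here for brevity.
mlir::LogicalResult runTypeInference(mlir::Operation *root) {
  MyTypeResolver resolver;
  mlir::DataFlowSolver solver;
  // The sparse framework needs liveness information to visit operations.
  solver.load<mlir::dataflow::DeadCodeAnalysis>();
  solver.load<ForwardTypeInferenceAnalysis>(resolver);
  solver.load<BackwardTypeInferenceAnalysis>(resolver);
  return solver.initializeAndRun(root);
}
```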
The local type inference performed by a type resolver can be
implemented with type constraints (instances of subclasses of
`TypeConstraint`), which can be combined into a `TypeConstraintSet`.
The latter provides a function that repeatedly applies the constraints
until the resulting type inference state converges.
There are multiple predefined type constraint classes for common
constraints (e.g., that two values must have the same type or the same
element type). These exist both as static and as dynamic constraints.
Some predefined type constraints depend on a class that yields the
pair of values to which the constraint is applied (e.g., two operands,
or an operand and a result).
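A minimal sketch of how a resolver's local inference could be expressed with such constraints is given below. The names of the predefined constraint classes, of the pair-yielding helpers, and of the methods on `TypeConstraintSet` are assumptions chosen for illustration, not the actual API.

```cpp
// Hypothetical sketch of local inference via type constraints.
LocalInferenceState resolveWithConstraints(mlir::Operation *op,
                                           const LocalInferenceState &prev) {
  TypeConstraintSet constraints;

  // Assumed predefined constraint: operand 0 and result 0 must have the
  // same element type. `OperandAndResult` stands in for one of the classes
  // that yield the pair of values a constraint is applied to.
  constraints.addConstraint<SameElementTypeConstraint<OperandAndResult<0, 0>>>();

  // Assumed predefined constraint: operands 0 and 1 must have the same type.
  constraints.addConstraint<SameTypeConstraint<TwoOperands<0, 1>>>();

  LocalInferenceState result = prev;
  // Applies the constraints repeatedly until the inferred types no longer
  // change (the name of the convergence function is an assumption).
  constraints.converge(op, result);
  return result;
}
```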
The `TypeInference` dialect provides three operations.
The operation `TypeInference.propagate_downwards` represents a type
barrier, which is supposed to forward the type of its operand as its
result type during type inference.
The operation `TypeInference.propagate_upwards` also represents a
type barrier, but is supposed to forward the type of its result as the
type of its operand during type inference.
The operation `TypeInference.unresolved_conflict` can be used as a
marker when two different types have been inferred for a value (e.g.,
one type during the forward dataflow analysis and the other during the
backward dataflow analysis).
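The following sketch shows how a pass materializing the inference results might insert such a conflict marker. The C++ op class name `TypeInference::UnresolvedConflictOp` and its builder signature follow MLIR's usual naming conventions and are assumptions; the ops for the two type barriers would be created analogously.

```cpp
#include "mlir/IR/Builders.h"

// Hypothetical helper: if forward and backward inference disagree on the
// type of `value`, keep the value but mark the disagreement explicitly so
// that a later pass (or the user) can resolve it.
mlir::Value markConflict(mlir::OpBuilder &builder, mlir::Location loc,
                         mlir::Value value, mlir::Type forwardType,
                         mlir::Type backwardType) {
  if (forwardType == backwardType)
    return value;

  // Assumed generated op class and builder for
  // `TypeInference.unresolved_conflict`.
  return builder.create<TypeInference::UnresolvedConflictOp>(
      loc, /*resultType=*/forwardType, value);
}
```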
Some of the tests use lookup tables whose number of elements does not
match the polynomial size of the bootstrap operations they are passed
to. This commit replaces these lookup tables with tables of the right
size.
In the optimizer, nodes without consumers are identified as outputs.
Since we can now return multiple values, this is inherently buggy: a
value can be both returned and consumed by another node.
This commit fixes this by allowing the compiler to explicitly tag
nodes as outputs.