IRs#
Created On: Dec 16, 2025 | Last Updated On: Dec 16, 2025
PyTorch 2.0 offers two sets of IRs for backends to interface with: Core ATen IR and Prims IR.
Core ATen IR#
The Core ATen opset is the core subset of ATen operators that can be used to compose all other operators.
Core ATen IR is fully functional: there are no in-place or _out variants in this opset.
In contrast to Prims IR, Core ATen ops reuse the existing ATen ops defined in “native_functions.yaml”,
and they are not further decomposed into explicit type promotion and broadcasting ops.
This opset is designed to serve as the functional IR for interfacing with backends.
Warning
This opset is still under active development; more ops will be added in the future.
Prims IR#
Prims IR is a set of primitive operators that can be used to compose other operators. Prims IR is a lower-level opset than Core ATen IR, and it further decomposes ops into explicit type promotion and broadcasting ops such as prims.convert_element_type and prims.broadcast_in_dim. This opset is designed to interface with compiler backends.
Warning
This opset is still under active development; more ops will be added in the future.
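As an illustrative sketch (plain eager code, not the actual prims tracing machinery), a single ATen addition hides two steps that Prims IR spells out explicitly:

```python
import torch

# A float32 tensor and an int64 tensor with different shapes.
a = torch.ones(3, 1, dtype=torch.float32)
b = torch.arange(4, dtype=torch.int64)

# One aten op performs implicit type promotion and broadcasting:
out = a + b  # shape (3, 4), dtype float32

# Prims IR makes both steps explicit; roughly equivalent to:
b_promoted = b.to(torch.float32)   # like prims.convert_element_type
a_bc = a.expand(3, 4)              # like prims.broadcast_in_dim
b_bc = b_promoted.expand(3, 4)     # like prims.broadcast_in_dim
out_explicit = torch.add(a_bc, b_bc)

assert torch.equal(out, out_explicit)
```

Compiler backends benefit from this form because promotion and broadcasting become ordinary, analyzable ops rather than implicit semantics of every operator.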
Glossary Terms Demo#
This section demonstrates tooltips for various glossary terms. Hover over the highlighted terms to see their definitions.
Operations#
An Operation is a unit of work in PyTorch. There are different types of operations:
Native Operation: Operations that come natively with PyTorch ATen
Custom Operation: Operations defined by users, usually a Compound Operation
Leaf Operation: Basic operations that always have dispatch functions defined
Compound Operation: Operations composed of other operations (also known as Composite Operation or Non-Leaf Operation)
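A minimal sketch of these distinctions: my_addcmul below is a hypothetical Custom Operation, and it is also a Compound Operation because it is composed entirely of native operations and needs no kernel of its own:

```python
import torch

# Hypothetical user-defined compound operation: it is built purely
# from native ops (torch.mul, torch.add), which are the leaf
# operations that actually dispatch to kernels.
def my_addcmul(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    return torch.add(a, torch.mul(b, c))

x = torch.ones(2)
print(my_addcmul(x, x, x))  # tensor([2., 2.])
```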
Kernels#
A Kernel is the implementation of a PyTorch operation. There are two main types:
Device Kernel: A device-specific kernel of a Leaf Operation
Compound Kernel: A device-agnostic kernel that belongs to a Compound Operation
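A toy Python sketch (not PyTorch's actual dispatcher) of the idea: a Leaf Operation is a single name backed by a table of Device Kernels, and dispatch picks the kernel registered for the device:

```python
# Toy stand-ins for device-specific kernels of one leaf operation.
def add_cpu(a, b):
    # stand-in for a CPU device kernel: computes real values
    return [x + y for x, y in zip(a, b)]

def add_meta(a, b):
    # stand-in for a shape-only "meta" kernel: computes no values
    return [None] * len(a)

KERNELS = {"cpu": add_cpu, "meta": add_meta}

def add(a, b, device="cpu"):
    # dispatch: look up the kernel registered for the target device
    return KERNELS[device](a, b)

print(add([1, 2], [3, 4]))  # [4, 6]
```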
JIT Compilation#
PyTorch supports JIT (Just-In-Time) compilation through TorchScript. There are two main approaches:
Tracing: Using torch.jit.trace on a function to get an executable that can be optimized
Scripting: Using torch.jit.script to inspect source code and compile it as TorchScript code
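Both approaches can be sketched as follows (assuming any PyTorch build with TorchScript support):

```python
import torch

def f(x):
    return x * 2 + 1

# Tracing records the ops executed for one example input.
traced = torch.jit.trace(f, torch.randn(3))

# Scripting compiles the Python source itself, preserving control flow.
scripted = torch.jit.script(f)

x = torch.randn(5)
assert torch.equal(traced(x), scripted(x))
```

For a function like f with no data-dependent control flow, both produce equivalent executables; scripting becomes necessary when branches or loops depend on tensor values, which tracing cannot capture.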
Summary Table#
| Term | Type | Description |
|---|---|---|
| ATen | Library | Foundational tensor library |
| Operation | Concept | Unit of work |
| Kernel | Implementation | What happens when an operation executes |
| JIT | Technique | Just-In-Time Compilation |
| TorchScript | Interface | JIT compiler and interpreter |
Intersphinx References#
Note: Intersphinx tooltips only work with documentation hosted on Read the Docs that has the embed API enabled. Most external documentation sites (docs.python.org, docs.pytorch.org, numpy.org) do not support this feature.
The following are standard intersphinx links (clickable but without tooltips):
torch.Tensor - The main tensor class
torch.zeros() - Create a tensor of zeros
list - Python’s built-in list type