Huge idea: what if tensors are the next-generation replacement for RAM? Classic RAM (I'm talking about the software abstraction, not the physical hardware) is just a vector with 2^64 cells, most of which are zero and not backed by physical memory. This is commonly known as a sparse vector. The current AI boom has made it obvious that higher-dimensional memory chunks, known as tensors, are an important idea, especially sparse ones. Other than being higher-dimensional, key differences between tensors and RAM include:
• An AI app will typically work with multiple tensors, but a classical app will only work with one RAM. (Though Wasm can have multiple RAMs, known as "linear memories", and of course, you can pretend to have multiple memories using abstractions like malloc).
• Tensors can be subjected to unary operations such as slicing, permuting, and aggregation (min, max, sum, product), which generalize the boring read and write operations on RAM.
• Tensors can be subjected to binary operations such as multiplication/contraction (generalizing matrix multiplication), convolution, and element-wise addition. (Both kinds of operation are sketched in code after this list.)
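To make the bullets above concrete, here's a minimal NumPy sketch; NumPy is just an illustrative stand-in (any tensor library exposes the same operations), and the shapes and values are made up:

```python
import numpy as np

# A small dense 3-D tensor standing in for a chunk of "memory".
t = np.arange(24).reshape(2, 3, 4)

# Unary operations -- generalized reads on RAM.
sliced   = t[0, :, 1:3]          # slicing: read a whole sub-block, not just one cell
permuted = t.transpose(2, 0, 1)  # permuting: reorder the axes
sums     = t.sum(axis=1)         # aggregation: collapse an axis (min/max/prod work the same way)

# Binary operations.
a = np.ones((2, 3, 4))
b = np.ones((4, 5))
contracted  = np.tensordot(a, b, axes=([2], [0]))  # contraction over the shared axis; generalizes matmul
elementwise = t + t                                # element-wise addition
convolved   = np.convolve([1, 2, 3], [0, 1, 0.5])  # 1-D convolution
```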
The data of everyday programs is often very heterogeneous, and encoding heterogeneous data as tensors tends to produce lots of sparse ones. Sparse tensors need good support in software and ideally in hardware. Thankfully, AI hardware is being developed that is designed to operate on sparse tensors, by way of dedicated circuits that can compress and decompress them.
Tenstorrent is probably the leader here.
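On the software side, the core idea is simply to store only the nonzero entries. Here's a toy sketch of the COO (coordinate) layout; real libraries such as scipy.sparse or torch.sparse do this properly, but the principle is the same:

```python
import numpy as np

# COO (coordinate) layout: keep only the coordinates and values of nonzero cells.
# Conceptually the same trick an OS plays when it leaves untouched parts of the
# 2^64-cell address space unbacked by physical memory.
shape  = (10**6, 10**6, 10**6)                   # logical size: 10^18 cells
coords = np.array([[3, 141, 59], [2, 718, 28]])  # indices of the two nonzero cells
values = np.array([1.5, -2.0])                   # their values

def read(idx):
    """Generalized RAM read: look up a cell, defaulting to zero."""
    for c, v in zip(coords, values):
        if tuple(c) == idx:
            return v
    return 0.0

print(read((3, 141, 59)))  # 1.5
print(read((0, 0, 0)))     # 0.0
```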
Here's a fun fact: multiplication of sparse Boolean tensors is equivalent to a database equi-join. So if you think databases are important, then maybe you should give tensors some thought.
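To see why, encode each table as a Boolean matrix indexed by (row, join key); multiplying the matrices (AND in place of ×, OR in place of +) contracts away the shared key, which is exactly the equi-join projected onto the remaining columns. A toy sketch, with made-up table names and contents:

```python
import numpy as np

# Two tiny relations, encoded as Boolean matrices indexed by (row, join key).
# works_in[e, d]   = True iff employee e works in department d
# located_in[d, c] = True iff department d is located in city c
works_in = np.array([[1, 0, 0],    # employee 0 -> dept 0
                     [0, 1, 0],    # employee 1 -> dept 1
                     [0, 1, 0]],   # employee 2 -> dept 1
                    dtype=bool)
located_in = np.array([[1, 0],     # dept 0 -> city 0
                       [0, 1],     # dept 1 -> city 1
                       [1, 0]],    # dept 2 -> city 0
                      dtype=bool)

# Contracting over the shared "department" axis tells us which (employee, city)
# pairs are connected by some department -- the equi-join on department,
# projected down to (employee, city).
counts = works_in.astype(int) @ located_in.astype(int)
joined = counts.astype(bool)
print(joined)
# [[ True False]
#  [False  True]
#  [False  True]]
```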
And relatedly: operations on tensors are typically massively parallelizable, so they could be a good foundation for a high-performance programming language that compiles to AI hardware.