Software is the new hardware, LLMs are the new software

I’m seeing this pattern emerge in application codebases that extensively use LLMs: the software we write acts as a sort of frame, whereas the most “flexible” parts of the application are directly handled by LLM prompts and responses.

Non-LLM software essentially plays three roles: 1) providing proper context to the LLMs; 2) validating LLM output; 3) transferring that information to the interface or the database.
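A minimal sketch of these three roles, assuming a hypothetical `call_llm` function (stubbed here so the example runs without a model provider) and an invented sentiment-classification task:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call; in a real application this would be an API
    # request to a model provider. Stubbed with a canned reply.
    return '{"sentiment": "positive", "score": 0.93}'

def build_prompt(review: str) -> str:
    # Role 1: provide proper context to the LLM.
    return (
        "Classify the sentiment of the following review. "
        'Reply with JSON: {"sentiment": "positive"|"negative", "score": 0..1}\n'
        f"Review: {review}"
    )

def validate(raw: str) -> dict:
    # Role 2: validate LLM output before it touches the rest of the system.
    data = json.loads(raw)
    if data.get("sentiment") not in ("positive", "negative"):
        raise ValueError("unexpected sentiment label")
    if not 0.0 <= float(data.get("score", -1)) <= 1.0:
        raise ValueError("score out of range")
    return data

def store(record: dict) -> dict:
    # Role 3: transfer the validated result to the interface or database.
    # (A real implementation would INSERT into a table or update the UI.)
    return {"saved": True, **record}

result = store(validate(call_llm(build_prompt("Great product, fast shipping."))))
print(result)
```

The LLM handles the flexible part (interpreting arbitrary text); everything around it is plain deterministic code.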

Non-LLM software is “harder” not in the sense that it is expensive to write or change, but in the sense that it provides the invariants needed to contain and leverage the flexibility of LLMs. The analogy is only partly apt, since software is still software, and LLMs are themselves software!

Having “hard” software is necessary not only for performance reasons (it’s currently prohibitively expensive and slow to run all the deterministic operations of a backend through LLM prompts), but also because LLMs (even at low temperatures) are non-deterministic and therefore less reliable.
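One common way the hard layer copes with this non-determinism is to retry the call until the output passes validation, or fail loudly. A sketch, with an invented `flaky_llm` standing in for a real model that sometimes returns malformed output:

```python
import json

# Simulated non-determinism: first reply is garbage, second is valid.
_responses = iter(['not json at all', '{"answer": 42}'])

def flaky_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return next(_responses)

def ask_with_retries(prompt: str, attempts: int = 3) -> dict:
    last_error = None
    for _ in range(attempts):
        raw = flaky_llm(prompt)
        try:
            data = json.loads(raw)  # deterministic validation step
            if "answer" not in data:
                raise ValueError("missing 'answer' key")
            return data
        except ValueError as exc:   # JSONDecodeError is a ValueError
            last_error = exc        # non-deterministic output: try again
    raise RuntimeError(f"LLM output failed validation: {last_error}")

answer = ask_with_retries("What is 6 * 7? Reply as JSON.")
print(answer)
```

The retry loop itself is boring, deterministic code, which is exactly the point: the hard frame absorbs the model’s unreliability.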

Using deterministic (hard) software to manage and validate LLM results is a powerful combination.