The last decade has seen incredible progress in machine learning (ML), largely driven by powerful neural network architectures and the algorithms used to train them. However, despite the success of large language models (LLMs), a few fundamental challenges persist, especially around continual learning: the ability of a model to actively acquire new knowledge and skills over time without forgetting old ones.
When it comes to continual learning and self-improvement, the human brain is the gold standard. It adapts through neuroplasticity, the remarkable capacity to change its structure in response to new experiences, memories, and learning. Without this ability, a person is limited to their immediate context (as in anterograde amnesia). We see a similar limitation in current LLMs: their knowledge is confined to either the immediate context of their input window or the static information they learn during pre-training.
The simple approach, continually updating a model's parameters with new data, often leads to "catastrophic forgetting" (CF), where learning new tasks sacrifices proficiency on old ones. Researchers traditionally combat CF through architectural tweaks or better optimization rules. However, for too long we have treated the model's architecture (the network structure) and the optimization algorithm (the training rule) as two separate problems, which prevents us from achieving a truly unified, efficient learning system.
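To make the phenomenon concrete, here is a toy illustration of catastrophic forgetting, reduced to a single scalar parameter trained with plain gradient descent. This is not the paper's setup; "task A" and "task B" are just two squared-loss objectives with conflicting targets.

```python
# Toy catastrophic-forgetting demo: one parameter w, two "tasks"
# defined by squared losses pulling w toward different targets.

def grad(w, target):
    # Gradient of the squared loss (w - target)**2 with respect to w.
    return 2.0 * (w - target)

def train(w, target, steps=100, lr=0.1):
    # Plain gradient descent toward the task's target.
    for _ in range(steps):
        w -= lr * grad(w, target)
    return w

def loss(w, target):
    return (w - target) ** 2

w = 0.0
w = train(w, target=1.0)        # learn "task A" (wants w = 1)
loss_a_before = loss(w, 1.0)    # essentially zero: task A is learned

w = train(w, target=-1.0)       # now learn "task B" (wants w = -1)
loss_a_after = loss(w, 1.0)     # task A performance collapses

print(loss_a_before, loss_a_after)
```

Training on task B overwrites the only parameter task A relied on, so task A's loss jumps from near zero to roughly 4. Real networks have many parameters, but sequential fine-tuning produces the same effect at scale.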
In our paper, "Nested Learning: The Illusion of Deep Learning Architectures", published at NeurIPS 2025, we introduce Nested Learning, which bridges this gap. Nested Learning treats a single ML model not as one continuous process, but as a system of interconnected, multi-level learning problems that are optimized simultaneously. We argue that the model's architecture and the rules used to train it (i.e., the optimization algorithm) are fundamentally the same concepts; they are simply different "levels" of optimization, each with its own internal flow of information ("context flow") and update rate. By recognizing this inherent structure, Nested Learning provides a new, previously invisible dimension for designing more capable AI, allowing us to build learning components with greater computational depth, which ultimately helps solve issues like catastrophic forgetting.
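The idea of nested optimization levels with distinct update rates can be sketched in miniature. The following is a hedged toy, not the paper's algorithm or the Hope architecture: an inner "fast" level updates a model parameter on every sample, while an outer "slow" level revisits the inner level's learning rate only every K steps, based on its own recent-loss context.

```python
# Minimal two-level nested-optimization sketch (illustrative only).
# Inner level: adapts parameter w every step from the data stream.
# Outer level: every K steps, adjusts the inner learning rate using
# a summary of recent losses -- a slower, coarser "context flow".

def inner_grad(w, x, y):
    # Squared-error gradient for the 1-D linear model y_hat = w * x.
    return 2.0 * (w * x - y) * x

K = 10                 # outer level updates 10x less often than inner
w, lr = 0.0, 0.5
recent_losses = []

# Data stream drawn from the target function y = 3x.
stream = [(x, 3.0 * x) for x in [0.1, 0.5, 1.0, -0.3, 0.8] * 20]

for step, (x, y) in enumerate(stream, 1):
    # Inner (fast) level: gradient step on every sample.
    w -= lr * inner_grad(w, x, y)
    recent_losses.append((w * x - y) ** 2)

    # Outer (slow) level: intervene only once per K inner steps,
    # halving the learning rate if loss failed to improve.
    if step % K == 0:
        if recent_losses[-1] > recent_losses[0]:
            lr *= 0.5
        recent_losses = []

print(round(w, 3))  # → 3.0
```

The point of the sketch is structural: the two loops are both optimizers, distinguished only by what information they see and how often they update, which mirrors the claim that architecture and training rule are levels of one nested system.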
We test and validate Nested Learning through a proof-of-concept, self-modifying architecture that we call "Hope", which achieves superior performance in language modeling and demonstrates better long-context memory management than existing state-of-the-art models.

