Transcending the von Neumann Architecture?
Note: This was originally written in response to an assignment for my course on the Collective Dynamics of Complex Systems at Binghamton University.
John von Neumann's Theory of Self-Reproducing Automata (1966), based on lectures presented in 1949, represents a fascinating glimpse into the mindset of a founding father of modern computing [1]. The computers we rely on today are grounded in the von Neumann architecture, whose defining feature is the separation of processing (handled by CPUs) and memory (handled by RAM).
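As a concrete (and entirely toy) illustration of that separation, here is a sketch of a stored-program machine in Python: one memory array holds both instructions and data, and a separate processor loop fetches everything from it one word at a time. The opcodes and memory layout are invented for illustration, not drawn from von Neumann's text.

```python
# Toy stored-program machine: instructions and data share one memory
# list, and the processing loop fetches everything through it.

def run(memory):
    pc = 0          # program counter: an index into the shared memory
    acc = 0         # single accumulator register
    while True:
        op, arg = memory[pc]          # fetch the next instruction from memory
        if op == "LOAD":              # read a data word from memory
            acc = memory[arg]
        elif op == "ADD":             # add a data word to the accumulator
            acc += memory[arg]
        elif op == "STORE":           # write the accumulator back to memory
            memory[arg] = acc
        elif op == "HALT":
            return memory
        pc += 1

# The program occupies cells 0-3; its data lives in cells 4-6 of the same list.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(mem)[6])   # -> 5
```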
While the text explores many compelling themes, including von Neumann's systematic comparison of organic nervous systems and artificial computing systems, I'd like to focus on one particularly thought-provoking statement from page 912:
"Appealing to the living world doesn't help us when we don't understand enough about how natural organisms function."
The statement arises in the context of von Neumann's exploration of an apparent contradiction between complication and self-reproduction. In living systems, "simple entities surrounded by an unliving amorphous milieu, produce something more complicated." This suggests that complication is a generative property enabling simple parts to create more complex wholes.
However, in artificial systems, he observes the opposite: using the example of machine tools, he argues that any organization creating something new must be more complicated than what it creates. This implies that complication is a degenerative property that limits the creation of more complicated things.
Given that we understand man-made automata completely (we built them, after all) while our grasp of biological systems remains limited, von Neumann advocates focusing our attention on artificial systems to advance our understanding of self-reproduction. He then demonstrates how to construct formal systems capable of creating objects more complicated than themselves.
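Von Neumann's actual construction is a cellular automaton containing a universal constructor, which is far beyond a blog post, but a familiar software cousin of the core trick (a machine that carries a description of itself and uses it to make a copy) is the quine. A two-line Python sketch, my own illustration rather than anything from the book:

```python
# The two lines below form a quine: the program carries a description
# of itself (the string s) and applies it to itself, printing an exact
# copy of its own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```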
My question is: to what extent does von Neumann's statement that "appealing to the living world doesn't help us" still hold today?
We've made remarkable advances in biology and neuroscience since 1949, mapping the human genome and developing sophisticated tools for understanding brain structure and dynamics. As of yesterday, we've used AI to (allegedly) "write whole chromosomes and small genomes from scratch" [2]. Still, no reasonable person would claim we fully understand phenomena like life or consciousness.
Are we ready to design and make practical use of more organismic neuromorphic computers that transcend the von Neumann architecture? [3, 4]
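For a sense of what the basic building block of such hardware looks like, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of event-driven, state-carrying unit neuromorphic designs typically implement; the parameters are illustrative and not taken from any particular chip.

```python
# Leaky integrate-and-fire neuron: the membrane potential (state) lives
# alongside the computation rather than in a separate memory, and the
# unit communicates only through discrete spike events.
def lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v += dt / tau * (-v + i)        # leaky integration of the input
        if v >= v_thresh:               # threshold crossed: fire and reset
            spikes.append(t)
            v = v_reset
    return spikes

print(lif([1.5] * 100))   # constant drive -> a regular spike train
```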
It’s a fascinating time to be alive: the tools developed by scientists who had to engineer around our limited understanding of organic life are now being wielded to accelerate that very understanding at a breathtaking pace, likely leading to emergent outcomes that none of us can anticipate.
[1] Neumann, J. V., & Burks, A. W. (1966). Theory of Self-Reproducing Automata. University of Illinois Press.

[2] Callaway, E. (2025). Biggest-ever AI biology model writes DNA on demand. Nature. https://doi.org/10.1038/d41586-025-00531-3

[3] Schuman, C. D., Kulkarni, S. R., Parsa, M., Mitchell, J. P., Date, P., & Kay, B. (2022). Opportunities for neuromorphic computing algorithms and applications. Nature Computational Science, 2(1), 10–19. https://doi.org/10.1038/s43588-021-00184-y

[4] Kudithipudi, D., Schuman, C., et al. (2025). Neuromorphic computing at scale. Nature, 637(8047), 801–812. https://doi.org/10.1038/s41586-024-08253-8