I know Conway was slightly resentful that game of life overshadowed some of his other work in the public's imagination, but I've always been entranced by it.
I remember as a child the first computer my family had was a dual boot of MS-DOS and Windows 3.1 (or something like that?). On the Windows 3.1 side was a preinstalled version of Conway's Game of Life, and I'd spend hours messing around with it. You could place two different colors of cells, and I'd set patterns up and then let the simulation go to see which one would "win", or outlast the other.
Conway's Game of Life was also one of the first meaningful things I ever programmed, and even today I like to reimplement the Game of Life when learning a new language. Typically I let the user assign different colors to the grid, and have newborn cells take a blend of their neighboring cells' colors as a kind of simulation of natural selection. Right now I'm learning network programming for game development, and I'm finishing up a networked implementation of the Game of Life so multiple people can join and manipulate a running simulation. In general I think it's a good project to use when playing around with learning something new.
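Here's a minimal sketch of that color-blending variant, assuming standard B3/S23 rules and a toroidal grid, with a newborn cell taking the averaged RGB color of its three live parent neighbors (all names and details are illustrative, not from my actual implementations):

```python
import random

# Illustrative sketch: Life with colored cells, where a newborn cell's
# color is the average of its three live parents' RGB colors.
W, H = 32, 32
# grid[y][x] is None (dead) or an (r, g, b) tuple (alive)
grid = [[random.choice([None, (255, 0, 0), (0, 0, 255)])
         for _ in range(W)] for _ in range(H)]

def step(grid):
    new = [[None] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            neighbors = [grid[(y + dy) % H][(x + dx) % W]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            live = [c for c in neighbors if c is not None]
            if grid[y][x] is not None and len(live) in (2, 3):
                new[y][x] = grid[y][x]  # survival: keep the cell's color
            elif grid[y][x] is None and len(live) == 3:
                # birth: blend the three parents' colors channel-wise
                new[y][x] = tuple(sum(ch) // 3 for ch in zip(*live))
    return new

grid = step(grid)
```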
I just really like cellular automata, and the Game of Life in particular.
Conway's Game of Life was the first thing I ever programmed and is still my favorite (the runners-up are the Mandelbrot set and strange attractors), to the point that I kept messing around with its rules until it became the continuous CA you see at the bottom of the blog post.
And the technology and science have changed a lot. My first attempt was in Pascal and assembly and maybe BASIC; now it's Python, TensorFlow, WebGL, etc. Back then I knew nothing about the theory of complex systems; it turns out that the Game of Life and a whole bunch of fascinating emergent systems sit at the crossroads of complex systems, artificial life, and artificial intelligence research (see Neural CA, CA-NEAT, etc.).
As for Conway's resentment of his own creation, I was told that a friend of his showed the continuous CA to Conway last year, and he was mesmerized by its complexity. I hope that means the great mathematician was delighted by a direct offspring of the brainchild he was once ashamed of.
> I know Conway was slightly resentful that game of life overshadowed some of his other work in the public's imagination, but I've always been entranced by it.
The blog post was inspired by The Recursive Universe by William Poundstone. I read this book years ago and found it amazing. I recently read another book by him, Fortune’s Formula, which features the Kelly Criterion, Claude Shannon beating roulette, card counting in blackjack, and the surprising origins of Warner Bros. Recommended for sure. Thanks to the HN poster who suggested it.
Heh, I remember reading William Poundstone's books "Big Secrets" and "Bigger Secrets" as a kid (https://en.wikipedia.org/wiki/Big_Secrets). They're mostly about debunking or explaining various urban legends and conspiracy theories. I'm not sure how well they've stood the test of time, but they definitely had a formative impact on my thinking in a way that I only suddenly realized upon reading his name thirty-some years later.
A highly inspiring book, for sure. In my own case, it has led to a handful of blog posts: about HashLife, a literate programming implementation, and implementing Life on a Turing Machine; and of course Conway’s passing from COVID-19 led to a post about FRACTRAN and the Collatz Conjecture.
We may not break new ground, but when we read about something and are inspired to write code and play with ideas, we engage with the idea at a deeper level than simply reading to comprehend.
Adam P. Goucher recently created a metacell more advanced than the one described in the article, one for which the Dead state is literally empty space. When a new metacell is born, it is constructed by one of its neighbours by colliding gliders.
I am convinced the Universe is an enormous fractal. I always wondered: if you 'zoomed out' of the Universe far enough, would you encounter more matter, or a separate Universe co-existing next to ours? Keep zooming out and you could probably see /infinite/ Universes that go on for eternity, as one long fractal journey.
Recursive universes don't get interesting until you start doing system calls:
> So when an inevitable bug occurred in that super-duper LIFE machine, the intelligent entities in the simulation would have suddenly been presented with a window to the metaphysics which determined their own existence. They would have a clue to how they were really implemented. In that case, Fredkin concluded, the entities might accurately conclude that they were part of a giant simulation and might want to pray to their implementors by arranging themselves in recognisable patterns, asking in readable code for the implementors to be given clues as to what they're like.
It's infeasible to keep scanning the whole of a simulated world for anything that might be intended to pass a message, of course. So making a "real", or at least a normal, system call involves using a pre-ordained area of the simulated space which is set aside as a buffer, altering the contents of that area according to some pre-ordained protocol known in advance to both the simulated world and the simulating program. Responding to the system call likewise involves the simulating program "miraculously" altering the state of a pre-ordained buffer area according to a pre-set protocol. Not only is this how you can implement system calls in recursive universes: it is what a system call necessarily is. System calls, calls to the runtime, just are events of this nature happening between simulated and simulating systems.
Likewise, any kind of message passing is built on top of this: basically the only way any process can pass a message to another is to make a system call requesting that a message be passed on to the intended recipient, then hope that the simulating system will deliver it as requested. Delivering the message then obviously involves the recipient's simulating system—which isn't necessarily the same system as the sender's simulating system—appropriately altering the state of the recipient.
(Unless the sender and receiver have a shared memory area, you could say, but that's not so different either: two simulated programs only have a shared memory area to the extent that the simulating system is pleased to keep the supposedly-shared area actually consistent in the two programs it is simulating.)
Notice how annoying it is that, by and large, most system-call protocols don't allow a process to, for example, send its simulating system a message explicitly addressed to its simulating system's simulating system, or to send a simulated system a message explicitly addressed to one of its simulated system's simulated systems. I suppose you could set it up with nested VMs and their virtual Ethernet interfaces.
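To make the pre-ordained-buffer idea concrete, here's a toy sketch, assuming a 1D binary world for brevity; the buffer layout, the "syscall pending" convention, and the handler are all invented for illustration:

```python
# Toy sketch of a "system call" between a simulated world and its
# simulating program, via a pre-ordained buffer region. All details
# (layout, opcode convention, handler) are invented for illustration.
WORLD_SIZE = 256
BUFFER_START, BUFFER_LEN = 0, 16  # cells 0..15 are the syscall buffer

def simulate_physics(world):
    pass  # stand-in for the world's own CA update rule

def handle_syscall(payload):
    # Host-side handler, outside the simulation: here, echo the
    # payload back inverted as a proof of contact.
    return [1 - bit for bit in payload]

def host_step(world):
    """One tick of the simulating system: run physics, then poll the buffer."""
    simulate_physics(world)
    request = world[BUFFER_START:BUFFER_START + BUFFER_LEN]
    if request[0] == 1:  # convention: first cell set means "syscall pending"
        reply = handle_syscall(request[1:])
        # The "miraculous" write-back: the host alters simulated state
        # directly, according to the pre-set protocol.
        world[BUFFER_START:BUFFER_START + BUFFER_LEN] = [0] + reply

world = [0] * WORLD_SIZE
host_step(world)
```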
That reminds me of a story where a student couldn't figure out the properties of some physical system, so he ran a massive simulation of a universe with laws of physics similar to his own (but, of course, cutting corners whenever possible). This universe eventually produced a life form intelligent enough to figure out the necessary equations, at which point he happily copied them to his homework and forgot about the simulation.
...only to find it days later (= billions of years of simulated time), by which point the simulated life had figured out that their universe was written hastily and its laws were full of subtle bugs, like floating-point rounding errors showing up in physical measurements. Their technological advance let them move stars around, which they grumpily arranged in a message saying "your code sucks".
Can't find it at the moment, does anyone recognise the reference? It could be in one of these books, I suppose, but I don't have them.
Relatedly, I’ve always found quantum mechanics to be reasonable “proof” that we’re living in a simulation.
Take wave-particle duality: how is that not an artifact of the implementors wanting to reuse some core routines from a legacy particle-based universe while building our next-gen wave-based one? The particle-based simulation wouldn’t work at the scales needed, so they moved to waves (much easier to simulate), and rigged up some adaptors to switch to particle mode in a JIT manner as needed.
The folks trying to unify quantum and classical mechanics are essentially reverse-engineering that JIT (and others like it).
So memory-mapped IO between cellular automata and the underlying system? Has anyone implemented anything that uses a feature like that? Because it sounds rather neat.
> (Unless the sender and receiver have a shared memory area, you could say, but that's not so different either: two simulated programs only have a shared memory area to the extent that the simulating system is pleased to keep the supposedly-shared area actually consistent in the two programs it is simulating.)
Maybe it's not intended to be shared, but we can observe timing effects from e.g. cache aliasing, or rowhammer the other universes.
This is horrifying. Fredkin and Feynman are both heroes of mine, and both succumbed, for at least a time, to the idea that cellular automata are more than a game. Wolfram, another genius, appears still afflicted.
This is a kind of intellectual heroin. As Jerry Garcia said, first drugs seem like the solution, then they turn out to be the problem.
Cellular automata are like the `y = m * x + b` of the algorithmic universe.
Personally, I believe it is inevitable that a vast and rich new mathematical world will spring up around simple computational systems, where they are the primary object of study and not relegated to being games, "simulations" of something else, curiosities, etc. CAs are just the "hello world" of this universe.
In that future, your attitude will be seen as a kind of parochial holdout of 20th century attitudes.
The new mathematics will:
* privilege finite, discrete systems
* be explicitly constructivist
* not shy away from computational enumeration and classification
* be much more visual, but no less rigorous than our last few hundred years of mathematics
* make deep connections with many new technological systems, like Git, distributed systems, blockchains, computer networks, trust networks, social networks, etc.
* most importantly, be way more fun than old math!
It will exploit to the full the capabilities of computer systems to make the new systems tangible, so that the idea of a 'math paper' will itself seem quite old-fashioned -- most mathematical work will start with forking a Git repo or the equivalent, and most communication and education will be interactive interrogation of computational systems using a kind of souped-up, computation-focused REPL. It's the kind of mathematics that will be natural when Bret Victor's ideas are actually implemented.
Wolfram's NKS book is the prototype work of this kind, but it will also be seen as quite a product of its time and of the author's prior commitments to physics, math, etc. And of course distorted by the classic Wolfram immodesty.
Ideas spread like viruses (mathematically, and physically, as the interaction between two individuals who have some percentage chance of a successful exchange).
Some idea viruses disrupt the host, some improve the host’s health. Some minds are more prone to certain viruses that would be nonsense to lesser minds.
So fun. I like to imagine the workings of the universe as an infinite dimensional game of life. Not sure how we will ever wrap our minds around the complexity.
So many people have been working on developing/discovering new life forms or different types of universes in the Game of Life. It is purely deterministic, so obviously there will never be any free will for the lives inside. But maybe, as the life forms become more complex, we will be able to observe the illusion of consciousness.
I find these large-scale structures in the Game of Life absolutely impressive, and I sometimes wonder how they're constructed to begin with, and what the tooling people use to build them looks like.
I try to design my programs like the game of life. I start off with some primitive axioms and build all the high level functions out of a minimal set of primitives.
A lot of simulations will likely be done in a manner similar to CAs. Pixel shaders run once per pixel, which makes this type of abstraction quite attractive: you can take advantage of the massive parallelism of GPUs.
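As a sketch of that per-cell parallel structure, here's the Life update written as one vectorized pass in Python/NumPy, with SciPy's convolution standing in for what a fragment shader would compute per pixel (wrap-around boundary assumed):

```python
import numpy as np
from scipy.signal import convolve2d

# Neighbor-counting kernel: every cell sums its 8 neighbors.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(grid):
    # One "per-pixel" pass over the whole grid at once, like a shader:
    # count neighbors everywhere, then apply B3/S23 elementwise.
    n = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

grid = (np.random.rand(128, 128) < 0.3).astype(np.uint8)
grid = life_step(grid)
```

The same elementwise rule translates almost line for line into a fragment shader, with the grid stored in a texture and the neighbor reads done per fragment.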
I appreciate this response. However, the “cellular” part is represented by the minimal computational data structure needed to simulate the gas. There is no automaton. Ordinary physics, adhered to as closely as possible, and not phantasmagoric creation and annihilation rules, is used to calculate the next state.
The automaton is cellular, discrete (binary), and driven by a set of activation rules. If that's your ordinary physics, we had very different physics books.
I don’t see why people are downvoting. I’m not pessimistic; I just don’t understand the real-life applications myself, so please educate me instead of downvoting.
That said, procedural generation is a great example, thanks.
Also, whatever happened to Wolfram’s “A New Kind of Science”, or whatever it was called? He literally wrote a gigantic book on sampling the computational universe, but I’m not really aware of it being used, probably because I’m ignorant.
I mean, please don't take this answer the wrong way, but this is like saying "some rich person should give away $1M for finding an application of complex numbers". It feels condescending. If your area deals with cellular automata (e.g. programming languages, computer simulation, procedural generation), applications of cellular automata are numerous and well understood. It's niche, sure, but it's not like it's this entertaining thing that strangely has no practical applications. I downvoted GP because of this, since it assumes CAs have no applications and offers up a rich person to compensate for that.
I think the value of Stephen's big book is that it served as a kind of manifesto to "take computation seriously". By that I mean: to think about computations, in the abstract, as a kind of new mathematics about which we know almost nothing, and about which our naive intuitions from other domains are almost totally inapplicable. It is an injunction to explore the "computational universe", in other words, the universe of simple computational systems, where CAs live as one of several kinds of maximally simple forms of computation (which Wolfram made some attempt to categorize and explore).
By analogy, think about the development of algebra. It didn't come naturally! Yet algebra is one of the most natural ideas in abstract mathematics, incredibly simple and incredibly powerful, lying as a rosetta stone connecting so many other topics in mathematics. But people didn't immediately accept or describe it; it took many hundreds of years to crystallize and mature as a topic. It's a very old piece of the operating system of mathematics that underwent a lot of hacking and refactoring.
Wolfram proposes we think about computational systems in the same way, as a nascent field that needs exploration, mapping, the irrigation of young minds and new ideas. It may be some time before it yields a harvest. We shouldn't expect it to immediately revolutionize everything.
Even after Turing, computational systems existed as a kind of diaspora in mathematics, having sat unrecognized in all kinds of places, never having had a sort of independent state in which they are not considered as an aspect of something else, as somehow alien and unworthy of respect because of their confusing aspects. Largely, there were two reasons they were treated so shoddily:
1. they required computers (and good computer tools) to actually explore, since they produced complex and irreducible computations
2. they resisted any kind of analysis by prior mathematics
In other words, we were not ready to really probe them until a few decades ago, and even now our software is not well suited to explore them (Mathematica remains the best tool for the job, though I anticipate Julia will surpass it rapidly). Furthermore, traditional mathematical fields have not adapted to the presence of computation very gracefully.
I guess I would summarize: NKS is an imperfect book and Stephen an imperfect herald of the ideas within. But he was the first person to really articulate those ideas crisply and push them hard into the imagination, and I'm very glad he did. The hand he overplayed was the application to the natural world -- I think that will take longer to pay off than he predicted.
You mean like procedural generation? I used cellular automata with custom rules to generate procedurally generated universes for a video game I worked on.
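Not the rules I actually used, but the classic cave-generation recipe shows the flavor: fill a map with random walls, then run a few CA smoothing passes where a cell becomes a wall if enough of its neighborhood is walls (the "4-5 rule"; all parameters below are illustrative):

```python
import random

# Illustrative CA-based procedural generation using the common 4-5 rule.
W, H, FILL, STEPS = 60, 30, 0.45, 5
grid = [[1 if random.random() < FILL else 0 for _ in range(W)]
        for _ in range(H)]

def smooth(grid):
    new = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Count walls in the 3x3 neighborhood (including the cell
            # itself); out-of-bounds cells count as walls so edges close up.
            in_bounds = [(y + dy, x + dx)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if 0 <= y + dy < H and 0 <= x + dx < W]
            walls = sum(grid[ny][nx] for ny, nx in in_bounds)
            walls += 9 - len(in_bounds)
            new[y][x] = 1 if walls >= 5 else 0
    return new

for _ in range(STEPS):
    grid = smooth(grid)

print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```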
The current answer is "amazing screen saver", but part of me thinks the answer is in microbots that are super simple to build and have limited instructions, but could be placed in a configuration to do something useful (like build more?).
You should really take a look at the paper "Computation and Pattern Formation by Swarm Networks with Brownian Motion" or, more generally, any of the papers by Teijiro Isokawa and Ferdinand Peper. What they present is really close to (a theoretical version of) the microbots you're talking about.
I even wrote a report and presentation on those papers for university, but sadly they're in German.