Algorithmic generative design is, basically, defining rules that set the boundaries of behavior and then letting random() take the wheel. A client asked me to incorporate motion into a project, and I had been struggling to decide which tool to prototype it with: Framer, After Effects, or whatever. I asked a few designers what they would do and got some suggestions about puppet tools, precomping behavior, and so on. I was not excited about the time I expected it would take to manually keyframe the kind of dynamic-feeling motion I was after.
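The "rules plus random()" idea can be sketched in a few lines. This is a minimal illustration in Python rather than Processing, with made-up bounds and step sizes: the rule is "stay inside the box," and randomness decides every move within it.

```python
import random

def bounded_walk(steps, lo=0.0, hi=100.0, step=5.0, seed=None):
    """A 1-D random walk clamped to [lo, hi]: the rule defines the
    boundary of behavior, random() decides each individual move."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)              # random starting point
    path = [x]
    for _ in range(steps):
        x += rng.uniform(-step, step)    # random() takes the wheel
        x = max(lo, min(hi, x))          # the rule: never leave the bounds
        path.append(x)
    return path

path = bounded_walk(50, seed=1)
print(all(0.0 <= x <= 100.0 for x in path))  # every point obeys the rule
```

In a real Processing sketch the same split shows up as deterministic drawing rules in draw() fed by random() (or noise()) for the moment-to-moment values.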
Thankfully, one of the people I talked to mentioned Processing, the coding language made specifically for dynamic visualizations. Not in the D3.js dataviz way, but in the Conway's Game of Life way. I had run into it a little with some maker projects (e.g. the Ambilight dynamic backlight) but had not really seen the potential until I saw openprocessing.org. The gallery of work was amazing, and after digging through some code samples and a little prodding and poking I was able to get close to the behavior I was looking for. And with processing.js, it can run in the browser.
What I really liked was how easy it was to add both user and data interaction, creating dynamic generative behavior that reflects some underlying data. That makes it more than decoration: it becomes an abstraction of a piece of information.
The specifics of the behavior are known, but the starting points, and thus the visuals, are different on every page load, which is also really interesting to me. I don't know that I would use Processing on an enterprise-scale project; the in-browser calculations would be kind of a jerk thing to unleash on folks, particularly mobile (battery-constrained) users. Granted, it could be done server-side with Node.js or something, but for the small scale of this project it seemed like a fair match.
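The "same rules, different result every load" effect comes from reseeding the random source on each run. A hedged, self-contained Python analogue (the step sizes and bounds here are invented for illustration): the rule is identical across runs, only the seed changes, so the same seed reproduces the same outcome while different seeds diverge.

```python
import random

def sketch(seed, steps=20, lo=0.0, hi=1.0):
    """Fixed rule (stay in [lo, hi]); the seed stands in for a page load,
    giving a fresh starting point and trajectory each time."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)  # new starting point on every "load"
    for _ in range(steps):
        x = max(lo, min(hi, x + rng.uniform(-0.1, 0.1)))
    return x

# Same seed, same visual; different seeds, different visuals, same rules.
print(sketch(seed=1) == sketch(seed=1))
print(sketch(seed=1) != sketch(seed=2))
```

In Processing the equivalent knob is randomSeed(): leave it unset and each run starts somewhere new; fix it and the sketch becomes reproducible.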
It was also a lot of fun to play with.