Tuesday, December 19, 2006

Playing God

If I write a program that fills “agents” with behaviors and then introduce many agents into the system, I know the agents won’t have any purpose of their own. Now think of these agents as drivers. I program them to be able to drive. Their purpose is not “don’t hit others”; their purpose is to reach a destination without getting hit themselves. Say I assign destinations randomly at some fixed interval. I also program them to maneuver left and right, give them a rear-view mirror, and so on.

Then I fill the system with so many agents that hits can’t be avoided altogether. When a collision is unavoidable, agents don’t simply freeze; they hit each other and die. In this program, I can be sure of all the factors that come into play when one agent narrowly misses another: after all, I coded it!
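The setup above can be sketched as a toy simulation. Everything here is an assumption made for illustration, not the original program: a grid world, one-cell moves toward the destination, a swerve toward any free neighboring cell, and the simplification that the moving agent dies when it lands on an occupied cell.

```python
import random

class Agent:
    """A driver agent whose purpose is to reach its destination without getting hit."""

    def __init__(self, world_size):
        self.world_size = world_size
        self.pos = [random.randrange(world_size), random.randrange(world_size)]
        self.dest = [random.randrange(world_size), random.randrange(world_size)]
        self.alive = True

    def step(self, occupied):
        """Move one cell toward the destination, swerving to a free cell if possible."""
        if self.pos == self.dest:
            # arrived: assign a new random destination (the "every some-time-period" rule)
            self.dest = [random.randrange(self.world_size),
                         random.randrange(self.world_size)]
        moves = []
        for axis in (0, 1):
            d = self.dest[axis] - self.pos[axis]
            if d:
                nxt = self.pos[:]
                nxt[axis] += 1 if d > 0 else -1
                moves.append(nxt)
        # maneuver: prefer an unoccupied cell; if none is free, move anyway and risk a hit
        free = [m for m in moves if tuple(m) not in occupied]
        self.pos = random.choice(free or moves or [self.pos])

def run(num_agents=50, world_size=10, steps=100):
    """Run the simulation and return the number of surviving agents."""
    agents = [Agent(world_size) for _ in range(num_agents)]
    for _ in range(steps):
        occupied = {tuple(a.pos) for a in agents if a.alive}
        for a in agents:
            if not a.alive:
                continue
            occupied.discard(tuple(a.pos))
            a.step(occupied)
            if tuple(a.pos) in occupied:
                a.alive = False  # a hit: in this simplification, the mover dies
            else:
                occupied.add(tuple(a.pos))
    return sum(a.alive for a in agents)
```

Crowding the world (more agents, smaller grid) makes the "too many agents to avoid hits" regime easy to reproduce, and the near misses the questions below ask about are exactly the steps where `free` is empty for some axis but a swerve still exists.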


  • Can I be sure whether the agent’s purpose of reaching a destination played a role in the near miss? In the short term, apparently not; in the long term, definitely yes.
  • Can I be sure that any pattern arising out of such near misses isn’t the collective manifestation of some benign bug I left in the agents’ code?
  • Can I be sure that the pattern in the collective activity itself is not a manifestation of a bigger universal truth like pi?

I am sure, if there is a God, he is one very confused guy.
