Simulating a City
For the past 4 years or so, in my spare time, I have been working with a small start-up company, Encitra, whose goal is to help cities and real estate developers make sustainable urban plans come to life in the minds and hearts of stakeholders and the general public. We go at it with virtual reality. Not just computer animation movies; we develop complete multi-user interactive virtual environments that are built and re-built over time by multiple people, and that simulate urban areas — both structural and dynamic aspects — as faithfully as possible. Recently, we accomplished an important milestone: we were able to simulate a 3 km × 1.5 km area of the city of Uppsala, Sweden. This includes the actual terrain, the major landmarks of the city, several hundred assorted buildings, as well as traffic and pedestrians. It’s all live and accessible on the Internet, although not in a Web browser. This post explains the technology behind it. For the most part, it’s all based on open source software!
The Server Side
Since I am a developer of OpenSimulator, it’s no surprise that the core of the infrastructure is OpenSimulator. I did seriously consider other alternatives, but I’ll leave the technology decision process for another post.
The area is divided into a 2D grid of 256-meter “regions”, meaning that we have 6×12=72 regions total. In turn, the regions are grouped in “sectors” of 3×3 regions, each sector running in a separate simulator. So we have 8 simulators for this area, like the picture on the left (not to scale, it’s just an illustration).
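As a rough illustration of the layout (not our actual provisioning code), the mapping from regions to sectors and simulators can be sketched as follows; the indexing convention is made up purely for illustration:

```python
# Minimal sketch: map a 12 x 6 grid of 256 m regions into 3 x 3 "sectors",
# each sector corresponding to one simulator process.
# The region/sector indexing here is illustrative, not Encitra's convention.
from collections import defaultdict

REGION_SIZE = 256       # meters per region side
GRID_X, GRID_Y = 12, 6  # 12 regions along the 3 km axis, 6 along the 1.5 km axis
SECTOR_SPAN = 3         # each sector is 3 x 3 regions

def sector_of(region_x, region_y):
    """Return the (x, y) index of the sector/simulator hosting a region."""
    return (region_x // SECTOR_SPAN, region_y // SECTOR_SPAN)

# Group all regions by simulator.
simulators = defaultdict(list)
for rx in range(GRID_X):
    for ry in range(GRID_Y):
        simulators[sector_of(rx, ry)].append((rx, ry))

print(len(simulators), "simulators,", GRID_X * GRID_Y, "regions,",
      f"covering {GRID_X * REGION_SIZE} m x {GRID_Y * REGION_SIZE} m")
# -> 8 simulators, 72 regions, covering 3072 m x 1536 m
```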
We use OpenSimulator in a standard grid configuration, with one central “Robust” server, and with the simulators all sharing the grid resources. During different phases of this project, we placed the simulators on a variety of servers, from our own machines to AWS EC2. We have developed a simple systems administration layer on top of OpenSim that allows us to very quickly deploy simulators on whatever servers we need them to be on.
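The details of that layer are beyond the scope of this post, but the basic idea can be sketched as follows; the host names, paths and commands below are placeholders for illustration, not our actual setup:

```python
# Hypothetical sketch of a thin deployment layer: push a per-sector OpenSim.ini
# to each target host and start the simulator over ssh. Host names, paths and
# the launch command are made up; the actual Encitra tooling is not shown here.
import subprocess

SECTORS = {
    "sector-00": "sim1.example.com",
    "sector-01": "sim2.example.com",
    # ... one entry per simulator/sector
}

def deploy(sector, host, opensim_dir="/opt/opensim"):
    # Copy the sector-specific configuration to the remote host.
    subprocess.run(["scp", f"configs/{sector}/OpenSim.ini",
                    f"{host}:{opensim_dir}/bin/OpenSim.ini"], check=True)
    # Start the simulator in a detached screen session on the remote host.
    subprocess.run(["ssh", host,
                    f"cd {opensim_dir}/bin && screen -dmS {sector} mono OpenSim.exe"],
                   check=True)

if __name__ == "__main__":
    for sector, host in SECTORS.items():
        deploy(sector, host)
```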
We have configured the simulators using advanced options that are available in OpenSim, but that aren’t well known. For example, we open child agents in a span of 6 regions all around, instead of the default 1. This allows us to view almost the entire area regardless of where our avatars are.
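As mentioned in the comments below, the configuration variable involved is DefaultDrawDistance under the [Startup] section of OpenSim.ini; the value shown here (6 regions of 256 m) is only an illustration, not necessarily our exact setting:

```ini
; OpenSim.ini -- sketch only. Per the comment reply at the end of this post,
; the relevant variable is DefaultDrawDistance under [Startup]; the value
; below (6 regions x 256 m) is an assumption for illustration.
[Startup]
    DefaultDrawDistance = 1536
```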
One characteristic of the Encitra virtual reality environment, compared to Second Life and the OpenSim grids out there, is that our environment has very few visitors. During the most active phases of the build, there are 3-4 people logged in, and when we have meetings, there are 7-10 of us. We don’t expect ever to have to cope with large numbers of users in the same simulators, since this is not a social environment but a collaborative one. We do, however, produce relatively large builds in terms of numbers of prims, meshes and textures.
The Viewers
We primarily use two viewers: Imprudence and the Zen viewer. Imprudence is a great utility viewer, as it allows us to create very large structures and to import/export objects. More recently, we have all come to use and love the Zen viewer for routine builds and for capturing video and pictures. Zen is based on the Linden Lab V3 code base, meaning that it supports all the latest eye candy like media-on-a-prim, shadows, etc. It also comes with presets for creating stunning sky, water and lighting environments. We have been very pleased with the videos that Zen allows us to produce!
Terrain
With an external toolset that I developed, we are able to generate terrains from GIS data, embedded with aerial images. (One of these tools is based on old code from Adam Frisby.) These realistic aerial-image terrains, by themselves, provide a fair amount of immersion, even before any building is modeled. They also give us the footprint for placing the buildings in the scene and for visualizing the roads and vegetation.
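The toolset itself isn’t published here, but the core idea of slicing a digital elevation model into per-region heightmaps can be sketched as follows; the array shapes, file names and the 1-meter sampling are assumptions for illustration, not the actual tool:

```python
# Minimal sketch (not Encitra's actual tool): slice a georeferenced DEM,
# already loaded as a 2D numpy array of elevations in meters at 1 m per
# sample, into 256 x 256 per-region heightmaps written as raw 32-bit floats.
import numpy as np

REGION = 256  # OpenSim region size in meters (one height sample per meter)

def split_dem(dem, out_prefix="region"):
    """dem: 2D float array covering the whole area at 1 m resolution."""
    rows, cols = dem.shape
    for ry in range(rows // REGION):
        for rx in range(cols // REGION):
            tile = dem[ry * REGION:(ry + 1) * REGION,
                       rx * REGION:(rx + 1) * REGION]
            # .r32 is a headerless little-endian float32 heightmap that the
            # OpenSim console can load, e.g.:  terrain load region_0_0.r32
            tile.astype("<f4").tofile(f"{out_prefix}_{rx}_{ry}.r32")

# Example with synthetic data standing in for real GIS elevations:
fake_dem = np.random.rand(6 * REGION, 12 * REGION).astype(np.float32) * 30
split_dem(fake_dem)
```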
These terrains allow us to work under them. Even though this was not part of the requirements for the Uppsala simulation project, I simply had to place something underground… after all, cities are not just what’s visible, they are also what’s invisible.
The Build Process
The build process is exactly the same as any build in Second Life / OpenSim. We have primarily been using prim buildings, but we also have sculpties and meshes. We have taken a fair number of pictures of the building façades, and used them to texture the buildings. Here are pictures of some of the Uppsala landmarks:
For urban plans that are being considered but don’t yet exist, such as the podcar system, we do something special so that we can add them to or remove them from the scene with the click of a button.
Traffic Simulation
We developed a traffic simulation addon that is capable of driving thousands of vehicles all over the simulated area without negatively impacting the visiting user’s experience. The traffic simulation runs on a separate server. The vehicles go from one end to the other without regard for “region” or simulator borders.
I developed the current traffic simulation/visualization from scratch, but the intention is to hook this technology up to external traffic simulators, of which there are a few out there. That will be a future milestone. In developing this one, though, I gained an appreciation for traffic simulators. They’re fun pieces of software! My traffic simulation includes working traffic lights and stops to which the vehicles react, as well as collision detection, all wrapped up in very simple, boid-like rules.
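To give a flavor of what “boid-like rules” means here, the following toy sketch (not the actual Encitra addon) shows vehicles that follow waypoints, keep a gap to the vehicle ahead and stop at red lights; the constants and route are made up:

```python
# Toy sketch of boid-like vehicle rules in the spirit described above:
# follow a waypoint route, brake to keep a safe gap to the vehicle ahead,
# and stop at red lights. Illustrative only, not the Encitra traffic addon.
import math

SAFE_GAP = 8.0     # meters to keep from the vehicle ahead
MAX_SPEED = 12.0   # m/s
DT = 0.5           # simulation step in seconds

class Vehicle:
    def __init__(self, route):
        self.route = route          # list of (x, y) waypoints
        self.target = 1             # index of the next waypoint
        self.x, self.y = route[0]
        self.speed = 0.0

    def step(self, leader, lights):
        tx, ty = self.route[self.target]
        dist = math.hypot(tx - self.x, ty - self.y)
        desired = MAX_SPEED
        # Rule 1: don't tailgate the vehicle ahead.
        if leader is not None:
            gap = math.hypot(leader.x - self.x, leader.y - self.y)
            if gap < SAFE_GAP:
                desired = 0.0
        # Rule 2: stop at red lights near the next waypoint.
        if lights.get(self.target) == "red" and dist < SAFE_GAP:
            desired = 0.0
        # Simple bounded acceleration/braking toward the desired speed.
        self.speed += max(min(desired - self.speed, 2.0 * DT), -6.0 * DT)
        if dist > 1e-6:
            advance = min(self.speed * DT, dist)
            self.x += (tx - self.x) / dist * advance
            self.y += (ty - self.y) / dist * advance
        if dist < 2.0:
            self.target = (self.target + 1) % len(self.route)

# One loop of a ring road with a red light at waypoint 2:
route = [(0, 0), (100, 0), (100, 100), (0, 100)]
cars = [Vehicle(route) for _ in range(3)]
for _ in range(200):
    for i, car in enumerate(cars):
        car.step(cars[i - 1] if i > 0 else None, {2: "red"})
```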
Currently, the painful part is the establishment of routes. We have no way to infer the routes other than by visual inspection and by insider knowledge of the traffic flow and signs in Uppsala. The picture on the left shows the waypoints for cars and buses around the station area. For an area like this one, 3 km wide, setting up the paths that the vehicles use, and the speed limits, is a daunting task, almost as daunting as doing it in real life! Perhaps when we hook this up to real traffic simulators that information will already be available.
On the positive side, the podcar system, given that it doesn’t yet exist, has been much more amenable to automation. We have automatically generated both the tracks and the routes from an existing plan.
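The generation itself is conceptually simple: place waypoints at regular intervals along the planned track polyline. A rough sketch, with made-up coordinates standing in for the actual plan:

```python
# Sketch: generate evenly spaced guideway waypoints from a planned podcar
# track given as a polyline. The coordinates below are invented; in practice
# they would come from the plan drawings.
import math

def waypoints_along(polyline, spacing=10.0):
    """Return points every `spacing` meters along a list of (x, y) vertices."""
    points = [polyline[0]]
    carry = 0.0  # distance already traveled since the last waypoint
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing - carry
        while d <= seg:
            t = d / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing
        carry = seg - (d - spacing)
    return points

plan = [(0, 0), (120, 0), (180, 60), (180, 200)]   # hypothetical track plan
track = waypoints_along(plan, spacing=10.0)
print(len(track), "waypoints")
```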
Pedestrians
Since 0.7.3, OpenSim supports server-side bots that are scriptable inworld. We have used that facility to create over 60 standing/sitting bots and a dozen walking ones. The standing/sitting bots are very lightweight; we could easily have hundreds of them in every simulator. The walking bots, on the other hand, are relatively heavy, as they are part of the physics scene.
In addition to contributing to lag, and similarly to traffic, one of the major hassles associated with walking bots is establishing the routes that they walk. Our bot developer has scripted a very pretty system that allows us to visualize the bots’ routes using particle systems. This can’t exist in real life, but it’s so pretty that I wish it could!
Can Cities be Simulated, Really?
You may very well be asking that question. Clearly, this simulation ignores an enormous amount of things that happen in the real city of Uppsala. In fact, it focuses only on a very small number of aspects — the buildings, the roads, the traffic, the pedestrians around the stations and, most importantly in this case, the podcar system connecting the main station to the Hospital and the University. As with any computational model, big chunks of reality are discarded. That’s how these models work.
In this particular case, the main question at hand is the viability of the podcar system — its concept, its look & feel, its utility with respect to the alternative (buses), its layout throughout the city, and its positive and negative interference with existing infrastructure. Since there aren’t many podcar systems in the world, people aren’t used to this urban transportation concept. The decision to have one is part of a long process of technical, political and strategic deliberation involving many stakeholders. This simulation and visualization is part of a larger set of artifacts that are being produced. It will be interesting to see what the outcome of those deliberations will be…
Comments
That this amazing piece of work has no comments as of now really makes me sad. This is truly a showcase of what can be done with the OpenSim software. I wish you all the best and hope that other people will see what can be achieved.
best regards
Wordfromthe Wise
Another question which I forgot to ask: how can I “open child agents in a span of 6 regions all around, instead of the default 1”? This sounds interesting for my regions on OSGrid. And what is the difference compared to setting my camera view to e.g. 500 meters (besides only seeing over 2 regions)?
There is a configuration variable under the [Startup] section called DefaultDrawDistance. It might not work well in OSGrid, though, especially if you roam around to other regions that have a different value. I haven’t tried an inconsistent value but I heard it causes some problems.