Best Server Architecture For Map Server with Tile Caching?

Jun 13, 2008 at 3:16 PM
Hey all,

I am currently doing some research on technologies to build a Google Maps mashup with overlay thematic mapping.
Thought I would pick your minds and find out what people feel like would be the best solution that would scale.

User can pick from a few demographic indicators and see a shaded region overlay of the distribution over Google Maps.

Possible Solutions

Solution 1
5 or more tile servers, load balanced, with each doing the following upon a tile request
  1. See if tile image exists on its local file system.  If so, return image.
  2. Generate tile image using SharpMap, store image on file system, return image.
Solution 2
3 or more tile servers, load balanced
3 or more tile 'crunchers'

Upon tile request to the tile servers:
  1. See if tile image exists on its local file system.  If so, return image.
  2. Make WMS request to tile 'cruncher', store image on file system, return image.
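In solution 2, step 2 boils down to issuing a WMS GetMap request to a cruncher. A rough sketch of building that request (Python for illustration; `CRUNCHER_URL`, the layer name, and the parameter values are assumptions, not anything from a specific server):

```python
import urllib.parse

CRUNCHER_URL = "http://cruncher.internal/wms"  # hypothetical cruncher endpoint

def build_getmap_url(layer, bbox, width=256, height=256):
    """Construct a WMS 1.1.1 GetMap URL for one 256x256 tile."""
    params = urllib.parse.urlencode({
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    })
    return f"{CRUNCHER_URL}?{params}"
```

The front-end server would fetch this URL on a cache miss and write the response to disk, exactly as in step 2 above.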

Assuming server specs/quantity is very flexible within reason, what do you think would be the best solution in terms of performance?  Anyone got some other ideas?


Jun 13, 2008 at 3:30 PM
Edited Jun 13, 2008 at 3:37 PM
Hi Matt, sounds like a big project..
Either solution will work - in a way it depends on the area of data being covered, as ultimately all servers will end up with all tiles locally (unless you have a SAN).
As the application runs there will be fewer calls to have maps generated, as more and more can be served from cache. At that point the server spec becomes less important because it is just streaming from disk, so you can have lots of cheap boxes (until you include rackspace ;)) - but the initial performance of the app will not be as quick. My tactic is usually a big database server / small web servers - the more web servers the better..

Scenario 1 is catered for in the 1.1 experimental branch; this is going to be the starting point for the v2 web architecture as well..

I suppose scenario 2 is catered for as well..
HTH jd
Jun 13, 2008 at 4:07 PM
Yeah, I was contemplating solution 2 because I was wondering whether it would be better to have the small tile servers offload tile generation to some beefy machines.  The round-trip time for the whole WMS request/response might negate any advantage there, however.

Say the area covered is the whole USA across all 18 of Google's zoom levels (roughly 540 million tiles for one indicator).
Storing all tiles is definitely out of the question, so one would obviously need to pre-cache highly viewed tiles (big cities, high zoom levels, etc.).
With this approach, tile generation will always be occurring.  Is a SAN absolutely necessary here?
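As a sanity check on that figure: Google's tile pyramid has 4^z tiles worldwide at zoom level z, so zoom levels 0-17 total about 2.3 x 10^10 tiles. If the USA covers very roughly 2% of the grid (an assumed ballpark, not a measured value), that lands in the same half-billion range:

```python
# Google's tile pyramid has 4**z tiles worldwide at zoom level z.
total = sum(4 ** z for z in range(18))   # zoom levels 0..17
usa_share = 0.02                         # assumed: USA covers ~2% of the grid
print(f"worldwide: {total:,} tiles; USA ballpark: {total * usa_share:,.0f}")
```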

Jun 13, 2008 at 4:36 PM
Edited Jun 13, 2008 at 4:46 PM
I think the problem ultimately comes down to disk space - a SAN isn't completely necessary, but without one a good algorithm for prioritizing commonly used tiles is. I _guess_ the 'out of town' tiles have less to render, so they would probably come back reasonably fast anyway..
Following the rule of scaling out, not up, I would let each of the front-end servers take on some of the rendering - otherwise you will likely bottleneck at the map rendering servers..
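One simple form that prioritization could take is LRU eviction over the cached tiles - a sketch (this `TileCache` class and its tile-count bound are hypothetical; a real server would more likely bound bytes on disk):

```python
from collections import OrderedDict

class TileCache:
    """LRU eviction for a bounded tile cache: keep hot tiles, drop cold ones."""

    def __init__(self, max_tiles):
        self.max_tiles = max_tiles
        self._tiles = OrderedDict()       # key -> tile bytes, oldest first

    def get(self, key):
        if key not in self._tiles:
            return None                   # miss: caller renders / fetches
        self._tiles.move_to_end(key)      # mark as recently used
        return self._tiles[key]

    def put(self, key, tile):
        self._tiles[key] = tile
        self._tiles.move_to_end(key)
        while len(self._tiles) > self.max_tiles:
            self._tiles.popitem(last=False)   # evict least recently used
```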

Edit: I guess a SAN could also be considered a potential bottleneck.. 

hth jd