
this doesn't seem to work

Posted by Deelkar on 5 May 2008 in English.

There are some problems with the t@h approach to rendering maps, but from my point of view there are two that might develop to the point where they become show-stoppers:

A) The server is too slow.
B) The clients can't handle certain tiles.

Yes, the server is too slow. It cannot handle enough clients to keep the world's tiles up to date in a timely fashion, let alone retroactively re-render everything that needs to be rendered when a layer gets added to the portfolio of things we want to show.
For almost a year now the server has not shown the improvement in speed that everything else in OSM has.

Also, since several city tiles are getting very complex, there are fewer and fewer clients that can in fact render them. Next to nobody has a PC with 12+ GB of RAM lying around. The problem is that if we start precutting the tiles into more manageable chunks, the osm-to-svg transformation becomes prohibitively slow. There should be a more effective way of keeping SVG complexity down when rendering high-zoom subtiles of densely mapped city areas, other than mucking around with the osm data.
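For context, "precutting" here means clipping the .osm input down to each subtile's bounding box before the Osmarender step. A minimal sketch of such a clip, assuming a plain .osm XML file and hypothetical bounding-box values, might look like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical subtile bounding box (lon/lat); real values would come
# from the tile numbering scheme.
MIN_LON, MIN_LAT, MAX_LON, MAX_LAT = 13.37, 52.51, 13.40, 52.53

def clip_osm(in_path, out_path):
    tree = ET.parse(in_path)
    root = tree.getroot()

    # Pass 1: collect the ids of all nodes inside the bounding box.
    keep_nodes = set()
    for node in root.findall('node'):
        lon, lat = float(node.get('lon')), float(node.get('lat'))
        if MIN_LON <= lon <= MAX_LON and MIN_LAT <= lat <= MAX_LAT:
            keep_nodes.add(node.get('id'))

    # Pass 2: keep every way that touches the box, and pull in all of
    # its node refs so ways are not truncated at the box edge.
    keep_ways = []
    for way in root.findall('way'):
        refs = [nd.get('ref') for nd in way.findall('nd')]
        if any(r in keep_nodes for r in refs):
            keep_ways.append(way)
            keep_nodes.update(refs)

    out = ET.Element('osm', version='0.5')  # API version as of 2008
    for node in root.findall('node'):
        if node.get('id') in keep_nodes:
            out.append(node)
    for way in keep_ways:
        out.append(way)
    ET.ElementTree(out).write(out_path)
```

Note that this re-parses the whole extract for every subtile, which is exactly why precutting becomes prohibitively slow for densely mapped areas.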


Discussion

Comment from iandees on 5 May 2008 at 21:42

Hey, fellow t@h user. I definitely agree with this statement. Both A and B are show-stoppers. Perhaps this discussion should be moved to the t@h mailing list, because I'll bet there are others that would love to weigh in on this.

For A (server side), I think the software needs to be streamlined. We should come up with a list of use cases that the server needs to support and then either rewrite from scratch or rewrite the existing code to speed it up. I've started toying with an alternative implementation on Google App Engine (which is "infinitely scalable") here: http://taggr.appspot.com/browse.
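For reference, a request handler on 2008-era App Engine would have used the bundled webapp framework; a minimal sketch of one tile-serving use case, with a hypothetical Tile model and URL scheme, might look roughly like this:

```python
from google.appengine.ext import db, webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class Tile(db.Model):
    # Hypothetical datastore model: one entity per rendered tile.
    z = db.IntegerProperty(required=True)
    x = db.IntegerProperty(required=True)
    y = db.IntegerProperty(required=True)
    png = db.BlobProperty()

class TileHandler(webapp.RequestHandler):
    def get(self, z, x, y):
        tile = (Tile.all()
                    .filter('z =', int(z))
                    .filter('x =', int(x))
                    .filter('y =', int(y))
                    .get())
        if tile is None:
            self.error(404)
            return
        self.response.headers['Content-Type'] = 'image/png'
        self.response.out.write(tile.png)

application = webapp.WSGIApplication(
    [(r'/tiles/(\d+)/(\d+)/(\d+)\.png', TileHandler)])

if __name__ == '__main__':
    run_wsgi_app(application)
```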

For B (client side), I imagine that some piece of the system needs to determine if a tile is too complex for the client to render (based on memory) and prevent that client from taking the job. The more complex tiles could be rendered by clients that have the capability of rendering them.

This doesn't solve the overall problem, but it does buy us some time.
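A sketch of that routing idea, assuming the server kept a rough per-tile complexity estimate (say, an element count from the last planet import) and clients reported their available memory when requesting work; all names and constants here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TileJob:
    z: int
    x: int
    y: int
    element_count: int  # rough complexity proxy for this tile

# Hypothetical assumption: peak render memory grows roughly linearly
# with element count; the constant would have to be measured.
BYTES_PER_ELEMENT = 4096

def assign_job(queue, client_mem_bytes):
    """Hand out the most complex tile this client can still render."""
    feasible = [j for j in queue
                if j.element_count * BYTES_PER_ELEMENT <= client_mem_bytes]
    if not feasible:
        return None  # only over-sized tiles left; needs a bigger client
    job = max(feasible, key=lambda j: j.element_count)
    queue.remove(job)
    return job
```

Picking the most complex feasible tile keeps small clients from starving the big ones of easy work, but as Deelkar notes below, the over-sized tiles still go unrendered unless a sufficiently large client exists.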

Comment from Deelkar on 5 May 2008 at 21:49

Just to clarify: I'm not just another t@h user, I'm the de-facto lead client developer.
Just passing on tiles a client can't render would mean several tiles never get rendered at all, so a more general solution needs to be found.
or/p looks promising in that respect; it could make B) workable.

Comment from PhilippeP on 6 May 2008 at 08:33

One way of lightening the work would be to layer the data:
leave only basic map data like roads and landuse on the base layer, then make additional layer(s) for amenities, shops, ...
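A minimal sketch of such a split, classifying elements by their tags; the tag lists here are just example assumptions, not an agreed scheme:

```python
# Hypothetical tag classification: which keys belong on which layer.
BASE_KEYS = {'highway', 'waterway', 'railway', 'landuse', 'natural'}
OVERLAY_KEYS = {'amenity', 'shop', 'tourism', 'leisure'}

def layer_of(tags):
    """Route an element to 'base', 'overlay', or neither."""
    keys = set(tags)
    if keys & BASE_KEYS:
        return 'base'
    if keys & OVERLAY_KEYS:
        return 'overlay'
    return None

def split(elements):
    """elements: iterable of (element_id, tags) pairs."""
    layers = {'base': [], 'overlay': []}
    for elem_id, tags in elements:
        layer = layer_of(tags)
        if layer:
            layers[layer].append(elem_id)
    return layers
```

An element carrying both kinds of tags lands on the base layer here; a real scheme would need an explicit priority order.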

On the server side, why not try to squat on Google's servers?? It would also improve map data on Google Maps in some parts of the world ... :)

Comment from andrewpmk on 6 May 2008 at 10:22

Personally, I think that we should just use Mapnik and get rid of tiles@home. Mapnik is much, much more efficient than tiles@home, and if we make Mapnik update its database more frequently (like daily), then tiles@home will become redundant.
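A daily update along those lines would boil down to applying the day's change file to the rendering database; a rough sketch, assuming osm2pgsql's slim/append mode and hypothetical paths:

```python
import datetime
import subprocess

# Hypothetical paths; a real setup would download the diff first.
DIFF = '/data/diffs/%s.osc.gz' % datetime.date.today().isoformat()
DB = 'gis'

# osm2pgsql --append applies a change file on top of an existing
# --slim import instead of re-importing the whole planet.
subprocess.check_call([
    'osm2pgsql', '--append', '--slim',
    '--database', DB,
    DIFF,
])
```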

Comment from Deelkar on 6 May 2008 at 10:39

No, not redundant, but more the "debug" layer, or "show me everything" layer, as a lot of Map_Features is still not rendered.
Unfortunately I'm not well versed enough in DB stuff to work on Mapnik; however, I can work on making tiles@home better, so that's what I do.
