daniel-j-h's Comments
Post | Comment |
---|---|
RoboSat ❤️ Tanzania | Great! Keep me posted how it goes! :) Always happy to hear feedback. |
|
RoboSat ❤️ Tanzania | WebP or PNG does not matter. We can read all image formats supported by PIL https://pillow.readthedocs.io/en/5.3.x/handbook/image-file-formats.html |
|
RoboSat ❤️ Tanzania | In your dataset, for all z, x, y tiles you are interested in, there have to be parallel images and masks. The same applies to the validation dataset. Creating this dataset is on you and a bit out of scope here. |
|
RoboSat ❤️ Tanzania | Maybe visualize where you have GeoTIFF image tiles and where you have mask tiles. It could be that the GeoTIFFs just don’t cover all of the areas you extracted masks for. Otherwise try to reproduce this with a small GeoTIFF and/or a smaller area. And maybe try the gdal2tiles approach and see if the output is different. You need to debug this a bit - could be multiple problems. |
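One way to check coverage as suggested above is to diff the two Slippy Map directory trees. A minimal sketch, assuming a `root/z/x/y.ext` layout; the `tile_ids` and `coverage_diff` helpers are illustrative, not part of RoboSat's API:

```python
from pathlib import Path


def tile_ids(root):
    """Collect (z, x, y) ids from a Slippy Map directory tree root/z/x/y.ext."""
    return {(int(p.parts[-3]), int(p.parts[-2]), int(p.stem))
            for p in Path(root).glob("*/*/*") if p.is_file()}


def coverage_diff(images_dir, masks_dir):
    """Return (mask tiles without an image, image tiles without a mask)."""
    images, masks = tile_ids(images_dir), tile_ids(masks_dir)
    return masks - images, images - masks
```

Printing the two sets (or plotting the tile ids) shows immediately where the GeoTIFF tiles and the mask tiles diverge.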
|
RoboSat ❤️ Tanzania | You need to run the |
|
RoboSat ❤️ Tanzania | If you don’t want to install robosat on your own, we provide both CPU as well as GPU images you can docker-run directly: https://hub.docker.com/r/mapbox/robosat/ Regarding your problem cutting out an area from an osm base map: it sounds like either the cutting is creating invalid polygons or there are invalid polygons in the base map. In addition, it sounds like cutting out an area does not properly keep the way locations; that’s why you are running into the location problem. osmium-tool’s extract comes with sane default strategies to prevent all of these issues. What you can try if you are having a hard time installing osmium-tool is to use the mason package manager to get a pre-built binary:
|
|
RoboSat ❤️ Tanzania | I think osmium-tool comes with documentation for how to build it. For protozero and libosmium, I would just compile them from source, too, and not rely on the package manager. If you are running into compilation issues I would open a ticket with these libraries. You don’t strictly need osmium-tool; I just used it because I had it around. You can use any tool to cut a smaller area out of a larger pbf base map. |
|
RoboSat ❤️ Tanzania | The problem with using The alternative is to download the raw GeoTIFFs to your machine and run either gdal2tiles or the small tiler tool I wrote here to cut out tiles from the GeoTIFFs. You will find the GeoTIFF URLs in the OpenAerialMap API response (see the description above using the Hope that helps |
|
RoboSat ❤️ Tanzania | You will need two URLs with tokens: one for the compare map to load styles, and one for the tile server to fetch satellite or aerial imagery from to run segmentation on it. You will need to adapt the center locations for the compare map (for both the before and after map) here. The after map then adds a raster layer automatically requesting tiles from the
Again, I recommend using the |
|
RoboSat ❤️ Tanzania | Your
The serve tool is currently fixed at zoom level 18 since we only used it for initial debugging. If you are not using z18 you have to change it here:
You will definitely see many 404s since the serve tool’s tile server will only respond with e.g. z18 tiles, but the map will request e.g. z16, z17, z18, z19, all at the same time. In addition you probably want to adapt the map template’s initial location, zoom levels, and so on:
I recommend using the |
|
RoboSat ❤️ Tanzania | You will get the warning when the downloader was not able to download tiles from the list of tile coordinates you give it. This is probably due to the tile endpoint not providing imagery for all your tile ids. osm.wiki/Aerial_imagery is a good place to start when looking for imagery sources. |
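To spot-check whether a tile endpoint covers your area at all, it can help to compute a few tile ids by hand. The z/x/y ids follow the standard Slippy Map formula documented on the OpenStreetMap wiki; this `deg2num` sketch is the usual textbook version, not RoboSat code:

```python
import math


def deg2num(lat, lon, zoom):
    """Convert a latitude/longitude in degrees to Slippy Map tile (x, y) at zoom."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

You can then plug the resulting z/x/y into the endpoint’s URL template in a browser and see whether imagery actually comes back.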
|
RoboSat ❤️ Tanzania | Agree, for larger tiling jobs I recommend using proper tools like gdal2tiles. |
|
RoboSat ❤️ Tanzania | I can’t debug statements like “contains garbage images”. What does your GeoTIFF look like? Do you have a small self-contained example where I can reproduce this issue? What do the Slippy Map tiles look like? You can give gdal2tiles a try; the tiler script was just a quick way for me to tile my GeoTIFFs. |
|
RoboSat ❤️ Tanzania | It doesn’t matter where you get the bounding box from; I used http://geojson.io since it’s convenient to use. And yes, it is currently working for me. The bounding box is only used for cutting out a smaller base map from a larger |
|
RoboSat ❤️ Tanzania | Not sure why you’d want to run the RoboSat toolchain in Jupyter Notebooks, but I guess there is nothing stopping you from doing that. You have to install the RoboSat tools and their dependencies; then you can use the |
|
RoboSat ❤️ Tanzania | We train on (tile, mask) pairs, that’s right. But for prediction we buffer the tile on the fly (e.g. with 32px overlap on all sides), predict on the buffered tile, which now captures the context from its eight adjacent tiles, and then crop out the probabilities for the original tile again. This results in smooth borders across tiles in the masks. The probabilities you are seeing above might still not match 100% at the borders, but that’s fine. Here is the original tile and how the buffering during prediction affects it: the original tile in the middle, then the four corners from adjacent tiles added for buffering, and finally the fully buffered tile with context from its eight adjacent tiles. Without this buffering approach you would clearly see the tile borders, correct. |
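The buffer-predict-crop idea above can be sketched roughly like this. This is a simplified illustration, not RoboSat’s actual implementation: the 256px tile size, the 32px buffer, and the `model` callable (returning per-pixel probabilities with the same shape as its input) are all assumptions for the sketch:

```python
import numpy as np

TILE = 256  # tile edge length in pixels (assumed)
BUF = 32    # context overlap on each side (assumed)


def predict_buffered(tiles, z, x, y, model):
    """Predict on tile (z, x, y) with context borrowed from its eight neighbours.

    tiles: dict mapping (z, x, y) -> TILE x TILE numpy array
    model: callable taking a (TILE+2*BUF)^2 array, returning per-pixel probabilities
    """
    # Stitch the 3x3 neighbourhood onto one canvas; missing neighbours stay zero.
    canvas = np.zeros((3 * TILE, 3 * TILE), dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            neighbour = tiles.get((z, x + dx, y + dy))
            if neighbour is not None:
                canvas[(dy + 1) * TILE:(dy + 2) * TILE,
                       (dx + 1) * TILE:(dx + 2) * TILE] = neighbour

    # Crop the centre tile plus BUF pixels of context on every side, then predict.
    buffered = canvas[TILE - BUF:2 * TILE + BUF, TILE - BUF:2 * TILE + BUF]
    probs = model(buffered)

    # Crop the probabilities back down to the original tile footprint.
    return probs[BUF:BUF + TILE, BUF:BUF + TILE]
```

Because each prediction sees BUF pixels of its neighbours, adjacent tiles agree much more closely at their shared borders.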
|
RoboSat ❤️ Tanzania |
If you reach an IoU of 0.8 that’s pretty amazing, to be honest. Here’s why. There are two sides contributing to the IoU metric: your predictions can be off, but worse, the OpenStreetMap “ground truth” geometries can be off, too. Even with a perfect model you won’t reach an IoU of 1.0 since the OpenStreetMap geometries can be - and often are - coarse, or slightly misaligned, or not yet mapped in OpenStreetMap, etc. Here’s an interesting experiment: randomly sample from your dataset. Manually annotate the image tiles, generating fine-detailed masks. Now calculate the IoU metric on your manually generated fine-detailed masks and the automatically generated masks from OpenStreetMap. This will be the IoU upper bound you can reach. Also see this graphic to get a feel for the IoU metric.
Agree, without a GPU training will be slow. That said, I made some changes recently which should speed things up considerably:
If you want to give it a try with current master you should see improvements for your use-case. |
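For reference, the IoU metric discussed above is simple to compute on a pair of binary masks; this minimal `iou` helper is illustrative, not RoboSat’s implementation:

```python
def iou(pred, truth):
    """Intersection over Union for two binary masks given as flat lists of 0/1."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```

Running it on your manually annotated masks versus the OpenStreetMap-derived masks gives exactly the upper-bound experiment described above.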
|
RoboSat ❤️ Tanzania | The amount of images you will need for training can vary a lot and mostly depends on
For example, the more hard-negative iterations you do, the better the model can distinguish the background class. But hard-negative mining also takes quite a while. Same with the automatically created dataset: you can manually clean it up, but it is quite time-intensive. In addition you could do more data augmentation during training to further artificially embiggen the dataset, you could do test-time augmentation where you predict on the tile and its 90/180/270 degree rotations and then merge the predictions, you could train and predict on multiple zoom levels, and so on. I would say it also depends on your use-case. For detecting building footprints like in this guide a couple of thousand images are fine to get the rough shapes. It’s definitely not great for automatic mapping but that is not my intention in the first place. Regarding trained models: I recently added an ONNX model exporter to RoboSat which allows for portable model files folks can use with their backend of choice. I could publish the trained ONNX model for this guide since I did it on my own time. The Mapbox models I am not allowed to publish as of writing this. If there is community interest, maybe we can come up with a publicly available model catalogue hosting ONNX models and metadata where folks can easily upload and download models? |
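The test-time augmentation mentioned above can be sketched as follows. This is an illustration only; `model` stands for any callable returning per-pixel probabilities with the same shape as its input:

```python
import numpy as np


def predict_tta(tile, model):
    """Average predictions over the tile's 0/90/180/270 degree rotations."""
    probs = []
    for k in range(4):
        rotated = np.rot90(tile, k)     # rotate the input tile
        pred = model(rotated)           # predict on the rotated view
        probs.append(np.rot90(pred, -k))  # rotate the prediction back
    return np.mean(probs, axis=0)       # merge by averaging
```

Averaging the four aligned predictions tends to smooth out orientation-dependent mistakes at the cost of four forward passes per tile.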
|
RoboSat — robots at the edge of space! | Can you create an issue in the https://github.com/mapbox/robosat repository? I haven’t seen |