h4ck3rm1k3's Comments
Post | Comment |
---|---|
TwoNickels(dime) dxf2osm is running with polygons | The data file is here:
|
cs2cs help | Ahh,
x=358376.5
For the outputs, see the main routine...
It is all working now, so don't worry about it. |
|
cs2cs help | Sorry for the confusion. MText is the new part of Dime (two nickels).
I have made a simple proj tool in Dime as well, but I forgot to add it to git.
One of the problems in the old code was the radians: I had converted the northing and easting to radians before projecting. Now I do the radians step after I get the results. In fact, it works with the standard proj lib now, after I made the needed changes. Here is the current code:
Still, the code changes to proj are good; there were a lot of bad typecasts, casts removing const from chars, etc. Ideally we could have a set of templates, one for each projection, and inline the code; it would be much faster. To answer your questions: I had posted the parameters and results here with the code: The new code uses the standard proj interface:
void convertPoint(double x, double y, double & rx, double & ry)
The inputs: 358376.5 7753537. The outputs:
convertPoint goes from a UTM northing/easting to lat/lon.
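(Since only the signature survives above, here is a minimal sketch of the same idea using the classic proj.4 C API; the UTM zone/datum string is a guess, not the actual parameters from the post. The point is that the radians conversion happens on the output lat/lon, never on the input easting/northing.)

```cpp
// Sketch of the fixed convertPoint, using the classic proj.4 C API.
// The +zone/+datum string is an assumption, NOT the post's parameters.
#include <proj_api.h>
#include <cstdio>

// UTM easting/northing (meters) -> lon/lat (degrees).
void convertPoint(double x, double y, double& rx, double& ry)
{
    // Hypothetical CRS: UTM zone 22 south, WGS84 (error handling omitted).
    projPJ utm     = pj_init_plus("+proj=utm +zone=22 +south +datum=WGS84");
    projPJ latlong = pj_init_plus("+proj=latlong +datum=WGS84");

    rx = x;  // inputs stay in meters; NOT converted to radians first
    ry = y;
    pj_transform(utm, latlong, 1, 1, &rx, &ry, NULL);

    // The radians step belongs here, AFTER the transform.
    rx *= RAD_TO_DEG;
    ry *= RAD_TO_DEG;

    pj_free(utm);
    pj_free(latlong);
}

int main()
{
    double lon = 0, lat = 0;
    convertPoint(358376.5, 7753537.0, lon, lat);  // the inputs from the post
    printf("%f %f\n", lon, lat);
    return 0;
}
```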
|
Proj now working |
I have added the first results of the dxf-osm converter;
the code is checked in.
More to come. mike |
|
Brasil Coordinates Transform | Data File:
|
Very fast osm processing in C++ | There were memory leaks using xerces transcode: it was making copies of all the strings. I have replaced/removed all the unneeded string copies used for comparison and for parsing ints. I used valgrind to debug the memory leaks, and the problems are gone. I also turned off the reverse lookup of a Point by its string and removed it from the memory representation; I will look into using an R-tree or a quadtree for that later. mike |
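(A sketch of the pattern described above, assuming Xerces-C callbacks deliver XMLCh* strings; the tag and function names are illustrative, not from the actual repo.)

```cpp
// Sketch: avoid XMLString::transcode() heap copies when all we need
// is a comparison or an int parse. Requires Xerces-C.
#include <xercesc/util/XMLString.hpp>
using namespace xercesc;

// Transcode the constant ONCE (after XMLPlatformUtils::Initialize()),
// instead of transcoding every incoming name to a char* that leaks
// unless explicitly released.
static const XMLCh* nodeTag()
{
    static const XMLCh* tag = XMLString::transcode("node");
    return tag;
}

bool isNodeElement(const XMLCh* name)
{
    // Direct XMLCh* comparison -- no copy, nothing to release.
    return XMLString::equals(name, nodeTag());
}

int parseId(const XMLCh* value)
{
    // Parse the id straight from the XMLCh* string, no transcode.
    return XMLString::parseInt(value);
}
```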
|
Very fast osm processing in C++ | Now, to be fair, osm2pgsql does process the osm files in a very similar way, but it is not intended to be a generic processor. I am working on processing a much larger file now, the entire state file, and I have also started looking into inline bzip2 processing and the SAX reader. We should be able to fetch just parts of a world.osm.bz2 file and process it while downloading (using the blocks as they complete), but that is for the future; for now I will focus on the county processing. Well, here are the results of wc on the uncompressed NJ file: 18 million nodes, counted in 1.5 minutes.
wget http://downloads.cloudmade.com/north_america/united_states/new_jersey/new_jersey.osm.bz2
time wc new_jersey.osm
One thing that I have observed: the processing takes up an entire processor, but only one of four. That is why we need these splitting routines in general, so that we can process on multiple processors easily. Osmosis is nice, but I don't feel comfortable using it; it is pretty complex. Now, just running my county extractor on that takes a long time. I need to find out why.... mike |
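(A minimal sketch of the inline bzip2 idea with libbz2, processing each decompressed block as it completes; the consumer here is a stand-in.)

```cpp
// Sketch: stream-decompress an .osm.bz2 and hand each block to the
// parser as it completes, without writing the uncompressed file out.
#include <bzlib.h>
#include <cstdio>

// Stand-in consumer; a real version would feed the SAX reader.
static void processChunk(const char* buf, int n) { fwrite(buf, 1, n, stdout); }

int main()
{
    FILE* f = fopen("new_jersey.osm.bz2", "rb");
    if (!f) return 1;

    int bzerr = BZ_OK;
    BZFILE* bz = BZ2_bzReadOpen(&bzerr, f, 0, 0, NULL, 0);

    char buf[65536];
    while (bzerr == BZ_OK) {
        // Each call yields the next chunk of decompressed XML.
        int n = BZ2_bzRead(&bzerr, bz, buf, sizeof(buf));
        if (n > 0 && (bzerr == BZ_OK || bzerr == BZ_STREAM_END))
            processChunk(buf, n);
    }

    BZ2_bzReadClose(&bzerr, bz);
    fclose(f);
    return 0;
}
```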
|
Very fast osm processing in C++ | I have checked in the Makefile, using gcc -O4; I now have the bounding box calculation (a sketch of the idea follows below) and have refactored the classes.
Here are my system details (the last core listed in /proc/cpuinfo):
processor : 3
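(The bounding box sketch mentioned above: not the checked-in code, just an illustration of accumulating min/max while streaming nodes.)

```cpp
// Illustration only: accumulate a bounding box while streaming nodes.
#include <algorithm>
#include <limits>

struct BoundingBox {
    double minLat = std::numeric_limits<double>::max();
    double minLon = std::numeric_limits<double>::max();
    double maxLat = std::numeric_limits<double>::lowest();
    double maxLon = std::numeric_limits<double>::lowest();

    // Called once per <node lat=".." lon=".."> as the parser emits it.
    void extend(double lat, double lon) {
        minLat = std::min(minLat, lat);  maxLat = std::max(maxLat, lat);
        minLon = std::min(minLon, lon);  maxLon = std::max(maxLon, lon);
    }
};
```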
|
Very fast osm processing in C++ | OK, I have a perl script to generate xml constants here:
A schema file here:
The latest version has a makefile, and I have also generated a list of fields:
This is just a first version; more work is needed to create an optimum recogniser for the schema. It should be possible to generate a lex-like structure to process the rest, but for now I am doing switches based on the field names. This version looks up each node reference in the id -> coords table and also outputs the entire names database of the nodes, ways and relations. It runs in 10 seconds on my computer with a larger version of the osm file that has some duplicates, where I tried to resolve the missing nodes in the extract file.
For comparison, wc needs 5x less time.
So it is still fast, even though it is doing much more processing.
I am going to make some template classes for the processing of fields and the defining of structures... here is a start that I have not even compiled:
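(The uncompiled snippet itself did not make it into this page. Purely as an illustration of what such field-processing templates could look like; all names are hypothetical, not the author's code.)

```cpp
// Illustrative only: a small table of field definitions instead of
// ad-hoc switches on field-name strings. All names are hypothetical.
#include <cstddef>
#include <cstdlib>
#include <cstring>

struct OsmNode { long id = 0; double lat = 0, lon = 0; };

// One entry per field: the attribute name plus a setter for it.
template <typename Record>
struct FieldDef {
    const char* name;
    void (*set)(Record&, const char*);
};

static const FieldDef<OsmNode> nodeFields[] = {
    { "id",  [](OsmNode& n, const char* v) { n.id  = std::atol(v); } },
    { "lat", [](OsmNode& n, const char* v) { n.lat = std::atof(v); } },
    { "lon", [](OsmNode& n, const char* v) { n.lon = std::atof(v); } },
};

// Dispatch one attribute; a generated lex-like recogniser could
// later replace this linear scan with something faster.
template <typename Record, std::size_t N>
void setField(Record& r, const FieldDef<Record> (&fields)[N],
              const char* name, const char* value)
{
    for (const FieldDef<Record>& f : fields)
        if (std::strcmp(name, f.name) == 0) { f.set(r, value); return; }
}
```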
|
Very fast osm processing in C++ | Yes, I am rewriting that perl script in C++ now.
I don't want to collect any huge memory structure in the parser; the client should be able to do that. mike |
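(A sketch of that separation, assuming a SAX-style design: the parser only emits callbacks, and the client decides what to keep. Interface names are made up.)

```cpp
// Sketch: the parser owns no big structures; it only emits callbacks.
// Interface names are hypothetical, not from the actual repo.
#include <unordered_map>
#include <utility>

struct OsmHandler {
    virtual ~OsmHandler() {}
    virtual void node(long id, double lat, double lon) = 0;
    virtual void wayNodeRef(long wayId, long nodeId) = 0;
};

// The client, not the parser, chooses to build the id -> coords table.
struct CoordCollector : OsmHandler {
    std::unordered_map<long, std::pair<double, double> > coords;
    long missing = 0;

    void node(long id, double lat, double lon) override {
        coords[id] = std::make_pair(lat, lon);
    }
    void wayNodeRef(long, long nodeId) override {
        // Nodes normally appear before the ways that reference them.
        if (coords.find(nodeId) == coords.end())
            ++missing;
    }
};
```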
|
New version of osm2poly.pl to extract from the cloudmade admin borders | The upload is finished:
|
Polygon files for NJ ZCTA on the way | Here is the second part!
|
Polygon files for NJ ZCTA on the way | You can see the difference between the ZCTA and the "zip codes". Here are the ZCTAs:
There are differences that you can see between the two versions,
but we have to start somewhere! |
|
Polygon files for NJ ZCTA on the way | The first part is finished uploading:
|
Polygon files for NJ ZCTA on the way | I found a mashup that shows just what I am planning on doing,
Here is more info on zip codes:
So if anyone wants to add any information about them, do it there. mike |
|
Hacking the OSM tools today Osm2PgSql and Osm2Poly | Here is a nice tool to double-check a zip code if there are any questions: http://zip4.usps.com/zip4/welcome.jsp |
|
New Host for OSM data , archive.org | I have been playing with qgis, and it looks like there is a feature to create a convex hull based on an attribute value. So you could take these attribute values (post codes), create a convex hull, and then compare it to the ZCTA. That would give you a good start, because you could compare the areas with the biggest differences first. The other thing is that you can flag the nodes and ways that fall outside the ZCTA; that is what I was doing to check them. Maybe other states have more problems with the zip codes, but NJ looks very stable. mike |
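(Outside of qgis the same check could be scripted; here is a sketch of a convex hull via Andrew's monotone chain over the points sharing one postcode value. Purely illustrative.)

```cpp
// Sketch: Andrew's monotone chain convex hull over the node
// coordinates that share one postcode attribute. Illustrative only.
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Returns hull vertices in counter-clockwise order.
std::vector<Pt> convexHull(std::vector<Pt> pts)
{
    std::sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    std::size_t n = pts.size();
    if (n < 3) return pts;

    std::vector<Pt> hull(2 * n);
    std::size_t k = 0;
    for (std::size_t i = 0; i < n; ++i) {               // lower hull
        while (k >= 2 && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (std::size_t i = n - 1, t = k + 1; i > 0; --i) { // upper hull
        while (k >= t && cross(hull[k-2], hull[k-1], pts[i-1]) <= 0) --k;
        hull[k++] = pts[i-1];
    }
    hull.resize(k - 1);
    return hull;
}
```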
|
New Host for OSM data , archive.org | I was just following the wiki,
|
New Host for OSM data , archive.org | Yes, of course. In Germany I found power lines, security cameras and trees.
|
New Host for OSM data , archive.org | I have hacked osm2pgsql so that it imports the data from my feeds:
The data is loaded in qgis. I will be creating some postgres queries to split up the data and process it; that is at least my plan. I don't care whether the monolithic OSM database stores this data or not. In fact, I think it would be better to keep it separate until we find a better way to add in layers. Ideally the chunks of data will be usable directly from some GIT repository, and we will split them into very small but useful pieces. mike |
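(A sketch of the splitting step with libpq; the table and column names follow osm2pgsql defaults, but the query itself is hypothetical.)

```cpp
// Sketch: split the imported data with a per-boundary query via libpq.
// Table/column names are osm2pgsql defaults; the query is illustrative.
#include <libpq-fe.h>
#include <cstdio>

int main()
{
    PGconn* conn = PQconnectdb("dbname=gis");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    // One chunk per admin boundary: small, separately usable pieces.
    PGresult* res = PQexec(conn,
        "SELECT osm_id, name FROM planet_osm_polygon "
        "WHERE boundary = 'administrative' ORDER BY name");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); ++i)
            printf("%s %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```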