
Release v4.10.0 #3159

Closed
kocio-pl opened this issue Apr 1, 2018 · 12 comments

@kocio-pl
Collaborator

kocio-pl commented Apr 1, 2018

I plan to release v4.10.0 on April 20. I hope #3099 will be merged soon, as it will be quite an important feature. Even more important would be #3061, but testing is needed to be sure.

Any other important things to be done before the release?

@polarbearing
Contributor

I plan to make progress on the accommodation and museum outlines; however, I'm offline for a week until Apr 8.

@andrzej-r
Contributor

#3061 is now #3163. I've rebased it onto the current git master.

@matthijsmelissen
Collaborator

I agree it would be nice to release after #3163 has been merged.

@kocio-pl
Collaborator Author

kocio-pl commented Apr 19, 2018

I think this release is ready to be shipped feature- and time-wise, but I'm a bit worried about performance. I have a feeling that some of the recent changes in master eat much more CPU power. I will investigate, but I need some help: could anyone else confirm this effect (and possibly find the source of the problem)?

@kocio-pl
Collaborator Author

It looks like the performance hit is triggered by #3101, and reverting it is enough to get rid of the problem.

@matthijsmelissen
Collaborator

Yes, this is a likely cause of the performance problems. I don't have access to a PC now, so feel free to go ahead and revert it.

How did you notice the performance issue? And can you see whether it is an issue on all zoom levels?

@kocio-pl
Collaborator Author

kocio-pl commented Apr 20, 2018

Since we don't have any performance testing platform, I just had a feeling that something was running worse than usual. My standard test setup is z16 over the center of Warsaw, with the screen split between current OSM.org tiles and a rendering of the fresh code. Normally it works reasonably fast: when I drag the area once, the test rendering is ready only ~0.5 s after the tiles are fetched from the server, and it takes a single "tick" (step) to render all the missing tiles. I'm used to this usage pattern and it's quite comfortable for me.

In this case it took a few seconds for the rendering to catch up with the fetched tiles, and it finished in 2-3 ticks, not in one batch. Looking at the CPU usage pattern showed that the source of the problem is there. I usually worry more about disk I/O (which would mean the database query results are too big), but in this case it stayed low the whole time.

I haven't tested it at other scales (like midzoom), but they tend to be even slower; that's why I start with a high zoom level by default.

@andrzej-r
Contributor

How long does it take to re-render all tiles in OSM? I wonder if some tile servers still have old tiles, or if we have a problem with race conditions in the production environment.

The reason I suspect there may be an actual problem is that I've seen THREE different renderings of:

https://www.openstreetmap.org/way/170525607

  • as a building name, no icon or dot, just a gray label
  • as amenity=bank, with a brown icon and label
  • as a blue office dot with a brown amenity label.

Some renderings are obviously undesirable, which may be a bug in itself. However, as there are three different ones, we are not just switching from one version to another but there is also an inconsistency in how tiles are rendered on different servers.

@kocio-pl
Collaborator Author

It depends not only on rendering but also on the caching servers. In general, it might take a few days.

What is the real problem with the different renderings? The blue dot with a brown label is a bug for me, but the other two seem to be proper renderings, and that could just be a wrong-tagging problem.

@andrzej-r
Contributor

OK, then it makes sense to wait a bit more for all the rendering to finish.

There is definitely one bug already; it is even visible in the test rendering (please check different zoom levels). I just hadn't noticed it, as it is a fairly uncommon combination of tags.

Does CartoCSS provide any guarantees about the order in which clauses are evaluated? We have code testing for the amenity, office and building tags separately. If the order were unspecified and/or random, we would have to add a lot more checks to handle overlapping cases.

@andrzej-r
Contributor

Nah, that's an actual bug, also visible in kosmtik and perfectly reproducible. The way the CSS is evaluated is that the whole section/layer (e.g. .points) is (surprise!) processed sequentially. That is, when the parser encounters:
[feature = 'amenity_bank'][zoom >= 17] { ... }
it doesn't stop there. Instead it keeps evaluating all the other sections, and when it then finds:
[office != null][zoom >= 17] { ... }
it executes that as well, overriding colours, icon sizes etc.

In addition to the obvious issues we see (and will see more of as we increase the number of supported tags), this also has an impact on rendering speed.

To solve it, we could pepper the code with extra checks ([office != null][amenity = null]). But I think it could be better to somehow end processing the CSS at the end of the amenity_bank section instead.
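
For illustration, here is a minimal sketch of the overlap and the extra-check workaround, assuming the column names used above (feature, office, amenity) and placeholder colour variables (@amenity-brown, @office-blue); this is not the project's actual style sheet:

/* Both rules match a bank node that also carries an office tag; because the
   layer is processed sequentially, the later rule overrides the earlier one. */
[feature = 'amenity_bank'][zoom >= 17] { marker-fill: @amenity-brown; }
[office != null][zoom >= 17] { marker-fill: @office-blue; }

/* The extra-check workaround: guard the generic office rule so it no longer
   fires for objects that already match an amenity rule. */
[office != null][amenity = null][zoom >= 17] { marker-fill: @office-blue; }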
