The OSMF Board has plenty of people who care about building community. Yet we keep pushing one significant thing out of mind: infrastructure.
There are munin graphs on server availability, yes. There are almost no graphs on the APIs’ availability. How fast do 99% of users get what they need from the APIs? Is it below 100 ms (as it should be in 2016) or closer to 30 s?
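Tracking that number is cheap once request timings are collected. A minimal sketch of the p99 calculation, using the nearest-rank method on made-up timings (not real OSMF data):

```python
import math

def percentile(durations, pct):
    """Nearest-rank percentile of a list of request durations."""
    if not durations:
        raise ValueError("no samples")
    ordered = sorted(durations)
    # Nearest-rank method: take the ceil(pct/100 * n)-th smallest sample.
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical timings scraped from an access log, in seconds.
samples = [0.04, 0.05, 0.06, 0.08, 0.09, 0.11, 0.12, 0.35, 1.8, 29.7]

print(f"p99 = {percentile(samples, 99):.2f}s")  # → p99 = 29.70s
```

Note how a single pathological request dominates the p99 here: that is exactly why median-only graphs hide the experience of the unluckiest users.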
In 2012 I started the wiki page on expanding OSMF-hosted tile servers into a global CDN. In 2016 this CDN serves 6 TB a day, a volume that would cost about $68,000 a month commercially. Shall I remind you that we collect just €70,000 for a whole year of operations, completely ignoring the fact that there are effectively already ~$1,000,000 of yearly donations to the OSM Foundation?
While the CDN itself works like a charm, problems appear in other spots. Rendering nodes are close to 90% of their capacity, and a single large enough new site can easily push that over 100%. Currently people immediately point a finger at such a site, shout «OFFENDER!», and ban it from OSM tile usage. The truth is, you cannot serve more than you have. Wikimedia is setting up separate tile infrastructure with the same stylesheets, simply because pointing Wikipedia users at OSMF servers would blow them away in seconds. We’d better think about how we can provide more tiles.
There is no messaging API. That was fine when all the editors lived on the osm.org website, or at least on a PC, but now we have new mobile editors (most visibly maps.me, though others may follow) that can send data to the OSM servers, yet have no way to learn that someone has tried to contact the user, and no way to show a notification about it.
There are no more planet.gpx dumps. The last one was made in 2013, and progress has been dead since then. See http://planet.openstreetmap.org/gps/: we can’t even push enough traces into OpenStreetMap to build a complete speed profile for a single city. Doroga.tv tried that once, blew the OSM servers away, and is still a top traces contributor. Did we learn anything from that? Could we ingest a similar amount of data now, when there are many more services able to provide it? The answer for now is «no».
OpenStreetMap runs one of the largest public Postgres instances in the world, with 6 TB of data. It is still on Postgres 9.1, without all the later performance improvements and without JSONB, which was created exactly for efficient handling of tags. What if we ask a database vendor like https://postgrespro.ru/ or http://www.enterprisedb.com/ to support it and report on how it could do better?
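To make the JSONB point concrete: the query pattern the API runs constantly is «find elements whose tags contain these key/value pairs». On 9.1 that means scanning key/value rows; JSONB (Postgres 9.4+) serves it with the `@>` containment operator backed by a GIN index. A toy Python sketch of that pattern, with made-up elements (the real schema differs):

```python
# Toy illustration of the tag-containment lookup that Postgres JSONB
# can answer via `tags @> '{"highway": "primary"}'::jsonb` with a GIN
# index, instead of scanning key/value rows. Elements are made up.

elements = [
    {"id": 1, "tags": {"highway": "residential", "name": "Main St"}},
    {"id": 2, "tags": {"highway": "primary", "maxspeed": "60"}},
    {"id": 3, "tags": {"building": "yes"}},
]

def with_tags(elements, wanted):
    """Return elements whose tags contain every key/value pair in `wanted`
    (the Python analogue of SQL `tags @> wanted`)."""
    return [e for e in elements if wanted.items() <= e["tags"].items()]

print([e["id"] for e in with_tags(elements, {"highway": "primary"})])  # → [2]
```

The database does this with an index lookup rather than a per-row loop, which is exactly the kind of gain a vendor report could quantify on our data.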
I’m going to raise issues like these and make constructive attempts at resolving them:
- get in touch with database vendors and collect their recommendations on what could be done better;
- get monitoring done in terms of what experience the user gets, in addition to the existing «how loaded are our servers»;
- analyse community issues stemming from lack of infrastructure. Example: if we can’t contact maps.me users, let’s create means of contact targeted at them;
- make the rhetoric of «we live on donated capacity, go away and set up your own» give way to «we live on donated capacity, donate and we’ll live better together»;
- clean up the TIGER import and similar imports that were done long ago and that we’ve grown used to, at least in terms of the tags used: no more «tiger:cfcc» and «tiger:tlid».
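The tag cleanup in that last point is largely mechanical. A minimal sketch of what it means per element, assuming tags as a plain dict (which tiger:* keys to drop, and whether e.g. tiger:reviewed stays, is of course a community decision):

```python
# Minimal sketch: strip leftover TIGER bookkeeping tags from an element's
# tag dict. Keys like tiger:cfcc and tiger:tlid are import bookkeeping,
# not map data; this version drops every tiger:* key for illustration.

def strip_tiger_tags(tags):
    """Return a copy of the tag dict without tiger:* keys."""
    return {k: v for k, v in tags.items() if not k.startswith("tiger:")}

way_tags = {
    "highway": "residential",
    "name": "Oak Street",
    "tiger:cfcc": "A41",
    "tiger:tlid": "123456789",
}
print(strip_tiger_tags(way_tags))  # → {'highway': 'residential', 'name': 'Oak Street'}
```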
Member of the OpenStreetMap Foundation since 2010-05-24. It was tricky back then: no PayPal in Belarus, no way to register and prolong membership for years, yet I did it :)
Sponsoring two servers for the tile CDN, serving most of Eastern Europe:
Vote for me if you want to grant me the opportunity to write «can your company help us?» e-mails not as «Komzpa, someone from Belarus who cares about OSM», but as «someone from the OSMF board».
If you decide to vote for me and can’t decide who else to vote for, please vote for Kate Chapman aka Wonderchook. Her manifesto: https://wiki.openstreetmap.org/wiki/User:Wonderchook/2016_OSMF_Board_Elections_Manifesto
Overall info about the elections: