After hanging out on my daily todo list many, many, many times, I've finally moved the earthquakes today site.
Once upon a time, I was hosting it on Amazon S3 and building it on a VPS. I guess I should've mentioned that it's a Middleman project, and Middleman excels at building static sites.
But the constant updates of the site had doubled my monthly S3 bill. At the time, it was the difference between 20 and 40 cents, and OMG, though I knew it was silly to fret over 20 cents, I fretted anyway.
So I rearchitected the site to host it on Heroku. I basically threw in some code to check the age of the earthquake YML file, and if it was out of date, the app would go out and update it (while the user waited for the server to respond).
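The freshness check was roughly this shape. This is a sketch, not the site's actual code; the file name and the fifteen-minute threshold are made up for illustration:

```ruby
# Hypothetical names: the real data file path and refresh window
# aren't shown in the post.
QUAKE_FILE = "data/earthquakes.yml"
MAX_AGE    = 15 * 60 # assume data older than 15 minutes is stale

# True when the data file is missing or its mtime is older than max_age seconds.
def stale?(path, max_age: MAX_AGE)
  !File.exist?(path) || (Time.now - File.mtime(path)) > max_age
end
```

In a request handler, something like `refresh_earthquake_data! if stale?(QUAKE_FILE)` would run before rendering, which is exactly why the visitor ended up waiting on the fetch.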
As convoluted as it was, it actually worked well.
But something changed over at Heroku at some point and the site's been crashing constantly.
What's really odd is that I've got a few other similarly programmed sites, and I have no idea why this particular site is the one crashing.
It's always resolved by a simple `heroku restart`. But what really bothers me is: why didn't the Heroku system detect a problem and restart the dyno automatically?
Am I expecting too much? I thought the whole point of Heroku was to offload sysadmin work?
I tried a few things to resolve the Heroku issue, but really I knew I should just move the site back to something slightly more sane, especially now that I have several earthquake sites that could share the raw data.
So finally, the earthquakes today site is back to being pre-built. The user no longer has to wait while the system fetches fresh earthquake data; that happens in the background.
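The background refresh can be as simple as a cron entry on the build machine. This is an assumed setup, with made-up paths, schedule, and task names (the post doesn't say how the rebuild is actually triggered):

```
# Hypothetical crontab: every 15 minutes, fetch fresh data,
# rebuild the static site, and sync it to the host.
*/15 * * * * cd /home/deploy/earthquakes && rake fetch_data && middleman build && middleman s3_sync
```

The point of the design is that the fetch-and-build cycle runs on a schedule, decoupled from page requests, so a slow or failed update never blocks a visitor.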
Oh, I'm sure the site will still crash. But at least I'll have a clue as to what's going on.