Another quick tip about databases, this time about throwing them around between development, staging and production.
This is easiest to explain by example.
At gomore.dk we facilitate ride sharing between our members. Say I get a support email from a member, explaining that a ride she just created isn’t showing up in the search results.
I first impersonate her on the production site, take a look at the ride she created and verify that it is indeed not present in the search results. No one else seems to be having this problem and I can’t see anything unusual about the ride.
Clearly I want to reproduce this locally.
My development database is now an exact clone of production, so unless something really nasty is going on, I’ll be able to reproduce and debug the issue on my laptop.
If for some reason I want to try out a fix on our staging server, getting the data there is easy as well.
Our database is not huge (we probably have about 500k rows total) so these scripts take only a few seconds to run. Here they are:
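The scripts boil down to a single pipeline each. A minimal sketch of the two variants follows; the hostnames here are placeholder assumptions, while db_name and the gomore user come from the option notes further down:

```shell
#!/bin/bash
# Sketch of the two sync scripts. prod.example.com and
# staging.example.com are placeholders -- adjust to your setup.
set -euo pipefail

sync_prod_to_dev() {
  # Dump on the production server, gzip on the fly, stream the
  # compressed bytes home over ssh, gunzip locally, and restore
  # into the local development database.
  ssh prod.example.com "pg_dump -x -O -F t db_name | gzip" \
    | gunzip \
    | pg_restore -c -x -U gomore -d db_name
}

sync_prod_to_staging() {
  # Same idea, except the restore end is another remote server.
  ssh prod.example.com "pg_dump -x -O -F t db_name | gzip" \
    | ssh staging.example.com "gunzip | pg_restore -c -x -U gomore -d db_name"
}
```

Nothing is written to disk along the way; each stage streams into the next.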
As you can see, these scripts work by dumping the DB on one server, gzipping it on the fly, transferring the bits to another server via ssh, gunzipping them there and finally restoring the DB.
I’m not going to go into detail about the basic operation, but be sure to understand it or read up on the constructs used here, because the combination of ssh and pipes is extremely versatile and useful.
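To see why the pattern composes so well, note that any producer | consumer pair can straddle an ssh hop. A small local demonstration of the same streaming idea (the ssh example in the comment uses a placeholder hostname):

```shell
# The same pattern copies a directory tree to a remote machine
# without any intermediate file (host.example.com is a placeholder):
#   tar cz some_dir | ssh host.example.com "tar xz -C /destination"
#
# Locally, the compress/decompress stages round-trip cleanly:
echo "round trip" | gzip | gunzip   # prints "round trip"
```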
A few notes about the options given to the Postgres tools:
For pg_dump:

- -x skips privileges (GRANT commands).
- -O skips ownership of the dumped objects. This means that the DB owner becomes the owner of the objects on the receiving side.
- -F t sets the output format to “tar”. This allows me to use option -c when invoking pg_restore.

For pg_restore:

- -c deletes database objects before recreating them.
- -x, as above; not strictly necessary here.
- -U gomore connects to the Postgres server as the gomore user.
- -d db_name specifies the destination database name.
pg_restore reads from stdin because I didn’t specify a filename argument.
Thanks to Jørgen Erichsen for showing me very similar scripts a few years ago.
If you have huge tables in your DB, it can be useful to skip them with pg_dump’s -T (--exclude-table) option, or to keep their schema but skip their rows with --exclude-table-data.
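For example, the dump side of the pipeline could become something like this (access_logs is a hypothetical table name):

```shell
dump_without_logs() {
  # Skip the access_logs table entirely with -T. To keep its schema
  # but drop its rows, use --exclude-table-data=access_logs instead.
  pg_dump -x -O -F t -T access_logs db_name | gzip
}
```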