Women in Technology


  Automating PostgreSQL Tasks
Subject:   Some other helpful hints
Date:   2004-12-12 17:37:48
From:   pspinler
Good stuff. We do very similar things in our shop. Here are a couple of additional, possibly useful hints from what we do:

1) Make a script per entity.

Assume that you're maintaining an existing application and have to make incremental changes to its database. A script per entity lets you change one entity at a time.
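As a sketch, a per-entity layout might look like this (the directory, table names, and DDL are made up for illustration):

```shell
#!/bin/sh
# Hypothetical per-entity layout: one DDL script per database object.
mkdir -p /tmp/schema_demo/tables /tmp/schema_demo/functions

cat > /tmp/schema_demo/tables/foo_table.sql <<'EOF'
CREATE TABLE foo_table (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
EOF

cat > /tmp/schema_demo/tables/bar_table.sql <<'EOF'
CREATE TABLE bar_table (
    id     serial PRIMARY KEY,
    foo_id integer REFERENCES foo_table (id)
);
EOF

# To change only one entity, apply only its script, e.g.:
#   psql -f /tmp/schema_demo/tables/foo_table.sql mydb
ls /tmp/schema_demo/tables
```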

2) Use a revision control system. Using CVS, or better, something like SVN or Darcs, on the scripts gives you a lot more flexibility.

If you have, say, separate development, test, and production databases for an application, where a change starts in development, then migrates to test, and finally to production, here's where a revision control system really shines.

2.a) Use tags. Tag specific script revisions with the environment name, e.g. "cvs tag -r 1.6 TEST foo_table.sql".

Then, to reconstruct the test database, pull everything with a TEST tag. When the next change is migrated in, move the TEST tag to that revision of the script; DEVELOPMENT-tagged scripts are what you'd use to rebuild development, and so on.
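A CVS repository isn't always handy to demonstrate, so here is the same tag-per-environment workflow sketched with git, whose tags behave similarly for this purpose (the repository path and file contents are hypothetical):

```shell
#!/bin/sh
# Tag-per-environment workflow, sketched with git instead of CVS.
set -e
repo=/tmp/tagdemo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'CREATE TABLE foo_table (id integer);' > foo_table.sql
git add foo_table.sql
git commit -qm 'initial schema'
git tag TEST                       # the TEST environment is at this revision

echo 'ALTER TABLE foo_table ADD COLUMN name text;' >> foo_table.sql
git commit -qam 'add name column'  # development moves ahead

# Rebuild the test database from exactly what TEST points at:
git show TEST:foo_table.sql

# When the change is migrated to test, move the tag forward:
git tag -f TEST
```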

3) Don't be afraid to use the load/unload programs provided by the database to create your scripts and data files. For instance, "pg_dump -a -t <table>" dumps only the data for a table, and "pg_dump -s -t <table>" dumps only its schema.

Why, you ask? Because then you can let developers or other users make changes to schema or data; after you review the changes, you can dump them to a file and start migrating them to the test or production databases.
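For example, a pair of thin wrapper functions around those pg_dump invocations (the function names are my own; the flags are the standard pg_dump ones quoted above, and the example runs as a dry run so no database server is needed):

```shell
#!/bin/sh
# Thin wrappers around pg_dump: -s = schema only, -a = data only,
# -t = restrict to one table. Function names are hypothetical.
dump_schema() {   # usage: dump_schema <db> <table>
    ${PG_DUMP:-pg_dump} -s -t "$2" "$1" > "$2.schema.sql"
}
dump_data() {     # usage: dump_data <db> <table>
    ${PG_DUMP:-pg_dump} -a -t "$2" "$1" > "$2.data.sql"
}

# Dry run for illustration: substitute echo for the real pg_dump binary.
cd /tmp
PG_DUMP="echo pg_dump"
dump_schema mydb foo_table
cat foo_table.schema.sql   # prints: pg_dump -s -t foo_table mydb
```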

You could even set up a web- or script-based set of functions to dump out schema, check it into cvs/svn, and tag it; then another set of scripts or web pages to check out a specific set of tags and migrate it to a new database or rebuild a development database.
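A minimal sketch of such a dump/check-in/tag pipeline, with each real command stubbed out so it is safe to execute anywhere (the function name and log file are hypothetical; swap run() to "$@" to execute for real):

```shell
#!/bin/sh
# Pipeline sketch: dump a reviewed schema, check it in, move the env tag.
# run() only logs the command it would execute, so this is a dry run.
: > /tmp/publish.log
run() { echo "+ $*" | tee -a /tmp/publish.log; }

publish_schema() {   # usage: publish_schema <table> <db>
    run pg_dump -s -t "$1" "$2"                 # 1. dump the schema
    run cvs commit -m "schema change" "$1.sql"  # 2. check it in
    run cvs tag -F TEST "$1.sql"                # 3. move the TEST tag (-F = force)
}

publish_schema foo_table mydb
```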

Anyway, I hope folks find this interesting. Good luck.

-- Pat


  • Some other helpful hints
    2005-01-12 13:29:44  tomlag

    There is another way to manipulate database objects. We use nmake from AT&T Labs together with the Perl DBI and DBD modules. We create OPERATORs to do database-specific tasks. Everything is coded in the OPERATOR. It makes maintenance really easy.

    A makefile would look like this:

    include DBrules.mk


    That's it.

    The operator scans the current directory and creates all the necessary dependency relationships for tables, indices, constraints, procedures, triggers, static data, etc. It then builds the ones that are out of date, as well as anything that depends on the out-of-date targets.

    Just modify the objects that need to change and rerun nmake with a database context and everything is taken care of. Only the objects that were changed are rebuilt. There is even a reload feature to reload a structure change to a table automatically.
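    The thread doesn't show what an OPERATOR actually contains, but as a rough GNU make analogue of "rebuild only the objects that changed", something like this hypothetical makefile captures the idea (the psql invocation and stamp-file scheme are my own assumptions):

```make
# Hypothetical GNU make analogue of the nmake setup: one .sql file per
# database object; a stamp file records when it was last applied.
SQL    := $(wildcard *.sql)
STAMPS := $(SQL:.sql=.applied)

all: $(STAMPS)

# Re-apply a script (and refresh its stamp) only when the .sql changed.
%.applied: %.sql
	psql -f $< $(DB)
	touch $@
```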

    I know this is quite vague but it really works. We can do major releases with major database structure changes within a half hour.

    If anybody is interested, I can elaborate more.