
Is there a way to move a CMS repository to a different database vendor?

Hello,

We want to move the CMS repository to a different database. We currently have it in Postgres and want to move it to SQL Server. Is there a way to port from one database to the other?

Thanks,

Rob Clark

4 replies
  • Hi Rob,

    We've just recently completed this exercise: we migrated the Pyramid repo from Postgres to SQL Server. I'm not sure if this is what you're wanting to do?

     

    I'll try to briefly outline the steps. We moved from the default install of Postgres provided by Pyramid to an Azure 1600 DTU SQL DB.

    1) Install Pyramid on a separate VM and point it to the new SQL DB during installation so that it can create all the tables, indexes, and whatever else it needs.
    2) Use a tool similar to DBSoft (https://www.dbsofts.com/articles/postgresql_to_sql_server/) to copy the data from the PostgreSQL DB to the SQL DB.
    3) Copy the repo section containing the connection details and encrypted password from the separate Pyramid server's config.ini file, then overwrite the existing connection details in the config.ini file in each existing Pyramid install folder on the machines in the cluster (make sure to have a backup).
    4) Restart the Pyramid services or servers, and they should then start using the SQL DB as their repository.

    Things we discovered afterwards: 
    1) The size of the DB grew around threefold. We haven't found an answer as to why; perhaps PostgreSQL just compresses its data more aggressively.

    2) Our SQL magicians had to add clustered columnstore indexes to some of the large tables, like te_schedule and the content tables.
    3) We initially used a 400 DTU Azure SQL DB, since it was the closest in spec to the VM we had PostgreSQL running on. We encountered performance issues and had to upgrade to the aforementioned 1600 DTU DB.

    4) It took around 8 hours to transfer all the data. Our PostgreSQL DB was around 40 GB; the SQL DB ended up close to 130 GB, and after adding the clustered indexes the size dropped to around 80 GB.
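For reference on point 2, a clustered columnstore index on SQL Server is a single DDL statement per table, and a small helper can generate those statements for the tables you identify as large. `te_schedule` comes from this thread; the `cci_` index-name prefix is just a naming convention chosen here, not anything Pyramid mandates:

```python
def columnstore_index_sql(tables):
    """Build one T-SQL CREATE CLUSTERED COLUMNSTORE INDEX statement
    per table. Run the results against the SQL Server repository DB
    (e.g. via sqlcmd or SSMS) after the data load completes."""
    return [
        f"CREATE CLUSTERED COLUMNSTORE INDEX cci_{t} ON {t};"
        for t in tables
    ]

# e.g. for the large repository tables mentioned above:
statements = columnstore_index_sql(["te_schedule"])
```

Columnstore compression is also the likely reason the DB shrank from ~130 GB back to ~80 GB after these indexes were added.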

     

    Also, make sure to test this thoroughly before doing it.

     

    Regards,

    Devon

      • NPANS · 3 mths ago

      Devon Hunter 400 DTUs is the equivalent of 4 CPUs. That is an incredibly low footprint for MS SQL running on Windows. 16 CPUs sounds a lot more reasonable. We, for instance, run our MS SQL instance on 32.

      Separately, we found that cleaning out the log tables before the transition heavily reduced the payload size. Since the log tables are not critical to ongoing operations, this alone eliminated over 65% of the initial payload.

    • NPANS 

      Yup, I know, that's why I listed it so people can learn from our mistakes.

      We wanted to clear the logs but due to business requirements, we couldn't. The only logs we cleared were the Transaction Text Logs.

  • Thank you Devon! This is very helpful!

  • Status: Answered
  • Last active: 3 mths ago