postgresql: copy a remote Postgres database to a second remote server

o4hqfura asked on 2023-06-22 in PostgreSQL

I currently have a prod database and a test database, each on its own Azure Postgres server. I want to take a nightly backup of the prod database into test, so that every morning the two are identical. My tables have constraints and keys, so I cannot just copy the data itself; the schema has to come across too, which means a simple pandas df.to_sql will not cover it.
My current plan is to run an Azure Functions Python script every night to do the copy. I tried SQLAlchemy, but ran into significant problems copying the metadata correctly. I am now trying Postgres's own pg_dump and pg_restore/psql commands via subprocess, with the following code:

def backup_database(location, database, password, username, backup_file):
    # Use pg_dump command to create a backup of the specified database
    cmd = [
        'pg_dump',
        '-Fc',
        '-f', backup_file,
        '-h', location,
        '-d', database,
        '-U', username,
        '-p', '5432',
        '-W',
    ]
    subprocess.run(cmd, check=True, input=password.encode())

def clear_database(engine, metadata):
    # Drop all tables in the database
    metadata.drop_all(bind=engine, checkfirst=False)

def restore_database(location, database, password, username, backup_file):
    # Use pg_restore command to restore the backup onto the database
    # cmd = ['pg_restore', '-Fc', '-d', engine.url.database, backup_file]
    cmd = [
        'pg_restore',
        '-Fc',
        '-C',
        '-f', backup_file,
        '-h', location,
        #'-d', database,
        '-U', username,
        '-p', '5432',
        '-W',
    ]

    try:
        subprocess.run(cmd, check=True, capture_output=True, text=True)
        print("Backup restored onto the test server.")
    except subprocess.CalledProcessError as e:
        print("Error occurred while restoring the backup:")
        print(e.stdout)  # Print the output from the command
        print(e.stderr)  # Print the error message, if available

# Define backup file path
backup_file = '/pathtofile/backup_file.dump'  # Update with the desired backup file path
backup_file2 = 'backup_file.dump'  # Update with the desired backup file path

# Backup the production database
backup_database(input_host, input_database, input_password, input_user, backup_file)
print("Backup of the production database created.")

# Create metadata object for test server
output_metadata = MetaData(bind=output_engine)

clear_database(output_engine, output_metadata)
print("Test server cleared.")

restore_database(output_host, output_database, output_password, output_user, backup_file2)
print("Backup restored onto the test server.")

This code appears to create the dump file, but it does not successfully restore it to the test database. If I do get this code working, how do I specify the file path inside Azure Functions, and is this a reasonable solution to run from Azure Functions at all? If not, how can I get SQLAlchemy to successfully clear the test data/metadata and then copy the data over from prod every night?
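For reference, a minimal sketch of how pg_restore is typically pointed at the target server: -f tells pg_restore to write its output to a file rather than restore into a database (restoring needs the -d flag that is commented out above), and -W forces an interactive password prompt, so the PGPASSWORD environment variable is the usual non-interactive choice. In Azure Functions the temp directory is a reliably writable location, so the dump path below is built with tempfile; the connection values are placeholders, not taken from the question:

import os
import subprocess
import tempfile

# Placeholder connection details for the test server
output_host = "test-server.postgres.database.azure.com"
output_database = "testdb"
output_user = "adminuser"
output_password = "..."

# Azure Functions can reliably write to the temp directory
backup_file = os.path.join(tempfile.gettempdir(), "backup_file.dump")

# Restore into the target database: -d selects the database,
# PGPASSWORD replaces the interactive -W prompt
env = dict(os.environ, PGPASSWORD=output_password)
cmd = [
    'pg_restore',
    '--clean', '--if-exists',  # drop existing objects before recreating them
    '-h', output_host,
    '-d', output_database,
    '-U', output_user,
    '-p', '5432',
    backup_file,
]
subprocess.run(cmd, check=True, env=env)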

3hvapo4f1#

I referred to the MSDOC for Psycopg and PostgreSQL:

import psycopg2

src_conn_string = "SourceConnectionString"
dst_conn_string = "DStConnectionString"

try:
    src_conn = psycopg2.connect(src_conn_string)
    src_cursor = src_conn.cursor()
    print("Connected to source database.")
    try:
        dst_conn = psycopg2.connect(dst_conn_string)
        dst_cursor = dst_conn.cursor()
        print("Connected to destination database.")
        try:
            # List every base table in the public schema of the source database
            src_cursor.execute(
                "SELECT table_name FROM information_schema.tables "
                "WHERE table_schema='public' AND table_type='BASE TABLE'"
            )
            tables = src_cursor.fetchall()
            for table in tables:
                src_cursor.execute("SELECT * FROM {0}".format(table[0]))
                rows = src_cursor.fetchall()
                for row in rows:
                    # Parameterised insert so quoting, NULLs and dates are handled by the driver
                    placeholders = ", ".join(["%s"] * len(row))
                    dst_cursor.execute(
                        "INSERT INTO {0} VALUES ({1})".format(table[0], placeholders),
                        row,
                    )
            print("Data transferred successfully.")
        except psycopg2.Error as e:
            print("Error transferring data: ", e)
        finally:
            dst_conn.commit()
            dst_cursor.close()
            dst_conn.close()
            print("Destination database connection closed.")
    except psycopg2.Error as e:
        print("Error connecting to destination database: ", e)
    finally:
        src_cursor.close()
        src_conn.close()
        print("Source database connection closed.")
except psycopg2.Error as e:
    print("Error connecting to source database: ", e)

Output: screenshots of the source and destination databases in Azure.
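This approach copies row data only, so the destination tables (with their schema, keys and constraints) must already exist, and tables linked by foreign keys may need to be loaded in dependency order. For larger tables, a bulk insert with psycopg2.extras.execute_values is a common speed-up over row-by-row inserts; a minimal sketch, assuming the same src_cursor, dst_cursor and table variables as above:

from psycopg2.extras import execute_values

# Fetch every row of one table and insert them in a single statement
src_cursor.execute("SELECT * FROM {0}".format(table[0]))
rows = src_cursor.fetchall()
if rows:
    execute_values(
        dst_cursor,
        "INSERT INTO {0} VALUES %s".format(table[0]),
        rows,
    )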
