
Hyper-Focused Database Backup Automation Script for Freelance Developers

Stop doing this manually. Deploy an autonomous Operator agent that handles database backup automation entirely in the background.

Zero-Shot Command Setup

generate a daily database backup script for PostgreSQL database 'client_db', including rotation for 7 days, to S3 bucket 'client-backups-2023' with encryption.

Core Benefits & ROI

  • Data Security & Integrity
  • Disaster Recovery Preparedness
  • Time Savings
  • Automated Compliance
  • Peace of Mind

Ecosystem Integration

This agent is a cornerstone of the "Operations & Maintenance" pillar and critical for "Development & Coding Assistance." It directly assists freelance developers in implementing robust, automated data protection strategies for their client projects. By generating a ready-to-deploy script, it not only saves significant development time but also ensures that critical client data is regularly backed up and safely stored, adhering to best practices and providing crucial disaster recovery capabilities.

Sample Output

```bash #!/bin/bash # Configuration DB_NAME="client_db" DB_USER="backup_user" DB_HOST="localhost" # Or your database host S3_BUCKET="client-backups-2023" BACKUP_DIR="/tmp/db_backups" # Temporary local directory for backup files TIMESTAMP=$(date +"%Y%m%d_%H%M%S") BACKUP_FILE="${DB_NAME}_${TIMESTAMP}.sql.gz" MAX_BACKUPS=7 # Number of backups to retain in S3 # Ensure temporary backup directory exists mkdir -p $BACKUP_DIR # 1. Perform PostgreSQL backup echo "Starting PostgreSQL backup for ${DB_NAME}..." PGPASSWORD="your_db_password" pg_dump -h $DB_HOST -U $DB_USER $DB_NAME | gzip > ${BACKUP_DIR}/${BACKUP_FILE} if [ $? -eq 0 ]; then echo "Database backup created: ${BACKUP_DIR}/${BACKUP_FILE}" else echo "ERROR: Database backup failed." exit 1 fi # 2. Upload to S3 with encryption echo "Uploading backup to S3://${S3_BUCKET}/${BACKUP_FILE}..." aws s3 cp ${BACKUP_DIR}/${BACKUP_FILE} s3://${S3_BUCKET}/${BACKUP_FILE} --sse AES256 if [ $? -eq 0 ]; then echo "Backup uploaded successfully to S3." else echo "ERROR: S3 upload failed." exit 1 fi # 3. Clean up local backup file echo "Cleaning up local backup file..." rm ${BACKUP_DIR}/${BACKUP_FILE} # 4. Implement S3 rotation (keep only the latest MAX_BACKUPS) echo "Implementing S3 backup rotation..." OLD_BACKUPS=$(aws s3 ls s3://${S3_BUCKET}/ --recursive | grep "${DB_NAME}_" | sort | head -n -${MAX_BACKUPS} | awk '{print $4}') for backup in $OLD_BACKUPS; do echo "Deleting old backup: s3://${S3_BUCKET}/${backup}" aws s3 rm s3://${S3_BUCKET}/${backup} done echo "S3 backup rotation complete." echo "Backup process finished." ```
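The prompt asks for a daily backup, so the script is typically registered with cron. A sketch of the crontab entry (the script path and log location below are placeholders, not part of the generated output):

```bash
# Illustrative crontab entry: run the backup at 02:30 every day.
# /opt/scripts/backup_client_db.sh and the log path are placeholders.
30 2 * * * /opt/scripts/backup_client_db.sh >> /var/log/db_backup.log 2>&1
```

Appending stdout and stderr to a log file preserves the script's status messages, which is useful for auditing whether nightly backups actually ran.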

Frequently Asked Questions

How do I ensure the `PGPASSWORD` and `aws` credentials are secure when using this script?

It's highly recommended to avoid hardcoding `PGPASSWORD` directly in the script. Instead, use environment variables (e.g., `export PGPASSWORD="your_password"` before running the script), or `~/.pgpass` for PostgreSQL. For AWS, configure your credentials using `aws configure` or IAM roles for EC2 instances, rather than embedding them in the script.
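A minimal sketch of both approaches for PostgreSQL; the password value and connection fields are placeholders:

```shell
# Hedged sketch: keeping credentials out of the backup script itself.
# "your_password" and the connection fields below are placeholders.

# Option 1: export the password in the environment that invokes the script,
# so it never appears in the script file or in version control.
export PGPASSWORD="your_password"

# Option 2 (preferred for cron jobs): a libpq password file.
# Format is hostname:port:database:username:password, one entry per line.
# libpq reads ~/.pgpass by default; PGPASSFILE makes the location explicit.
export PGPASSFILE="$HOME/.pgpass"
echo "localhost:5432:client_db:backup_user:your_password" > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # libpq ignores the file unless permissions are restrictive
```

With the `~/.pgpass` file in place, the hardcoded `PGPASSWORD="your_db_password"` prefix can be removed from the `pg_dump` line entirely.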

Can this script be adapted for other databases like MySQL or MongoDB?

Yes, the core logic for S3 upload and rotation is generic. You would need to replace the `pg_dump` command with the appropriate backup utility for your specific database (e.g., `mysqldump` for MySQL, `mongodump` for MongoDB), and adjust any database-specific authentication or compression parameters.
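A hedged sketch of the swap (the `DB_TYPE` variable and the case dispatch are illustrative additions, not part of the generated script). Only the dump command changes; the gzip, S3 upload, and rotation stages stay identical:

```shell
# Illustrative dispatch on database type; DB_TYPE and these sample
# connection values are placeholders for this sketch.
DB_TYPE="${DB_TYPE:-mysql}"
DB_NAME="client_db"
DB_HOST="localhost"
DB_USER="backup_user"

case "$DB_TYPE" in
  postgres) DUMP_CMD="pg_dump -h $DB_HOST -U $DB_USER $DB_NAME" ;;
  mysql)    DUMP_CMD="mysqldump --single-transaction $DB_NAME" ;;
  mongodb)  DUMP_CMD="mongodump --db $DB_NAME --archive" ;;   # --archive with no file writes to stdout
  *)        echo "Unsupported DB_TYPE: $DB_TYPE" >&2; exit 1 ;;
esac

# In the real script, the chosen command replaces the pg_dump line:
#   $DUMP_CMD | gzip > "${BACKUP_DIR}/${BACKUP_FILE}"
echo "Would run: $DUMP_CMD | gzip > backup.sql.gz"
```

Note that `mysqldump --single-transaction` gives a consistent snapshot for InnoDB tables, while `mongodump --archive` streams a binary archive rather than SQL, so the `.sql.gz` filename suffix should be adjusted accordingly.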