As MPP PostgreSQL platforms grow, traditional logical backup strategies quietly stop meeting business expectations. Part 1 sets the stage by examining why multi-day backups and restores are no longer acceptable, and why disaster recovery needs to be rethought before scale makes change unavoidable.
From script sprawl to structured, drill-down diagnostics.
When I assess customer Greenplum environments, I lean on a suite of lightweight shell utilities that make deep inspection fast and repeatable. At the core is gpview.sh, an interactive catalog viewer. Around it sit wrappers for scheduled health checks, one-off report runs, and persisting results for trending over time.
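To show the wrapper pattern in the abstract, here is a minimal sketch of a scheduled check that persists one metric for trending. The query, database name, and log path are assumptions for the example; the actual gpview.sh wrappers have their own interfaces and report sets.

```bash
#!/usr/bin/env bash
# Hypothetical cron wrapper: run one catalog check and append the result
# to a CSV so it can be trended over time. Names and paths are illustrative.
set -euo pipefail

DB=postgres                        # assumed database to inspect
LOG=/var/log/gp_checks/bloat.csv   # assumed trending file

mkdir -p "$(dirname "$LOG")"

# Count catalog tables with a noticeable dead-tuple share (rough bloat signal).
COUNT=$(psql -d "$DB" -At -c "
  SELECT count(*)
  FROM   pg_stat_all_tables
  WHERE  schemaname = 'pg_catalog'
  AND    n_dead_tup > 1000;")

# One row per run (timestamp,value), easy to graph or diff later.
echo "$(date -Iseconds),${COUNT}" >> "$LOG"
```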
Disaster Recovery (DR) isn’t just “take a backup.” At MPP scale, you need a repeatable end-to-end flow with minimal coordination and downtime. This post walks through the DBA Operations Kickstarter framework that wraps gpbackup, gpbackup_manager, and gprestore into a fully automated DR pipeline.
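To make the shape of that pipeline concrete, here is a minimal sketch that chains a backup and a restore, assuming gpbackup and gprestore are installed and a backup directory that is shared or replicated between the two clusters. The database name, paths, and the timestamp-discovery step are illustrative, not the Kickstarter's actual code.

```bash
#!/usr/bin/env bash
# Minimal DR pipeline sketch: back up the source cluster, then restore the
# newest backup on the DR side. Assumes gpbackup/gprestore are on PATH and
# BACKUP_DIR is visible to both clusters; names are illustrative.
set -euo pipefail

DB=analytics                 # assumed database
BACKUP_DIR=/data/dr_backups  # assumed backup location shared with the DR cluster

# 1. Logical backup of the source cluster, including leaf partition data.
gpbackup --dbname "$DB" --backup-dir "$BACKUP_DIR" --leaf-partition-data

# 2. Pick the newest backup timestamp from the coordinator's backup directory
#    (assumed layout: <backup-dir>/gpseg-1/backups/<date>/<timestamp>/).
TS=$(basename "$(ls -d "$BACKUP_DIR"/gpseg-1/backups/*/* | sort | tail -1)")

# 3. On the DR coordinator, restore that backup into a fresh database.
gprestore --timestamp "$TS" --backup-dir "$BACKUP_DIR" --create-db

# 4. Reporting and retention (gpbackup_manager) are left out here; its
#    subcommands vary by version.
```

In practice steps 1 and 3 run on different hosts; the point of the sketch is that the whole flow reduces to a handful of commands once the backup location is shared.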
Partitioning is a cornerstone of scalable analytics in Greenplum Database. The hardest part of partitioning isn’t design; it’s keeping partitions current. The Kickstarter Partition Maintenance toolset is a set of Bash/Python utilities that operationalizes that lifecycle for Greenplum. This post dives into those tools.
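For a flavor of what such a utility automates, here is a minimal sketch of a monthly rolling-window job, assuming classic Greenplum range partitioning on a date key. The table name, retention policy, and scheduling are assumptions, not the toolset's actual defaults.

```bash
#!/usr/bin/env bash
# Minimal sketch of a rolling-window partition job for classic Greenplum
# range partitioning on a monthly date key. Names and retention are
# illustrative, not the Kickstarter toolset's actual defaults.
set -euo pipefail

DB=analytics
TABLE=sales_fact
RETENTION_MONTHS=24

THIS_MONTH=$(date +%Y-%m-01)
NEXT_START=$(date -d "$THIS_MONTH +1 month" +%Y-%m-01)
NEXT_END=$(date -d "$THIS_MONTH +2 months" +%Y-%m-01)
EXPIRED=$(date -d "$THIS_MONTH -$((RETENTION_MONTHS + 1)) months" +%Y-%m-01)

# Add next month's partition ahead of loads (assumes no DEFAULT partition;
# with one, SPLIT DEFAULT PARTITION would be needed instead).
psql -d "$DB" -v ON_ERROR_STOP=1 <<SQL
ALTER TABLE ${TABLE}
  ADD PARTITION START (date '${NEXT_START}') INCLUSIVE
                END   (date '${NEXT_END}')   EXCLUSIVE;
SQL

# Drop the partition that has aged out of the retention window. This fails if
# it was already dropped; a real tool would check pg_partitions first.
psql -d "$DB" -v ON_ERROR_STOP=1 \
  -c "ALTER TABLE ${TABLE} DROP PARTITION FOR (date '${EXPIRED}');"
```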
The DBA Operations Kickstarter is a comprehensive toolkit designed by Mugnano Data Consulting to automate and operationalize best-practice DBA workflows for Greenplum and Cloudberry environments. Whether you're onboarding a new system or stabilizing an existing one, this solution delivers an enterprise-grade operational foundation in hours, not months.


Part 2 dives into the discovery phase of a real-world DR redesign, uncovering the constraints that matter most at scale—WAL volume, retention windows, and storage behavior. It shows why understanding these realities is critical before any recovery architecture can succeed.