README.md in pgslice-0.2.0 vs README.md in pgslice-0.2.1
- old
+ new
@@ -1,8 +1,8 @@
# pgslice
-Postgres partitioning as easy as pie. Works great for both new and existing tables, with zero downtime and minimal app changes.
+Postgres partitioning as easy as pie. Works great for both new and existing tables, with zero downtime and minimal app changes. Archive older data on a rolling basis to keep your database size under control.
:tangerine: Battle-tested at [Instacart](https://www.instacart.com/opensource)
## Install
@@ -66,12 +66,17 @@
```sh
pgslice fill <table> --swapped
```
-8. Archive and drop the original table
+8. Back up the retired table with a tool like [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and drop it
+ ```sh
+ pg_dump -c -Fc -t <table>_retired $PGSLICE_URL > <table>_retired.dump
+ psql -c "DROP TABLE <table>_retired" $PGSLICE_URL
+ ```
+
## Sample Output
pgslice prints the SQL commands that were executed on the server. To print without executing, use the `--dry-run` option.
```sh
@@ -84,11 +89,11 @@
CREATE TABLE visits_intermediate (LIKE visits INCLUDING ALL);
CREATE FUNCTION visits_insert_trigger()
RETURNS trigger AS $$
BEGIN
- RAISE EXCEPTION 'Date out of range. Create partitions first.';
+ RAISE EXCEPTION 'Create partitions first.';
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER visits_insert_trigger
BEFORE INSERT ON visits_intermediate
@@ -153,22 +158,22 @@
pgslice fill visits
```
```sql
/* 1 of 3 */
-INSERT INTO visits_intermediate (id, user_id, ip, created_at)
- SELECT id, user_id, ip, created_at FROM visits
+INSERT INTO visits_intermediate ("id", "user_id", "ip", "created_at")
+ SELECT "id", "user_id", "ip", "created_at" FROM visits
WHERE id > 0 AND id <= 10000 AND created_at >= '2016-08-01'::date AND created_at < '2016-11-01'::date
/* 2 of 3 */
-INSERT INTO visits_intermediate (id, user_id, ip, created_at)
- SELECT id, user_id, ip, created_at FROM visits
+INSERT INTO visits_intermediate ("id", "user_id", "ip", "created_at")
+ SELECT "id", "user_id", "ip", "created_at" FROM visits
WHERE id > 10000 AND id <= 20000 AND created_at >= '2016-08-01'::date AND created_at < '2016-11-01'::date
/* 3 of 3 */
-INSERT INTO visits_intermediate (id, user_id, ip, created_at)
- SELECT id, user_id, ip, created_at FROM visits
+INSERT INTO visits_intermediate ("id", "user_id", "ip", "created_at")
+ SELECT "id", "user_id", "ip", "created_at" FROM visits
WHERE id > 20000 AND id <= 30000 AND created_at >= '2016-08-01'::date AND created_at < '2016-11-01'::date
```
```sh
pgslice swap visits
@@ -216,10 +221,25 @@
n.nspname = 'public' AND
c.relname = '<table>_' || to_char(NOW() + INTERVAL '3 days', 'YYYYMMDD')
-- for months, use to_char(NOW() + INTERVAL '3 months', 'YYYYMM')
```
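The suffix that query looks for can also be computed outside the database, e.g. to build the expected partition name in a cron alert script. A sketch, assuming GNU `date` and day granularity (`visits` stands in for your table):

```sh
# expected partition 3 days from now, e.g. visits_20161104 (GNU date syntax)
name="visits_$(date -d '+3 days' +%Y%m%d)"
echo "$name"
# for month granularity, use: visits_$(date +%Y%m)
```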
+## Archiving Partitions
+
+Back up and drop older partitions each day or month.
+
+```sh
+pg_dump -c -Fc -t <table>_201609 $PGSLICE_URL > <table>_201609.dump
+psql -c "DROP TABLE <table>_201609" $PGSLICE_URL
+```
+
+If you use [Amazon S3](https://aws.amazon.com/s3/) for backups, [s3cmd](https://github.com/s3tools/s3cmd) is a nice tool.
+
+```sh
+s3cmd put <table>_201609.dump s3://<s3-bucket>/<table>_201609.dump
+```
+
## Additional Commands
To undo prep (which will delete partitions), use:
```sh
@@ -266,10 +286,10 @@
You can also use pgslice to reduce the size of a table without partitioning by creating a new table, filling it with a subset of records, and swapping it in.
```sh
pgslice prep <table> --no-partition
-pgslice fill <table> --start 1000 # starting primary key
+pgslice fill <table> --where "id > 1000" # use any conditions
pgslice swap <table>
```
## Upgrading