README.md in strong_migrations-1.6.3 vs README.md in strong_migrations-1.6.4
- old (strong_migrations 1.6.3)
+ new (strong_migrations 1.6.4)
@@ -41,11 +41,11 @@
self.ignored_columns = ["name"]
end
Deploy the code, then wrap this step in a safety_assured { ... } block.
-class RemoveColumn < ActiveRecord::Migration[7.0]
+class RemoveColumn < ActiveRecord::Migration[7.1]
def change
safety_assured { remove_column :users, :name }
end
end
```
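For context, the ignore-the-column step shown only partially above looks roughly like this — a sketch, with the model class name inferred from the `users` table:
```ruby
# ignore the column first, so Active Record's cached column information
# is no longer used once the column is dropped
class User < ApplicationRecord
  self.ignored_columns = ["name"]
end
```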
@@ -94,11 +94,11 @@
#### Bad
Active Record caches database columns at runtime, so if you drop a column, it can cause exceptions until your app reboots.
```ruby
-class RemoveSomeColumnFromUsers < ActiveRecord::Migration[7.0]
+class RemoveSomeColumnFromUsers < ActiveRecord::Migration[7.1]
def change
remove_column :users, :some_column
end
end
```
@@ -115,11 +115,11 @@
2. Deploy the code
3. Write a migration to remove the column (wrap in `safety_assured` block)
```ruby
- class RemoveSomeColumnFromUsers < ActiveRecord::Migration[7.0]
+ class RemoveSomeColumnFromUsers < ActiveRecord::Migration[7.1]
def change
safety_assured { remove_column :users, :some_column }
end
end
```
@@ -132,11 +132,11 @@
#### Bad
In earlier versions of Postgres, MySQL, and MariaDB, adding a column with a default value to an existing table causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.
```ruby
-class AddSomeColumnToUsers < ActiveRecord::Migration[7.0]
+class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
def change
add_column :users, :some_column, :text, default: "default_value"
end
end
```
@@ -146,11 +146,11 @@
#### Good
Instead, add the column without a default value, then change the default.
```ruby
-class AddSomeColumnToUsers < ActiveRecord::Migration[7.0]
+class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
def up
add_column :users, :some_column, :text
change_column_default :users, :some_column, "default_value"
end
@@ -167,11 +167,11 @@
#### Bad
Active Record creates a transaction around each migration, and backfilling in the same transaction that alters a table keeps the table locked for the [duration of the backfill](https://wework.github.io/data/2015/11/05/add-columns-with-default-values-to-large-tables-in-rails-postgres/).
```ruby
-class AddSomeColumnToUsers < ActiveRecord::Migration[7.0]
+class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
def change
add_column :users, :some_column, :text
User.update_all some_column: "default_value"
end
end
@@ -182,11 +182,11 @@
#### Good
There are three keys to backfilling safely: batching, throttling, and running it outside a transaction. Use the Rails console or a separate migration with `disable_ddl_transaction!`.
```ruby
-class BackfillSomeColumn < ActiveRecord::Migration[7.0]
+class BackfillSomeColumn < ActiveRecord::Migration[7.1]
disable_ddl_transaction!
def up
User.unscoped.in_batches do |relation|
relation.update_all some_column: "default_value"
@@ -201,11 +201,11 @@
#### Bad
Adding a stored generated column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.
```ruby
-class AddSomeColumnToUsers < ActiveRecord::Migration[7.0]
+class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
def change
add_column :users, :some_column, :virtual, type: :string, as: "...", stored: true
end
end
```
@@ -219,11 +219,11 @@
#### Bad
Changing the type of a column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.
```ruby
-class ChangeSomeColumnType < ActiveRecord::Migration[7.0]
+class ChangeSomeColumnType < ActiveRecord::Migration[7.1]
def change
change_column :users, :some_column, :new_type
end
end
```
@@ -265,11 +265,11 @@
#### Bad
Renaming a column that’s in use will cause errors in your application.
```ruby
-class RenameSomeColumn < ActiveRecord::Migration[7.0]
+class RenameSomeColumn < ActiveRecord::Migration[7.1]
def change
rename_column :users, :some_column, :new_name
end
end
```
@@ -290,11 +290,11 @@
#### Bad
Renaming a table that’s in use will cause errors in your application.
```ruby
-class RenameUsersToCustomers < ActiveRecord::Migration[7.0]
+class RenameUsersToCustomers < ActiveRecord::Migration[7.1]
def change
rename_table :users, :customers
end
end
```
@@ -315,11 +315,11 @@
#### Bad
The `force` option can drop an existing table.
```ruby
-class CreateUsers < ActiveRecord::Migration[7.0]
+class CreateUsers < ActiveRecord::Migration[7.1]
def change
create_table :users, force: true do |t|
# ...
end
end
@@ -329,11 +329,11 @@
#### Good
Create tables without the `force` option.
```ruby
-class CreateUsers < ActiveRecord::Migration[7.0]
+class CreateUsers < ActiveRecord::Migration[7.1]
def change
create_table :users do |t|
# ...
end
end
@@ -349,11 +349,11 @@
#### Bad
Adding a check constraint blocks reads and writes in Postgres and blocks writes in MySQL and MariaDB while every row is checked.
```ruby
-class AddCheckConstraint < ActiveRecord::Migration[7.0]
+class AddCheckConstraint < ActiveRecord::Migration[7.1]
def change
add_check_constraint :users, "price > 0", name: "price_check"
end
end
```
@@ -361,21 +361,21 @@
#### Good - Postgres
Add the check constraint without validating existing rows:
```ruby
-class AddCheckConstraint < ActiveRecord::Migration[7.0]
+class AddCheckConstraint < ActiveRecord::Migration[7.1]
def change
add_check_constraint :users, "price > 0", name: "price_check", validate: false
end
end
```
Then validate them in a separate migration.
```ruby
-class ValidateCheckConstraint < ActiveRecord::Migration[7.0]
+class ValidateCheckConstraint < ActiveRecord::Migration[7.1]
def change
validate_check_constraint :users, name: "price_check"
end
end
```
@@ -387,11 +387,11 @@
### Executing SQL directly
Strong Migrations can’t ensure safety for raw SQL statements. Make really sure that what you’re doing is safe, then use:
```ruby
-class ExecuteSQL < ActiveRecord::Migration[7.0]
+class ExecuteSQL < ActiveRecord::Migration[7.1]
def change
safety_assured { execute "..." }
end
end
```
@@ -403,11 +403,11 @@
#### Bad
In Postgres, adding an index non-concurrently blocks writes.
```ruby
-class AddSomeIndexToUsers < ActiveRecord::Migration[7.0]
+class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
def change
add_index :users, :some_column
end
end
```
@@ -415,11 +415,11 @@
#### Good
Add indexes concurrently.
```ruby
-class AddSomeIndexToUsers < ActiveRecord::Migration[7.0]
+class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
disable_ddl_transaction!
def change
add_index :users, :some_column, algorithm: :concurrently
end
@@ -441,11 +441,11 @@
#### Bad
Rails adds an index non-concurrently to references by default, which blocks writes in Postgres.
```ruby
-class AddReferenceToUsers < ActiveRecord::Migration[7.0]
+class AddReferenceToUsers < ActiveRecord::Migration[7.1]
def change
add_reference :users, :city
end
end
```
@@ -453,11 +453,11 @@
#### Good
Make sure the index is added concurrently.
```ruby
-class AddReferenceToUsers < ActiveRecord::Migration[7.0]
+class AddReferenceToUsers < ActiveRecord::Migration[7.1]
disable_ddl_transaction!
def change
add_reference :users, :city, index: {algorithm: :concurrently}
end
@@ -471,21 +471,21 @@
#### Bad
In Postgres, adding a foreign key blocks writes on both tables.
```ruby
-class AddForeignKeyOnUsers < ActiveRecord::Migration[7.0]
+class AddForeignKeyOnUsers < ActiveRecord::Migration[7.1]
def change
add_foreign_key :users, :orders
end
end
```
or
```ruby
-class AddReferenceToUsers < ActiveRecord::Migration[7.0]
+class AddReferenceToUsers < ActiveRecord::Migration[7.1]
def change
add_reference :users, :order, foreign_key: true
end
end
```
@@ -493,21 +493,21 @@
#### Good
Add the foreign key without validating existing rows:
```ruby
-class AddForeignKeyOnUsers < ActiveRecord::Migration[7.0]
+class AddForeignKeyOnUsers < ActiveRecord::Migration[7.1]
def change
add_foreign_key :users, :orders, validate: false
end
end
```
Then validate them in a separate migration.
```ruby
-class ValidateForeignKeyOnUsers < ActiveRecord::Migration[7.0]
+class ValidateForeignKeyOnUsers < ActiveRecord::Migration[7.1]
def change
validate_foreign_key :users, :orders
end
end
```
@@ -535,11 +535,11 @@
#### Bad
In Postgres, there’s no equality operator for the `json` column type, which can cause errors for existing `SELECT DISTINCT` queries in your application.
```ruby
-class AddPropertiesToUsers < ActiveRecord::Migration[7.0]
+class AddPropertiesToUsers < ActiveRecord::Migration[7.1]
def change
add_column :users, :properties, :json
end
end
```
@@ -547,11 +547,11 @@
#### Good
Use `jsonb` instead.
```ruby
-class AddPropertiesToUsers < ActiveRecord::Migration[7.0]
+class AddPropertiesToUsers < ActiveRecord::Migration[7.1]
def change
add_column :users, :properties, :jsonb
end
end
```
@@ -563,25 +563,25 @@
#### Bad
In Postgres, setting `NOT NULL` on an existing column blocks reads and writes while every row is checked.
```ruby
-class SetSomeColumnNotNull < ActiveRecord::Migration[7.0]
+class SetSomeColumnNotNull < ActiveRecord::Migration[7.1]
def change
change_column_null :users, :some_column, false
end
end
```
#### Good
Instead, add a check constraint.
-For Rails 6.1, use:
+For Rails 6.1+, use:
```ruby
-class SetSomeColumnNotNull < ActiveRecord::Migration[7.0]
+class SetSomeColumnNotNull < ActiveRecord::Migration[7.1]
def change
add_check_constraint :users, "some_column IS NOT NULL", name: "users_some_column_null", validate: false
end
end
```
@@ -598,14 +598,14 @@
end
```
Then validate it in a separate migration. A `NOT NULL` check constraint is [functionally equivalent](https://medium.com/doctolib/adding-a-not-null-constraint-on-pg-faster-with-minimal-locking-38b2c00c4d1c) to setting `NOT NULL` on the column (but it won’t show up in `schema.rb` in Rails < 6.1). In Postgres 12+, once the check constraint is validated, you can safely set `NOT NULL` on the column and drop the check constraint.
-For Rails 6.1, use:
+For Rails 6.1+, use:
```ruby
-class ValidateSomeColumnNotNull < ActiveRecord::Migration[7.0]
+class ValidateSomeColumnNotNull < ActiveRecord::Migration[7.1]
def change
validate_check_constraint :users, name: "users_some_column_null"
# in Postgres 12+, you can then safely set NOT NULL on the column
change_column_null :users, :some_column, false
@@ -654,11 +654,11 @@
```ruby
config.active_record.partial_writes = false
```
-For Rails 7, use:
+For Rails 7+, use:
```ruby
config.active_record.partial_inserts = false
```
@@ -667,11 +667,11 @@
#### Bad
Adding a non-unique index with more than three columns rarely improves performance.
```ruby
-class AddSomeIndexToUsers < ActiveRecord::Migration[7.0]
+class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
def change
add_index :users, [:a, :b, :c, :d]
end
end
```
@@ -679,11 +679,11 @@
#### Good
Instead, start an index with columns that narrow down the results the most.
```ruby
-class AddSomeIndexToUsers < ActiveRecord::Migration[7.0]
+class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
def change
add_index :users, [:b, :d]
end
end
```
@@ -693,11 +693,11 @@
## Assuring Safety
To mark a step in the migration as safe, despite using a method that might otherwise be dangerous, wrap it in a `safety_assured` block.
```ruby
-class MySafeMigration < ActiveRecord::Migration[7.0]
+class MySafeMigration < ActiveRecord::Migration[7.1]
def change
safety_assured { remove_column :users, :some_column }
end
end
```
@@ -791,29 +791,10 @@
ALTER ROLE myuser SET statement_timeout = '1h';
```
Note: If you use PgBouncer in transaction mode, you must set timeouts on the database user.
-## Lock Timeout Retries [experimental]
-
-There’s the option to automatically retry statements when the lock timeout is reached. Here’s how it works:
-
-- If a lock timeout happens outside a transaction, the statement is retried
-- If it happens inside the DDL transaction, the entire migration is retried (only applicable to Postgres)
-
-Add to `config/initializers/strong_migrations.rb`:
-
-```ruby
-StrongMigrations.lock_timeout_retries = 3
-```
-
-Set the delay between retries with:
-
-```ruby
-StrongMigrations.lock_timeout_retry_delay = 10.seconds
-```
-
## App Timeouts
We recommend adding timeouts to `config/database.yml` to prevent connections from hanging and individual queries from taking up too many resources in controllers, jobs, the Rails console, and other places.
For Postgres:
@@ -853,15 +834,34 @@
lock_wait_timeout: 10 # sec
```
For HTTP connections, Redis, and other services, check out [this guide](https://github.com/ankane/the-ultimate-guide-to-ruby-timeouts).
+## Lock Timeout Retries [experimental]
+
+There’s the option to automatically retry statements for migrations when the lock timeout is reached. Here’s how it works:
+
+- If a lock timeout happens outside a transaction, the statement is retried
+- If it happens inside the DDL transaction, the entire migration is retried (only applicable to Postgres)
+
+Add to `config/initializers/strong_migrations.rb`:
+
+```ruby
+StrongMigrations.lock_timeout_retries = 3
+```
+
+Set the delay between retries with:
+
+```ruby
+StrongMigrations.lock_timeout_retry_delay = 10.seconds
+```
+
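As a rough illustration of the first case (reusing the concurrent index example from earlier; class and column names are just placeholders): with `disable_ddl_transaction!`, statements run outside a transaction, so a lock timeout on `add_index` retries only that statement, up to `lock_timeout_retries` times.
```ruby
class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
  # statements run outside the DDL transaction, so a lock timeout here
  # retries just the failing statement (see lock_timeout_retries above)
  disable_ddl_transaction!

  def change
    add_index :users, :some_column, algorithm: :concurrently
  end
end
```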
## Existing Migrations
To mark migrations created before installing this gem as safe, create an initializer with:
```ruby
-StrongMigrations.start_after = 20170101000000
+StrongMigrations.start_after = 20230101000000
```
Use the version from your latest migration.
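For example (a sketch; the migration filename below is made up), the value is the timestamp prefix of the newest file in `db/migrate`:
```ruby
# config/initializers/strong_migrations.rb
# newest migration: db/migrate/20230101000000_create_users.rb (hypothetical)
StrongMigrations.start_after = 20230101000000
```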
## Target Version