docs/batch_processing.md in smarter_csv-1.12.0.pre1 vs docs/batch_processing.md in smarter_csv-1.12.0
- old
+ new
@@ -1,6 +1,20 @@
+### Contents
+
+ * [Introduction](./_introduction.md)
+ * [The Basic API](./basic_api.md)
+ * [**Batch Processing**](./batch_processing.md)
+ * [Configuration Options](./options.md)
+ * [Row and Column Separators](./row_col_sep.md)
+ * [Header Transformations](./header_transformations.md)
+ * [Header Validations](./header_validations.md)
+ * [Data Transformations](./data_transformations.md)
+ * [Value Converters](./value_converters.md)
+
+--------------
+
# Batch Processing
Processing CSV data in batches (chunks) allows you to parallelize the workload of importing data.
This comes in handy when you want to speed up the CSV import of large files.
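For example, here is a minimal sketch of fanning chunks out to worker threads. The `User` model, the file path, and the thread-based concurrency are illustrative assumptions, not part of the SmarterCSV API; any background-job framework could take the place of `Thread`:

```ruby
require 'smarter_csv'

threads = []
SmarterCSV.process('/tmp/users.csv', {:chunk_size => 100}) do |chunk|
  # hand each chunk (an array of up to 100 hashes) to its own worker thread
  threads << Thread.new(chunk) do |rows|
    User.insert_all(rows)  # assumes an ActiveRecord model named User
  end
end
threads.each(&:join)  # wait for all imports to finish
```

In a real import you would bound the concurrency (for example with a thread pool or a job queue), since each thread needs its own database connection from the pool.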
@@ -42,12 +56,13 @@
filename = '/tmp/some.csv'
options = {:chunk_size => 100, :key_mapping => {:unwanted_row => nil, :old_row_name => :new_name}}
n = SmarterCSV.process(filename, options) do |chunk|
  # we pass in a block to process each chunk of rows (the block receives an array of hashes)
# when chunking is enabled, there are up to :chunk_size hashes in each chunk
- MyModel.collection.insert( chunk ) # insert up to 100 records at a time
+ MyModel.insert_all( chunk ) # insert up to 100 records at a time
end
 => returns the number of chunks processed
```
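Note that `insert_all` (available since Rails 6) writes rows directly to the database and skips ActiveRecord validations and callbacks. If you need those, here is a sketch of a slower per-record fallback inside the same block:

```ruby
n = SmarterCSV.process(filename, options) do |chunk|
  chunk.each { |row| MyModel.create!(row) }  # runs validations and callbacks per record
end
```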
-
+----------------
+PREVIOUS: [The Basic API](./basic_api.md) | NEXT: [Configuration Options](./options.md)