docs/getting-started/overview.md in polyphony-0.71 vs docs/getting-started/overview.md in polyphony-0.72
- old
+ new
@@ -112,11 +112,11 @@
active concurrent connections, each advancing at its own pace, consuming only a
single CPU core.
Nevertheless, Polyphony fully supports multithreading, with each thread having
its own fiber run queue and its own libev event loop. Polyphony even enables
-cross-thread communication using [fiber messaging](#message-passing).
+cross-thread communication using [fiber messaging](#message-passing).
## Fibers vs Callbacks
Programming environments such as Node.js and libraries such as EventMachine have
popularized the usage of event loops for achieving concurrency. The application
@@ -136,11 +136,11 @@
Fibers, in contrast, let the developer express the business logic in a
sequential, easy-to-read manner: do this, then that. State can be stored right
in the business logic, as local variables. And finally, the sequential
programming style makes it much easier to debug your code, since stack traces
-contain the entire history of execution from the app's inception.
+contain the entire history of execution from the app's inception.
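The contrast can be sketched in plain Ruby. The `fetch_cb`/`fetch` helpers below are invented for illustration; under Polyphony the sequential version would simply suspend the calling fiber at each blocking call instead of blocking the thread:

```ruby
# Callback style: control flow is inverted and intermediate state must
# live in nested closures.
def fetch_cb(url)
  yield "data for #{url}"
end

callback_result = nil
fetch_cb('/a') do |a|
  fetch_cb('/b') do |b|
    callback_result = [a, b]
  end
end

# Fiber style: the same logic reads top to bottom, with state in locals.
def fetch(url)
  "data for #{url}"
end

a = fetch('/a')
b = fetch('/b')
result = [a, b]  # => ["data for /a", "data for /b"]
```

Both versions compute the same thing; the difference is that the second one keeps its state in ordinary local variables and produces ordinary stack traces.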
## Structured Concurrency
Polyphony's tagline is "fine-grained concurrency for Ruby", because it makes it
really easy to spin up literally thousands of fibers that perform concurrent
@@ -227,11 +227,11 @@
```ruby
fiber1 = spin { sleep 1; raise 'foo' }
fiber2 = spin { sleep 1 }
-supervise # blocks and then propagates the error raised in fiber1
+supervise # blocks and then propagates the error raised in fiber1
```
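The supervision shape can be imitated with plain threads, which may help clarify what `supervise` does. This is only an analogy, not how Polyphony is implemented — threads stand in for fibers, and `Thread#join` stands in for error propagation to the supervisor:

```ruby
# Parent/child supervision simulated with threads: the parent blocks on
# its children, and a child's uncaught exception is re-raised on join.
Thread.report_on_exception = false  # keep stderr quiet for the demo

t1 = Thread.new { sleep 0.01; raise 'foo' }
t2 = Thread.new { sleep 0.01 }

error = begin
  [t1, t2].each(&:join)
  nil
rescue RuntimeError => e
  e.message
end
error  # => "foo"
```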
## Message Passing
Polyphony also provides a comprehensive solution for using fibers as actors, in
@@ -264,11 +264,11 @@
```
Notice how the state (the `subscribers` variable) stays local, and how the logic
of the chat room is expressed in a way that is both compact and easy to extend.
Also notice how the chat room is written as an infinite loop. This is a common
-pattern in Polyphony, since fibers can always be stopped at any moment.
+pattern in Polyphony, since fibers can always be stopped at any moment.
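The mailbox pattern itself is easy to simulate in plain Ruby. Below, a `Queue` stands in for a fiber's mailbox and a thread for the chat-room fiber; the message shape (`:kind` keys and so on) is invented for illustration. In Polyphony the loop body would call `receive` inside a fiber spun with `spin`:

```ruby
# A Queue stands in for the fiber's mailbox; the actor keeps its state
# (the subscribers list) as a plain local variable.
mailbox = Queue.new
received = []

actor = Thread.new do
  subscribers = []
  loop do
    msg = mailbox.pop
    break if msg == :stop
    case msg[:kind]
    when :subscribe then subscribers << msg[:subscriber]
    when :publish   then subscribers.each { |s| s << msg[:text] }
    end
  end
end

mailbox << { kind: :subscribe, subscriber: received }
mailbox << { kind: :publish, text: 'hello' }
mailbox << :stop
actor.join
received  # => ["hello"]
```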
The code for handling a chat room user might be expressed as follows:
```ruby
def chat_user_handler(user_name, connection)
@@ -348,18 +348,18 @@
backend is an object with a uniform interface that performs all blocking
operations.
While a standard event loop-based solution would implement a blocking call
separately from the fiber scheduling, the system backend integrates the two to
-create a blocking call that is already knows how to switch and schedule fibers.
+create a blocking call that already knows how to switch and schedule fibers.
For example, in Polyphony all APIs having to do with reading from files or
sockets end up calling `Thread.current.backend.read`, which does all the work.
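To make the funneling idea concrete, here is a toy backend in plain Ruby. The class and its `read` signature are invented for illustration; Polyphony's real backend is implemented in C and parks the calling fiber instead of blocking:

```ruby
require 'stringio'

# Toy version of the uniform-backend idea: every read, whatever the IO,
# funnels through a single backend object.
class ToyBackend
  def read(io, buf, maxlen)
    io.read(maxlen, buf)  # a real backend would suspend the fiber until ready
    buf
  end
end

backend = ToyBackend.new
buf = +''
backend.read(StringIO.new('hello world'), buf, 5)
buf  # => "hello"
```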
This design offers some major advantages over other designs. It minimizes memory
-allocations, of both Ruby objects and C structures. For example, instead of
+allocations of both Ruby objects and C structures. For example, instead of
having to allocate libev watchers on the heap and then pass them around, they
-are allocated on the stack instead, which saves up on both memory and CPU cycles.
+are allocated on the stack instead, which saves both memory and CPU cycles.
In addition, the backend interface includes two methods that allow maximizing
server performance by accepting connections and reading from sockets in a tight
loop. Here's a naive implementation of an HTTP/1 server:
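The server code itself falls outside this excerpt. Purely to illustrate the tight-loop shape, here is a toy in plain Ruby — `ToyBackend`, the fake server, and the connection objects are all invented; the real backend methods operate on actual sockets and fibers:

```ruby
# Toy accept/read loops: each "connection" is just an array of request
# strings, and the "server" a list of such connections.
class ToyBackend
  def accept_loop(server)
    server.each { |conn| yield conn }
  end

  def read_loop(conn)
    while (data = conn.shift)
      yield data
    end
  end
end

backend = ToyBackend.new
server = [['GET / HTTP/1.1'], ['GET /about HTTP/1.1']]
responses = []

backend.accept_loop(server) do |conn|
  backend.read_loop(conn) do |data|
    responses << "HTTP/1.1 200 OK\r\n\r\nyou sent: #{data}"
  end
end
responses.length  # => 2
```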
@@ -480,7 +480,7 @@
Polyphony is a young project, and will still need a lot of development effort to
reach version 1.0. Here are some of the exciting directions we're working on.
- Support for more core and stdlib APIs
- More adapters for gems with C-extensions, such as `mysql`, `sqlite3`, etc.
-- Use `io_uring` backend as alternative to the libev backend
+- Use `io_uring` backend as alternative to the libev backend
- More concurrency constructs for building highly concurrent applications