**phase 4 - 2016-12-22**

> Added phase 3 logs to question. It looks like somehow there is a new scheduler process that is subsequently created and then destroyed inside the model code. Thanks again for your diligence on this!

Is that really happening in the model code? Your logs tell us that it happens in another process.

Your initial Ruby process initializes rufus-scheduler, but your HTTP requests are served in worker processes, which are forks of that initial process (without its threads, in other words with inactive schedulers). You're using Puma in clustered mode. I should have immediately asked you about your configuration. Read its documentation carefully at https://github.com/puma/puma#configuration

An easy fix would be not to use clustered mode, so that there is only one Ruby process involved, serving all the HTTP requests (there is a minimal config sketch at the end of this answer).

If you need clustered mode, you have to change your way of thinking. You probably don't want one rufus-scheduler instance per worker. You could instead keep the core (live) rufus-scheduler in the main process and give it a "management" job that checks recently updated metrics and unschedules/reschedules their jobs:

```ruby
SCHEDULER.every '10s', overlap: false do
  Metric.recently_updated.each do |metric|
    SCHEDULER.jobs(tags: metric.id).each(&:unschedule)
    SCHEDULER.every(metric.frequency, tags: metric.id) { metric.add_value }
  end
end
# or something like that...
```

Have fun!
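P.S. If you drop clustered mode, a minimal `config/puma.rb` could be as simple as the sketch below. The file name, thread counts, and env defaults are illustrative assumptions on my part, not taken from your setup:

```ruby
# config/puma.rb -- single-mode sketch (assumed file name and values).
# Leaving `workers` out (or setting it to 0) keeps Puma in one process,
# so the rufus-scheduler started at boot keeps its scheduling thread alive.
threads 1, 16

# workers 2   # clustered mode: forked workers would not carry the scheduler thread

port        ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "development")
```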