ruby-on-rails How do I get Rails.cache (memory store) to work with Puma?

huwehgph · asked 2023-01-10 · Ruby

I'm using Rails 5.1 and I have an application-wide memory_store cache, set up in my config/environments/development.rb file:

# Enable/disable caching. By default caching is disabled.
  if Rails.root.join('tmp/caching-dev.txt').exist?
    config.action_controller.perform_caching = true

    config.cache_store = :memory_store
    config.public_file_server.headers = {
      'Cache-Control' => 'public, max-age=172800'
    }
  else
    config.action_controller.perform_caching = true
    config.cache_store = :memory_store
  end

This lets me do things like

Rails.cache.fetch(cache_key) do
  msg_data
end

to store data in one part of my application (a WebSocket) and access it in another part (a controller). However, what I've noticed is that if I start the Rails server with Puma (e.g. with the following config/puma.rb file)...

threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port        ENV.fetch("PORT") { 3000 }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
workers ENV.fetch("WEB_CONCURRENCY") { 4 }

app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"

# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"

# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true

# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app



# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
#
# preload_app!

# The code in the `on_worker_boot` block will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run. If you are using the `preload_app!`
# option, you will want to use this block to reconnect to any threads
# or connections that may have been created at application boot, as Ruby
# cannot share connections between processes.
#
on_worker_boot do
  require "active_record"
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end

# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart

...the memory cache no longer works. In other words,

Rails.cache.fetch(cache_key)

always returns nothing. I eventually want a multi-threaded Puma setup so that many requests can be handled gracefully, but I also want my caching to work. How can I get the two to work together?

klr1opcd1#

When Puma runs in clustered mode (i.e. with more than one worker), you can't use memory_store. The Rails Guides call this out: memory can't be shared between separate processes, so this makes sense.
If you can't reduce Puma to a single worker, consider Redis or Memcached instead. The Rails Guides cover this well: you need to add a gem or two to your Gemfile, update config.cache_store, and install the relevant service on your machine. Alternatively, plenty of hosted providers will manage it for you (Heroku Redis, Redis To Go, Memcachier, etc.).
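For reference, a minimal sketch of what that change could look like with Redis. It assumes a Redis server on localhost; on Rails 5.1 (which has no built-in Redis cache store) the redis-rails gem provides :redis_store, while Rails 5.2+ ships :redis_cache_store:

# Gemfile (sketch -- pick the gem that matches your Rails version)
gem 'redis-rails'   # provides :redis_store on Rails 5.1

# config/environments/development.rb
Rails.application.configure do
  config.action_controller.perform_caching = true

  # Every Puma worker talks to the same Redis server, so cached entries
  # are shared across processes. Assumes Redis on localhost:6379.
  config.cache_store = :redis_store, 'redis://localhost:6379/0/cache'

  # On Rails 5.2+ the built-in store can be used instead:
  # config.cache_store = :redis_cache_store, { url: 'redis://localhost:6379/0' }
end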

vuv7lop32#

I'm not sure whether you can -- but in any case, don't. Use a real cache service such as Memcached.
http://guides.rubyonrails.org/caching_with_rails.html

config.cache_store = :mem_cache_store, "localhost" # assuming you run memcached on localhost

Hmm... that's about it.
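As the Rails Guides note, :mem_cache_store is backed by the dalli gem, so a minimal sketch (assuming Memcached is running locally on its default port 11211) looks like:

# Gemfile
gem 'dalli'

# config/environments/development.rb
Rails.application.configure do
  config.action_controller.perform_caching = true

  # All Puma workers share the Memcached instance, so the cache works
  # across processes.
  config.cache_store = :mem_cache_store, 'localhost:11211'
end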

hmmo2u0o3#

While Redis is a good solution, another possibility is the FileStore cache. That may be preferable if you don't want to run Redis and want to keep your environment simple.
With this cache store, multiple server processes on the same host can share a cache.
https://guides.rubyonrails.org/caching_with_rails.html#activesupport-cache-filestore
In addition, the cache directory can be placed on a RAM drive, which may be faster than an SSD.
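A minimal sketch of that setup; the directory is only an example (tmp/cache is the conventional Rails location, and pointing it at a tmpfs mount gives the RAM-drive variant):

# config/environments/development.rb
Rails.application.configure do
  config.action_controller.perform_caching = true

  # Every Puma worker on this host reads and writes the same directory,
  # so cached entries are visible across processes.
  config.cache_store = :file_store, Rails.root.join('tmp', 'cache').to_s

  # RAM-drive variant (example path on a tmpfs mount):
  # config.cache_store = :file_store, '/dev/shm/myapp_cache'
end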
