Phased restart crash when upgrading Rack Gem despite using 'prune_bundler' #705
Comments
Same issue here. As a workaround we're locking puma and rack to specific versions so that `bundle update` doesn't try to update them.
Also happens with USR2. I think I see how, too. Puma fires up the new instance with a bunch of `-I` options pointing to the few dependencies it needs. One of these is rack. I need to look closer to understand why it doesn't pick up the newer version, although the newer version may not be the one your application is about to use. It's also not clear whether this is a known weakness in the mechanism; it's not documented, at least. Wait, I've got an idea…
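The `-I` mechanism described in this comment can be sketched roughly as follows. This is not Puma's actual code, and the gem paths and `bin/puma-wild` invocation are illustrative stand-ins; it only shows why the child process would keep seeing the old rack:

```ruby
# Sketch of the -I mechanism (not Puma's actual code). The parent bakes its
# *currently loaded* gem paths into the child's command line, so the child
# keeps loading those versions even after Gemfile.lock changes on disk.
# The paths below are hypothetical examples.
old_paths = ['/gems/rack-1.5.2/lib', '/gems/puma-2.11.2/lib']
argv = old_paths.flat_map { |p| ['-I', p] } + ['bin/puma-wild']
# argv is now ["-I", "/gems/rack-1.5.2/lib", "-I", "/gems/puma-2.11.2/lib", "bin/puma-wild"]
# Kernel.exec(Gem.ruby, *argv)  # the new instance would still see rack 1.5.2
```

Because the paths are fixed at exec time, upgrading rack in `Gemfile.lock` has no effect on the respawned process until the baked-in paths themselves change.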
Hmm, it's harder than I thought. I was trying to get it to do a

```ruby
Bundler.with_clean_env do
  Kernel.exec(Gem.ruby, '-S', 'bundle', 'exec', *argv)
end
```

I quickly hacked this in and it seems to work here, but maybe I'm missing the bigger picture.
I'm looking into this today.
With a totally minimal app I'm not reproducing what you're seeing. I did the following steps:
Can you try this with your Gemfile and the simple hello.ru and see what the results are?
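For reference, a hello.ru of the kind mentioned here is just a one-line rackup file. The thread doesn't include the actual file, so this is a sketch:

```ruby
# hello.ru — a minimal Rack app of the kind referred to above (sketch; the
# actual file used isn't shown in the thread).
# Run with: bundle exec puma -C ./config/puma.rb hello.ru
run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['Hello World']] }
```

Note that a file like this never touches Bundler itself, which becomes relevant later in the thread.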
Also, everyone seeing this problem: can you dump in your configuration files? Maybe there is something in there that is causing this.
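For comparison, a cluster-mode `config/puma.rb` matching the symptoms reported in this thread would look something like the following. This is an illustrative sketch, not a file any commenter actually posted:

```ruby
# config/puma.rb — illustrative sketch of the setup described in the issue
# (not a file posted in this thread): cluster mode, prune_bundler enabled,
# and no preload_app!, so workers are expected to re-resolve gems on restart.
workers 2
threads 1, 16
prune_bundler   # detach the master process from Bundler between restarts
# preload_app! is deliberately omitted — the reporter does not preload
```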
I'm back from work now so can't test this in a hurry but try modifying the rack version in Gemfile.lock by hand instead of locking the Gemfile to a specific version. That's more representative of what's happening in production. |
Editing the Gemfile.lock directly to change the version of rack doesn't reproduce the problem for me.
Indeed, I just tried to reproduce it that way at home and failed. I then tried with a pristine Rails app and that crashed like before, so Rails must be a factor. Actually, that makes sense: hello.ru does not load Bundler by itself, so all that's happening is that Puma is stripping Bundler out and it subsequently never gets used. Rails fires up Bundler in boot.rb. Bingo, if you add …
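The boot.rb mentioned here is the standard Rails boilerplate, which is what pulls Bundler into the application process (shown as a sketch of the typical Rails 4-era file, not code from this thread):

```ruby
# config/boot.rb — standard Rails boilerplate of the era; this `require` is
# what "fires up Bundler" inside the application, re-introducing the Gemfile
# that Puma's prune_bundler tried to strip out of the master process.
ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../../Gemfile', __FILE__)
require 'bundler/setup' if File.exist?(ENV['BUNDLE_GEMFILE'])
```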
I should add that the existing solution is broken for basic applications like hello.ru that do not explicitly load Bundler themselves. Puma strips out Bundler and only loads the dependencies required for itself, so any additional dependencies required by the application would be missing. My simpler solution, suggested above, would fix this problem, as well as removing the need for any of the existing prune_bundler code, including puma-wild. Is there a downside?
@chewi Oh yes, quite right. Let me dig into this and figure out what's going on.
Rack 1.5.5 is out now and I'm apprehensive about this tripping us (and others!) up in production again. I've made a suggestion but it's down to you to decide on the best solution.
@evanphx Well, that was unexpected. Are you sure this is the right way to go? It does sidestep the issue, as Puma now has no other dependencies, but it's not inconceivable that you may want to add one later. Removing Rack clearly also came at the cost of expanding the code base a little. Was there a problem with my suggestion?
@chewi Your solution doesn't work because it leaves the process in an unreliable state: it's simply not possible to load the gems from a Gemfile into a process where another Gemfile is already loaded. Removing the rack dependency is really the only way to make puma resilient against Gemfile changes.
@evanphx I thought that between …
@chewi Part of the reason that doesn't work (I had to think through the whole flow just now) is that it would blow up if the user's Gemfile did not include rack, which is certainly possible.
@evanphx Shouldn't the Gemfile include puma, and therefore rack?
Mmmm, that is a good point. I started the experiment to remove the rack dependency on a plane, with the idea that I wasn't actually using much of rack anyway. I was correct: I only used a tiny bit of the utils, the builder, and the common logger. If I had used a ton of it, I doubt I'd have gone this direction, but since it was just a little bit, the dependency seemed more cumbersome.
That's fair enough and I really don't mind, but I thought I should follow this up because my approach would ultimately mean less code (e.g. no more puma-wild) and would keep the door open for other dependencies in the future. This could have happened with any gem; it's just that rack was the only dependency.
@chewi You're right, it is simpler. I think now that I've removed the only gem dependency that puma has, I should be able to remove a whole bunch of code from puma-wild (perhaps even all of it). In the future, if gem dependencies are added, I'll likely implement your idea.
Sorry, how do I resolve this problem? I'm using a Rails application and when restarting I get the following message:
Add …
After adding this to the top, the puma server did not start at all. Is there any other solution for updated gems on puma workers?
Summary
I upgraded `rack` in my `Gemfile` and Puma/Bundler unexpectedly crashed on restart. As we use `prune_bundler` in our cluster and don't preload, we expected that the Puma cluster master would be isolated from gem changes, but able to respawn new workers which use the new gems. It only seems to crash if gems directly used by Puma are upgraded. The version of Bundler doesn't seem to matter (but this is not exhaustively tested).
Have I understood the expected behaviour correctly? Any workarounds or suggestions welcomed.
How to replicate
The Production issue can be simply replicated in Development without needing to do a full deploy:
1. `Gemfile.lock` specifies `rack (1.5.2)` and `puma (2.11.2)`
2. `bundle install`
3. `bundle exec puma -C ./config/puma.rb`
4. Edit `Gemfile.lock` to use `rack (1.5.3)`
5. `bundle install`
6. `kill -USR1 <pid of puma cluster master>`