> You get concurrent request processing spread across all of your CPU/vCPU cores out of the box

That's neat, but this doesn't matter until you reach serious scale, as scaling a Rails app horizontally by throwing more server instances at it works for a long time.

> The fallback controller pattern for error handling is incredible for boilerplate error handling

Sounds just like controller inheritance in Rails.

So is Rails: worst-case latency is generally caused by slow SQL requests or having to render complex documents (which can be offloaded to the background easily).

> The query builder/data mapper, Ecto, is IMO far better than ActiveRecord, being more explicit and preventing N+1s out of the box

I don't have a problem with ActiveRecord, and while N+1s are easy to create, there are a ton of tools to help prevent these in Rails. It can be a hindrance for junior devs or devs without Rails experience, though. Once things get complex, you're going to be writing SQL directly anyway.

> EEx is built on compile-time linked lists

Cool, but that sounds irrelevant for 99.9% of cases; string interpolation isn't what causes Rails apps to be slow.

I'd say those points deserve a deeper look. It is not only about scaling; it can actually affect every step from development to production:

1. Development is faster if your compiler (or code loader), tasks, and everything else is using all cores by default.
2. You get concurrent testing out of the box that can multiplex on both CPU and IO resources (important given how frequently folks complain about slow suites).
3. The ability to leverage concurrency in production often means less operational complexity. For example, you say you can offload complex document rendering to a background tool. In Elixir this isn't necessarily a concern, because there are no worries about "blocking the main thread". If you compare Phoenix Channels with Action Cable: in Action Cable you must avoid blocking the channel, so incoming messages are pushed to background workers, which then pick them up and broadcast. This adds indirection and operational complexity. In Phoenix you just do the work from the channel.

At the end of the day, you may still think the bullet points from the previous reply are not sufficient, but I think they are worth digging into a bit deeper (although I'm obviously biased). Even if you have a small app, it is fewer pieces and less to keep in your head.

Let me quickly address these to the best of my ability, knowing Jose's answers are probably better :)

> That's neat, but this doesn't matter until you reach serious scale, as scaling a Rails app horizontally by throwing more server instances works for a long time

You can do that, but it's cheaper to get more out of each CPU, and Elixir/BEAM gives you that for free with a similarly flexible dynamic language.

> Sounds just like controller inheritance in Rails

Not exactly: it works on the basis of pattern matching, and the fallback functions are included in the plug (think Rack) pipeline. This makes it faster, and you don't have the problems of inherited methods stepping on each other. You also get to match on really specific shapes and cases to handle really granular errors without much effort or cognitive overhead, and you don't need to catch errors the way people often do in Rails controller error handling with rescue_from.

> So is Rails: worst-case latency is generally caused by slow SQL requests or having to render complex documents (which can be offloaded to the background easily)

The scheduler in the BEAM will de-schedule long-running processes and put them at the back of the run queue. You can do all of this and saturate the CPU without latency exploding. Your Elixir application will often be doing things like background work and managing a key-value store.
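To make the pattern-matching point in this thread concrete, here is a minimal, Phoenix-free sketch of matching on specific error shapes instead of rescuing exception classes the way rescue_from does. `Billing` and its return values are invented for illustration; the tagged-tuple convention is the real Elixir idiom, and the final catch-all clause plays roughly the role a Phoenix fallback controller clause would.

```elixir
defmodule Billing do
  # Hypothetical domain function: success and failure are plain data,
  # not raised exceptions.
  def charge(amount) when amount <= 0, do: {:error, :invalid_amount}
  def charge(amount) when amount > 10_000, do: {:error, {:limit_exceeded, 10_000}}
  def charge(amount), do: {:ok, %{charged: amount}}
end

defmodule BillingHandler do
  # One clause per error shape: granular handling with no exception
  # hierarchy and no inherited handlers stepping on each other.
  def handle({:ok, %{charged: amount}}), do: "charged #{amount}"
  def handle({:error, :invalid_amount}), do: "amount must be positive"
  def handle({:error, {:limit_exceeded, max}}), do: "over the #{max} limit"
  # Catch-all: roughly what a fallback clause would absorb.
  def handle({:error, _other}), do: "something went wrong"
end
```

Because each error is just a value with a shape, adding a new error case is adding one function clause, not wiring up a new rescue handler.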
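The Action Cable comparison can be sketched with bare processes (no Phoenix involved; all names here are invented): each connection gets its own lightweight process, so "slow" work can happen right in the message handler without a background worker queue, and without stalling any other connection.

```elixir
defmodule ChannelLike do
  # One process per "connection", modeled as a plain spawned loop.
  def start, do: spawn(fn -> loop() end)

  defp loop do
    receive do
      {:msg, from, payload} ->
        # The work happens right here, in this connection's own process;
        # the BEAM schedules other connections' processes independently.
        send(from, {:reply, String.upcase(payload)})
        loop()
    end
  end
end

pid = ChannelLike.start()
send(pid, {:msg, self(), "hello"})

reply =
  receive do
    {:reply, r} -> r
  after
    1_000 -> :timeout
  end
# reply is "HELLO"
```

This is the indirection being avoided: there is no separate worker tier to deploy and monitor just to keep one handler from blocking everything else.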
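A small sketch of the "all cores out of the box" claim: `Task.async_stream` fans chunks of work out across up to `System.schedulers_online()` processes (the workload here is a toy sum, chosen only so the results are checkable), with no worker queue or extra server instances involved.

```elixir
# Eight chunks of CPU-bound work, one BEAM process each, spread across
# all schedulers. Results come back in input order as {:ok, value}.
results =
  1..8
  |> Task.async_stream(
    fn n -> Enum.sum(1..(n * 1_000)) end,
    max_concurrency: System.schedulers_online()
  )
  |> Enum.map(fn {:ok, sum} -> sum end)
```

The same mechanism is what makes the compiler, test runner, and request handling use every core by default.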