the-mikedavis (Collaborator) commented Dec 5, 2025

Shrinking a member node off of a QQ can be parallelized. The operation involves

  • removing the node from the QQ's cluster membership (appending a command to the log and committing it) with `ra:remove_member/3`
  • updating the metadata store to remove the member from the QQ type state with `rabbit_amqqueue:update/2`
  • deleting the queue data from the node with `ra:force_delete_server/2` if the node can be reached

All of these operations are I/O bound. Updating the cluster membership and metadata store involves appending commands to those logs and replicating them. Writing commands to Ra synchronously, one at a time, is fairly slow; sending many commands in parallel is much more efficient. By parallelizing these steps we can write larger chunks of commands to the WAL(s).
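To make the shape of that concrete, here is a minimal sketch (not the actual patch) of running the three steps for many queues at once. The `{QName, LeaderId, ServerId}` triples, the timeout, and `remove_node/2` are illustrative stand-ins for what the existing `delete_member/2` helper works with:

```erlang
-module(shrink_sketch).
-export([shrink_all/2]).

%% Run the three per-queue steps for many queues at once so their Ra
%% commands reach the WAL in larger batches.
shrink_all(Members, Node) ->
    Workers = [{spawn_monitor(fun() -> exit(shrink_one(M, Node)) end), M}
               || M <- Members],
    %% Gather one result per worker; the exit reason carries the result.
    [receive
         {'DOWN', MRef, process, Pid, Result} -> {M, Result}
     end || {{Pid, MRef}, M} <- Workers].

shrink_one({QName, LeaderId, ServerId}, Node) ->
    %% 1. Commit a cluster-change command to the QQ's Ra log.
    {ok, _, _} = ra:remove_member(LeaderId, ServerId, 60000),
    %% 2. Drop the member from the QQ type state in the metadata store.
    _ = rabbit_amqqueue:update(QName, fun(Q) -> remove_node(Q, Node) end),
    %% 3. Delete the queue data from the node, if it can be reached
    %%    (quorum_queues is the Ra system RabbitMQ runs QQs in).
    ra:force_delete_server(quorum_queues, ServerId).

%% Stand-in: the real fun removes Node from the queue record's type state.
remove_node(Q, _Node) -> Q.
```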

`ra:force_delete_server/2` also benefits from parallelization when the node being shrunk off is no longer reachable, for example after some hardware failures. The underlying `rpc:call/4` will attempt to auto-connect to the node, and that attempt can take some time to time out. When run in parallel, each `rpc:call/4` reuses the same underlying distribution entry, so all of the calls fail together once the connection fails to establish.
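Roughly, the unreachable-node path looks like this (illustration only, not code from the patch):

```erlang
%% When the node that owns these servers is down, the parallel calls all
%% ride the same auto-connect attempt inside the underlying rpc:call/4,
%% so each one fails after a single connect timeout instead of N
%% sequential timeouts.
force_delete_parallel(RaSystem, ServerIds) ->
    Parent = self(),
    [spawn(fun() ->
               Parent ! {Id, catch ra:force_delete_server(RaSystem, Id)}
           end) || Id <- ServerIds],
    [receive {Id, Result} -> Result end || Id <- ServerIds].
```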

Discussed in #15057

the-mikedavis (Collaborator, Author) commented Dec 5, 2025

With this change and the default of 64 set here (just a sensible-seeming constant), my test from #15057 of shrinking a node off of 1000 QQs goes from taking ~2 hrs to taking 1 min 52 sec.
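One simple way to cap the concurrency at a constant like that would be chunking, reusing the `shrink_all/2` helper sketched above (illustrative only; the patch's actual mechanism may differ):

```erlang
%% Illustrative only: process queues in chunks so that at most 64
%% shrink workers run at a time.
-define(SHRINK_CONCURRENCY, 64).

shrink_in_chunks(Queues, Node) ->
    shrink_in_chunks(Queues, Node, []).

shrink_in_chunks([], _Node, Acc) ->
    lists:append(lists:reverse(Acc));
shrink_in_chunks(Queues, Node, Acc)
  when length(Queues) =< ?SHRINK_CONCURRENCY ->
    shrink_in_chunks([], Node, [shrink_all(Queues, Node) | Acc]);
shrink_in_chunks(Queues, Node, Acc) ->
    {Chunk, Rest} = lists:split(?SHRINK_CONCURRENCY, Queues),
    shrink_in_chunks(Rest, Node, [shrink_all(Chunk, Node) | Acc]).
```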

kjnilsson (Contributor) commented

This looks fine to me, at least for now.

It would be quite possible to get much higher throughput here by using command pipelining instead of spawning a bunch of processes just to exercise the WAL more. We'd need to add that as an option to the Ra API, however.

the-mikedavis (Collaborator, Author) commented

Ah yeah, with pipelining we could use the WAL much more efficiently. That shouldn't be too bad to add to Ra - just a new function in ra that would use ra_server_proc:cast_command/3, right? Once mnesia is gone we could use Khepri async commands for the metadata store updates so both of those parts could be done with pipelining.
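For reference, the pipelining pattern Ra already offers for ordinary commands looks roughly like the sketch below, built on `ra:pipeline_command/4`; a membership-change variant of it is the piece that would need adding:

```erlang
%% Rough sketch of pipelining with Ra: cast every command without
%% waiting, then collect the applied notifications, which arrive as
%% ra_event messages carrying the correlation terms we chose.
pipeline(ServerId, Commands) ->
    Corrs = lists:seq(1, length(Commands)),
    [ok = ra:pipeline_command(ServerId, Cmd, Corr, normal)
     || {Corr, Cmd} <- lists:zip(Corrs, Commands)],
    collect(length(Commands), []).

collect(0, Acc) ->
    lists:append(lists:reverse(Acc));
collect(N, Acc) ->
    receive
        {ra_event, _Leader, {applied, Applied}} ->
            %% One event may acknowledge several correlations at once.
            collect(N - length(Applied), [Applied | Acc])
    after 30000 ->
        {error, {timeout, Acc}}
    end.
```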

I'm actually more worried about the `ra:force_delete_server/2` part since that step can take a while (7 seconds) if the connection to the node times out. An easy way around that would be adding a function in `rabbit_quorum_queue` that calls `ra:force_delete_server/2` for all of the queues after the membership and metadata store parts are done. Then it would be just one RPC call that could time out.
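That could look roughly like the following sketch; both functions are hypothetical, as is the 30s timeout, and `quorum_queues` is the Ra system QQs run in:

```erlang
%% Hypothetical sketch of the "single RPC" idea: the caller makes one
%% rpc:call/5, so there is at most one connect timeout to pay.
force_delete_members(Node, ServerIds) ->
    rpc:call(Node, rabbit_quorum_queue, force_delete_local, [ServerIds],
             30000).

%% Would run on Node; every ServerId here is local to that node.
force_delete_local(ServerIds) ->
    [ra:force_delete_server(quorum_queues, Id) || Id <- ServerIds].
```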

In the meantime making this parallel seems like an easy improvement since we can continue using the `delete_member/2` helper. But in the long run we should definitely use pipelining instead 👍
