Server architecture questions #6

@mbuptivo

Description

Hi,
I am quite new to Stream and I am trying to understand whether the current server implementation is production ready. I've found some problems documented in previous issues, but I've more or less worked around them. Now I am trying to understand the current architectural design, in particular whether it is ready for a large-scale deployment.

Some points:

  1. Assuming we need to handle a large number of concurrent AI agents, how does the server scale? What resources (CPU, memory, open connections) does it need as the number of sessions grows?

  2. If more than a single server is required, what is the suggested way to scale out?

  3. The current implementation creates a new "ai-bot" user for each channel. This wastes MAU on bot users, and it also creates a new Stream client SDK instance per channel, so the number of open WebSocket connections grows with the number of channels. Is this really required?
    I am working on a custom server that uses only 2 SDK instances:

  • 1 "server" SDK for creating channels and adding a single ai-bot user
  • 1 "client" SDK authenticated as the single "ai-bot" user
    On a new request, the user is added to the channel, and the single client SDK listens for new messages on all joined channels, using the event data to differentiate among requests (see the sketch below).
    Is there any problem with this approach? On paper, it seems more efficient than having multiple WS connections.
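
To make this concrete, here is a rough sketch of the two-instance setup I have in mind, assuming the stream-chat Node SDK. The bot user id, the "messaging" channel type, and the onNewRequest helper are placeholders of mine, not names taken from the existing server:

```ts
import { StreamChat } from "stream-chat";

const API_KEY = process.env.STREAM_API_KEY!;
const API_SECRET = process.env.STREAM_API_SECRET!;
const BOT_USER_ID = "ai-bot"; // placeholder id for the single shared bot user

// 1) One server-side instance (key + secret) used only for setup work:
//    creating channels and adding the shared ai-bot user as a member.
const serverClient = StreamChat.getInstance(API_KEY, API_SECRET);

// 2) One client-side instance connected as the ai-bot user.
//    This keeps exactly one WebSocket open, no matter how many channels
//    the bot belongs to.
const botClient = StreamChat.getInstance(API_KEY);

async function start() {
  await serverClient.upsertUser({ id: BOT_USER_ID });
  const botToken = serverClient.createToken(BOT_USER_ID);
  await botClient.connectUser({ id: BOT_USER_ID }, botToken);

  // Events from all channels the bot watches arrive on this single connection;
  // the channel id (event.cid) is what tells the requests apart.
  botClient.on("message.new", (event) => {
    if (event.user?.id === BOT_USER_ID) return; // ignore the bot's own messages
    console.log(`new message on ${event.cid}:`, event.message?.text);
    // ...dispatch to the right AI session based on event.cid...
  });
}

// Called on a new request: create (or reuse) the channel, add the bot and the
// requesting user, then have the bot watch it so message.new events are delivered
// over the single WebSocket (my assumption; membership alone may not be enough).
async function onNewRequest(channelId: string, requestingUserId: string) {
  const channel = serverClient.channel("messaging", channelId, {
    created_by_id: BOT_USER_ID,
  });
  await channel.create();
  await channel.addMembers([BOT_USER_ID, requestingUserId]);
  await botClient.channel("messaging", channelId).watch();
}

start();
```

The point of the sketch is only the fan-in: one server-side instance for provisioning and one connected bot client whose single WebSocket carries events for every channel it watches.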

Thanks for any consideration.
