AgentX modes of operation
AgentX supports two modes of operation: standalone and distributed. The mode determines how AgentX processes logs and executes automated responses across your environment.
Standalone mode
In standalone mode, all AgentX components run on a single Logpoint instance. This instance handles agent connections, log processing, and automated response execution.
Use standalone mode when:
Log collection, storage, and analysis occur on a single Logpoint
The same Logpoint executes automated response or SOAR commands
Your environment has fewer than 5,000 endpoints
High availability through clustering is not required
Standalone mode is the default operating mode and requires no additional configuration beyond standard AgentX installation.
Distributed mode
In distributed mode, AgentX operates across multiple Logpoint instances in a cluster architecture. The cluster includes one master node and one or more worker nodes.
Master node responsibilities:
Accept new agent registrations
Manage cluster configuration
Execute automated response and SOAR commands on all agents in the cluster
Coordinate agent management across worker nodes
Worker node responsibilities:
Receive and process log data from agents
Forward processed logs to Logpoint for storage
Execute automated response commands on connected agents when instructed by the master
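The master/worker split above can be sketched as a simple role-to-duty check. This is an illustrative model only, not AgentX internals; the duty names are hypothetical labels for the responsibilities listed above.

```python
# Hypothetical sketch of the master/worker responsibility split.
# Duty names are illustrative labels, not real AgentX operations.
MASTER_DUTIES = {
    "register_agent",       # accept new agent registrations
    "manage_config",        # manage cluster configuration
    "run_response",         # execute response/SOAR commands cluster-wide
    "coordinate_workers",   # coordinate agent management across workers
}
WORKER_DUTIES = {
    "process_logs",                  # receive and process agent log data
    "forward_logs",                  # forward processed logs to Logpoint
    "run_response_when_instructed",  # respond only on the master's instruction
}

def can_handle(role: str, operation: str) -> bool:
    """Return True if a node with the given role owns the operation."""
    duties = MASTER_DUTIES if role == "master" else WORKER_DUTIES
    return operation in duties
```

Note that registration is master-only: in this model, `can_handle("worker", "register_agent")` is false, which mirrors why new agents cannot join the cluster while the master is down.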
Use distributed mode when:
You need to process more than 5,000 EPS (events per second)
You require high availability for log collection
You need to distribute processing load across multiple Logpoint instances
You want to implement load balancing for agent connections
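The sizing criteria for both modes can be condensed into a small decision helper. This is a sketch that simply encodes the thresholds stated above (fewer than 5,000 endpoints and no more than 5,000 EPS for standalone); the function name and signature are hypothetical.

```python
def recommended_mode(endpoints: int, eps: int, need_ha: bool) -> str:
    """Pick an AgentX operating mode from the sizing guidance above.

    Standalone suits environments under 5,000 endpoints and 5,000 EPS
    with no high-availability requirement; anything larger, or any HA
    requirement, points to distributed mode.
    """
    if endpoints < 5000 and eps <= 5000 and not need_ha:
        return "standalone"
    return "distributed"
```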
Switching between modes
You can switch between standalone and distributed modes at any time. When switching from distributed to standalone, AgentX disconnects worker nodes and consolidates all operations to the local Logpoint instance.
When switching from standalone to distributed, you select which Distributed Logpoints become worker nodes in the cluster. AgentX automatically configures agent connections and synchronizes certificates across the cluster.
Load balancer integration
In distributed mode, you can implement a load balancer to distribute agent connections across worker nodes. The load balancer provides:
Automatic failover when a worker node becomes unavailable
Even distribution of agent connections across healthy workers
Continued operation during worker node maintenance
Improved reliability through redundancy
The load balancer monitors worker node health on port 1514 and redirects agent traffic to available workers. Agent registrations still go directly to the master node on port 1515.
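The health-check-and-distribute behavior described above can be sketched as follows. This is a generic illustration, not the actual load balancer implementation: the TCP probe on port 1514 stands in for whatever health check your load balancer performs, and the round-robin assignment is one common distribution strategy.

```python
import itertools
import socket

WORKER_AGENT_PORT = 1514  # workers receive agent log traffic here


def is_healthy(host: str, timeout: float = 2.0) -> bool:
    """TCP probe against a worker's agent port (stand-in health check)."""
    try:
        with socket.create_connection((host, WORKER_AGENT_PORT), timeout=timeout):
            return True
    except OSError:
        return False


def assign_agents(workers, agent_count, probe=is_healthy):
    """Round-robin agent connections across healthy workers only.

    Unhealthy workers are skipped, so their agents fail over to the
    remaining pool -- the behavior a load balancer provides.
    """
    healthy = [w for w in workers if probe(w)]
    if not healthy:
        raise RuntimeError("no healthy workers available")
    assignment = {w: [] for w in healthy}
    for agent, worker in zip(range(agent_count), itertools.cycle(healthy)):
        assignment[worker].append(agent)
    return assignment
```

Passing a custom `probe` lets you see the failover effect directly: marking one worker unhealthy redistributes its share of agents across the rest.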
Failure scenarios
Master node failure (distributed mode): When the master node fails, worker nodes continue processing logs from connected agents, but:
New agents cannot register with the cluster
Automated response and SOAR commands cannot be executed
Agent configuration changes cannot be synchronized
You can promote a worker node to master to restore full functionality.
Worker node failure (distributed mode): When a worker node fails, connected agents automatically reconnect to other available worker nodes (if using a load balancer) or remain disconnected until the worker recovers. No data is lost if agents buffer logs locally until reconnection.
Standalone node failure: When the standalone Logpoint fails, all agent connections are lost and no logs are processed until the Logpoint recovers. Agents buffer logs locally according to their buffer configuration.
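The agent-side buffering that protects against data loss in these scenarios can be sketched as a bounded local queue that is flushed on reconnection. This is a conceptual model, not the real agent code; the class name, buffer limit, and methods are hypothetical, and real agents buffer according to their configured limits.

```python
from collections import deque


class BufferingAgent:
    """Conceptual sketch of agent-side log buffering during an outage."""

    def __init__(self, max_buffered: int = 10_000):
        # Bounded buffer: once full, the oldest entries are dropped,
        # which is why buffer sizing matters for long outages.
        self.buffer = deque(maxlen=max_buffered)
        self.connected = False
        self.delivered = []  # stand-in for logs shipped to a worker

    def log(self, entry: str) -> None:
        if self.connected:
            self.delivered.append(entry)
        else:
            self.buffer.append(entry)  # hold locally until reconnection

    def reconnect(self) -> None:
        self.connected = True
        while self.buffer:  # flush buffered logs in arrival order
            self.delivered.append(self.buffer.popleft())
```

The bounded `deque` makes the trade-off visible: if the outage outlasts the buffer capacity, the oldest entries are discarded, so no data is lost only while the buffer has room.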