Architecture Configuration
Logpoint On-premise SIEM consists of three components, each performing a distinct task:
Log Management & Analytics or Search Head
Log Ingestion through Collectors or Fetchers
Storage Nodes or Repos
They can be deployed to a single physical appliance or split across multiple dedicated physical servers, virtual servers or cloud-based servers. Smaller organizations may deploy all three components within a single virtual appliance. Larger organizations can divide the components across various network zones to balance load and ensure resiliency.
Connection between your Logpoint appliances or servers can be through OpenVPN or through Logpoint Open Door.

Before you set up your environment, consider:
data
location
data access
agents
devices
cloud
retention
legal requirements
load balancing
high availability
security zones
organizational structure
multi-tenancy
geography
Log Management & Analytics or Search Head
The Search Head is the user interface where you control and manage other Logpoint servers and Logpoint products (SIEM, SOAR Automation, Case Management). It distributes queries to relevant repos (storage nodes), collates results, and displays them in the GUI. The Search Head does not store data.
Use the Search Head (Main Logpoint) to set up, modify, or configure features. You can switch to a storage/repo node to perform searches there if needed (useful for slow searches). If you don't see or have access to particular UI items (for example, user management), ensure you are using the Search Head/Main Logpoint.
For scalability or failover, you can include more than one search head that independently runs and aggregates queries to/from the repos.
Log Ingestion through Collectors
Collectors retrieve log data and buffer it. A Collector listens on dedicated ports, retrieves logs, normalizes (splits messages into key/value pairs), and forwards logs to a Storage Node. Collectors work together with Windows agents to retrieve, encrypt, buffer, and monitor files and the Windows Registry. Collectors also support static enrichment.
A Collector can be deployed at a remote site to aggregate local log streams and perform caching, buffering, and compression before sending the aggregate stream over a single VPN port to the Search Head.
In a Distributed Environment, add a Logpoint Node and convert it to a Collector or Fetcher.
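Normalization splits a raw message into key/value pairs before the log is forwarded. A minimal sketch of the idea in Python; the regex and field names here are illustrative only, since real Logpoint normalization is driven by signature packages, not a hand-written pattern:

```python
import re

# Illustrative pattern for an SSH authentication log line.
# Real normalization uses Logpoint signature packages, not this regex.
PATTERN = re.compile(
    r"Failed password for (?P<user>\S+) "
    r"from (?P<source_address>\S+) port (?P<source_port>\d+)"
)

def normalize(raw: str) -> dict:
    """Split a raw log message into key/value pairs."""
    match = PATTERN.search(raw)
    # Unmatched messages keep the raw text under a single key.
    return match.groupdict() if match else {"msg": raw}

fields = normalize("Failed password for root from 10.0.0.5 port 2222 ssh2")
print(fields)
# {'user': 'root', 'source_address': '10.0.0.5', 'source_port': '2222'}
```

The key/value pairs produced this way are what the Storage Node indexes and what searches later match against.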
Storage Nodes or Repos
All ingested logs are stored as flat files in individual repositories (repos) in a NoSQL-based database.
When you set up a repo, configure:
How long to retain or store log data before automatic discard.
Which users or user groups have access to the log data in the repo.
Whether log data is replicated so you can still access log data even when a server or repository is down or unresponsive.
Which storage tiers data is kept on and moved to automatically. For example, moving older data to a cheaper storage tier.
A repo can use multiple storage tiers located on one or more underlying disks because repos are logical spaces rather than physical disk spaces.
Which logs are forwarded to which repos is set up through a Routing Policy, which divides incoming log data so it can be forwarded to different repos.
Repos or storage nodes can be located close to their data sources to minimize log traffic egress costs from cloud locations and to minimize network bandwidth requirements. When repos are located close to their data source, only search queries and search responses are transmitted over the WAN.
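Conceptually, a Routing Policy is a table of rules matched against incoming log data to pick a destination repo. One plausible way to model it, with made-up rule criteria, field names, and repo names (Logpoint's actual rule format differs):

```python
# Hypothetical routing rules: the first matching rule wins,
# and anything unmatched falls through to a default repo.
ROUTING_POLICY = [
    {"match": {"device_group": "firewalls"}, "repo": "repo_network"},
    {"match": {"device_group": "windows"},  "repo": "repo_windows"},
]
DEFAULT_REPO = "repo_default"

def route(event: dict) -> str:
    """Return the repo an incoming event should be forwarded to."""
    for rule in ROUTING_POLICY:
        if all(event.get(k) == v for k, v in rule["match"].items()):
            return rule["repo"]
    return DEFAULT_REPO

print(route({"device_group": "firewalls"}))  # repo_network
print(route({"device_group": "linux"}))      # repo_default
```

Splitting data this way is what lets you give different repos different retention, access, and storage-tier settings.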
Your Environment
Your environment depends on your data traffic, events per second, and geographical location, to name just a few factors. Logpoint Customer Success works closely with our customers to design and apply the right architecture. Your Logpoint Architecture can span On-premise deployments, the Cloud, and off-site locations.

Standalone / All-in-One
Standalone or all-in-one deployments combine collection, normalization, and analytics in a single Logpoint instance.


If you have an all-in-one/standalone deployment, the same interface is used for collection, normalization, and analytics.
Sizing
Considerations before setup:
Geographic Distribution - where the search head(s), data nodes, and collectors will be located.
Number of events per second - Logpoint can provide you with a spreadsheet to help you, or you can get a general idea using our sizing calculator.
High Availability and load balancing requirements.
Number of live searches you will perform, including dashboards, alert rules, searches and the number of users who will perform these live searches.
Number of SOAR Automation playbooks you will use.
Number of Repos you need.
Log Retention Period.
The following sizing specifications are only guidelines to help you get a general idea of how to size. Your infrastructure and environment may differ.
Logpoint Search Head
Average live searches: 100
Hardware: 4 CPU cores, 18 GB RAM
Typical disk use < 200 GB
Data Node (no collector)
HW for 1000 EPS
8 CPU cores
32 GB RAM
Disk according to type and retention. Compressed logs plus their indexes take up roughly the same disk space as raw logs stored directly: even when raw logs are compressed, the index requires a space ratio of 1:1.
HA or shadow repositories require additional resources. Generally speaking, you will need to double the amount of storage and increase CPU cores and RAM.
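As a back-of-the-envelope check on the guidelines above, you can estimate retained storage from EPS, average event size, retention, the 1:1 index ratio, and HA doubling. The 500-byte average event size is an assumption for illustration; measure your own:

```python
eps = 1000              # events per second (guideline node size)
avg_event_bytes = 500   # assumed average raw event size; measure yours
retention_days = 90     # example retention period
index_ratio = 1.0       # index needs roughly 1:1 space vs raw logs
ha_factor = 2           # HA/shadow repos roughly double storage

daily_gb = eps * avg_event_bytes * 86_400 / 1e9
retained_gb = daily_gb * retention_days * (1 + index_ratio)
ha_gb = retained_gb * ha_factor

print(f"daily ingest:        {daily_gb:.1f} GB")
print(f"raw + index, 90 d:   {retained_gb:.0f} GB")
print(f"with HA doubling:    {ha_gb:.0f} GB")
```

At these assumptions, 1000 EPS works out to about 43 GB of raw data per day, and a little over 15 TB once retention, indexes, and HA are factored in; treat this as a rough first estimate, not a quote.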
Collector Node
HW for 1000 EPS
6 CPU cores
16 GB RAM
Sufficient disk space for buffering during lost connectivity
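"Sufficient disk space for buffering" can be estimated from EPS, event size, and the longest connectivity outage you want to ride out. All figures below are assumptions for illustration, including the compression ratio the collector achieves on buffered data:

```python
eps = 1000              # events per second
avg_event_bytes = 500   # assumed average raw event size
outage_hours = 24       # longest outage the buffer must cover
compression = 0.2       # assume buffered logs compress to ~20%

buffer_gb = eps * avg_event_bytes * outage_hours * 3600 * compression / 1e9
print(f"buffer disk needed: {buffer_gb:.2f} GB")
```

Size the buffer for your worst realistic outage, not the average one; once the buffer fills, new logs are at risk.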
Virtual Logpoint
To use a virtual environment, be aware:
Some virtual environments have a large number of servers sharing disk access, which can impact I/O.
There need to be enough reserved resources and I/O, or you need to use SSDs; otherwise, performance will be poor even if sizing is calculated correctly.
Distributed Logpoint
A full Logpoint server operating together with another Logpoint server that has a Search Head is termed a Distributed Logpoint environment. It segregates indexing and searching between separate instances or servers. The Search Head performs searches across the logs' index files in your environment's repos.

In distributed environments you can connect multiple nodes operating in different modes to store and analyze logs centrally.

High Availability should be configured to duplicate and store configurations and logs as backups.
Setup Distributed Logpoint
Activate Open Door.
Configure the Search Head.
Add Data Node Connections.
Configure Collector and/or Syslog forwarder.
Open Door
Open Door is the gateway for communication between two Logpoints. It must be activated for a Logpoint to be on a distributed architecture. When activated, it creates a virtual interface (tun10000) that allows secure communication between the two Logpoints.
If the connection is between Distributed Logpoint (DLP) server or instance and Search Head, enable Open Door on the server or instance.
If the connection is between Logpoint Collector (LPC) and Search Head, activate Open Door on the Search Head or main Logpoint.
The private network address must be unique for each Logpoint.
Before activating Open Door, open the following ports:
1194/UDP (OpenVPN)
Allows OpenVPN to access the distributed Logpoint. Inbound for the DLP-Search Head connection; outbound for the LPC-Search Head connection.
443/TCP (HTTPS)
Secure communication for Logpoint. Allows request/response communication to the Search Head/Main Logpoint from the Distributed Logpoint.
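Before activating Open Door, you can verify from one side that the other is reachable on the required ports. A minimal probe for the TCP side (the hostname is a placeholder for your Distributed Logpoint; 1194/UDP cannot be confirmed this way, since UDP has no connection handshake):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

# "dlp.example.internal" is a placeholder for your Distributed Logpoint host.
status = "open" if tcp_port_open("dlp.example.internal", 443) else "blocked/unreachable"
print(f"tcp/443 to distributed Logpoint: {status}")
```

A failed check usually points at a firewall rule or routing problem between the zones rather than at Logpoint itself.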
Search Head or Main Logpoint
The Search Head is the user interface where you control and manage all the other Logpoint machines in your environment in addition to all Logpoint products, including SIEM, SOAR Automation and Case Management. You can collect, index, and store logs from multiple Logpoint machines and search through them from a single, main Search Head.
Add Data Nodes
After setting up the Search Head or Main Logpoint, add the additional nodes or distributed Logpoints to create your environment. You need at least one node, which you then configure as a collector or forwarder. You can add any additional data nodes and later convert them to a collector or forwarder. After you have added other nodes, you can switch between them in your main Logpoint: go to Settings and use the drop-down at the top right.

You can always modify or delete existing nodes.
Add a Collector and/or Syslog Forwarder
After you set up the Search Head or main Logpoint and add the nodes for your environment, set up one of the nodes as a Collector or Syslog Forwarder to get your log data into Logpoint.
Converting Logpoint to a Collector or Syslog Forwarder is done through Modes of Operation.
Modes available:
Logpoint Collector
Syslog Forwarder
You can also convert a regular Logpoint into either a Logpoint Collector or a Syslog Forwarder.
When a Logpoint node accepts incoming log data via Syslog and forwards it to another target (for example, a Logpoint Collector), you can use a Raw Syslog Forwarder to export the raw logs to a remote target. This is useful when many incoming log streams need to be aggregated and forwarded over the network without each source device having direct connectivity to the destination Logpoint Collector node.
Collector
Collectors ingest, normalize, and forward logs to a remote Logpoint. In a standalone Logpoint, add a collector as a device in the Main Logpoint.
Distributed Collectors only collect logs; they have no dashboard, search, or report generation capabilities. The remote Logpoint then configures the sources and storage locations for the logs. Before configuring a Distributed Collector, remember to activate Open Door in the remote Logpoint first.
After setting up Distributed Collectors you need to add a device to determine where or from which location the collector will ingest the logs.
You can also add a Syslog Collector, which collects logs from sources that follow the Syslog protocol. These logs are then forwarded to Logpoint for storage and analysis.
Fetchers
Syslog Forwarder File Fetcher is configured to fetch logs from remote targets. Once fetched, the logs are stored in Logpoint for centralized management and analysis.
Syslog Collector
Syslog Collector collects logs from the sources that follow the Syslog protocol. Users can create syslog collector log sources from scratch or use templates tailored to specific devices or applications. Syslog Collector is typically used when logs need to be standardized, normalized, and enhanced before forwarding.
Syslog Forwarder
Syslog Forwarder collects logs from different sources, normalizes them using the applied signatures, and forwards them to configured Logpoints and a target storage. Unlike Logpoint Collectors, a Syslog Forwarder cannot act as a buffer.
Syslog Forwarder supports Air Gap. The Main Logpoints are usually located in high-security zones whereas Syslog Forwarders and other devices are in low-security zones.
Using a Syslog Forwarder
Before using a Syslog Forwarder:
Export a config file from Main Logpoint
Import the config file on the Syslog Forwarder
Add target(s)
Add target storage for air gap
Add devices
Raw Syslog Forwarder
Raw Syslog Forwarders collect and forward raw logs from a Logpoint to a remote target.
You can enable IP Spoofing to add the log collection devices directly in the target Logpoint instead of adding them in the Raw Syslog Forwarder, while still distinguishing the Logpoint where the logs are collected.
To use Raw Syslog Forwarders, you must configure:
target(s): the devices to which the raw syslog messages are forwarded
device(s): the sources from which Logpoint collects and forwards the raw syslog messages
To view logs forwarded from a localhost, you must add the IP of the Raw Syslog Forwarder to the remote target. You must add a device in the target Logpoint and configure its Syslog Collector to view the logs forwarded from that device. A remote target supports both TCP and UDP for localhost; however, it supports only UDP for other devices.
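To sanity-check that a remote target accepts forwarded messages, you can emit a single RFC 3164-style syslog line over UDP. This is a generic test probe, not part of Logpoint; the target address below is a placeholder, and Logpoint itself performs the real forwarding:

```python
import socket

def send_syslog_udp(host: str, port: int, message: str,
                    facility: int = 1, severity: int = 6) -> None:
    """Send a single RFC 3164-style syslog message over UDP."""
    pri = facility * 8 + severity          # user.info -> <14>
    payload = f"<{pri}>{message}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# "127.0.0.1" stands in for your target Logpoint's Syslog Collector address.
send_syslog_udp("127.0.0.1", 514, "Jan  1 00:00:00 testhost app: raw forwarder test")
```

If the test line does not appear on the target, check that the device and its Syslog Collector are configured there and that UDP 514 is open along the path.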