Log Management Devices
There are two main ways to set up Log Management using devices:
Processing Policy and Device: A processing policy combines normalization, enrichment, and routing policies into a single policy that is then assigned to a device.
Log Collection Policy and Device: A Log Collection Policy groups the configuration of collectors/fetchers, normalization, enrichment, routing, and repos that you add to a Device. You can use Log Collection Policies to apply the same configuration to one or more devices.
Normalization
Normalization translates a raw log message into the Logpoint taxonomy. Raw log messages come from different source devices in a variety of formats. Normalizing allows for searches, identification of patterns, and correlation across logs from different sources. For example, different firewall vendors may label fields differently or use no labels at all. Normalization maps various input fields, such as source or the third field from the left, into standardized field names such as source_address.
Logpoint uses two types of normalizers:
Compiled Normalizers: They are hard-coded and fast.
Normalization Packages: They contain one or more normalizers that use regex or signatures to find and extract the key-value pairs from the raw logs. The resulting key-value pairs depend on the log source and which signatures and potential labels are used. Use signature-based normalizers for raw logs that are not well-defined.
Normalization Packages
Normalization Packages contain one or more signature-based normalizers. Each normalizer contains a list of signatures that are looked up in the log message. The signature ID of the line that the log message was matched against is added as a field to the log, in addition to a norm_id field with the name of the normalization package used.
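As a rough illustration of how a signature-based normalizer extracts key-value pairs, here is a minimal sketch in Python. The signature pattern, the sample log, and the field name sig_id are illustrative assumptions, not Logpoint internals; only norm_id and source_address are taken from this documentation.

import re

# A hypothetical signature: a regex with named groups that map raw tokens
# to standardized field names such as source_address.
SIGNATURE_ID = 100123  # illustrative value
SIGNATURE = re.compile(
    r"Connection from (?P<source_address>\d+\.\d+\.\d+\.\d+) "
    r"to (?P<destination_address>\d+\.\d+\.\d+\.\d+) denied"
)

def normalize(raw_log, package_name):
    """Return key-value pairs if the signature matches, otherwise None."""
    match = SIGNATURE.search(raw_log)
    if not match:
        return None
    fields = match.groupdict()
    fields["norm_id"] = package_name   # name of the normalization package used
    fields["sig_id"] = SIGNATURE_ID    # ID of the signature line that matched
    return fields

print(normalize("Connection from 10.0.0.5 to 192.168.1.20 denied", "ExampleFirewall"))
# {'source_address': '10.0.0.5', 'destination_address': '192.168.1.20',
#  'norm_id': 'ExampleFirewall', 'sig_id': 100123}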
Normalization Package Types:
Vendor Packages: Normalization Packages that Logpoint developed and that are part of the Log Source Integration. You can't modify or edit Vendor Packages, but you can clone them and then make your changes.
My Packages: Normalization Packages that you create. You can create your own packages based on Vendor Packages: first clone a package, then make your changes. After that, you can share your packages with other Logpoint users.
Go to Settings >> Knowledge Base from the navigation bar and click Normalization Packages. You can switch between My Packages and Vendor Packages by clicking the dropdown at the top-left. Export a normalization package from one Logpoint and import it into another to save configuration time.
If you do not want to use the packages that come out-of-the-box with the integrations, you must create signatures before creating a Normalization Package. Go to Signatures to learn how to make them.
Normalization Policies
If a normalization policy contains both compiled and regex-based normalizers, Logpoint first tries compiled normalizers. Regex-based normalizers (normalization packages) are used only if compiled normalizers fail. The order of normalization packages matters.
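A minimal sketch of that ordering, assuming normalizers are callables that return extracted fields or None (the function names are hypothetical, not Logpoint code):

def apply_normalization_policy(raw_log, compiled_normalizers, normalization_packages):
    # Compiled normalizers are tried first because they are fast.
    for normalizer in compiled_normalizers:
        fields = normalizer(raw_log)
        if fields is not None:
            return fields
    # Regex-based packages are tried only if no compiled normalizer matched,
    # in the order they are listed in the policy.
    for package in normalization_packages:
        fields = package(raw_log)
        if fields is not None:
            return fields
    return {}  # no normalizer matched; the log keeps only its raw message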
Enrichment
Enrichment adds metadata from enrichment sources to log events; this metadata was not part of the original log. Enriched fields in Logpoint logs are marked in red.
Enrichment Types
Static: This enrichment is applied at data ingestion, either during collection or storage. Static enrichment is indexed, which makes queries over large datasets run faster.
Dynamic: This enrichment is applied during analysis, when a query runs. It is useful for lookups that are only possible long after logs are received, for example from a threat intelligence table, as the threat was likely unknown when the logs were originally ingested. Dynamic enrichment uses less storage than static enrichment, puts less load on collection, and is best suited for small data sets and short time ranges.
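To make the distinction concrete, here is a minimal sketch, assuming a simple lookup table keyed by source_address; the function names and data are illustrative, not Logpoint code:

# Static enrichment: the lookup happens once, at ingestion time,
# and the added fields are indexed together with the log.
def enrich_at_ingest(log, lookup):
    log.update(lookup.get(log.get("source_address"), {}))
    return log

# Dynamic enrichment: the same lookup happens at query time,
# so it can use data (e.g. threat intelligence) learned after ingestion.
def enrich_at_query(results, lookup):
    return [dict(r, **lookup.get(r.get("source_address"), {})) for r in results]

threat_intel = {"203.0.113.7": {"threat": "known_c2"}}
print(enrich_at_query([{"source_address": "203.0.113.7"}], threat_intel))
# [{'source_address': '203.0.113.7', 'threat': 'known_c2'}]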
Enrichment data comes from an Enrichment Source. To use an Enrichment Source, create an Enrichment Policy and add it to a Processing Policy or a Log Collection Policy so the log data is enriched. Enrichment Sources are Integrations that are downloaded from the Marketplace.
Logpoint Enrichment Sources
LDAP: User information from an LDAP server.
GeoIP: Geographical information of a public IP address.
CSV: Data in a Comma-Separated Values (CSV) file.
IPtoHost: Hostname from IP address.
Stix/Taxii: Cyber Threat Intelligence (CTI) data written in STIX format from a TAXII server.
Oracle: Data from an Oracle database.
ODBC: Data from an ODBC database server. Supported ODBC databases include PostgreSQL, MSSQL, and MySQL.
Threat Intelligence: Groups a number of threat intelligence sources into a single Integration for download.
To learn how to use and apply enrichment data to your logs, go to Enrichment.
Routing
Routing specifies the repository in which incoming log data from a specific device should be stored, based on a key or key-value pair in the log message.
Routing Policies are made up of one or more routing criteria, which determine which repo incoming log data should go to. As part of a Processing Policy, a routing policy can also be used to selectively discard incoming log messages.
You use Routing Policies to select which repos the incoming logs should go to. Routing criteria use keys to direct the logs to the correct repo.
KeyPresentValueMatches routes the logs if they match a key-value pair.
KeyPresent routes the logs if a specific key is present.
There is a default LogpointAlerts routing policy that routes messages with norm_id = LogpointAlerts to the _LogpointAlerts repo; otherwise they go to the default repo.
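A minimal sketch of how routing criteria could be evaluated; the criteria types and the default LogpointAlerts rule are from this documentation, while the data structures and function are illustrative, not Logpoint code:

def route(log, criteria, default_repo):
    # Return the repo a log should be stored in; None means discard the log.
    for c in criteria:
        if c["type"] == "KeyPresent" and c["key"] in log:
            return c["repo"]
        if c["type"] == "KeyPresentValueMatches" and log.get(c["key"]) == c["value"]:
            return c["repo"]
    return default_repo

criteria = [
    {"type": "KeyPresentValueMatches", "key": "norm_id",
     "value": "LogpointAlerts", "repo": "_LogpointAlerts"},
]
print(route({"norm_id": "LogpointAlerts"}, criteria, "default"))  # _LogpointAlerts
print(route({"norm_id": "WinServer"}, criteria, "default"))       # default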
Processing Policies
A Processing Policy combines normalization, enrichment, and routing policies into a single policy assigned to a device. All collectors and fetchers use processing policies, eliminating the need to attach separate normalization, enrichment, and routing policies each time.
Repositories
A repository or repo is a log storage location where device logs are routed to. When you set up or run a search query, you select which repos to run the search on. A Routing Policy determines which repo logs are sent to. Repo properties control how long log data is stored until it is automatically deleted, which storage tier to use, where log data is potentially moved to, and whether the data is replicated using Logpoint High Availability. Repos are different from actual storage volumes. A server can have multiple disks or volumes, and these volumes can have the same or different mount points. While repos are logical volumes within Logpoint, a repository can use multiple storage tiers located on one or more of the underlying volumes.
A single repo consists of one or more repo-paths, each with its own retention policy. Log retention period depends on the policies set.
High-Availability (HA) Repos
You can replicate a repo from one Logpoint to a repo on another Distributed Logpoint. These High Availability (HA) Repos replicate logs from one Logpoint to multiple Logpoint machines, ensuring logs remain available even if one Logpoint becomes unavailable.

In the above diagram, the repos R1 and R2 belong to Logpoint L1. When you add Logpoint L2 as the Remote Logpoint of L1 and configure them for high availability, Logpoint replicates the repos of L1 to the corresponding repos in L2. Ingested logs are stored in both repos. If you configure HA Repo for R1 and R2, their HA repos L1R1 and L1R2 are replicated in Logpoint L2.
Log Retention specifies the number of days for which logs are kept in storage before they are removed. You can add multiple Repo Paths to transfer older logs from primary to secondary storage according to their retention period. This frees up space in primary storage for newer logs.
For example, you can define two repo paths for repo as /opt/immune/storage/tier1 and /opt/immune/storage/tier2 and select the Retention (day) as five days and fifteen days, respectively. Then, Logpoint stores the collected logs in the first path for the first five days. Then, they are moved to the second path and stored for fifteen days. The total retention period for the repo is twenty days. After twenty days, the logs are deleted from the database.
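A small sketch of that arithmetic, using the two repo paths from the example above (the function is illustrative, not Logpoint code):

TIERS = [  # (repo path, retention in days), in storage order
    ("/opt/immune/storage/tier1", 5),
    ("/opt/immune/storage/tier2", 15),
]

def tier_for_age(age_days):
    # Return the path a log of the given age resides on, or None once deleted.
    boundary = 0
    for path, days in TIERS:
        boundary += days
        if age_days < boundary:
            return path
    return None  # older than the total retention period (20 days here)

print(tier_for_age(3))   # /opt/immune/storage/tier1
print(tier_for_age(12))  # /opt/immune/storage/tier2
print(tier_for_age(25))  # None, the log has been deleted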
Logs are stored in folders, each with a maximum capacity of 5 million logs. When a folder reaches this capacity or its retention time is exceeded, it becomes immutable, meaning no additional logs can be added. Once a folder is immutable, it is moved from primary to secondary storage, ensuring that all logs within it are moved together.
Collectors & Fetchers
A collector retrieves logs from the source and buffers them. It receives logs through specific ports and/or forwards them to a Logpoint Storage Node. The collector uses a normalizer to split each log message into key-value pairs and applies static enrichment during processing.
Logpoint has the following out-of-the-box collectors.
Syslog Collector
Snare Collector
FTP Collector
SNMP Trap Collector
SFlow Collector
File System Collector
Syslog Collector
Syslog Collector collects data from sources that follow the Syslog protocol and serves as the collector for most Logpoint Log Sources. You can use Syslog Collector in one of three ways:
As a Syslog Collector:
It collects syslog messages from the source devices, processes the logs, and forwards them to Logpoint. It facilitates direct communication between source devices and Logpoint.
As a Proxy Collector:
It collects syslog messages from the source device and sends them to Logpoint, acting as a proxy between the source device and Logpoint. It is configured where direct communication between source devices and Logpoint is not possible, such as when there are network restrictions, location restrictions, or security policies in place.
Using a Proxy:
The source device sends the logs to a proxy device, which acts as an intermediate log forwarder. The proxy device collects, processes, and forwards the logs to Logpoint. A proxy device can be a log forwarder or a collector that sits between the source devices and Logpoint, centralizing log forwarding from multiple sources to reduce the load. This ensures reliability by caching logs if the SIEM is temporarily unavailable.
To set up the Syslog Collector:
For Windows-based systems, use third-party agents.
For Linux-based systems, use the command-line. After using the command line to add it, you will be able to link it to a device using the Logpoint UI.
Sequence Numbering
You can add sequence numbers to logs collected by the Syslog Collector. Sequence numbers start from 1 and are reset when you restart the Syslog Collector or when the number of logs reaches 1,000,000,000,000. They help you identify the order of the logs received from a device.
Supported Specification and Syslog Format
Syslog Collector supports the RFC 6587 specification for UTF-8 encoded logs with the following formats:
RFC 3164
Standard RFC 3164:
<PRI> MTH DD HH:MM:SS Hostname LogContent
For example:
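An illustrative message in this format (hostname and content are hypothetical):
<34>Oct 11 22:14:15 host01 su: 'su root' failed for user1 on /dev/pts/8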
RFC 3164 with year:
<PRI> YYYY MTH DD HH:MM:SS Hostname LogContent
For example:
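An illustrative message in this format (hostname and content are hypothetical):
<34>2024 Oct 11 22:14:15 host01 su: 'su root' failed for user1 on /dev/pts/8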
RFC 5424
<PRI> [PRIVAL] [FULL-DATE]T[FULL-TIME] Log Content
Here,
FULL-DATE = DATE-FULLYEAR “-” DATE-MONTH “-” DATE-MDAY
FULL-TIME = PARTIAL-TIME TIME-OFFSET
PARTIAL-TIME = TIME-HOUR “:” TIME-MINUTE “:” TIME-SECOND [TIME-SECFRAC]
TIME-SECFRAC = “.” 1*6DIGIT
TIME-OFFSET = Z / (“+” / “-“) TIME-HOUR “:” TIME-MINUTE
For example:
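An illustrative RFC 5424 message (hostname, content, and the +02:00 offset are hypothetical):
<34>1 2024-10-11T22:14:15.003+02:00 host01 su - - - 'su root' failed for user1 on /dev/pts/8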
Logpoint prioritizes the timezone offset in the log over the device’s timezone when extracting log_ts. If both the device and the log contain timezone information, the log’s timezone offset is used for log_ts.
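A minimal sketch of that precedence in Python (the function is illustrative, not Logpoint code):

from datetime import datetime, timezone, timedelta

def extract_log_ts(timestamp, device_tz):
    dt = datetime.fromisoformat(timestamp)
    if dt.tzinfo is None:      # no offset in the log: fall back to the device timezone
        dt = dt.replace(tzinfo=device_tz)
    return dt                  # otherwise the offset carried in the log wins

device_tz = timezone(timedelta(hours=-5))
print(extract_log_ts("2024-10-11T22:14:15+02:00", device_tz))  # keeps +02:00 from the log
print(extract_log_ts("2024-10-11T22:14:15", device_tz))        # uses the device timezone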
Before you start receiving logs, you must configure the log source settings on your Linux or Windows system to forward logs to Logpoint.
Sequence Numbering in logs collected from Syslog Collector
A sequence number is assigned per device per protocol to each log collected from the Syslog Collector. This helps you identify the order of the logs received from a particular device.
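Conceptually, the counter is keyed by device and protocol, as in this minimal sketch (the data structure is illustrative; the seq_num_* field names and the reset threshold come from this documentation):

from collections import defaultdict

MAX_SEQ = 1_000_000_000_000          # counters reset after this many logs
counters = defaultdict(int)          # keyed by (device_ip, protocol)

def assign_sequence_number(log, device_ip, protocol):
    key = (device_ip, protocol)
    counters[key] = counters[key] % MAX_SEQ + 1   # starts at 1, wraps at the limit
    log["seq_num_" + protocol] = counters[key]    # e.g. seq_num_tcp, seq_num_udp, seq_num_ssl
    return log

print(assign_sequence_number({}, "192.168.0.135", "tcp"))  # {'seq_num_tcp': 1}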
The log collected from a device with the device IP 192.168.0.135 and communicating via the TCP protocol is:
The sequence number for the above log is 41, shown in the seq_num_tcp field. It means that this log is the 41st TCP log message received from the device with the device IP 192.168.0.135.
Similarly, the log collected from a device with the device IP 192.168.0.135 and communicating via the UDP protocol is as follows:

The sequence number for the above log is 83, as shown in the seq_num_udp field. It means that this log is the 83rd UDP log message received from the device with the device IP 192.168.0.135.
And the log collected from a device with the device IP 192.168.0.135 and communicating via the SSL protocol is as follows:

The sequence number for the above log is 68, shown in the seq_num_ssl field. It means that this log is the 68th SSL log message received from the device with the device IP 192.168.0.135.
Parsers
Parsers process raw logs by identifying log formats and extracting meaningful fields, including timestamps, source, event details, and attributes. They transform unstructured or semi-structured logs into structured, searchable data, enabling accurate normalization, correlation, and analysis across different log sources.
In addition to built-in default parsers, you can create custom parsers to handle unique log formats. The regex pattern defined in a parser splits incoming messages into individual log entries and extracts the required fields for further processing.
Default Parsers
Line Parser
Line Parser splits each line in the log file into an individual log. If a log is larger than 12 KB, it is split into multiple logs.
Example: The logs are divided into two.
Syslog Parser
Syslog Parser splits syslog-formatted logs into individual messages using either the newline character (\n) or octet counting, in which logs are split based on the message length specified in the log. If the size of a log exceeds the Message Length defined for Syslog, the log is split into segments of that length.
For example, if the message length is set to 12 KB, logs larger than 12 KB are divided into 12 KB segments. Use Syslog Parser only if the syslog message is formatted in one of the supported syslog formats.
This log entry will be split into two separate logs.
Multi Line Syslog Parser
Multi Line Syslog Parser splits multiple syslog messages written in multiple lines into individual logs. It uses Priority Value, or PRI, a numerical value enclosed in angle brackets “<>”, to split the message.
Example: The following Log entries are split into three logs
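A minimal sketch of splitting on the priority value (the regex and the sample messages are illustrative, not Logpoint code):

import re

def split_on_pri(buffer):
    # Split a stream into messages at each priority value "<nnn>".
    parts = re.split(r"(?=<\d{1,3}>)", buffer)    # keep the PRI with its message
    return [p.strip() for p in parts if p.strip()]

sample = ("<34>Oct 11 22:14:15 host01 su: first message\n"
          "a continuation line without a PRI\n"
          "<13>Oct 11 22:14:16 host01 cron: second message\n"
          "<13>Oct 11 22:14:17 host01 cron: third message")
print(len(split_on_pri(sample)))  # 3 messages; the continuation stays with the first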
Email Parser
Email Parser aggregates logs with the same message ID into a single log. After aggregation, a compiled normalizer specific to each supported email service is required in the normalization policy to extract key-value pairs. Compiled Normalizers are hard-coded by Logpoint. While they may not be flexible, they are very CPU efficient.
Email Parser only works with the Syslog Collector.
The supported email services and their compiled normalizers are:
Exim: EximMTACompiledNormalizer
Qmail: QmailCompiledNormalizer
Cisco IronPort: CiscoIronPortESGCompiledNormalizer
Sendmail: SendMailCompiledNormalizer
Postfix MTA: PostFixCompiledNormalizer
Proofpoint Email Protection Parser
Proofpoint Email Protection Parser splits logs coming from Proofpoint’s Email Protection service.
DB2 Parser
DB2 Parser splits logs from IBM DB2 servers.
RACF Parser
RACF Parser splits logs from Resource Access Control Facility (RACF) devices.
CSVParser
CSVParser processes comma-separated values from a file. CSVParser can only be used with file-based collectors and fetchers.
JSONLineParser
JSONLineParser processes JSON lines from a file. JSONLineParser can only be used with file-based collectors and fetchers.
Log Collection Policy
A Log Collection Policy defines the rules and settings for collecting, forwarding, and managing log data from various sources. It determines which log sources are included, how logs are filtered, and the frequency and method of collection. Administrators can configure policies to ensure that only relevant log data is collected, helping optimize storage and improve system performance.
Devices
Devices represent the hardware or software components that generate and send logs to Logpoint. These include servers, workstations, network devices, applications, or cloud services. Each device is identified by its IP address or addresses and must be added and configured in Logpoint before log collection begins. If a device is not configured, Logpoint’s internal firewall will block its traffic.
Logs from a device are retrieved by a collector or fetcher, either configured directly on the device or through a collection policy. Devices can optionally be organized into device groups, which are logical groupings of similar devices, but this is not mandatory. Registering devices ensures accurate log tracking, proper application of collection policies, and efficient management of log sources.
Use a .CSV File to Add Devices
The first line of the CSV file must be a header row with the following fields:
device_name
device_ips
device_groups
log_collection_policies
distributed_collector
confidentiality
integrity
availability
timezone
The device_name and device_ips fields are mandatory. The values provided for all the non-mandatory fields must already exist in Logpoint. For example, if you add windows to the list of device_groups, the Windows device group must already exist in Logpoint.
The field values are separated with a comma (,), but if a field has multiple values, they must be enclosed in double quotation marks (“”).
Logpoint predefines which timezone values you need to use in the CSV file. Use the names exactly as listed in the List of Timezones.
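A hypothetical CSV file with two devices (the names, IP addresses, and the windows device group are illustrative; only the mandatory fields are filled in for the second device, and the timezone column is left empty):

device_name,device_ips,device_groups,log_collection_policies,distributed_collector,confidentiality,integrity,availability,timezone
dc01,"192.168.1.10,192.168.1.11",windows,,,,,,
web01,10.0.0.15,,,,,,,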
Device Groups
Device Groups allow you to organize two or more devices by common characteristics, such as operating system, location, or function. For example, group devices by platform (e.g., Windows or Linux) or by location (e.g., Office A or Office B).
Device Groups are also used for user access management, allowing you to control which users can view and interact with specific device sets. Grant one user group access to a particular device group and restrict access to others. For example, Windows administrators with Logpoint access can be limited to viewing and managing only the devices in the Windows device group. One device can belong to multiple device groups.
Use the in command in search to retrieve logs from devices in a specific device group. For example, to search for logs from devices in the Linux device group, use the device_ip in DG_LINUX query.