Process Commands

You can use the process command to execute one-to-one functions, which produce one output for each given input.

Some default process commands available in Logpoint are:

JSON Parser

The JavaScript Object Notation (JSON) Parser reads JSON data and extracts key values from fields of normalized logs that contain valid JSON values. A string filter applied to the provided field defines the path from which values are extracted.

The supported filter formats for JSON Parser are:

  • Chaining for nested JSON

    Example: .fields.user.Username

  • Array access

    Example: .[1]


| process json_parser (field name, "filter") as field name


| process json_parser (msg, ".AzureLogAnalytics") as analytics

Here, the “| process json_parser (msg, “.AzureLogAnalytics”) as analytics” query applies the AzureLogAnalytics filter to the msg field and extracts the key values to the analytics field.
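As a rough illustration of what such a dot-path filter does, the following Python sketch walks a path like ".AzureLogAnalytics" or ".[1]" through parsed JSON (the `json_parser` function here is a stand-in for illustration, not Logpoint's implementation):

```python
import json

def json_parser(value, path):
    # Stand-in for the json_parser filter: follow a dot-path such as
    # ".fields.user.Username" (chaining) or ".[1]" (array access).
    data = json.loads(value)
    for part in path.strip(".").split("."):
        if part.startswith("[") and part.endswith("]"):
            data = data[int(part[1:-1])]   # array access, e.g. ".[1]"
        else:
            data = data[part]              # object key, e.g. ".AzureLogAnalytics"
    return data

# Hypothetical msg field value, made up for illustration.
msg = '{"AzureLogAnalytics": {"tenant": "contoso"}, "severity": 3}'
analytics = json_parser(msg, ".AzureLogAnalytics")
```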

JQ Parser

The JQ Parser applies a JQ filter to fields of normalized logs that contain valid JSON values and extracts key values from those fields. A JQ filter defines a path for extracting the required data from JSON and supports a wide range of expressions. Go to Basic filters to learn more about the supported filter formats for the JQ Parser.


| process jq_parser (field name, "filter") as field name


| process jq_parser (conditional_access_policies, ".[].result") as cap_result

Here, the “| process jq_parser (conditional_access_policies, “.[].result”) as cap_result” query applies [] (array filter) and result filter to the conditional_access_policies field and extracts the key values to the cap_result field.
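In JQ terms, `.[]` iterates the top-level array and `.result` picks that key from each element. A minimal Python stand-in for this specific filter (not Logpoint's implementation, and the sample value is made up):

```python
import json

def jq_each_result(value):
    # Stand-in for the JQ filter ".[].result": iterate the top-level
    # JSON array and pick the "result" key from each element.
    return [item["result"] for item in json.loads(value)]

# Hypothetical conditional_access_policies field value.
conditional_access_policies = '[{"result": "success"}, {"result": "failure"}]'
cap_result = jq_each_result(conditional_access_policies)
```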

JSON Expand

The JSON Expand takes the field with a valid JSON array value and creates separate log instances for individual array items of that field. Each array item takes the original field name.


| process json_expand (field name)


| process json_expand (policy)

The log before applying the JSON Expand query


Four log instances after applying the JSON Expand query

Here, the “| process json_expand (policy)” query expands the policy field into four log instances. After expansion, each array item takes the policy as a field name.
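The expansion can be pictured as follows: one new log instance per array item, with each item stored under the original field name. A Python sketch of that behavior, under the assumption that a log is a flat key-value record (sample data is made up):

```python
import json

def json_expand(log, field):
    # Stand-in for "| process json_expand (policy)": create one log
    # instance per array item; each item keeps the original field name.
    items = json.loads(log[field])
    return [{**log, field: item} for item in items]

log = {"device": "fw01", "policy": '["allow", "deny", "drop", "log"]'}
expanded = json_expand(log, "policy")
```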

String Concat

This process command lets you join multiple field values of the search results.


| process concat(fieldname1, fieldname2, ...., fieldnameN) as string


| process concat(city, country) as geo_address

Domain Lookup

This process command provides the domain name from a URL.


| process domain(url) as domain_name


url=* | process domain(url) as domain_name |
chart count() by domain_name, url
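The extraction resembles keeping only the host portion of a URL. A hedged Python sketch (the `domain` function is a stand-in; Logpoint's parsing rules may differ):

```python
from urllib.parse import urlparse

def domain(url):
    # Stand-in for "| process domain(url)": keep only the host portion.
    return urlparse(url).netloc
```

For example, `domain("https://docs.logpoint.com/docs/search?q=1")` yields `docs.logpoint.com`.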


Diff

This process command calculates the difference between two numerical field values of a search.


| process diff(fieldname1,fieldname2) as string


| process diff(sent_datasize,received_datasize) as difference
| chart count() by sent_datasize, received_datasize,difference


Sum

This process command calculates the sum of numerical field values of a search.


| chart sum(fieldname)


label = Memory | chart sum(used) as Memory_Used by col_ts

Experimental Median Quartile Quantile

This process command performs statistical analysis (median, quartile and quantile) of events based on fields. All these commands take numerical field values as input.



| chart median(fieldname) as string


doable_mps=* |chart median(doable_mps)



| chart quartile(fieldname) as string1, string2, string3


doable_mps=* |chart quartile(doable_mps)



| process quantile(fieldname)


doable_mps=* | process quantile(doable_mps)
|search quantile>0.99
|chart count() by doable_mps order by doable_mps desc
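For reference, Python's standard library computes the same statistics; the sample values below are made up, and Logpoint's exact quartile method may differ from Python's default:

```python
import statistics

# Hypothetical doable_mps samples, made up for illustration.
doable_mps = [90, 100, 110, 120, 130, 140, 150, 160]

median = statistics.median(doable_mps)
# Three cut points dividing the data into four groups (Q1, Q2, Q3).
quartiles = statistics.quantiles(doable_mps, n=4)
```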


Percentile

Percentiles are numbers below which a portion of the data is found. This process command calculates the statistical percentile of the provided field and indicates whether the field's value is high, medium, or low compared to the rest of the data set.


| chart percentile (field name, percentage)


doable_mps = *| chart percentile (doable_mps, 99)

Here, the “| chart percentile (doable_mps, 99)” command calculates the percentile for the value of the doable_mps field.


Entropy

Entropy measures the degree of randomness in a set of data. This process command calculates the entropy of a field using the Shannon entropy formula and displays it in the provided field. A higher entropy value denotes more randomness in the data, which increases the probability that the values were artificially generated by a system and may indicate malicious activity.


| process entropy (field) as field_entropy


device_address = *| process entropy (device_address) as test

Here, the “| process entropy (device_address) as test” command calculates the entropy of the device_address field and displays it in test.


| process entropy (url_address, url) as entropy_url

Here, the “| process entropy (url_address, url) as entropy_url” command takes url as an optional parameter and extracts the domain name from the url_address to perform entropy calculation on it and displays it in entropy_url.


| process entropy ("", string) as en

Here, the “| process entropy (“”, string) as en” command takes string as an optional parameter, calculates the entropy of the raw log string, and displays it in en.
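For reference, the Shannon entropy of a string can be sketched in Python as below; this is a stand-in for the server-side calculation, not Logpoint's implementation:

```python
import math
from collections import Counter

def entropy(value):
    # Shannon entropy in bits per character: -sum(p * log2(p)) over
    # the relative frequency p of each distinct character.
    counts = Counter(value)
    total = len(value)
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

A uniform string such as `"aaaa"` has entropy 0, while more varied strings score higher.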

Process lookup

This process command looks up related data from a user-defined table.


| process lookup(table,field)


| process lookup(lookup_table, device_ip)


GeoIP

This process command provides the geographical information of a public IP address. For private IP addresses, as defined in RFC 1918 (Address Allocation for Private Internets), it adds the value “internal” to all the generated fields.


| process geoip (fieldname)


| process geoip (source_address)

For the Private IP:


For the Public IP:



Codec

This process command encodes field values to the base64 format or decodes base64 values back to their text form.


| process codec(<encode/decode function>, <field to be encoded/decoded>) as <attribute_name>


| process codec(encode, name) as encoded_name
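Base64 round-tripping works as in the Python sketch below (a stand-in for the command, assuming UTF-8 text values):

```python
import base64

def codec(action, value):
    # Stand-in for "| process codec(...)": base64-encode or -decode a string.
    if action == "encode":
        return base64.b64encode(value.encode()).decode()
    return base64.b64decode(value).decode()
```

For example, encoding the value `admin` yields `YWRtaW4=`, and decoding that string returns `admin`.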


In Range

This process command determines whether a certain field value falls within the range of two given values. The processed query returns TRUE if the value is in the range.


| process in_range(endpoint1, endpoint2, field, result, inclusion)


endpoint1 and endpoint2 are the endpoint fields for the range,
the field is the fieldname to check whether its value falls within the given range,
result is the user provided field to assign the result (TRUE or FALSE),
inclusion is the parameter to specify whether the range is inclusive or exclusive of
given endpoint values. When this parameter is TRUE, the endpoints will be included for
the query and if it is FALSE, the endpoints will be excluded.


| process in_range(datasize, sig_id, duration,Result, True)
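The inclusion logic can be sketched as follows; this stand-in assumes the smaller endpoint is the lower bound, which may differ from Logpoint's exact behavior:

```python
def in_range(endpoint1, endpoint2, value, inclusive=True):
    # Stand-in for "| process in_range(...)": TRUE when value lies
    # between the endpoints; `inclusive` mirrors the inclusion parameter.
    low, high = min(endpoint1, endpoint2), max(endpoint1, endpoint2)
    return low <= value <= high if inclusive else low < value < high
```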


Regex

This process command extracts specific parts of log messages into custom field names.


| process regex("_regexpattern", _fieldname)
| process regex("_regexpattern", "_fieldname")

Both syntaxes are valid.


| process regex("(?P<type>\S*)",msg)
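The pattern uses a standard named capture group, so its effect can be reproduced directly in Python (the sample msg value is made up):

```python
import re

# The signature "(?P<type>\S*)": the named group captures the first
# whitespace-free token of msg into a field called "type".
msg = "GET /index.html HTTP/1.1"
fields = re.search(r"(?P<type>\S*)", msg).groupdict()
```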

DNS Process

This process command returns the domain name assigned to an IP address and vice-versa. It takes an IP address or a Domain Name and a Field Name as input. The plugin then verifies the value of the field. If the input is an IP Address, it resolves the address to a hostname and if the input is a Domain Name, it resolves the address to an IP Address. The output value is stored in the Field Name provided.


| process dns(IP Address or Hostname)


destination_address=* | process dns(destination_address) as domain
| chart count() by domain


Compare

This process command compares two values and checks whether they match.


| process compare(fieldname1,fieldname2) as string


| process compare(source_address, destination_address) as match
| chart count() by match, source_address, destination_address

IP Lookup

This process command enriches the log messages with the Classless Inter-Domain Routing (CIDR) address details. A list of CIDRs is uploaded in the CSV format during the configuration of the plugin. For any IP Address type within the log messages, it matches the IP with the content of the user-defined Lookup table and then enriches the search results by adding the CIDR details.


| process ip_lookup(IP_lookup_table, column, fieldname)
 where IP_lookup_table is the lookup table configured in the plugin,
 Column is the column name of the table which is to be matched
  with the fieldname of the log message.


| process ip_lookup(lookup_table_A, IP, device_ip)

This command compares the IP column of the lookup_table_A with the device_ip field of the log and if matched, the search result is enriched.


Compare Network

This process command takes a list of IP addresses as inputs and checks if they are from the same network or different ones. It also checks whether the networks are public or private. The comparison is carried out using either the default or the customized CIDR values.


| process compare_network(fieldname1,fieldname2)

Example: (Using default CIDR value)

source_address=* destination_address=*
| process compare_network (source_address, destination_address)
| chart count() by  source_address_public, destination_address_public,
same_network, source_address, destination_address

Clean Char

This process command removes all the alphanumeric characters present in a field value, leaving only the special characters.


| process clean_char(<field_name>) as <string_1>, <string_2>


| process clean_char(msg) as special, characters
| chart count() by special, characters
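The removal can be sketched with a character-class substitution; this stand-in strips only ASCII alphanumerics, which is an assumption about the command's exact character set:

```python
import re

def clean_char(value):
    # Stand-in for "| process clean_char(msg)": strip ASCII alphanumeric
    # characters, leaving only the remaining special characters.
    return re.sub(r"[A-Za-z0-9]", "", value)
```

For example, `clean_char("user=admin; id=42!")` leaves only `=; =!`.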

Current Time

This process command gets the current time from the user and adds it as a new field to all the logs. This information can be used to compare, compute, and operate the timestamp fields in the log message.


| process current_time(a) as string


source_address=* | process current_time(a) as time_ts
| chart count() by time_ts, log_ts, source_address

Count Char

This process command counts the number of characters present in a field-value.


| process count_char(fieldname) as int


| process count_char(msg) as total_chars
| search total_chars >= 100

DNS Cleanup

This process command converts a DNS from an unreadable format to a readable format.


| process dns_cleanup(fieldname) as string


col_type=syslog | norm dns=<DNS.string>| search DNS=*
|process dns_cleanup(DNS) as cleaned_dns
| norm on cleaned_dns .<dns:.*>.
| chart count() by DNS, cleaned_dns, dns


Grok

This process command enables you to extract key-value pairs from logs at query runtime using Grok patterns. Grok patterns are patterns defined using regular expressions that match words, numbers, IP addresses, and other data formats.

Refer to Grok Patterns and find a list of all the Grok patterns and their corresponding regular expressions.


| process grok("<signature>")

A signature can contain one or more Grok patterns.


To extract the IP address, method, and URL from the log message: GET /index.html

Use the command:

| process grok("%{IP:ip_address_in_log} %{WORD:method_in_log} %{URIPATHPARAM:url_in_log}")

Using this command adds the ip_address_in_log, method_in_log, and url_in_log fields and their respective values to the log if it matches the signature pattern.
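Under the hood, each Grok pattern expands to a regular expression with a named capture group. The Python sketch below uses simplified regexes as rough equivalents of the three patterns above; the real IP, WORD, and URIPATHPARAM definitions are stricter:

```python
import re

# Simplified stand-ins for the Grok patterns (assumptions, not the
# official definitions).
GROK = {
    "IP": r"(?P<ip_address_in_log>\d{1,3}(?:\.\d{1,3}){3})",
    "WORD": r"(?P<method_in_log>\w+)",
    "URIPATHPARAM": r"(?P<url_in_log>/\S*)",
}
pattern = re.compile(f"{GROK['IP']} {GROK['WORD']} {GROK['URIPATHPARAM']}")
fields = pattern.match("192.168.1.10 GET /index.html").groupdict()
```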



ASCII Converter

This process command converts hexadecimal (hex) and decimal (dec) values of various keys to their corresponding readable ASCII values. The application supports the Extended ASCII Table for processing decimal values.

Hexadecimal to ASCII


| process  ascii_converter(fieldname,hex) as string


| process ascii_converter(sig_id,hex) as alias_name

Decimal to ASCII


| process  ascii_converter(fieldname,dec) as string


| process ascii_converter(sig_id,dec) as alias_name
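The two conversions can be sketched as below; the expected input formats (a hex string, or space-separated decimal codes) are assumptions for illustration:

```python
def ascii_converter(value, mode):
    # Stand-in for "| process ascii_converter(...)": hex string
    # ("48656c6c6f") or space-separated decimals ("72 105") to ASCII text.
    if mode == "hex":
        return bytes.fromhex(value).decode("ascii")
    return "".join(chr(int(n)) for n in value.split())
```

For example, the hex string `48656c6c6f` decodes to `Hello`, and the decimal codes `72 105` decode to `Hi`.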


WHOIS Lookup

This process command enriches the search result with information related to the given field name from the WHOIS database. The WHOIS database contains information about the registered users of an Internet resource, such as the registrar, IP address, registry expiry date, updated date, and name server information. If the specified field name and its corresponding value match the equivalent field values of the WHOIS database, the process command enriches the search result. Note that the extracted values are not saved.


| process whoislookup(field_name)


domain =* | process whoislookup(domain)


Eval

This process command evaluates mathematical, boolean, and string expressions. It places the result of the evaluation in an identifier as a new field.


| process eval("identifier=expression")


| process eval("Revenue=unit_sold*Selling_price")


For more information, go to Evaluation Process Plugin Manual.


toList

This process command populates a dynamic list with the field values of the search result.


| process toList (list_name, field_name)


device_ip=* | process toList(device_ip_list, device_ip)


For more information, go to Dynamic List.


toTable

This process command populates a dynamic table with the fields and field values of the search result.


| process toTable (table_name, field_name1, field_name2,...., field_name9)


device_ip=* | process toTable(device_ip_table, device_name, device_ip, action)


For more information, go to Dynamic Table.


List Length

This process command returns the number of elements in a list.


| process list_length(list) as length


| chart distinct_list(actual_mps) as lst | process list_length(lst) as lst_length


List Percentile

This process command calculates percentile values of a given list. It requires at least two input parameters. The first parameter is mandatory and must be a list, and it can be followed by up to five percentile percentages. The alias given after as is concatenated with each percentile percentage to name the field that stores the corresponding percentile value.


| process list_percentile(list, 25, 75, 95, 99) as x

Result: x_25th_percentile = respective_value
    x_75th_percentile = respective_value
    x_95th_percentile = respective_value
    x_99th_percentile = respective_value

| process list_percentile(list, p) as alias

The result is stored in the alias_pth_percentile field.


actual_mps=* | chart distinct_list(actual_mps) as a
| process list_percentile(a, 50, 95, 99) as x
| chart count() by a, x_50th_percentile, x_95th_percentile, x_99th_percentile
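The naming scheme and the calculation can be sketched as below; this stand-in uses nearest-rank percentiles, since Logpoint's exact interpolation method is not documented here:

```python
import math

def list_percentile(values, *percentages, alias="x"):
    # Stand-in for "| process list_percentile(...) as x": one output
    # field per requested percentage, named <alias>_<p>th_percentile.
    ordered = sorted(values)
    out = {}
    for p in percentages:
        rank = max(1, math.ceil(p / 100 * len(ordered)))  # nearest rank
        out[f"{alias}_{p}th_percentile"] = ordered[rank - 1]
    return out

result = list_percentile([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], 50, 99)
```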


Sort List

This process command sorts a list in ascending or descending order. By default, it sorts the list in ascending order. The first parameter is mandatory and must be a list. The second parameter, desc, is optional.


| process sort_list(list) as sorted_list
| process sort_list(list, "desc") as sorted_list


| chart distinct_list(actual_mps) as lst | process sort_list(lst) as sorted_list
| chart count() by lst, sorted_list


Datetime Diff

This command processes two lists, calculates the difference between them, and returns the absolute value of the difference as the delta. The two lists must contain timestamps. The first and second parameters are mandatory and can each be a list or a single field. The third parameter is also mandatory and specifies the unit in which the difference between the two input fields is represented: seconds, minutes, or hours. For instance, if the difference is specified in seconds, the output shows the absolute difference in seconds.


| process datetime_diff(ts_list1, ts_list2) as delta


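For a single pair of values, the calculation reduces to an absolute difference converted to the requested unit. A hedged Python sketch, assuming epoch-second timestamps as input:

```python
def datetime_diff(ts1, ts2, unit="seconds"):
    # Stand-in for "| process datetime_diff(...)": absolute difference
    # between two epoch-second timestamps (epoch input is an assumption),
    # expressed in the requested unit.
    delta = abs(ts1 - ts2)
    return {"seconds": delta, "minutes": delta / 60, "hours": delta / 3600}[unit]
```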

