Commit 69df8fd

Authored by: content-bot, xsoar-bot, merit-maita

[Marketplace Contribution] Splunk - Content Pack Update (#41418)

* [Marketplace Contribution] Splunk - Content Pack Update (#41374)
  * "contribution update to pack 'Splunk'"
  * Update Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.yml
  ---------
  Co-authored-by: merit-maita <[email protected]>
* edit
* edit
* edit
* edit
---------
Co-authored-by: xsoar-bot <[email protected]>
Co-authored-by: merit-maita <[email protected]>
Co-authored-by: merit-maita <[email protected]>

1 parent e3e884f

File tree

5 files changed: +115, -34 lines


Packs/SplunkPy/Integrations/SplunkPy/README.md

33 additions, 29 deletions

@@ -128,13 +128,13 @@ Configured by the instance configuration fetch_limit (behind the scenes an query
 | The app context of the namespace | | False |
 | HEC Token (HTTP Event Collector) | | False |
 | HEC BASE URL (e.g: https://localhost:8088 or https://example.splunkcloud.com/). | | False |
-| Enrichment Types | Enrichment types to enrich each fetched notable. If none are selected, the integration will fetch notables as usual \(without enrichment\). Multiple drilldown searches enrichment is supported from Enterprise Security v7.2.0. For more info about enrichment types see [Enriching Notable Events](#enriching-notable-events). | False |
+| Enrichment Types | Enrichment types to enrich each fetched notable. If none are selected, the integration will fetch notables as usual \(without enrichment\). Multiple drilldown searches enrichment is supported from Enterprise Security v7.2.0. For more info about enrichment types see the integration additional info. | False |
 | Asset enrichment lookup tables | CSV of the Splunk lookup tables from which to take the Asset enrichment data. | False |
 | Identity enrichment lookup tables | CSV of the Splunk lookup tables from which to take the Identity enrichment data. | False |
 | Enrichment Timeout (Minutes) | When the selected timeout was reached, notable events that were not enriched will be saved without the enrichment. | False |
 | Number of Events Per Enrichment Type | The limit of how many events to retrieve per each one of the enrichment types \(Drilldown, Asset, and Identity\). In a case of multiple drilldown enrichments the limit will apply for each drilldown search query. To retrieve all events, enter "0" \(not recommended\). | False |
 | Advanced: Extensive logging (for debugging purposes). Do not use this option unless advised otherwise. | | False |
-| Advanced: Time type to use when fetching events | Defines which timestamp will be used to filter the events:<br/>- creation time: Filters based on when the event actually occurred.<br/>- index time \(Beta\): \*Beta feature\* – Filters based on when the event was ingested into Splunk. <br/> This option is still in testing and may not behave as expected in all scenarios. <br/> When using this mode, the parameter "Fetch backwards window for the events occurrence time \(minutes\)" should be set to \`0\`\`, as indexing time ensures there are no delay-based gaps.<br/> The default is "creation time".<br/> | |
+| Advanced: Time type to use when fetching events | Defines which timestamp will be used to filter the events:<br/>- creation time: Filters based on when the event actually occurred.<br/>- index time \(Beta\): \*Beta feature\* – Filters based on when the event was ingested into Splunk. <br/> This option is still in testing and may not behave as expected in all scenarios. <br/> When using this mode, the parameter "Fetch backwards window for the events occurrence time \(minutes\)" should be set to \`0\`\`, as indexing time ensures there are no delay-based gaps.<br/> The default is "creation time".<br/> | False |
 | Advanced: Fetch backwards window for the events occurrence time (minutes) | The fetch time range will be at least the size specified here. This will support events that have a gap between their occurrence time and their index time in Splunk. To decide how long the backwards window should be, you need to determine the average time between them both in your Splunk environment. | False |
 | Advanced: Unique ID fields | A comma-separated list of fields, which together are a unique identifier for the events to fetch in order to avoid fetching duplicates incidents. | False |
 | Enable user mapping | Whether to enable the user mapping between Cortex XSOAR and Splunk, or not. For more information see https://xsoar.pan.dev/docs/reference/integrations/splunk-py\#configure-user-mapping-between-splunk-and-cortex-xsoar | False |

@@ -628,47 +628,31 @@ Parses the raw part of the event.
 ### splunk-submit-event-hec

 ***
-Sends events Splunk. if `batch_event_data` or `entry_id` arguments are provided then all arguments related to a single event are ignored.
+Sends events to an HTTP Event Collector using the Splunk platform JSON event protocol.

-##### Base Command
+#### Base Command

 `splunk-submit-event-hec`

-##### Input
+#### Input

 | **Argument Name** | **Description** | **Required** |
 | --- | --- | --- |
-| event | The event payload key-value pair. An example string: "event": "Access log test message.". | Optional |
+| event | Event payload key-value pair.<br/>String example: "event": "Access log test message". | Optional |
 | fields | Fields for indexing that do not occur in the event payload itself. Accepts multiple, comma-separated, fields. | Optional |
 | index | The index name. | Optional |
 | host | The hostname. | Optional |
-| source_type | The user-defined event source type. | Optional |
-| source | The user-defined event source. | Optional |
-| time | The epoch-formatted time. | Optional |
-| batch_event_data | A batch of events to send to Splunk. For example, `{"event": "something happened at 14/10/2024 12:29", "fields": {"severity": "INFO", "category": "test2, test2"}, "index": "index0","sourcetype": "sourcetype0","source": "/example/something" } {"event": "something happened at 14/10/2024 13:29", "index": "index1", "sourcetype": "sourcetype1","source": "/example/something", "fields":{ "fields" : "severity: INFO, category: test2, test2"}}`. **If provided, the arguments related to a single event and the `entry_id` argument are ignored.** | Optional |
-| batch_event_data | A batch of events to send to splunk. For example, `{"event": "something happened at 14/10/2024 12:29", "fields": {"severity": "INFO", "category": "test2, test2"}, "index": "index0","sourcetype": "sourcetype0","source": "/example/something" } {"event": "something happened at 14/10/2024 13:29", "index": "index1", "sourcetype": "sourcetype1","source": "/exeample/something", "fields":{ "fields" : "severity: INFO, category: test2, test2"}}`. **If provided, the arguments related to a single event and the `entry_id` argument are ignored.** | Optional |
-| entry_id | The entry id in Cortex XSOAR of the file containing a batch of events. Content of the file should be valid batch event's data, as it would be provided to the `batch_event_data`. **If provided, the arguments related to a single event are ignored.** | Optional |
+| source_type | User-defined event source type. | Optional |
+| source | User-defined event source. | Optional |
+| time | Epoch-formatted time. | Optional |
+| request_channel | A channel identifier (ID) where to send the request, must be a Globally Unique Identifier (GUID). If the indexer acknowledgment is turned on, a channel is required. | Optional |
+| batch_event_data | A batch of events to send to Splunk. For example, `{"event": "something happened at 14/10/2024 12:29", "fields": {"severity": "INFO", "category": "test2, test2"}, "index": "index0","sourcetype": "sourcetype0","source": "/example/something" } {"event": "something happened at 14/10/2024 13:29", "index": "index1", "sourcetype": "sourcetype1","source": "/example/something", "fields":{ "fields" : "severity: INFO, category: test2, test2"}}`. If provided all arguments except of `request_channel` are ignored. | Optional |
+| entry_id | The entry ID in Cortex XSOAR of the file containing a batch of events. If provided, the arguments related to a single event are ignored. | Optional |

-##### Batched events description
-
-This command allows sending events to Splunk, either as a single event or a batch of multiple events.
-To send a single event: Use the `event`, `fields`, `host`, `index`, `source`, `source_type`, and `time` arguments.
-To send a batch of events, there are two options, either use the batch_event_data argument or use the entry_id argument (for a file uploaded to Cortex XSOAR).
-Batch format requirements: The batch must be a single string containing valid dictionaries, each representing an event. Events should not be separated by commas. Each dictionary should include all necessary fields for an event. For example: `{"event": "event occurred at 14/10/2024 12:29", "fields": {"severity": "INFO", "category": "test1"}, "index": "index0", "sourcetype": "sourcetype0", "source": "/path/event1"} {"event": "event occurred at 14/10/2024 13:29", "index": "index1", "sourcetype": "sourcetype1", "source": "/path/event2", "fields": {"severity": "INFO", "category": "test2"}}`.
-This formatted string can be passed directly via `batch_event_data`, or, if saved in a file, the file can be uploaded to Cortex XSOAR, and the `entry_id` (e.g., ${File.[4].EntryID}) should be provided.
-
-##### Context Output
+#### Context Output

 There is no context output for this command.

-##### Command Example
-
-```!splunk-submit-event-hec event="something happened" fields="severity: INFO, category: test, test1" source_type=access source="/var/log/access.log"```
-
-##### Human Readable Output
-
-The event was sent successfully to Splunk.
-
 ### splunk-job-status

 ***

@@ -1313,3 +1297,23 @@ Under **Used for communication between Cortex XSOAR and customer resources**. Ch
 If you encounter fetch issues and you have enriching enabled, the issue may be the result of pressing the `Reset the "last run" timestamp` button.
 Note that the way to reset the mechanism is to run the `splunk-reset-enriching-fetch-mechanism` command.
 See [here](#resetting-the-enriching-fetch-mechanism).
+
+### splunk-job-share
+
+***
+Change job settings to share its results to all Splunk users, and change its TTL.
+
+#### Base Command
+
+`splunk-job-share`
+
+#### Input
+
+| **Argument Name** | **Description** | **Required** |
+| --- | --- | --- |
+| sid | Comma-separated list of job IDs to share. | Required |
+| ttl | Time in seconds for the job's expiry time. Default is 1800. | Optional |
+
+#### Context Output
+
+There is no context output for this command.
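The `batch_event_data` format documented for `splunk-submit-event-hec` (whitespace-joined JSON objects, not a comma-separated JSON array) can be sketched in Python. The `build_hec_batch` helper and the HEC endpoint comment below are illustrative, not part of the integration:

```python
import json


def build_hec_batch(events: list[dict]) -> str:
    """Concatenate event dicts into the single string HEC expects for a batch.

    Events are joined by whitespace, NOT wrapped in a JSON array and NOT
    comma-separated, matching the batch_event_data format described above.
    """
    return " ".join(json.dumps(event) for event in events)


events = [
    {"event": "something happened at 14/10/2024 12:29", "index": "index0",
     "sourcetype": "sourcetype0", "source": "/example/something",
     "fields": {"severity": "INFO", "category": "test2, test2"}},
    {"event": "something happened at 14/10/2024 13:29", "index": "index1",
     "sourcetype": "sourcetype1", "source": "/example/something",
     "fields": {"severity": "INFO", "category": "test2, test2"}},
]
batch_event_data = build_hec_batch(events)

# Delivery is a plain POST to the HEC collector endpoint (sketch only;
# hec_base_url / hec_token / request_channel_guid are placeholders):
#   requests.post(f"{hec_base_url}/services/collector/event",
#                 data=batch_event_data,
#                 headers={"Authorization": f"Splunk {hec_token}",
#                          # Required only when indexer acknowledgment is on:
#                          "X-Splunk-Request-Channel": request_channel_guid})
```

The resulting string can be passed directly as `batch_event_data`, or saved to a file uploaded to Cortex XSOAR and referenced via `entry_id`.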

Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.py

60 additions, 2 deletions

@@ -1,3 +1,5 @@
+import demistomock as demisto  # noqa: F401
+from CommonServerPython import *  # noqa: F401
 import hashlib
 import io
 import json

@@ -7,10 +9,10 @@

 import dateparser
 from collections import defaultdict
-import demistomock as demisto  # noqa: F401
+
 import pytz
 import requests
-from CommonServerPython import *  # noqa: F401
+
 from packaging.version import Version  # noqa: F401
 from splunklib import client, results
 from splunklib.binding import AuthenticationError, HTTPError, namespace

@@ -3589,6 +3591,60 @@ def splunk_job_status(service: client.Service, args: dict) -> list[CommandResult
     return job_results


+def splunk_job_share(service: client.Service, args: dict) -> list[CommandResults]:  # pragma: no cover
+    sids = argToList(args.get("sid"))
+    try:
+        ttl = int(args.get("ttl", 1800))
+    except ValueError:
+        return_error(f"Input error: Invalid TTL provided, '{args.get('ttl')}'. Must be a valid integer.")
+
+    job_results = []
+    for sid in sids:
+        try:
+            job = service.job(sid)
+        except HTTPError as error:
+            if str(error) == "HTTP 404 Not Found -- Unknown sid.":
+                job_results.append(CommandResults(readable_output=f"Not found job for SID: {sid}"))
+            else:
+                job_results.append(
+                    CommandResults(readable_output=f"Querying splunk for SID: {sid} resulted in the following error {str(error)}")
+                )
+        else:
+            try:
+                ttl_results = True
+                job.set_ttl(ttl)  # extend time-to-live for results
+            except HTTPError as error:
+                job_results.append(
+                    CommandResults(
+                        readable_output=f"Error increasing TTL for SID: {sid} resulted in the following error {str(error)}"
+                    )
+                )
+                ttl_results = False
+            try:
+                share_results = True
+                endpoint = f"search/jobs/{sid}/acl"
+                service.post(endpoint, **{"sharing": "global", "perms.read": "*"})
+            except HTTPError as error:
+                job_results.append(
+                    CommandResults(
+                        readable_output=f"Error changing permissions for SID: {sid} resulted in the following error {str(error)}"
+                    )
+                )
+                share_results = False
+
+            entry_context = {"SID": sid, "TTL updated": str(ttl_results), "Sharing updated": str(share_results)}
+            human_readable = tableToMarkdown("Splunk Job Updates", entry_context)
+            job_results.append(
+                CommandResults(
+                    outputs=entry_context,
+                    readable_output=human_readable,
+                    outputs_prefix="Splunk.JobUpdates",
+                    outputs_key_field="SID",
+                )
+            )
+    return job_results
+
+
 def splunk_parse_raw_command(args: dict):
     raw = args.get("raw", "")
     rawDict = rawToDict(raw)

@@ -3990,6 +4046,8 @@ def main():  # pragma: no cover
         splunk_submit_event_hec_command(params, service, args)
     elif command == "splunk-job-status":
         return_results(splunk_job_status(service, args))
+    elif command == "splunk-job-share":
+        return_results(splunk_job_share(service, args))
     elif command.startswith("splunk-kv-") and service is not None:
         app = args.get("app_name", "search")
         service.namespace = namespace(app=app, owner="nobody", sharing="app")
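Stripped of the XSOAR plumbing, the new command boils down to two Splunk SDK calls per job: extend the results' TTL, then open the job's ACL. A minimal sketch, assuming a connected `splunklib` `Service`; `build_share_acl` and `share_job` are helpers introduced here for illustration, not names from the integration:

```python
def build_share_acl() -> dict:
    """Request body that makes a job's results readable by every Splunk user."""
    return {"sharing": "global", "perms.read": "*"}


def share_job(service, sid: str, ttl: int = 1800) -> None:
    """Extend a job's TTL and share its results globally.

    `service` is assumed to be a connected splunklib client.Service;
    the SID and TTL values passed in are illustrative.
    """
    job = service.job(sid)  # raises splunklib HTTPError for an unknown SID
    job.set_ttl(ttl)        # keep the results around longer
    service.post(f"search/jobs/{sid}/acl", **build_share_acl())
```

From the War Room this corresponds to something like `!splunk-job-share sid="<sid1>,<sid2>" ttl=3600`.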

Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.yml

14 additions, 2 deletions

@@ -181,6 +181,7 @@ configuration:
   displaypassword: HEC Token (HTTP Event Collector)
   hiddenusername: true
   required: false
+  display: ''
 - display: 'HEC Token (HTTP Event Collector)'
   name: hec_token
   type: 4

@@ -257,6 +258,7 @@ configuration:
     The default is "creation time".
   section: Collect
   advanced: true
+  required: false
 - defaultvalue: '15'
   display: 'Advanced: Fetch backwards window for the events occurrence time (minutes)'
   name: occurrence_look_behind

@@ -476,7 +478,6 @@ script:
       Event payload key-value pair.
       String example: "event": "Access log test message".
     name: event
-    required: false
   - description: Fields for indexing that do not occur in the event payload itself. Accepts multiple, comma-separated, fields.
     name: fields
   - description: The index name.

@@ -700,14 +701,25 @@ script:
   - contextPath: Splunk.UserMapping.SplunkUser
     description: Splunk user mapping.
     type: String
-  dockerimage: demisto/splunksdk-py3:1.0.0.4418698
+- arguments:
+  - description: Comma-separated list of job IDs to share.
+    isArray: true
+    name: sid
+    required: true
+  - defaultValue: '1800'
+    description: Time in seconds for the job's expiry time.
+    name: ttl
+  description: Change job settings to share its results to all Splunk users, and change its TTL.
+  name: splunk-job-share
+  dockerimage: demisto/splunksdk-py3:1.0.0.4887903
   isfetch: true
   ismappable: true
   isremotesyncin: true
   isremotesyncout: true
   script: ''
   subtype: python3
   type: python
+  runonce: false
 tests:
 - SplunkPySearch_Test_default_handler
 - SplunkPy-Test-V2_default_handler
7 additions, 0 deletions

@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### SplunkPy
+
+- Added support for **splunk-job-share** command that change job settings to share its results to all splunk users, and change its ttl.
+- Updated the Docker image to: *demisto/splunksdk-py3:1.0.0.4887903*.

Packs/SplunkPy/pack_metadata.json

1 addition, 1 deletion

@@ -2,7 +2,7 @@
     "name": "Splunk",
     "description": "Run queries on Splunk servers.",
     "support": "xsoar",
-    "currentVersion": "3.3.0",
+    "currentVersion": "3.3.1",
     "author": "Cortex XSOAR",
     "url": "https://www.paloaltonetworks.com/cortex",
     "email": "",
