<![CDATA[Elastic Blog - Elasticsearch, Kibana, and ELK Stack]]>https://www.elastic.coRSS for NodeSun, 09 Jan 2022 07:06:22 GMTSun, 09 Jan 2022 07:06:22 GMT<![CDATA[How search enables role-based data classification and sharing across the government]]>https://www.elastic.co/blog/how-search-enables-role-based-data-classification-sharing-across-governmenthttps://www.elastic.co/blog/how-search-enables-role-based-data-classification-sharing-across-governmentTue, 04 Jan 2022 17:00:00 GMT<![CDATA[How search enables role-based data classification and sharing across the government]]>Government data strategies lay a promising groundwork for how data will be used to drive more informed decision-making internally and more streamlined public services externally. A commonality between these strategies is the need for improved role-based data sharing and data re-use. The sticking point, however, is how to implement data sharing when there are known silos across and within various departments. More often than not, these silos exist for good reason, particularly for data privacy compliance requirements.

How can these hurdles be overcome and the promise of data sharing across government departments be realized?

Key stakeholders of government data sharing initiatives

Before tackling this dilemma, it’s important to understand the key stakeholders of government data sharing initiatives. Let’s look at a hypothetical state government example where the state wants to get a more granular understanding of public health matters impacting small business start-ups and growth, employs a shared IT services model, and has a new data science practice. In this scenario, stakeholders include:

  • Line of business departments: At a minimum, the economic development department would have relevant data to consume, as would the public health department.
  • Information resources or information technology team: This team ensures the IT infrastructure can handle the individual department workloads and monitors overall security of the shared infrastructure.
  • Data science resources: These resources may be pulled in to help the economic development and public health departments make sense of data for their near-term reporting, but they may also look to perform analysis for longer term outlooks.

Working with silos and compliance requirements

In this scenario, both the economic development and public health departments will need to draw upon siloed data that is subject to data privacy compliance requirements. The economic development department will likely have tax data subject to Publication 1075 (Pub 1075), which helps government agencies safeguard federal tax returns and return information, and the public health department will likely have health-related data protected under the Health Insurance Portability and Accountability Act (HIPAA). How can role-based data sharing occur under these conditions?

The first step to working with silos and compliance requirements like this is to classify data. At Elastic, we help government customers work across silos by classifying data at its source and normalizing it for querying using a common schema. Data classification starts by tagging data using an add_tags processor on the Beats/Elastic Agent, and additional transformations can occur in the data pipeline with Logstash and ingest pipelines.
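
As a rough sketch of what that classification step can look like, the ingest pipeline below appends classification tags to every document routed through it. The pipeline name and tag values are hypothetical placeholders for whatever labels your compliance scheme uses:

PUT _ingest/pipeline/classify-tax-data
{
  "description": "Append classification tags at ingest (hypothetical names)",
  "processors": [
    {
      "append": {
        "field": "tags",
        "value": ["pub1075", "restricted"]
      }
    }
  ]
}

An index or data stream can then reference this pipeline via its default_pipeline index setting so that documents are labeled consistently on the way in.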


Data normalization then occurs using the Elastic Common Schema (ECS). ECS is an open source specification that facilitates the analysis of data from diverse sources by defining a common set of document fields for ingested data. ECS enables users to overcome data formatting inconsistencies that result from disparate data types, heterogeneous environments with diverse vendor standards, or similar-but-different data sources. With ECS, the data is not only available in a common format, it is also classified for role-based access control so that it is clear what data can and cannot be shared. Field- and document-level access control can also be applied so that specific attributes, including Personally Identifiable Information (PII), may only be viewed by those with the appropriate access level.
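
To illustrate that field-level control, here is a minimal sketch of a role definition that grants read access to an index while hiding specific PII fields. The role name, index pattern, and field names are hypothetical:

POST _security/role/econ_dev_analyst
{
  "indices": [
    {
      "names": ["tax-records-*"],
      "privileges": ["read"],
      "field_security": {
        "grant": ["*"],
        "except": ["user.full_name", "user.id"]
      }
    }
  ]
}

Users assigned this role can query the index normally, but the excluded fields are simply absent from their results.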

Next, with Elastic Cross-Cluster Search (CCS), those with role-based access use search to analyze data stored on clusters, which can be in different data centers. The data resides in its compliant environment but is queried at the endpoint. These queries can also be re-used for additional operational efficiency. In this way, Elastic helps users bring questions to the data, even if silos exist — enabling compliant inter-departmental data sharing through the power of search. Our hypothetical example addresses Pub 1075 and HIPAA compliance requirements, but this functionality extends to information security requirements that other departments would have for their particular use cases, such as NERC/CIP or special Security Operations Center (SOC) requirements.
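
A cross-cluster search is expressed by prefixing index names with the remote cluster alias, so a single query can span compliant environments without moving the data. A hedged example, with hypothetical cluster aliases, index names, and field values:

GET econ-cluster:tax-records-*,health-cluster:claims-*/_search
{
  "query": {
    "match": {
      "organization.name": "example small business program"
    }
  }
}

Role-based access still applies on each remote cluster, so a user only sees the documents and fields their role permits.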


Visibility for day-to-day security and long-term analytics

In our state government scenario, the shared IT services team will want visibility into the clusters accessed in the different data centers to ensure that the infrastructure can support the user base. Perhaps more importantly, they always want a view of the entire ecosystem they support to ensure security vulnerabilities are not introduced. Using Elastic log monitoring, security solutions including SIEM, and Kibana for visualization, IT teams benefit from a single pane of glass view into the different departments they support. The same platform that enables role-based data sharing across departments also enables IT and security teams in their day-to-day infrastructure and security operations oversight.

Going a step further, the same platform can also be used by data science resources for long-term data analytics. With the Elastic frozen tier, older data remains actionable: it is stored in an object store at a lower cost and, using Elastic searchable snapshots, can be queried without rehydrating the data. In our state government scenario, data science resources with role-based access can not only get a view of the current state, but they can also look back over historical data to build trend models for longer term outlooks.
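
As a sketch of how data can age into the frozen tier, an index lifecycle management (ILM) policy can move indices into a searchable snapshot in object storage after a set age. The policy name, age threshold, and repository name below are hypothetical:

PUT _ilm/policy/long-term-analytics
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d" }
        }
      },
      "frozen": {
        "min_age": "90d",
        "actions": {
          "searchable_snapshot": {
            "snapshot_repository": "my-object-store-repo"
          }
        }
      }
    }
  }
}

Queries against the frozen data then work like any other search, just with object-store latency instead of local-disk latency.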

Resources for more due diligence

Data sharing across government departments is not an easy undertaking, but our job at Elastic is to help you use the power of search to solve data challenges like this, and in turn, keep your data stakeholders in sync. As you perform due diligence on inter-departmental data sharing, get in touch with our state and local team at sled@elastic.co or our federal team at federal@elastic.co to take a deeper dive into the Elastic solutions outlined here. And when you’re ready, leverage a free trial of Elastic for your data sharing use case — available on the cloud marketplace, on FedRAMP cloud, or on-premises.


]]>
https://www.elastic.co/blog/how-search-enables-role-based-data-classification-sharing-across-governmenthttps://www.elastic.co/blog/how-search-enables-role-based-data-classification-sharing-across-governmentTue, 04 Jan 2022 17:00:00 GMT
<![CDATA[Gain the upper hand over adversaries with Osquery and Elastic]]>https://www.elastic.co/blog/gain-upper-hand-over-adversaries-with-osquery-and-elastichttps://www.elastic.co/blog/gain-upper-hand-over-adversaries-with-osquery-and-elasticTue, 04 Jan 2022 14:00:00 GMT<![CDATA[Gain the upper hand over adversaries with Osquery and Elastic]]>With the Elastic 7.16 release, Osquery Manager is now generally available for Elastic Agent, making it easier than ever to deploy and run Osquery across your environments. By collecting Osquery data and combining it with the power of the Elastic Stack, you can greatly expand your endpoint telemetry, enabling enhanced detection and investigation, and improved hunting for vulnerabilities and anomalous activities.

This blog post gives a brief intro to the Osquery Manager integration for Elastic Agent and how it can be used in conjunction with Elastic Security. Included are examples that show how to operationalize Osquery data with use cases such as building critical security alerts, querying isolated hosts during investigations, and monitoring for anomalous host activities with ML detections.

How does Osquery Manager work?

Osquery is an open source tool that lets you query operating systems like a database using SQL. When you add the Osquery Manager integration to an Elastic Agent policy, Osquery is deployed to all agents assigned to that policy. Once it’s added, you can use Kibana to run live queries and schedule recurring queries for those agents, gathering data from hundreds of tables across your entire enterprise. These capabilities help with real-time incident response, threat hunting, and regular monitoring to detect vulnerability or compliance issues.

[Image: scheduling queries]

[Image: IT compliance details]

When you run live or scheduled queries, the results are automatically stored in an Elasticsearch index and can easily be mapped to the Elastic Common Schema, normalizing your data to a common set of fields to work with the SIEM app and enabling you to easily search, analyze, and visualize data across multiple sources.

Build security alerts for Osquery data

Osquery surfaces a broad swath of data about operating systems. When combined with the Elastic Security solution, security teams are able to craft queries that help them detect threats within their environment, monitor for the issues that matter most to their organization, and take action when there’s a problem.

As an example, one issue to monitor is whether any of your systems have processes running where the executable is no longer on disk. This can be an indicator of a malicious process, for example, when malware deletes itself after execution to avoid detection.

You can monitor this using Osquery across Windows, Linux, and Mac systems with a simple query:

SELECT * FROM processes;

The response from the processes table includes several useful fields, like the name, pid, and path of all running processes on the target systems, as well as an on_disk value indicating whether the process’s executable still exists on disk. If on_disk = 0 for a process, the file is no longer on disk and there may be an issue. This is a perfect use case for 1) scheduling a query to monitor for this across your fleet, and 2) creating an alert to notify you when a process is found that doesn’t have a binary on disk.

While it’s possible to schedule a query that specifically checks for processes where no binary is on disk (for example, using SELECT name, path, pid FROM processes WHERE on_disk = 0), it can be beneficial to schedule a broader query that retrieves all fields for the processes table, because you can use that data to drive several cases you may want to monitor.

[Image: threat detection query details]

Once this query is running regularly, you can then write a detection rule to alert you when query results include a process that’s missing a binary on disk. This example rule will alert if it finds any results for the running-processes query in the threat-detection pack where the on_disk field is 0.
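
Under the hood, the rule only needs to match result documents from that scheduled query where on_disk is 0. As a hedged sketch, a search over the Osquery Manager results data stream might look like the following (the action_id value is a hypothetical pack/query identifier, and exact field names may vary by version):

GET logs-osquery_manager.result-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "action_id": "pack_threat-detection_running-processes" } },
        { "term": { "osquery.on_disk": "0" } }
      ]
    }
  }
}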

[Image: detection rule for a running process without a binary on disk]

Query isolated hosts

Combining Osquery with the Endpoint Security integration can take your security operations to the next level. With Endpoint Security enabled, when you are handling a security incident and suspect that a system has been compromised, you can isolate the host from your network to block communication and prevent lateral movement to other hosts. Isolating a host in this situation can give you time to investigate the issue and recover to a safe state.

While a host is isolated, it can still communicate with the Elastic Stack, and you can use Osquery to run live queries against the host to help with your investigation. For example, you can use it to help assess the impact and severity of the compromise or to confirm the issue has been resolved before releasing the host.
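
Because live query results land in the same results data stream as scheduled queries, you can review everything gathered from the isolated host in one place. A minimal sketch, assuming the isolated host is named web-01:

GET logs-osquery_manager.result-*/_search
{
  "size": 50,
  "sort": [ { "@timestamp": "desc" } ],
  "query": {
    "term": { "host.name": "web-01" }
  }
}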

[Animated image: monitoring with Osquery]

Monitor for anomalous host activities

With scheduled query packs, you can run a set of queries regularly to establish a baseline of behavior and activity on your hosts. The data you collect over time helps you to build an understanding of what normal operating conditions are like in your environment. For example, you can write queries to monitor for the applications users have installed, who logs into which systems, which programs run on startup, and many others.

With Elastic Machine Learning, you can create anomaly detection jobs for specific Osquery data that you’re collecting so that you can identify anomalous patterns in that data.

Let’s walk through an example that shows how to monitor for anomalous programs installed on Windows systems.

First, to establish a baseline, schedule a query to begin collecting all programs installed on your Windows systems. This query is set to run once a day and also maps a few Osquery values to ECS to standardize the data:

[Image: editing the scheduled query]

Next, create a saved search that you’ll use later to create your anomaly detection job. The search is based on the action_id of the scheduled query, which includes the pack name (windows-hardening) and the query name (windows-programs).

[Image: saved search in Discover]

Using the saved search, you can now create a Machine Learning job that detects application anomalies in these search results. This job has a detector that looks for rare application names (package.name) in the Osquery results, and it is set to run continuously.
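
In practice you would create this job from the saved search in the Machine Learning UI, but for reference, a roughly equivalent job configuration submitted through the API could look like the sketch below (the job name and bucket span are assumptions):

PUT _ml/anomaly_detectors/osquery-rare-windows-programs
{
  "description": "Rarely seen installed programs from Osquery results (hypothetical job)",
  "analysis_config": {
    "bucket_span": "1d",
    "detectors": [
      { "function": "rare", "by_field_name": "package.name" }
    ],
    "influencers": [ "host.name" ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}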

[Image: creating the anomaly detection job]

Running this job helps to identify potential issues across your environment — for example, to find uncommon or unexpected applications that are installed on Windows workstations.

[Image: anomaly results]

While outliers may be benign, they can also be an indicator of unwanted activity in your environment. Once you start capturing anomalies, you can write detection rules to alert on instances that merit investigation.

Give Osquery Manager a try

The Osquery Manager integration gives you greater insight into the endpoints you’re monitoring with the Elastic Security solution and helps security teams to better detect, investigate, and hunt for vulnerabilities and anomalous activities.

If you want to give all this a try and see how easy it is to deploy Osquery Manager and start running queries, you can start a free 14-day trial of Elastic. Please share any feedback on the Elastic Discuss forum or the Elastic Stack Community on Slack.]]>
https://www.elastic.co/blog/gain-upper-hand-over-adversaries-with-osquery-and-elastichttps://www.elastic.co/blog/gain-upper-hand-over-adversaries-with-osquery-and-elasticTue, 04 Jan 2022 14:00:00 GMT
<![CDATA[Elastic Security uncovers BLISTER malware campaign]]>https://www.elastic.co/blog/elastic-security-uncovers-blister-malware-campaignhttps://www.elastic.co/blog/elastic-security-uncovers-blister-malware-campaignWed, 22 Dec 2021 19:00:00 GMT<![CDATA[Elastic Security uncovers BLISTER malware campaign]]>Key takeaways:
  • Elastic Security uncovered a stealthy malware campaign that leverages valid code signing certificates to evade detection
  • A novel malware loader, BLISTER, was used to execute second stage malware payloads in-memory and maintain persistence
  • The identified malware samples have very low or no detections on VirusTotal
  • Elastic provided layered prevention coverage against this threat out of the box

Overview


The Elastic Security team identified a noteworthy cluster of malicious activity after reviewing our threat prevention telemetry. A valid code signing certificate is used to sign malware to help the attackers remain under the radar of the security community. We also discovered a novel malware loader used in the campaign, which we’ve named BLISTER. The majority of the malware samples observed have very low, or no, detections in VirusTotal. The infection vector and goals of the attackers remain unknown at this time.

Elastic’s layered approach to preventing attacks protects from this and similar threats.

In one prevented attack, our malicious behavior prevention triggered multiple high-confidence alerts for Execution via Renamed Signed Binary Proxy, Windows Error Manager/Reporting Masquerading, and Suspicious PowerShell Execution via Windows Scripts. Further, our memory threat prevention identified and stopped BLISTER from injecting its embedded payload to target processes.

Finally, we have additional coverage from our open source detection engine rules [1] [2]. To ensure coverage for the entire community, we are including YARA rules and IoCs to help defenders identify impacted systems.

Details

Certificate abuse

A key aspect of this campaign is the use of a valid code signing certificate issued by Sectigo. Adversaries can either steal legitimate code-signing certificates or purchase them from a certificate authority directly or through front companies. Executables with valid code signing certificates are often scrutinized to a lesser degree than unsigned executables. Their use allows attackers to remain under the radar and evade detection for a longer period of time.

We responsibly disclosed the activity to Sectigo so they could take action and revoke the abused certificates. Details about the compromised certificate are shown below. We have observed malware signed with this certificate as early as September 15, 2021.

Issuer: Sectigo Public Code Signing CA R36
Issued to: Blist LLC
Serial number: 2f4a25d52b16eb4c9dfe71ebbd8121bb
Valid from: Monday, August 23, 2021 4:00:00 PM
Valid to: Wednesday, August 24, 2022 3:59:59 PM

[Image: digital signature information]

BLISTER malware loader

Another interesting aspect of this campaign is what appears to be a novel malware loader with limited detections in VirusTotal. We refer to it as the BLISTER loader. The loader is spliced into legitimate libraries such as colorui.dll, likely to ensure the majority of the on-disk footprint has known-good code and metadata. The loader can be initially written to disk from simple dropper executables. One such dropper writes a signed BLISTER loader to %temp%\Framwork\axsssig.dll and executes it with rundll32. LaunchColorCpl is a common DLL export and entry point name used by BLISTER, as seen in the command line parameters:

Rundll32.exe C:\Users\user\AppData\Local\Temp\Framwork\axsssig.dll,LaunchColorCpl

Once executed, BLISTER decodes bootstrapping code stored in the resource section with a simple 4-byte XOR routine shown below:

[Image: bootstrapping code]

The bootstrapping code is heavily obfuscated and initially sleeps for 10 minutes. This is likely an attempt to evade sandbox analysis. After the delay, it decrypts the embedded malware payload. We have observed CobaltStrike and BitRat as embedded malware payloads. Once decrypted, the embedded payload is loaded into the current process or injected into a newly spawned WerFault.exe process.

Finally, BLISTER establishes persistence by copying itself to the C:\ProgramData folder, along with a renamed local copy of rundll32.exe. A link is created in the current user’s Startup folder to launch the malware at logon as a child of explorer.exe.

YARA

We have created a YARA rule to identify this BLISTER activity:

rule Windows_Trojan_Blister
{
    meta:
        author = "Elastic Security"
        creation_date = "2021-12-20"
        last_modified = "2021-12-20"
        os = "Windows"
        category_type = "Trojan"
        family = "Blister"
        threat_name = "Windows.Trojan.Blister"
        reference_sample = "0a7778cf6f9a1bd894e89f282f2e40f9d6c9cd4b72be97328e681fe32a1b1a00"
    strings:
        $a1 = { 8D 45 DC 89 5D EC 50 6A 04 8D 45 F0 50 8D 45 EC 50 6A FF FF D7 }
        $a2 = { 75 F7 39 4D FC 0F 85 F3 00 00 00 64 A1 30 00 00 00 53 57 89 75 }
    condition:
        any of them
}

Defensive recommendations

Elastic Endpoint Alerts

Elastic Endpoint Security provides deep coverage for this threat by stopping the in-memory thread execution and preventing malicious behaviors.

Memory Threat Detection Alert: Shellcode Injection

[Image: memory threat detection alert]

Malicious Behavior Detection Alert: Execution via Renamed Signed Binary Proxy

[Image: malicious behavior detection alert]

Hunting queries

These queries can be used in Kibana's Security -> Timelines -> Create new timeline -> Correlation query editor. While these queries will identify this intrusion set, they can also identify other events of note that, once investigated, could lead to the discovery of other malicious activity.

Proxy Execution via Renamed Rundll32

Hunt for renamed instances of rundll32.exe

process where event.action == "start" and process.name != null and
  (process.pe.original_file_name == "RUNDLL32.EXE" and not process.name : "RUNDLL32.EXE")

Masquerading as WerFault

Hunt for potential rogue instances of WerFault.exe (Windows Errors Reporting) in an attempt to masquerade as a legitimate system process that is often excluded from behavior-based detection as a known frequent false positive:

process where event.action == "start" and
  process.executable : ("?:\\Windows\\Syswow64\\WerFault.exe", "?:\\Windows\\System32\\WerFault.exe") and
  /* legit WerFault will have more than one argument in process.command_line */
  process.args_count == 1

[Image: evasion via WerFault]

Persistence via Registry Run Keys / Startup Folder

Malware creates a new run key for persistence:

registry where registry.data.strings != null and
  registry.path : (
    /* Machine Hive */
    "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\*",
    "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\Run\\*",
    "HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\\Shell\\*",
    /* Users Hive */
    "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\*",
    "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\Run\\*",
    "HKEY_USERS\\*\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\\Shell\\*"
  )

[Image: persistence via run key]

Suspicious Startup Shell Folder Modification

This hunt looks for a modification of the default Startup value in the registry via COM (dllhost.exe), followed by a shortcut file written to the newly modified Startup folder for persistence:

sequence by host.id with maxspan=1m
  [registry where
    /* Modify User default Startup Folder */
    registry.path : (
      "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\User Shell Folders\\Common Startup",
      "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\\Common Startup",
      "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\User Shell Folders\\Startup",
      "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\\Startup"
    )
  ]
  /* Write File to Modified Startup Folder */
  [file where event.type : ("creation", "change") and
    file.path : "?:\\Users\\*\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\*"]

[Image: modified Startup folder]

Elastic Detection Engine Rules

The following existing public detection rules can also be used to detect some of the employed techniques:

Potential Windows Error Manager Masquerading

Windows Defender Exclusions Added via PowerShell

Startup or Run Key Registry Modification

Shortcut File Written or Modified for Persistence

Suspicious Startup Shell Folder Modification

MITRE ATT&CK

T1218.011 - Signed Binary Proxy Execution: Rundll32

T1055 - Process Injection

T1547.001 - Registry Run Keys / Startup Folder

T1036 - Masquerading

Summary

The BLISTER loader has several tricks that have allowed it to fly under the radar of the security community for months, including leveraging valid code signing certificates, infecting legitimate libraries to fool machine learning models, and executing payloads in memory. However, the depth of protection offered by Elastic Security meant we were still able to identify and stop in-the-wild attacks.

Existing Elastic Security users can access these capabilities within the product. If you’re new to Elastic Security, take a look at our Quick Start guides (bite-sized training videos to get you started quickly) or our free fundamentals training courses. You can always get started with a free 14-day trial of Elastic Cloud.

Indicators



SHA1 (code-signing certificate thumbprint):
F3503970C2B5D57687EC9E31BB232A76B624C838

Domain names (malware C2):
moduleloader.s3.eu-west-2.amazonaws[.]com
discountshadesdirect[.]com
bimelectrical[.]com
clippershipintl[.]com

IP addresses (malware C2):
188.68.221[.]203
93.115.18[.]248
52.95.148[.]162
84.38.183[.]174
80.249.145[.]212
185.170.213[.]186

SHA256 (signed droppers):
ed6910fd51d6373065a2f1d3580ad645f443bf0badc398aa77185324b0284db8
cb949ebe87c55c0ba6cf0525161e2e6670c1ae186ab83ce46047446e9753a926
7b9091c41525f1721b12dcef601117737ea990cee17a8eecf81dcfb25ccb5a8f
84a67f191a93ee827c4829498d2cb1d27bdd9e47e136dc6652a5414dab440b74
cc31c124fc39025f5c3a410ed4108a56bb7c6e90b5819167a06800d02ef1f028
9472d4cb393256a62a466f6601014e5cb04a71f115499c320dc615245c7594d4
4fe551bcea5e07879ec84a7f1cea1036cfd0a3b03151403542cab6bd8541f8e5
1a10a07413115c254cb7a5c4f63ff525e64adfe8bb60acef946bb7656b7a2b3d
9bccc1862e3e5a6c89524f2d76144d121d0ee95b1b8ba5d0ffcaa23025318a60
8a414a40419e32282d33af3273ff73a596a7ac8738e9cdca6e7db0e41c1a7658
923b2f90749da76b997e1c7870ae3402aba875fdbdd64f79cbeba2f928884129
ed241c92f9bc969a160da2c4c0b006581fa54f9615646dd46467d24fe5526c7a
294c710f4074b37ade714c83b6b7bf722a46aef61c02ba6543de5d59edc97b60

SHA256 (unsigned BLISTER loader DLLs):
df8142e5cf897af65972041024ebe74c7915df0e18c6364c5fb9b2943426ed1a
2d049f7658a8dccd930f7010b32ed1bc9a5cc0f8109b511ca2a77a2104301369
696f6274af4b9e8db4727269d43c83c350694bd1ef4bd5ccdc0806b1f014568a
a34821b50aadee0dd85c382c43f44dae1e5fef0febf2f7aed6abf3f3e21f7994
7cd03b30cfeea07b5ea4c8976e6456cb65e09f6b8e7dcc68884379925681b1c4
81edf3a3b295b0189e54f79387e7df61250cc8eab4f1e8f42eb5042102df8f1f
44e5770751679f178f90ef7bd57e8e4ccfb6051767d8e906708c52184bf27f32
0a7778cf6f9a1bd894e89f282f2e40f9d6c9cd4b72be97328e681fe32a1b1a00
a486e836026e184f7d3f30eaa4308e2f0c381c070af1f525118a484a987827c1
359ffa33784cb357ddabc42be1dcb9854ddb113fd8d6caf3bf0391380f9d640a
863228efa55b54a8d03a87bb602a2e418856e0028ae409357454a6303b128224
d0f934fd5d63a1524616bc13b51ce274539a8ead9b072e7f7fe1a14bb8b927a6
c0f3b27ae4f7db457a86a38244225cca35aa0960eb6a685ed350e99a36c32b61
216cb4f2caeaf59f297f72f7f271b084637e5087d59411ac77ddd3b87e7a90aa
00eb2f75822abeb2e222d007bdec464bfbc3934b8be12983cc898b37c6ace081
25a0d6a839c4dc708dcdd1ef9395570cc86d54d4725b7daf56964017f66be3c1
3c7480998ade344b74e956f7d3a3f1a989aaf43446163a62f0a8ed34b0c010d0
5651e8a8e6f9c63c4c1162efadfcb4cdd9ad634c5e00a5ab03259fcdeaa225ac
ba3a50930e7a144637faf88a98f2990a27532bfd20a93dc160eb2db4fbc17b58
fa885e9ea1293552cb45a89e740426fa9c313225ff77ad1980dfea83b6c4a91c
bee3210360c5d0939c5d38b7b9f0c232cf9fbf93b46a19e53930a1606bda28a5
56ca9ea3f7870561ed3c6387daf495404ed3827f212472501d2541d5ccf8b941
c61d2ba1e001c137533cd7fb6b38fe71fee489d61dbcfea45c37c5ec1bcf845c
17ea84d547e97a030d2b02ac2eaa9763ffb4f96f6c54659533a23e17268aabab
ca09d9cd2f3cfcc06b33eff91d55602cb33a66ab3fd4f540b9212fce5ddae54a
6c6f808f9b19e1fab1c1b83dc99386f0ceee8593ddfd461ac047eae812df8733

SHA256 (signed BLISTER loader DLLs):
afb77617a4ca637614c429440c78da438e190dd1ca24dc78483aa731d80832c2
516cac58a6bfec5b9c214b6bba0b724961148199d32fb42c01b12ac31f6a6099
8ae2c205220c95f0f7e1f67030a9027822cc18e941b669e2a52a5dbb5af74bc9
fe7357d48906b68f094a81d19cc0ff93f56cc40454ac5f00e2e2d9c8ccdbc388
af555d61becfcf0c13d4bc8ea7ab97dcdc6591f8c6bb892290898d28ebce1c5d
96bf7bd5f405d3b4c9a71bcd1060395f28f2466fdb91cafc6e261a31d41eb37a
f5104d0ead2f178711b1e23db3c16846de7d1a3ac04dbe09bacebb847775d76d
8e22cf159345852be585bc5a8e9af476b00bc91cdda98fd6a3244219a90ac9d9
d54dfedda0efa36ed445d501845b61ab73c2102786be710ac19f697fc8d4ca5c

File names (droppers):
Launcher V7.3.13.exe
GuiFramwork.exe
ffxivsetup.exe
Predictor V8.21 - Copy.exe
Predictor Release v5.9.rar
PredictorGUI.exe
Readhelper.exe
dxpo8umrzrr1w6gm.exe
Pers.exe
razer.exe
Amlidiag.exe
Modern.exe
iuyi.exe
Cleandevicehelper.exe
installer.exe

File names (BLISTER loader DLLs):
Holorui.dll
Colorui.dll
Pasade.dll
Axsssig.dll
Helper.CC.dll
Heav.dll
Pasadeis.dll
Termmgr.dll
TermService.dll
rdpencom.dll
libcef.dll
tnt.dll

]]>
https://www.elastic.co/blog/elastic-security-uncovers-blister-malware-campaignhttps://www.elastic.co/blog/elastic-security-uncovers-blister-malware-campaignWed, 22 Dec 2021 19:00:00 GMT
<![CDATA[The Log4j2 Vulnerability: What to know, tools to learn more, and how Elastic can help]]>https://www.elastic.co/blog/log4j2-vulnerability-what-to-know-security-vulnerability-learn-more-elastic-supporthttps://www.elastic.co/blog/log4j2-vulnerability-what-to-know-security-vulnerability-learn-more-elastic-supportMon, 20 Dec 2021 17:00:00 GMT<![CDATA[The Log4j2 Vulnerability: What to know, tools to learn more, and how Elastic can help]]>Welcome to Elastic’s Log4j2 vulnerability information hub. Here we will explain what the specific Log4j2 vulnerability is, why it matters, and what tools and resources Elastic is providing to help negate the opportunity for malware exploits, cyberattacks, and other cybersecurity risks stemming from Log4j2.

What is Log4j2?

Log4j2 is an open source logging framework incorporated into many Java-based applications on both end-user systems and servers. One of the most popular logging libraries, it offers developers a means to record application activity across various use cases: code auditing, monitoring, data tracking, troubleshooting, and more. Log4j2 is free, open source software used by some of the largest companies in the world.

How and why is Log4j2 being exploited?

In late November 2021, a remote code execution vulnerability was identified, reported under CVE ID CVE-2021-44228, and disclosed to the public on December 10, 2021. The vulnerability is exploited through improper deserialization of user input passed into the framework. It permits remote code execution and lets an attacker leak sensitive data, such as environment variables, or execute malicious software on the target system, which can have a dangerous domino effect. This is a serious matter, and we have already seen its effects across various companies, from Minecraft and Oracle to even some of our products here at Elastic. The U.S. government has issued a warning to companies to remain vigilant and be on high alert over the holidays for potential cyberattacks and ransomware issues.

The identified vulnerability impacts all versions of Log4j2 from 2.0-beta9 to 2.14.1. Early methods to patch the issue resulted in a number of release candidates, culminating in recommendations to upgrade the framework to Log4j2 2.15.0-rc2. For a more detailed look into how and why bad actors are exploiting this vulnerability, please refer to our blog about how to detect Log4j2 exploitation using Elastic Security.

How Elastic is approaching the Log4j2 exploit and the issues surrounding it

When Elastic learned of this vulnerability and how it affects our products, our engineering and security teams worked hard to ensure that our customers remained safe, aware, and equipped with the knowledge of how to use our products to combat Log4j2’s vulnerabilities. We released an up-to-date advisory that outlines Elastic’s response, affected and unaffected products, updates, and more. We are also pleased to announce new versions of Elasticsearch and Logstash, 7.16.2 and 6.8.22, which upgrade to the latest release of Apache Log4j and address false positive concerns with some vulnerability scanners. Elastic also maintains ongoing updates via our advisory to ensure our customers and communities can stay up-to-date on the latest developments, just as we are.

The Elastic Security team also released a response and analysis on the security flaw itself. As mentioned above, we also released an up-to-date blog on how to detect Log4j2 exploits using Elastic Security (which we are linking to again because it is quite helpful). If you are not yet using Elastic Security and want to get started, please take a look at our free fundamentals training courses and Quick Start training videos.

What’s next for the log4j2 vulnerability and the companies and users that could be affected?

Concerns over bad actors taking advantage of this vulnerability, and over the potentially growing number of malicious users doing so, are very real. Having the proper security measures in place to combat these threats, and to get ahead of them altogether, is necessary.

This situation is developing as more updates are made to Log4j2 in the hopes of curbing the issue, but until it is completely resolved, awareness and intentionality in regards to IT and cybersecurity are necessary. Despite the software being patched, this is not the end of the issue from a cybersecurity threat perspective. Many think that the exploits are just getting started, so in the meantime it is paramount that proper cybersecurity measures are taken, and that intentional threat mitigation is practiced.

But Elastic Security can help.

Get started with a free 14-day trial of Elastic Cloud. Or download the self-managed version of the Elastic Stack for free.

]]>
https://www.elastic.co/blog/log4j2-vulnerability-what-to-know-security-vulnerability-learn-more-elastic-supporthttps://www.elastic.co/blog/log4j2-vulnerability-what-to-know-security-vulnerability-learn-more-elastic-supportMon, 20 Dec 2021 17:00:00 GMT
<![CDATA[Introducing 7.16.2 and 6.8.22 releases of Elasticsearch and Logstash to upgrade Apache Log4j2]]>https://www.elastic.co/blog/new-elasticsearch-and-logstash-releases-upgrade-apache-log4j2https://www.elastic.co/blog/new-elasticsearch-and-logstash-releases-upgrade-apache-log4j2Sun, 19 Dec 2021 14:00:00 GMT<![CDATA[Introducing 7.16.2 and 6.8.22 releases of Elasticsearch and Logstash to upgrade Apache Log4j2]]>We are pleased to announce new versions of Elasticsearch and Logstash, 7.16.2 and 6.8.22, to upgrade to the latest release of Apache Log4j and address false positive concerns with some vulnerability scanners. Elastic also maintains ongoing updates via our advisory to ensure our Elastic customers and our communities can stay up-to-date on the latest developments.

December 10th started with the public disclosure of the Apache Log4j vulnerability, CVE-2021-44228, affecting the popular open source logging framework adopted by many Java-based custom and commercial applications. This vulnerability, affecting versions 2.0-beta9 through 2.14.1 of Log4j2, is already being exploited by nation-state attackers and ransomware groups, such as APT35 and Hafnium. Research by Google using Open Source Insights estimates that over 35,000 packages (over 8% of the Maven Central repository) have been impacted by the recently disclosed vulnerabilities, as of December 16th.

Apache Log4j released a fix for this initial vulnerability in Log4j version 2.15.0. However, the fix was incomplete and resulted in a potential DoS and data exfiltration vulnerability, logged as CVE-2021-45046. This new vulnerability was fixed in Log4j2 version 2.16.0. However, version 2.16.0 itself was also found vulnerable to another DoS vulnerability, leading to a new CVE, CVE-2021-45105, and the eventual release of Apache Log4j2 version 2.17.0.

In our advisory post, we identify several mitigations that are effective on versions of Elasticsearch and Logstash even when using a vulnerable version of Log4j. Elasticsearch and Logstash versions 7.16.1 and 6.8.21 also fully mitigate CVE-2021-44228 and CVE-2021-45046. Despite these versions providing full protection against all known CVEs, they may trigger false positive alerts in vulnerability scanners that look at only the version of the Log4j dependency. We understand that while that may not lead to risk, some deployments and customers may still be concerned about compliance implications.

Introducing Elasticsearch 7.16.2 and Logstash 6.8.22

Today, we’re pleased to announce the availability of new versions of Elasticsearch and Logstash, 7.16.2 and 6.8.22 respectively, which upgrade Apache Log4j2 to version 2.17.0. These releases also retain the mitigations delivered in 7.16.1 and 6.8.21. The full set of Log4j mitigations delivered in 7.16.2 and 6.8.22 includes:

  1. Log4j upgraded to version 2.17.0
  2. JndiLookup class is completely removed to eliminate the attack surface area provided by the JNDI Lookup feature and associated risk of similar vulnerabilities
  3. log4j2.formatMsgNoLookups=true is set to disable one of the vulnerable features

Please refer to the Elastic advisory to stay up-to-date on the latest on all Elastic products and related mitigations.

While patching systems represents the best approach to stay ahead of these vulnerabilities, there may be instances where patching is delayed due to dependencies or unmanaged/rogue systems lurking within the environments. Elastic Security users can also leverage the power of detection and event correlation, using Elastic Endpoint, Auditbeat, and threat hunting capabilities, to identify any active exploitation of the Log4j2 vulnerability in the environment. Refer to Elastic’s blog on this topic to learn how Elastic can help.
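
As a crude first-pass hunt (attackers obfuscate these strings heavily, so treat this as a starting point rather than reliable coverage; see the detection blog referenced above for thorough queries), you can sweep collected logs for the characteristic JNDI lookup pattern:

GET logs-*/_search
{
  "query": {
    "wildcard": {
      "message": {
        "value": "*${jndi:*",
        "case_insensitive": true
      }
    }
  }
}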


Existing Elastic Security users can access these capabilities within the product. If you’re new to Elastic Security, take a look at our Quick Start guides (bite-sized training videos to get you started quickly) or our free fundamentals training courses. Refer to the documentation online to see how you can upgrade your Elasticsearch and Logstash deployments. You can always get started with a free 14-day trial of Elastic Cloud. Or download the self-managed version of the Elastic Stack for free.

Reference Material

https://logging.apache.org/log4j/2.x/security.html

https://security.googleblog.com/2021/12/understanding-impact-of-apache-log4j.html

https://www.bleepingcomputer.com/news/security/log4j-vulnerability-now-used-by-state-backed-hackers-access-brokers/

https://thehackernews.com/2021/12/hackers-begin-exploiting-second-log4j.html

https://www.cert.govt.nz/it-specialists/advisories/log4j-rce-0-day-actively-exploited/

https://www.zdnet.com/article/cisa-orders-federal-agencies-to-mitigate-log4j-vulnerabilities-in-emergency-directive/

]]>
https://www.elastic.co/blog/new-elasticsearch-and-logstash-releases-upgrade-apache-log4j2https://www.elastic.co/blog/new-elasticsearch-and-logstash-releases-upgrade-apache-log4j2Sun, 19 Dec 2021 14:00:00 GMT
<![CDATA[Better together: How Elastic supports fast-paced growth with empathy]]>https://www.elastic.co/blog/how-elastic-supports-fastpaced-growth-with-empathy-culturehttps://www.elastic.co/blog/how-elastic-supports-fastpaced-growth-with-empathy-cultureFri, 17 Dec 2021 17:00:00 GMT<![CDATA[How CISOs can better manage an emerging risk: unfilled roles]]>https://www.elastic.co/blog/how-cisos-better-manage-emerging-risk-unfilled-roles-insights-leadershiphttps://www.elastic.co/blog/how-cisos-better-manage-emerging-risk-unfilled-roles-insights-leadershipFri, 17 Dec 2021 17:00:00 GMT<![CDATA[Better together: How Elastic supports fast-paced growth with empathy]]>Our ten-year anniversary is just around the corner. As I’m sure you can imagine, a lot has changed since our humble beginnings as a recipe app. For one, we’ve gone from a scrappy startup with our core Elastic Stack to becoming a publicly traded company with a full arsenal of search-powered solutions.

Since our start, we’ve led with a human-centric approach to our culture and how we grow. It’s so integrated into what we do, it’s a part of our Source Code.

“At the beginning of defining values for the company, it was difficult to determine what would actually inspire people,” says CEO and founder Shay Banon, talking about how our Source Code came to be. “We didn’t want our set of values to be bland or common. We wanted it to be multidimensional, so people can find themselves in it. Given that we started in the open source community and as a distributed company, our Source Code isn’t meant to be a directive from a centralized source. It’s about empowering the individual to find themselves in each part of the Source Code as an inspiration to what they can be at Elastic.”

So, in the spirit of our free and open heritage, we’re sharing some of the ways we’ve maintained our culture as we’ve grown, and what we’ve learned along the way.

Scaling teams and creating career paths

“Elastic was my first startup and distributed culture experience,” said Elyssa Emrich, global community programs team lead, when asked about her early days at an Elastic just shy of 500 employees. “In my welcome email, I shared that I’m a lover of Wisconsin sports, cheese, and beer. Right away, I was added to the craft-beer Slack channel and received “Go Pack Go” from fellow Green Bay Packer fans. That was the moment I knew I’d find a tight-knit community at Elastic.”

Our once small village has now grown to over 2,000 Elasticians. While knowing everyone’s name has become a thing of the past, a bustling city of Elasticians means new ideas. Collaborating and creating communities in our distributed work environment bolstered our culture and added to our rapid growth. And as we’ve scaled teams, we’ve been able to open up new career paths for Elasticians eager to take on new challenges and learn new skills.

“As the company grew,” said Marina Farthouat, senior director of HR business partners, “we saw opportunities to create new career paths for Elasticians that didn’t exist before. This was a way to continue fostering a strong culture and our alignment to the Elastic Source Code. We’ve grown our sales talent by nurturing career paths. For example, we’ve been able to promote experienced account executives internally by creating the regional vice president roles.”

Creating specific roles and more dedicated teams has also helped reduce individual workloads across the company and allowed us to hire people qualified to deliver on very specific outcomes.

“When I joined more than four years ago, we were a smaller company, with about 600 people,” says Paul Mewis, senior sales recruiter. “Back then, we looked for people who liked playing a variety of roles. As we quickly grew in size, we saw an increased need for more specialized talent to help us grow and evolve in an enterprise-ready environment. We had great individuals who helped us get to where we are today, and bring us to the next level as we scale the organization to a new stage.”

As we’ve scaled as a company, with new leadership roles, different types of talent, and the evolution of our technology, we’ve also created entirely new teams and business functions that are experiencing rapid growth and success.

“I joined Elastic over a year ago as vice president of the US commercial segment,” says Melissa Humble, vice president of cloud sales. “During my first whirlwind eight months, I was promoted to vice president of our new global Inside Sales team — a rapidly expanding team that’s energetic, ambitious, and full of opportunities. We’re already the fastest-growing team at Elastic. In just the first six months of our team’s existence, we added over 120 Elasticians, from individual contributors to senior leaders ... and we are just getting started.”

Distributed doesn’t just mean remote

We’re distributed by design, but that doesn’t mean we all work remotely. In addition to our original office spaces, we’ve opened more than 30 offices around the world for those more comfortable in an office environment, whose roles require a bit of face time with customers, and for Elasticians who want to feel a more tangible sense of community.

For certain teams, these new offices have become an integral part of business. Our Inside Sales team, for example, recently opened and expanded offices in Austin, London, and Singapore to act as a home base for specific direct sales roles. While this doesn’t change the fact that we’re distributed first (most of us love working from home), it’s important for teams looking to make stronger collaborative efforts.

And face-to-face time is still important, no matter how efficiently we can work virtually. To make up for the in-person time we lost during COVID, and to find a way to connect across the business, we’ve held two virtual all hands while waiting for the all-clear to gather again in person. These virtual get-togethers were incredibly successful and inspired a sense of community during a time when human contact outside of daily work tasks was in short supply.

For many, our Global All Hands event (GAH) is an annual highlight. And because we’re optimistic types, we’re gearing up for a full company gathering in Las Vegas next June. In the last quarter we hired more people than in any previous quarter in our history and we’re really looking forward to meeting all the new people in person!

“Creating a sense of unity in a company that is used to working asynchronously is really important to our culture,” says Corey Williams, workplace experience (WeX) lead. “On the WeX team, we invest in creating opportunities for Elasticians to gather, both in person and virtually, to continue building community and connection even as the company grows to a point where knowing everyone seems like an impossibility.”

Diversity and inclusion

“Diversity is a journey, not a destination.”

As we’ve grown, we’ve experienced how invaluable diversity, equity, and inclusion are. Keeping this in focus since the beginning, we have seen a clear maturation of our internal initiatives and communities around DEI efforts. At the beginning, we had various organic Slack groups where Elasticians could find a community they identified with, like our Blasticians group (supporting the Black community at Elastic), the LGBTQIA+ group, and the Women of Elastic Slack channel. During months of awareness and celebration like Black History Month or Pride, these groups would come together and share messages about their experiences at Elastic in the form of a blog or internal email.

Over the years, these once small groups grew and lent stronger voices to the moments in time that mattered to them. Once-a-year blogs and internal communications from members of small Slack channels have turned into celebrations guided by our seven ERGs (Employee Resource Groups) and boosted by cross-functional teams around the globe.

These celebrations have featured everything from curated videos to educational session collaborations with Elastic Cares, the philanthropic goodness team at Elastic (recent sessions spotlighted a Brazil-based non-profit organization providing shelter and mental health support, and an intersectional panel comprising trans BIPOC, Asian American, British Southeast Asian, and African American speakers, to name two).

We even include things like internal story sharing during Black History Month; virtual cooking classes and Zumba for Elasticians Unidos Month to celebrate the different backgrounds and cultures within the Hispanic/LatinX community at Elastic; Pride-themed Elastic swag with proceeds going to organizations chosen by ERG members; and an inaugural scholarship to support underrepresented students’ pursuit of STEM education and careers.

Are we at the final destination of our DEI journey? No. The journey is never ending. But looking back at the progress we’ve made, we can see that we’re moving in the right direction. We know it’s not one person or team’s job to foster an inclusive environment, it takes a village of people excited to come on this journey with us.

Leading by listening

As we’ve added new teams, management layers, ERGs, and communities, we’ve been intentional about keeping our lines of communication open between Elasticians of all levels. Since the beginning of our company, Shay has regularly held “ask me anything” sessions (AMAs). These AMAs are extremely popular. The sessions aren’t just a time to come together across various time zones, but also a chance to get to know our leadership as they give unscripted, honest answers to real questions from Elasticians.

“Important discussions evolve out of these AMA sessions,” says Leah Sutton, senior vice president of HR. “And while the way we hold them might be different, the spirit is still there. When we started, AMAs were with just Shay. Now the rest of the senior leadership team joins the AMA to give a broader view across the business. We have an AMA doc that Elasticians add questions to, and everyone votes on topics, which enables us to see what’s on Elasticians’ minds. It’s important that we know what the questions and concerns are, and that we have the opportunity to answer them in this open forum.”

This approach puts Elasticians first. Our leadership leads by listening.

Focusing on wellness

While life at Elastic can be thrilling and full of challenges, even the most eager of us have moments of doubt, feelings of isolation (especially in a pandemic), and a sense of being overwhelmed.

Early in the pandemic, when the rest of the world was just getting used to the idea of lockdowns, Elastic leadership established bi-weekly Shut-It-Down days. These Shut-It-Down days are a nearly company-wide paid day off every other Friday and provide a great opportunity to reset.

And Elastic’s leadership has doubled down on their commitment to continue providing solid health and wellness resources to all of our employees. We recently launched the Be.Well@Elastic program, dedicated to advancing the care we provide for Elasticians beyond their work life. We currently offer counseling plans, regular meditation sessions across the globe, and leave plans for employees who need more time off.

As Elastic grows, we’re dedicated to making sure that every Elastician (and their families) have the tools and resources they need to become and stay their best self both in and out of work.

Interested in joining Elastic? We’re hiring. Check out our teams and find the right career for you! Read more about life at Elastic on our blog.]]>
https://www.elastic.co/blog/how-elastic-supports-fastpaced-growth-with-empathy-culturehttps://www.elastic.co/blog/how-elastic-supports-fastpaced-growth-with-empathy-cultureFri, 17 Dec 2021 17:00:00 GMT
<![CDATA[How CISOs can better manage an emerging risk: unfilled roles]]>When Colonial Pipeline suffered a massive ransomware attack in early 2021, an internal vulnerability added to the crisis: The company was operating without its top cybersecurity manager.

The global security workforce needs to grow by 65% to become fully staffed, according to a new study by (ISC)², the world’s largest organization for cybersecurity pros. While modern IT platforms and tools can certainly automate many low-level tasks to help relieve overburdened security teams, chief information security officers must still find better ways to retain existing workers and recruit new ones.

If they don’t, those unfilled positions represent a significant source of risk, security leaders say. Staff shortages are causing misconfigured systems, oversights in following security procedures, rushed deployments, and an inability to recognize new threats — the very kind of lapses that often lead to breaches.

“It really does leave organizations more vulnerable if they don’t have adequately staffed cybersecurity teams,” says Clar Rosso, chief executive of (ISC)².

Moreover, the COVID-19 pandemic has made the problem especially acute. More than a quarter of security staffers temporarily left their jobs or worked reduced hours during the pandemic, according to the (ISC)² report.

In response, leading CISOs are adopting new strategies to fill open positions, and to keep their most valuable existing workers from jumping ship.

Recruit outside of IT

Because there aren’t enough highly skilled cybersecurity professionals to go around, CISOs and HR leaders are increasingly looking outside the traditional IT talent pool to find prospects with aptitude and adaptable skills. It’s becoming more common for younger cybersecurity workers to start their careers outside of IT. According to the (ISC)² study, just 38% of Gen Z and Millennial security pros started out in IT, compared with 53% of Gen Xers and 55% of Baby Boomers.

“You’re not going to fill a 2.7 million job gap by hiring the same people,” Rosso says.

For people changing careers, cybersecurity is an attractive field: It has open positions in every region of the world with employers in different industries, and holds the promise of steady advancement. Indeed, 77% of cybersecurity pros surveyed by (ISC)² said that they were satisfied or extremely satisfied in their jobs, the highest levels ever reported in the annual study.

CISOs are increasingly considering candidates with problem-solving ability, communication skills, curiosity, and willingness to learn, as well as strong strategic thinking. For those prospects, CISOs “will invest in training for the technical skills,” says Rosso.

The military, government agencies, and trade schools are all rich sources of skills that are “readily transferable to cybersecurity roles,” says a 2020 study from Kudelski Security, a global cybersecurity company. For example, Hiring Our Heroes, a foundation supported by the U.S. Chamber of Commerce, offers 14-week cybersecurity “boot camps” for veterans interested in making a career jump.

Target diversity recruitment

Recruiting from outside the IT universe also presents an opportunity for CISOs to make progress with diversity goals. Women make up just 25% of the global cybersecurity workforce, (ISC)² reports, and non-white employees hold only 28% of the cybersecurity jobs in North America and in the U.K.

“It’s clear that our industry faces serious future risks if it doesn’t find ways to recruit new talent to its ranks and fill the growing number of vacancies. But more than that, its current lack of diversity poses its own more immediate risks because company systems aren’t homogenous and neither are potential assailants,” says Mandy Andress, chief information security officer at Elastic.

Bringing up those numbers has a potentially greater impact beyond equity; it also supports core security objectives. Broadening the range of educational, geographic, neurodiverse and LGBTQ constituencies in cybersecurity can better equip security teams to assess and manage an ever-widening array of threats.

Andress added that the cybersecurity team she leads as an LGBTQIA+ female CISO includes people who represent the full array of human diversity when it comes to neurodiversity, sexual orientation, gender identity, race, and age. The picture is just as varied when it comes to background, educational pathway, and industry experience.

“In a multidisciplinary field like this, different perspectives are critical. When threats and tactics change around us daily, the diverse viewpoints on my team help counter complacency by bringing new thinking to situations,” says Andress.

Internally, companies can recruit more diverse candidates by writing job descriptions that aren’t overloaded with technical jargon. “You’ll get a more robust pool of candidates if you write higher-level job descriptions that are general in nature,” Rosso says, “and broaden the places you look for new hires.”

Boost retention with career development

Even after cybersecurity personnel are hired, CISOs often face an uphill battle to keep them. Fewer than 40% of organizations surveyed in 2021 by Hays, an IT executive search firm, said they could effectively retain the cybersecurity talent they recruited.

Because compensation is so competitive, companies must distinguish themselves in other ways like professional development. Paying for training and certification courses, and helping plot new career paths that promise steady advancement, can be highly effective.

“Will you retrain people? Will you bring people in at a more junior level and give them a mentor or leaders to develop them further? Those are important factors,” says Christine Wright, senior vice president at Hays.

The shift to remote work during the pandemic could ultimately pay dividends by giving employers a new perk to recruit and retain security pros.

“The pandemic has freed me to stop asking people to move to one of a few cities and instead allows me to meet talent where they are in the country or the world,” says Justin Berman, CISO at healthcare company Thirty Madison. “The ability to collaborate, communicate, and function as a team across diverse locations was always critical, but now it’s a strategic differentiator on hiring, because if you won’t let them, someone else will.”

Matt Palmquist is a freelance business journalist and former contributing editor of Strategy+Business magazine.

]]>
https://www.elastic.co/blog/how-cisos-better-manage-emerging-risk-unfilled-roles-insights-leadershiphttps://www.elastic.co/blog/how-cisos-better-manage-emerging-risk-unfilled-roles-insights-leadershipFri, 17 Dec 2021 17:00:00 GMT
<![CDATA[Use Cross Cluster Search with Elastic Cloud to improve observability]]>https://www.elastic.co/blog/use-cross-cluster-search-elastic-cloud-observabilityhttps://www.elastic.co/blog/use-cross-cluster-search-elastic-cloud-observabilityThu, 16 Dec 2021 17:00:00 GMT<![CDATA[Use Cross Cluster Search with Elastic Cloud to improve observability]]>This blog describes how we leveraged various tools and pieces of functionality to build a scalable cross-region and cross-provider Single Pane of Glass (SPOG) using cross-cluster search (CCS).

This blog will be useful if your Observability data is distributed across deployments, if you have multiple points of entry to view and analyze your data, and if you find yourself hopping from one Kibana instance to another to find the answers you’re looking for.

Supporting customers across multiple providers and regions means we collect and observe data within each region. Technically, we operate each region in a largely stand-alone fashion. Logs, metrics and trace data collected within that region will remain in that region. While this provides the benefits of isolation and security, it can be challenging to observe patterns across our distributed platform. If we find an interesting data point in one region, how can we easily determine if the same data point is occurring in any of our other regions?

Cross-cluster search, enhancements to the deployments API, and the release of the Elastic Cloud Terraform provider give users the opportunity to create a central deployment, backed by multiple remote deployments in different regions.

Scale

Cross-cluster search is a scalable solution. As we expand Elastic Cloud, each new region is added to the central deployment as a new remote. This process needs to be seamless, with the ability to keep pace with our rapid expansion. Adding observability to our ever-expanding footprint needs to be a low-friction task.

Currently, we use cross-cluster search to view data from more than 150 remote observability clusters. These are distributed across more than 50 cloud regions from 4 providers. Combined, these deployments contain more than 700TB of observability data regarding our globally distributed platform.

Globally, our Elastic Cloud observability solution ingests 100TB+ of logs, metrics, and traces every day.

SLOs and SLIs

A key pillar of any observability solution is the definition of Service Level Indicators (SLIs) and Objectives (SLOs). This is covered in more detail by this blog post. Our reliability and development teams have been diligent in their definition and measurement of these essential indicators. This talk from ElasticON 2021 gives great insight into how this was achieved. In building a large cross-cluster search observability solution, these SLIs should be front and center, easily accessible, and presented in the correct regional and/or global context.

Requirements

With these considerations, our key requirements were:

  • To leverage cross-cluster search (CCS) functionality.
  • Be scalable.
    • Adding CCS configurations for new regions should be painless and easy.
    • Performance should scale with our growth.
  • Be described in code, using the Elastic Cloud terraform provider.
  • Surface both global and regional SLIs to ensure easy visibility into the achievement of our operational objectives.

As we scale to provide services across more regions and providers, this solution will allow new remote deployments to be added to the infrastructure, described in code, and deployed with the Terraform provider.

Step 1 : Deployment Identification

Which deployments will be used as “remotes” in our setup?

Using the Elastic Cloud Deployments API, we can associate some user-defined metadata tags with our deployments. For example, each deployment that will be used as a remote in our CCS setup can be tagged as `purpose : region_observability`.


The metadata object contains an array of key/value pairs, as described in DeploymentUpdateMetadata. Take a look at the Elastic Cloud API documentation to learn more about our powerful API-driven functionality.

PUT /api/v1/deployments/{deployment_id}
{
  "prune_orphans": false,
  "metadata": {
    "tags": [
      {
        "key": "purpose",
        "value": "region_observability"
      }
    ]
  }
}

Now that your deployment is tagged, it can be programmatically identified via the API without needing the ID or name. You can, of course, apply the same label to several deployments.
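For illustration, the same discovery can be done with a small script outside of Terraform. Below is a minimal Python sketch; the API key is a placeholder, and the response field layout is assumed to mirror the metadata.tags structure in the update request shown above:

import requests

API = "https://api.elastic-cloud.com/api/v1"          # Elastic Cloud API
HEADERS = {"Authorization": "ApiKey <your-api-key>"}  # placeholder credential

def deployments_with_tag(key="purpose", value="region_observability"):
    """Return the IDs of deployments carrying the given metadata tag."""
    resp = requests.get(f"{API}/deployments", headers=HEADERS)
    resp.raise_for_status()
    matches = []
    for dep in resp.json().get("deployments", []):
        # Fetch each deployment and inspect its metadata tags.
        detail = requests.get(f"{API}/deployments/{dep['id']}",
                              headers=HEADERS).json()
        tags = detail.get("metadata", {}).get("tags", [])
        if any(t.get("key") == key and t.get("value") == value for t in tags):
            matches.append(dep["id"])
    return matches

print(deployments_with_tag())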

Step 2 : Define your central deployment

It is time to create your new central deployment. This will be where you access the various UI elements to view the data in your remote deployments leveraging the power of CCS.

The Elastic Cloud Terraform provider allows you to describe this deployment quite easily in code. After configuring the provider (https://github.com/elastic/terraform-provider-ec#example-usage), create a simple `ec_deployment` resource.

resource "ec_deployment" "central_deployment" { name = "My Central CCS Deployment" region = "us-east-1" version = “7.15.1” deployment_template_id = "aws-io-optimized-v2" elasticsearch {} kibana {} },

You will note the absence of any “remote_cluster” definitions in the ec_deployment resource. We will add these using dynamic blocks and a `for_each` argument.

Step 3 : Search for your remote deployments

This is where the fun really starts. We can use the provider’s ec_deployments datasource to return a list of deployment_ids for those deployments we tagged in Step 1. Iterate over this list of deployment_ids to generate an instance of the ec_deployment datasource for each remote cluster.

data "ec_deployments" "metadata_filter" {
  tags = {
    "purpose" = "region_observability"
  }
}

data "ec_deployment" "remote_deployments" {
  count = length(data.ec_deployments.metadata_filter.deployments.*.deployment_id)
  id    = data.ec_deployments.metadata_filter.deployments[count.index].deployment_id
}

Step 4 : Flattening

The resulting nested object (remote_deployments) is not yet suitable for use with dynamic blocks and a for each argument. We will need to flatten this structure into something we can use.

locals {
  flat_remote_deployments = flatten([
    for c in data.ec_deployment.remote_deployments : {
      deployment_id = c.id
      alias         = format("%s-%s", c.tags.purpose, c.region)
      ref_id        = c.elasticsearch.0.ref_id
    }
  ])
}

The “alias” field defined here will become the CCS alias used to perform searches of remote clusters. In this case, we can expect something like “region_observability-us-east-1”.

Step 5 : Put it all together

The flattened object can now be used to populate the relevant “remote_cluster” blocks in the original ec_deployment resource created in Step 2.

resource "ec_deployment" "central_deployment" {
  name                   = "My Central CCS Deployment"
  region                 = "us-east-1"
  version                = "7.15.1"
  deployment_template_id = "aws-io-optimized-v2"

  elasticsearch {
    dynamic "remote_cluster" {
      for_each = local.flat_remote_deployments
      content {
        # The dynamic block's iterator takes the name of the block label,
        # so each element is accessed as remote_cluster.value.
        deployment_id    = remote_cluster.value.deployment_id
        alias            = remote_cluster.value.alias
        ref_id           = remote_cluster.value.ref_id
        skip_unavailable = true
      }
    }
  }

  kibana {}
}

Step 6 : Kibana configuration

Once your Terraform configuration has been successfully applied, your central deployment will be configured with the relevant remote clusters as CCS backends. Log in and take a look around.


https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt8ae14a7dd45dcf59/61afd9bf982fe80ec570014f/Screen_Shot_2021-12-07_at_5.00.35_PM.png,Screen_Shot_2021-12-07_at_5.00.35_PM.png,

Under the menu “Stack Management -> Remote Clusters” you will see the configured deployments.

https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt46efca89ce0fdb12/61afd9e41a1be33b1b23fe3b/Screen_Shot_2021-12-07_at_5.01.00_PM.png,Screen_Shot_2021-12-07_at_5.01.00_PM.png,

As you can see, the configured “alias” field from Step 4 has been populated here as the remote cluster names. This name/alias is used in the syntax for searching the remote deployments.

GET region_observability-eu-west-1:filebeat-*/_search
GET region_observability-*:filebeat-*/_search

Use these alias patterns to configure the various UI components in Kibana. For example, with Observability -> Logs, you can set the application to look for logs in your remote deployments.

https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt1fb220191b290d83/61afda5a6a78b877510025a6/Screen_Shot_2021-12-07_at_5.04.00_PM.png,Screen_Shot_2021-12-07_at_5.04.00_PM.png,

Wrapping it up

We now have a central Kibana point-of-entry for data exploration. Data from all your tagged deployments is now available here to view and analyze.

Have a new cluster to add? Simply add the relevant metadata tags (see Step 1), and your Terraform definition will build a new plan and apply the changes seamlessly.

Hopefully you now have a glimpse of the power and flexibility that comes from combining Terraform, the Elastic Cloud provider, and the functionality available from the Elastic Cloud API.

]]>
https://www.elastic.co/blog/use-cross-cluster-search-elastic-cloud-observabilityhttps://www.elastic.co/blog/use-cross-cluster-search-elastic-cloud-observabilityThu, 16 Dec 2021 17:00:00 GMT
<![CDATA[Advice for CIOs from the IT frontlines: Design training programs for higher impact and retention]]>https://www.elastic.co/blog/upskilling-reskilling-it-key-strategies-successhttps://www.elastic.co/blog/upskilling-reskilling-it-key-strategies-successThu, 16 Dec 2021 14:00:00 GMT<![CDATA[Advice for CIOs from the IT frontlines: Design training programs for higher impact and retention]]>The shortage of tech skills worsened during the pandemic, and the battle to hire and retain top IT talent is only getting more challenging. To close the IT talent gap, more CIOs are looking to expand the skills of their current workforce, according to a recent McKinsey Global Survey.

But effective upskilling and reskilling isn’t simply a matter of paying for IT practitioners to earn certifications in the latest technology. Leaders must decide which options align best with business objectives, specific skills gaps, and the rapidly changing learning styles of knowledge workers.

To get a front-line perspective on these issues, we interviewed a group of seasoned IT pros for their insights into how enterprise leaders should approach employee training and reskilling challenges. Here are highlights from the conversations.

Invest more in tomorrow’s needs, not today’s

Sometimes leaders can be shortsighted in wanting their IT employees to only get skills that are relevant to the job they’re doing at that moment. A lot of times, that’s not really exciting to the employee. We want personal growth and to learn new things. For us to have vibrant careers, we have to stay ahead of the latest technologies.

It’s also shortsighted for the company. Even if an employee is not doing artificial intelligence at the moment, by gaining an understanding of it, they might see business opportunities for the company involving AI that allow the company to evolve.

It’s about gaining the skills of the future versus the skills of today. Obviously, you’ve got to have a skill baseline to do your job, but leaders need to recognize that even if there’s not an immediate one-to-one connection between the training and the current job, eventually it will pay off both for the employee and the company.

Carmen Fontana

Software developer and director of operations at digital healthcare firm Augment Therapy

https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt83dc46a0ddda021f/61a92f057282fa2393f78037/Carmen_Fontana.png,Carmen_Fontana.png,

Reskilling employees: It takes time to master

What I’ve observed is that a lot of people in leadership roles expect employees to start using a specific skill set correctly immediately after taking a training course. But employees need time to delve deeper into specific topics after the training ends.

They should understand that most trainings cover a lot of material, much of which might not be relevant for your work. In my career, I’ve done a number of certification courses, which tend to cover topics at the surface level. It’s a good introduction, but it’s not enough for skills mastery.

Once you are comfortable with the subject, you need to dive into certain areas yourself and find what is most relevant to your work. That means self-directed study such as blogs and YouTube tutorials, or even seeking out an expert or two directly on Twitter, Facebook or LinkedIn with specific questions.

—Rashid Feroze

Lead security engineer at credit card payment platform CRED

https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/bltb103f835c1e1385f/61a9326637c855238d66f0b9/Rashid_Feroze.png,Rashid_Feroze.png,

Allow time and space for more DIY learning

Managers and higher ups sometimes focus too much on certification training. They don’t see that somebody might actually have more cutting-edge skills by studying the latest advances online on their own.

I like to say that for somebody to be successful in security, they have to be obsessed with it. I’m learning new skills all the time, pretty much every day, by reading the latest blogs and watching the newest YouTube tutorials by people who are respected in the community. Something fresh from a top leader in security is going to have more up-to-date information than an online course.

Then you have to learn by doing. I have a full set of virtual machines and equipment where I will practice what the blog or YouTube video is teaching. Leaders who “get it” understand that skill acquisition requires engineers and analysts to build those muscles in a lab environment to really get their skills to the cutting edge.

Ivan Ninichuck

Solutions engineer at Siemplify, a cybersecurity software provider

https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt9fc238f3b5fe5854/61a932869a2dce135acd6b42/Ivan_Ninichuck.png,Ivan_Ninichuck.png,

Make sure your team is on board before you invest in new tools

Executives don’t need to understand every new security technology they’re buying, but they need to make sure we understand it.

For example, in response to a new threat, they’ll buy a defense product without having anyone on the Blue Team who knows how to use it. A product’s not going to secure the company on its own. Either train the Blue Team so we have the skills to deploy it or hire the specialized talent who can use it out of the box.

If they bring in someone with that skill, that person can bring the rest of the team up to speed by showing us how to use it on the job. That’s my preference, because I’m a hands-on learner. It’s not about taking a multiple-choice test and getting a certificate that says, “Now I have this skill.” I have to get my hands on the keyboard and learn by doing.

Ronnie Watson

IT security analyst in the financial services sector

https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt4b3360ba1424a65a/61a932962dc01977975bd499/Ronnie_Watson.png,Ronnie_Watson.png,

What is one key skill you think all CIOs should recruit for?

Feroze: Agility. These days you have to pivot quickly from solution A to solution B, because solution A is not working. Not everybody has that mindset.

Fontana: People skills. In college, I did a lot of math, not a lot of talking to other human beings. After I gained those communication and leadership skills, I was a more effective technologist.

Ninichuk: Writing ability. Not only do you need to know the technology, you need to be able to communicate about it.

Watson: Analytical thinking. In security, the ability to take an analytical approach to tasks is everything.

]]>
https://www.elastic.co/blog/upskilling-reskilling-it-key-strategies-successhttps://www.elastic.co/blog/upskilling-reskilling-it-key-strategies-successThu, 16 Dec 2021 14:00:00 GMT
<![CDATA[How observability drives better customer experiences]]>https://www.elastic.co/blog/how-observability-drives-better-customer-experienceshttps://www.elastic.co/blog/how-observability-drives-better-customer-experiencesThu, 16 Dec 2021 07:00:00 GMT<![CDATA[How observability drives better customer experiences]]>As marketing director of technology for Stanley Black & Decker, it’s no surprise that Colleen Romero is into power tools. But her favorite utility isn’t a jigsaw or cordless drill.

It’s observability technology.

Observability—software that integrates data from various departments and applications—makes it possible to monitor how customers experience the performance of the 350 websites that Romero oversees. On her watch, the tools have spotted some problems before customers notice them.

“We no longer have customers calling us to say our websites are down,” says Romero. “Observability tools let us spot any degradation of performance and identify the root cause so we can address it proactively.”

Observability enables businesses and IT teams to trace customer issues back to an offending application, process, or device, so that they can quickly address the issue and make changes to prevent it from recurring.

Before the emergence of observability software, most IT shops used a patchwork of point solutions to track metrics, logs and traces for individual departments. But the fragmented products were expensive to support and didn’t capture bigger-picture insights.

The latest observability tools can search many types of data—such as operational data logs and records from application performance monitoring (APM) systems—to spot relationships that no one had thought to track. A cable company, for example, might learn that increased customer churn was associated not with the usual suspects, such as router problems, but with a glitchy upgrade of remote control software.

Observability drives better customer experiences

Observability can be especially critical in improving customer experience. If a website doesn’t work as advertised, 42% of visitors will leave and never come back; a site that takes too long to load can lose two-thirds of its visitors.

“You need observability tools in this age because the web is like a public utility,” says Romero. “It’s no longer OK for a page to load slowly or not at all. Customer expectations are far too high for that.”

Observability can also make IT operations more efficient by directing staffers to the precise cause of a problem instead of forcing them to go on fishing expeditions with data that may not lead to the right solution. Through additional analysis, observability software also distills the enormous amount of data produced by monitoring systems into actionable insights for understanding the customer experience, says Jayne Groll, CEO of the DevOps Institute.

“You could easily get buried under mountains of data without observability tools,” she says.

For example, Furuno, a Japanese maker of marine electronics and satellite communications systems, recently used observability tools from Elastic to resolve complaints from shipping customers about spotty internet service and increasing monthly data charges. Previously these problems had been hard to troubleshoot because it took on average two days to get performance logs from satellite service providers, and even then the data came in the form of massive spreadsheets. It wasn’t obvious whether the problems stemmed from a broken antenna, bad weather, or sailors using excessive bandwidth to stream movies.

With observability, Furuno now sees an integrated picture of the various data sources, delivered to computers on customers’ vessels, and can typically diagnose why the internet is not working properly within an hour.

The observable future

Observability technology is positioned to make significant advances as ML capabilities are added. With current tools, it’s still up to human analysts to figure out how to respond to the tool’s insights, says Scott Sinclair, a senior industry analyst with Enterprise Strategy Group. Over the next few years, he says, machine learning capabilities will help observability tools diagnose failures and present solutions in real time, giving customer and IT teams a better chance to achieve a core objective—to keep more customers happier, around the clock, at times before they’re even aware of the problem.

“We’ve only scratched the surface,” says Sinclair.


]]>
https://www.elastic.co/blog/how-observability-drives-better-customer-experienceshttps://www.elastic.co/blog/how-observability-drives-better-customer-experiencesThu, 16 Dec 2021 07:00:00 GMT
<![CDATA[Implementing academic papers: lessons learned from Elasticsearch and Lucene]]>While developing Elasticsearch, we occasionally come across an important problem with no simple or established approach to solving it. It's natural to ask “hmm, is there an academic paper that addresses this?” Other times, academic work is a source of inspiration. We'll encounter a paper proposing a new algorithm or data structure and think “this would be so useful!” Here are just a few examples of how Elasticsearch and Apache Lucene incorporate academic work:

  • Adaptive replica selection, which Elasticsearch uses to route search requests, is based on the C3 algorithm.
  • Lucene’s nearest-neighbor vector search implements the HNSW algorithm.
  • Lucene skips non-competitive documents during top-hits retrieval using the block-max WAND algorithm.
  • Elasticsearch’s cardinality aggregation estimates distinct counts with HyperLogLog++.

Academic papers are an invaluable resource for engineers developing data-intensive systems. But implementing them can be intimidating and error-prone — algorithm descriptions are often complex, with important practical details omitted. And testing is a real challenge: for example, how can we thoroughly test a machine learning algorithm whose output depends closely on the dataset?

This post shares strategies for implementing academic papers in a software application. It draws on examples from Elasticsearch and Lucene in hopes of helping other engineers learn from our experiences. You might read these strategies and think “but this is just software development!” And that would indeed be true: as engineers we already have the right practices and tools, they just need to be adapted to a new challenge.

Evaluate the paper as you would a software dependency

Adding a new software dependency requires careful evaluation: if the other package is incorrect, slow, or insecure, our project could be too. Before pulling in a dependency, developers make sure to evaluate its quality.

The same applies to academic papers you're considering implementing. It may seem that because an algorithm was published in a paper, it must be correct and perform well. But even though it passed a review process, an academic paper can have issues. Maybe the correctness proof relies on assumptions that aren't realistic. Or perhaps the “experiments” section shows much better performance than the baseline, but this only holds on a specific dataset. Even if the paper is of great quality, its approach may not be a good fit for your project.

When thinking about whether to take a “dependency” on an academic paper, it's helpful to ask the same questions we would of a software package:

  • Is the library widely-used and “battle tested”? → Have other packages implemented this paper, and has it worked well for them?
  • Are performance benchmarks available? Do these seem accurate and fair? → Does the paper include realistic experiments? Are they well designed?
  • Is a performance improvement big enough to justify the complexity? → Does the paper compare to a strong baseline approach? How much does it outperform this baseline?
  • Will the approach integrate well with our system? → Do the algorithm's assumptions and trade-offs fit our use case?

Somehow, when a software package publishes a performance comparison against its competitors, the package always comes out fastest! If a third party designed the benchmarks, they may be more balanced. The same phenomenon applies to academic papers. If an algorithm performs well not only in the original paper, but also appears in other papers as a strong baseline, then it is very likely to be solid.

Get creative with testing

Algorithms from academic papers often have more sophisticated behavior than the types of algorithms we routinely encounter. Perhaps it's an approximation algorithm that trades off accuracy for better speed. Or maybe it's a machine learning method that takes in a large dataset, and produces (sometimes unexpected) outputs. How can we write tests for these algorithms if we can't characterize their behavior in a simple way?

Focus on invariants

When designing unit tests, it's common to think in terms of examples: if we give the algorithm this example input, it should have that output. Unfortunately for most mathematical algorithms, example-based testing doesn't sufficiently cover their behavior.

Let's consider the C3 algorithm, which Elasticsearch uses to figure out what node should handle a search request. It ranks each node using a nuanced formula that incorporates the node's previous service and response times, and its queue size. Testing a couple examples doesn't really verify we understood the formula correctly. It helps to step back and think about testing invariants: if service time increases, does the node's rank decrease? If the queue size is 0, is the rank determined by response time, as the paper claims?

Focusing on invariants can help in a number of common cases:

  • Is the method supposed to be order-agnostic? If so, passing the input data in a different order should result in the same output.
  • Does some step in the algorithm produce class probabilities? If so, these probabilities should sum to 1.
  • Is the function symmetric around the origin? If so, flipping the sign of the input should simply flip the sign of the output.

When we first implemented C3, we had a bug in the formula where we accidentally used the inverse of response time in place of response time. This meant slower nodes could be ranked higher! When fixing the issue, we made sure to add invariant checks to guard against future mistakes.
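To make this concrete, here is a minimal Python sketch of what invariant-style checks can look like. The `rank` function is a toy stand-in, not the real C3 formula; only the invariants described above matter here, and the tests encode them directly (runnable with pytest):

def rank(service_time, response_time, queue_size):
    """Toy stand-in for a C3-style ranking formula.

    Higher rank means the node looks more attractive. The exact shape of
    the formula is not the point; the invariants below are.
    """
    return 1.0 / (response_time + service_time * (1.0 + queue_size) ** 3)

def test_rank_decreases_as_service_time_increases():
    # A node that takes longer to service requests should rank lower.
    assert rank(2.0, 1.0, 1) < rank(1.0, 1.0, 1)

def test_empty_queue_rank_follows_response_time():
    # With a queue size of 0, ordering should be determined by response time.
    assert rank(1.0, 1.0, 0) > rank(1.0, 2.0, 0)

Had checks like these been in place from the start, the inverted-response-time bug would have failed the second test immediately.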

Compare to a reference implementation

Alongside the paper, the authors hopefully published an implementation of the algorithm. (This is especially likely if the paper contains experiments, as many journals require authors to post code for reproducing the results.) You can test your approach against this reference implementation to make sure you haven't missed important details of the algorithm.

While developing Lucene's HNSW implementation for nearest-neighbor search, we tested against a reference library by the paper's authors. We ran both Lucene and the library against the same dataset, comparing the accuracy of their results and the number of computations they performed. When these numbers match closely, we know that Lucene faithfully implements the algorithm.

When incorporating an algorithm into a system, you often need to make modifications or extensions, like scaling it to multiple cores, or adding heuristics to improve performance. It's best to first implement a "vanilla" version, test it against the reference, then make incremental changes. That way you can be confident you've captured all the key parts before making customizations.

Duel against an existing algorithm

The last section raises another idea for a test invariant: comparing the algorithm's output to a simpler and better-understood algorithm's output. As an example, consider the block-max WAND algorithm in Lucene, which speeds up document retrieval by skipping over documents that can't appear in the top results. It is difficult to describe exactly how block-max WAND should behave in every case, but we do know that applying it shouldn't change the top results! So our tests can generate several random search queries, then run them both with and without the WAND optimization and check that their results always match.

An important aspect of these tests is that they generate random inputs on which to run the comparison. This can help exercise cases you wouldn't have thought of, and surface unexpected issues. As an example, Lucene's randomized comparison test for BM25F scoring has helped catch bugs in subtle edge cases. The idea of feeding an algorithm random inputs is closely related to the concept of fuzzing, a common testing technique in computer security.

Elasticsearch and Lucene frequently use this testing approach. If you see a test that mentions a "duel" between two algorithms (TestDuelingAnalyzers, testDuelTermsQuery...), then you know this strategy is in action.
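For illustration, here is a minimal Python sketch of a duel test. The `top_k_optimized` function is a hypothetical stand-in for an optimized retrieval path (in a real duel it would be the WAND-style code); the reference is deliberately brute force, and the inputs are randomly generated with a fixed seed so failures are reproducible:

import heapq
import random

def top_k_reference(scores, k):
    """Brute force: sort everything, take the k best (doc_id, score) pairs."""
    return sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))[:k]

def top_k_optimized(scores, k):
    """Stand-in for an optimized implementation under test."""
    return heapq.nsmallest(k, scores.items(), key=lambda kv: (-kv[1], kv[0]))

def test_duel():
    rng = random.Random(42)  # fixed seed for reproducible failures
    for _ in range(100):
        scores = {f"doc{i}": rng.random()
                  for i in range(rng.randint(1, 500))}
        k = rng.randint(1, 10)
        assert top_k_optimized(scores, k) == top_k_reference(scores, k)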

Use the paper's terminology

When another developer works with your code, they'll need to consult the paper to follow its details. The comment on Elasticsearch's HyperLogLog++ implementation says it well: “Trying to understand what this class does without having read the paper is considered adventurous.” This method comment also sets a good example. It includes a link to the academic paper, and highlights what modifications were made to the algorithm as it was originally described.

Since developers will base their understanding of the code on the paper, it's helpful to use the exact same terminology. Since mathematical notation is terse, this can result in names that would not usually be considered “good style”, but are very clear in the context of the paper. Formulas from academic papers are one of the few times you'll encounter cryptic variable names in Elasticsearch like rS and muBarSInverse.

elastic-blog-academicpaper.jpg
The author's recommended way of reading a paper: with a large coffee.

You can email the author

When working through a tough paper, you may spend hours puzzling over a formula, unsure if you're misunderstanding or if there's just a typo. If this were an open source project, you could ask a question on GitHub or StackOverflow. But where can you turn for an academic paper? The authors seem busy and might be annoyed by your emails.

On the contrary, many academics love hearing that their ideas are being put into practice and are happy to answer questions over email. If you work on a product they're familiar with, they might even list the application on their website!

There's also a growing trend for academics to discuss papers in the open, using many of the same tools from software development. If a paper has an accompanying software package, you might find answers to common questions on GitHub. Stack Exchange communities like “Theoretical Computer Science” and “Cross Validated” also contain detailed discussions about popular papers. Some conferences have begun to publish all paper reviews online. These reviews contain back-and-forth discussions with the authors that can surface helpful insights about the approach.

To be continued

This post focuses on the basics of choosing an academic paper and implementing it correctly, but doesn't cover all aspects of actually deploying the algorithm. For example, if the algorithm is just one component in a complex system, how do we ensure that changes to the component lead to end-to-end improvements? And what if integrating the algorithm requires substantial modifications or extensions that the original paper doesn't cover? These are important topics we hope to share more about in future posts.

]]>
https://www.elastic.co/blog/implementing-academic-papers-lessons-learned-from-elasticsearch-and-lucenehttps://www.elastic.co/blog/implementing-academic-papers-lessons-learned-from-elasticsearch-and-luceneThu, 30 Sep 2021 01:00:00 GMT
<![CDATA[Implementing academic papers: lessons learned from Elasticsearch and Lucene]]>https://www.elastic.co/blog/implementing-academic-papers-lessons-learned-from-elasticsearch-and-lucenehttps://www.elastic.co/blog/implementing-academic-papers-lessons-learned-from-elasticsearch-and-luceneThu, 30 Sep 2021 01:00:00 GMT<![CDATA[Elastic on Elastic: Configuring the Security app to use Cross Cluster Search]]>The Elastic Infosec Detections and Analytics team is responsible for building, tuning, and maintaining the security detections used to protect all Elastic systems. Within Elastic we call ourselves Customer Zero and we strive to always use the newest versions of our products. 

In previous blog posts we gave an overview of our architecture and what data we send to our clusters. In this blog post we will provide instructions on how we use Cross Cluster Search (CCS) with the Security and Machine Learning (ML) applications.

Configuring CCS

Getting the Security app to work with CCS only requires a few minor changes. The first step is to configure the Security app so that it knows to look for the CCS index patterns instead of local index patterns. To do this, open Kibana’s ‘Advanced Settings’ within the Stack Management menu, then, within the Security Solution section, update the Elasticsearch indices setting. These should be set to match the CCS index patterns, such as *:auditbeat-*, *:filebeat-*, *:logs-*, and any other index patterns you want to add to your Security app.
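If you manage several spaces or clusters, this change can also be scripted. The sketch below uses Kibana’s settings endpoint, which is commonly used but not an officially documented, stable API; the setting key assumed here is securitySolution:defaultIndex (its 7.x name), and the endpoint and credentials are placeholders:

import requests

KIBANA = "https://<your-kibana>:9243"  # placeholder Kibana endpoint
AUTH = ("elastic", "<password>")       # placeholder credentials

resp = requests.post(
    f"{KIBANA}/api/kibana/settings",
    auth=AUTH,
    headers={"kbn-xsrf": "true"},      # Kibana requires this header on writes
    json={"changes": {
        # CCS patterns take the form "<cluster alias>:<pattern>";
        # "*" matches all configured remote clusters.
        "securitySolution:defaultIndex": [
            "*:auditbeat-*", "*:filebeat-*", "*:logs-*", "*:winlogbeat-*"
        ]
    }},
)
resp.raise_for_status()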

Now when you return to the Security app you should see the Overview, Hosts, and Network pages displaying events from all of your remote clusters.

Configuring Built In Detection Rules to use CCS

Creating brand new custom detection rules does not require any additional steps; the CCS index patterns will be available to select when you create a new detection rule. If you want to use the 500+ built-in detection rules with CCS, it requires a little extra work. To do this you will need to import, duplicate, and bulk edit all of the rules to change the index patterns to use Cross Cluster Search.

The first step is to load all of the prebuilt Elastic rules into your Security app. Within the Detections tab of the Security app, navigate to the ‘Manage Detections’ page and click the ‘Load Elastic prebuilt rules’ button.

The prebuilt rules cannot be directly modified, but they can be duplicated, and the duplicates can then be modified. To do this you will need to select all of the rules and use the ‘Bulk Actions’ menu to duplicate them all. Before starting this process, I recommend changing the ‘refresh settings’ to disable automatically refreshing the table. If you don’t, the table may refresh while you are trying to duplicate the rules, causing you to deselect some rules.

To select all rules on clusters older than 7.14, you will need to change the ‘Rows per page’ setting to the maximum amount.

When all of the rules are displayed, select all rules and then, in the ‘Bulk Actions’ menu, select ‘Duplicate Selected’. Duplicating all of the rules may take a couple of minutes.

When this is complete you will have two copies of every rule. Using the filter on the right side of the rule table, select only the ‘Custom rules’ that you just created.

With only the Custom Rules displayed, ‘select all’ again and select ‘Export Selected’ from the ‘Bulk Actions’ menu to download all of the rules as an ndjson file.

Within this ndjson file you will need to find and replace all instances of the normal index patterns with the CCS index pattern. For example, to use the remote auditbeat index pattern you will need to find "auditbeat-*" and replace it with "*:auditbeat-*".  Repeat this for the other index patterns until all of the index patterns have been changed to the new CCS pattern.

I also recommend replacing the word ‘[Duplicate]’ that was appended to every rule name with the current stack version. This will help you manage your rules over time and track when each rule was installed or last updated.
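Both of these edits can be scripted. Here is a minimal Python sketch; the file name rules_export.ndjson and the stack version are assumptions for the example, and the "index" and "name" fields are assumed to follow the rule export format described above:

import json

STACK_VERSION = "7.15"  # assumed current stack version

with open("rules_export.ndjson") as f:  # assumed export file name
    lines = [line for line in f if line.strip()]

out = []
for line in lines:
    rule = json.loads(line)
    if "index" in rule:
        # Prefix each local index pattern with "*:" to make it a CCS pattern.
        rule["index"] = ["*:" + p if not p.startswith("*:") else p
                         for p in rule["index"]]
    if "name" in rule:
        # Swap the "[Duplicate]" suffix for the current stack version.
        rule["name"] = rule["name"].replace("[Duplicate]", f"[{STACK_VERSION}]")
    out.append(json.dumps(rule))

with open("rules_ccs.ndjson", "w") as f:
    f.write("\n".join(out) + "\n")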

After all of the changes have been saved to the custom rules ndjson file, you can import the modified rules over the existing rules. Click the ‘Import rule’ button to open the rule import interface.

Drag and drop the modified rule file to the pop-up window and select the button to ‘Automatically overwrite saved objects with the same rule ID’. If you forget to do this, it will create new rules instead of updating the existing ones, and you will then need to clean up the unnecessary rules.

After the rules have all been re-imported you can activate all of the rules that are applicable to your environment.

Note on Event Query Language (EQL)

EQL support for CCS is available as of Elastic release 7.14, but it requires that all of your remote clusters have also been upgraded to 7.14 or newer. Many of the built-in Elastic detection rules use EQL, so if you are using CCS with clusters older than 7.14 you will need to disable those rules.

Machine Learning with CCS

To use Machine Learning with CCS you will need to update each of the datafeeds to use the CCS index patterns. When creating your own custom Machine Learning jobs this is easy to do: you simply select the CCS index pattern when creating the new job. Using the built-in Security Machine Learning jobs requires modifying the datafeed of each job to use the CCS index patterns. Because you cannot change a datafeed’s index pattern via the UI, doing this requires access to Kibana Dev Tools or API access. My method is to load the built-in Machine Learning jobs, stop all of the jobs, and then use the Machine Learning API to get each datafeed, change its index pattern, and update the datafeed with the changes, as sketched below. After this is complete, all of the ML jobs and datafeeds can be restarted. If you used the built-in Security ML jobs, you can now enable the built-in detection rules that use Machine Learning.
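Here is a rough Python sketch of that flow against the Machine Learning APIs (get, update, and stop datafeed). The endpoint and credentials are placeholders, and the response shape follows the get datafeeds API:

import requests

ES = "https://<your-cluster>:9243"  # placeholder Elasticsearch endpoint
AUTH = ("elastic", "<password>")    # placeholder credentials

def move_datafeed_to_ccs(datafeed_id):
    # Stop the datafeed before editing it (its job should already be stopped).
    requests.post(f"{ES}/_ml/datafeeds/{datafeed_id}/_stop", auth=AUTH)

    # Read the current datafeed configuration.
    feed = requests.get(f"{ES}/_ml/datafeeds/{datafeed_id}", auth=AUTH).json()
    indices = feed["datafeeds"][0]["indices"]

    # Prefix each local index pattern with "*:" to make it a CCS pattern.
    ccs = ["*:" + i if not i.startswith("*:") else i for i in indices]

    # Push the updated index patterns back to the datafeed.
    resp = requests.post(f"{ES}/_ml/datafeeds/{datafeed_id}/_update",
                         auth=AUTH, json={"indices": ccs})
    resp.raise_for_status()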

Conclusion

In this post we showed you how to configure your Security Application and Machine Learning jobs to work with Cross Cluster Search. Keep an eye out for future blog posts from the Elastic Infosec team on how we use Elastic to protect Elastic.

]]>
https://www.elastic.co/blog/elastic-on-elastic-configuring-the-security-app-to-use-cross-cluster-searchhttps://www.elastic.co/blog/elastic-on-elastic-configuring-the-security-app-to-use-cross-cluster-searchTue, 28 Sep 2021 15:00:00 GMT
<![CDATA[Elastic on Elastic: Configuring the Security app to use Cross Cluster Search]]>https://www.elastic.co/blog/elastic-on-elastic-configuring-the-security-app-to-use-cross-cluster-searchhttps://www.elastic.co/blog/elastic-on-elastic-configuring-the-security-app-to-use-cross-cluster-searchTue, 28 Sep 2021 15:00:00 GMT<![CDATA[Ingest data directly from Google Pub/Sub into Elastic using Google Dataflow]]>Today we’re excited to announce the latest development in our ongoing partnership with Google Cloud. Now developers, site reliability engineers (SREs), and security analysts can ingest data from Google Pub/Sub to the Elastic Stack with just a few clicks in the Google Cloud Console. By leveraging Google Dataflow templates, Elastic makes it easy to stream events and logs from Google Cloud services like Google Cloud Audit, VPC Flow, or firewall into the Elastic Stack. This allows customers to simplify their data pipeline architecture, eliminate operational overhead, and reduce the time required for troubleshooting.

Many developers, SREs, and security analysts who use Google Cloud to develop applications and set up their infrastructure also use the Elastic Stack to troubleshoot, monitor, and identify security anomalies. Google and Elastic have worked together to provide an easy-to-use, frictionless way to ingest logs and events from applications and infrastructure in Google Cloud services to Elastic. And all of this is possible with just a few clicks in the Google Cloud Console, without ever installing any data shippers.   

In this blog post, we’ll cover how to get started with agentless data ingestion from Google Pub/Sub to the Elastic Stack using Google Dataflow.

Skip the overhead

Pub/Sub is a popular serverless asynchronous messaging service used to stream data from Google Operations (formerly Stackdriver), applications built using Google Cloud services, or other use cases involving streaming data integration pipelines. Ingesting Google Cloud Audit, VPC Flow, or firewall logs to third-party analytics solutions like the Elastic Stack requires these logs to be shipped to Google Operations first, then Pub/Sub.  Once the logs are in Pub/Sub, a Google Cloud user must decide on the ingestion method to ship messages stored in Google Pub/Sub to third-party analytics solutions.

A popular option for joint Google and Elastic users is to install Filebeat, Elastic Agent, or Fluentd on a Google Compute Engine VM (virtual machine), then use one of these data shippers to send data from Pub/Sub to the Elastic Stack. Provisioning a VM and installing data shippers requires process and management overhead. The ability to skip this step and ingest data directly from Pub/Sub to Elastic is valuable to many users — especially when it can be done with a few clicks in the Google Cloud Console. Now this is possible through a dropdown menu in Google Dataflow.

Streamline data ingest

Google Dataflow is a serverless data processing service based on Apache Beam. Dataflow can be used instead of Filebeat to ship logs directly from the Google Cloud Console. The Google and Elastic teams worked together to develop an out-of-the-box Dataflow template that pushes logs and events from Pub/Sub to Elastic. This template handles, in a serverless manner, the lightweight processing (such as data format transformation) previously performed by Filebeat — with no other changes for users who previously used the Elasticsearch ingest pipeline.

Here is a summary of data ingestion flow. The integration works for all users, regardless of whether they are using the Elastic Stack on Elastic Cloud, Elastic Cloud in the Google Cloud Marketplace, or a self-managed environment.

blog-gcp-integration-pubsub-1.png

Get started

In this section, we’ll go into a step-by-step tutorial on how to get started with the Dataflow template for analyzing GCP Audit Logs in the Elastic Stack.

Audit logs contain information that helps you answer the “where, how, and when” of operational changes that happen in your Google Cloud account. With our Pub/Sub template, you can stream audit logs from GCP to Elasticsearch and gather insights within seconds.

We’ll start by installing the Elastic GCP integration straight from the Kibana web UI, which contains prebuilt dashboards, ingest node configurations, and other assets that help you get the most out of the audit logs you ingest.

Before configuring the Dataflow template, you will have to create a Pub/Sub topic and subscription from your Google Cloud Console where you can send your logs from Google Operations Suite.
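If you prefer to script this step rather than use the console, a minimal sketch with the google-cloud-pubsub Python client looks like the following (the project, topic, and subscription names are placeholders):

from google.cloud import pubsub_v1

project_id = "my-gcp-project"               # placeholder
topic_id = "elastic-audit-logs"             # placeholder
subscription_id = "elastic-audit-logs-sub"  # placeholder

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(project_id, topic_id)
subscription_path = subscriber.subscription_path(project_id, subscription_id)

# Create the topic that Google Operations will route logs to.
publisher.create_topic(request={"name": topic_path})

# Create the subscription the Dataflow template will read from.
subscriber.create_subscription(
    request={"name": subscription_path, "topic": topic_path}
)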

blog-gcp-integration-pubsub-2.png

Next, navigate to the Google Cloud Console to configure our Dataflow job. 

In the Dataflow product, click “Create job from template” and select "Pub/Sub to Elasticsearch" from the Dataflow template dropdown menu.

blog-gcp-integration-pubsub-3.png

Fill in required parameters, including your Cloud ID and Base64-encoded API Key for Elasticsearch. Since we are streaming audit logs, add “audit” as a log type parameter. Cloud ID can be found from Elastic Cloud UI as shown below. API Key can be created using the Create API key API.
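The key can also be created and encoded programmatically. Below is a minimal Python sketch using the Elasticsearch security API; it assumes the template expects the Base64 encoding of "id:api_key" (check the template documentation for the exact expected format), and the endpoint and credentials are placeholders:

import base64
import requests

ES = "https://<your-deployment>.es.io:9243"  # placeholder endpoint
AUTH = ("elastic", "<password>")             # placeholder credentials

# Create an API key via the security API.
resp = requests.post(f"{ES}/_security/api_key",
                     auth=AUTH,
                     json={"name": "dataflow-ingest"})
resp.raise_for_status()
key = resp.json()

# Encode "id:api_key" in Base64 for the Dataflow template parameter.
encoded = base64.b64encode(f"{key['id']}:{key['api_key']}".encode()).decode()
print(encoded)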

blog-gcp-integration-pubsub-4.png

blog-gcp-integration-pubsub-5.png

Click “Run Job” and wait for Dataflow to execute the template, which takes a few minutes. As you can see, you don’t need to leave the Google Cloud Console or manage agents!

Now, navigate to Kibana to see your logs parsed and visualized in the [Logs GCP] dashboard.

blog-gcp-integration-pubsub-6.png

Wrapping up

Elastic is constantly making it easier and more frictionless for customers to run where they want and use what they want — and this streamlined integration with Google Cloud is the latest example of that. Elastic Cloud extends the value of the Elastic Stack, allowing customers to do more, faster, making it the best way to experience our platform. For more information on the integration, visit Google’s documentation. To get started using Elastic on Google Cloud, visit the Google Cloud Marketplace or elastic.co.

]]>
https://www.elastic.co/blog/ingest-data-directly-from-google-pub-sub-into-elastic-using-google-dataflowhttps://www.elastic.co/blog/ingest-data-directly-from-google-pub-sub-into-elastic-using-google-dataflowMon, 27 Sep 2021 17:00:00 GMT
<![CDATA[Ingest data directly from Google Pub/Sub into Elastic using Google Dataflow]]>https://www.elastic.co/blog/ingest-data-directly-from-google-pub-sub-into-elastic-using-google-dataflowhttps://www.elastic.co/blog/ingest-data-directly-from-google-pub-sub-into-elastic-using-google-dataflowMon, 27 Sep 2021 17:00:00 GMT<![CDATA[Elastic APM iOS agent technical preview released]]>We are proud to announce the preview release of the Elastic APM iOS agent! This release is intended to elicit feedback from the community, while providing some initial functionality within the Elastic Observability stack and is not intended for production use. Now is your chance to influence the direction of this new iOS agent and let us know what you think on our discussion forum. If you find an issue, or would like to contribute yourself, visit the GitHub repository.

Elastic APM is an Application Performance Monitoring solution from Elastic and alongside the iOS agent, there are official agents available for Java, Node.js, Python, Ruby, JavaScript/RUM, .NET, PHP, and Go. Elastic APM helps you to gain insight into the performance of your application, track errors, and gauge the end-user experience in the browser.

I’ll be going into the details of the release below, but if you’re ready to jump into the documentation right away you can find it at the Elastic APM iOS agent documentation.

Supported frameworks

The Elastic APM iOS agent is built on the opentelemetry-swift SDK. This means that any frameworks or libraries that are instrumented with OpenTelemetry will be captured by the Elastic APM iOS agent. Additionally, any custom OTel instrumentation you add to your application will be picked up by our agent.

We are initially providing auto instrumentation of the following:

  • URLSession
  • CPU & Memory usage
  • Network connectivity
  • Device & Application attributes

Our main focus is to provide insight into your backend services from the perspective of your mobile application, automatically displaying distributed traces starting at your mobile app.

Downloading the agent

The agent will initially be provided through the Swift Package Manager. It can be added to an iOS project through the Xcode SPM dependency manager or through a Package.swift file. 

Simply add the following to your Package.swift dependencies:

dependencies: [
    .package(name: "apm-agent-ios",
             url: "https://github.com/elastic/apm-agent-ios",
             .branch("v0.1.0")),
…

And add “iOSAgent” to the targets you wish to instrument:

.target( 
            name: "MyLibrary", 
            dependencies: [ 
                .product(name: "iOSAgent", package: "apm-agent-ios") 
            ]), 

The agent API

The Elastic APM iOS Agent has a few project requirements:

  • It’s only compatible with Swift (sorry Objective-C engineers) 
  • It requires Swift v5.3
  • It requires the minimum of iOS v11

The agent API is fairly slim. We provide a configuration object that allows the agent to be set up for an on-prem or cloud solution.

If you’re using SwiftUI to build your app, you can set up the agent as follows: 

struct MyApp: App {
    init() {
        var config = AgentConfiguration()
        config.collectorAddress = "127.0.0.1"
        config.collectorPort = 8200
        config.collectorTLS = false
        config.secretToken = "<secret token>"
        Agent.start(with: config)
    }
    // The rest of your App definition (e.g., var body: some Scene) goes here.
}

Read up more on configuration in the “Set up the Agent” doc.

The agent also captures any data recorded through the OpenTelemetry-Swift APIs, including traces and metrics. Here’s an example of how to start a simple trace:

let instrumentationLibraryName = "SimpleExporter"
let instrumentationLibraryVersion = "semver:0.1.0"
var instrumentationLibraryInfo = InstrumentationLibraryInfo(name: instrumentationLibraryName,
                                                            version: instrumentationLibraryVersion)
var tracer = OpenTelemetrySDK.instance.tracerProvider.get(instrumentationName: instrumentationLibraryName,
                                                          instrumentationVersion: instrumentationLibraryVersion) as! TracerSdk

// Illustrative attribute key/value; substitute your own.
let sampleKey = "sampleKey"
let sampleValue = "sampleValue"

func simpleSpan() {
    let span = tracer.spanBuilder(spanName: "SimpleSpan").setSpanKind(spanKind: .client).startSpan()
    span.setAttribute(key: sampleKey, value: sampleValue)
    Thread.sleep(forTimeInterval: 0.5)
    span.end()
}

You can find more examples of how to use the OTel API in the OpenTelemetry-Swift examples.

If you decide to go this route, you may have to add OpenTelemetry-Swift as a dependency to your project as well. 

Summary and future

We would be thrilled to receive your feedback in our discussion forum or in our GitHub repository. Please keep in mind that the current release is a preview and we may introduce breaking changes. We are excited to be launching this mobile offering and already have many ideas for what comes next, but we want the community to help guide our direction. Check out our CONTRIBUTING.md and let the PRs fly!

]]>
https://www.elastic.co/blog/elastic-apm-ios-agent-technical-preview-releasedhttps://www.elastic.co/blog/elastic-apm-ios-agent-technical-preview-releasedThu, 23 Sep 2021 18:00:00 GMT
<![CDATA[Elastic APM iOS agent technical preview released]]>https://www.elastic.co/blog/elastic-apm-ios-agent-technical-preview-releasedhttps://www.elastic.co/blog/elastic-apm-ios-agent-technical-preview-releasedThu, 23 Sep 2021 18:00:00 GMT<![CDATA[How the French Ministry of Agriculture deploys Elastic to monitor the commercial fishing industry]]>Within the French Ministry of Agriculture and Food (the Ministry), our team of architects in the Methods, Support and Quality office (BMSQ) evaluate and supply software solutions to resolve issues encountered by project teams that affect various disciplines.

As data specialists, one area we’ve been involved in is reconfiguring the traceability of activities for the commercial fishing industry. The aim is to improve the quality, speed, and precision of how we collect and analyze large volumes of data connected to the industry — from declared fish hauls and harbor exit manifests to GPS data tracking vessel locations.

The challenge we have is to provide up-to-date information that is verified, complete, dynamic, and can be viewed in various formats depending on the target audience, and Elastic is the solution. Elastic handles ingesting all of our data and renders visualizations to make it easy to share and use real-time information about commercial fishing activity. With this integrated data, we can take enforcement action, stop illegal fishing, and negotiate fishing rights with our neighboring countries.

Why we chose Elastic over Splunk and Graylog

The first stage of the project was to perform a proof of concept with a solution capable of storing, extracting, and presenting the data related to fishing activities in real time.

We benchmarked the Elastic Stack against other tools, such as Graylog and Splunk. We were ultimately won over by the Elastic Stack’s speed and ease of use, along with its power and scalability. The data presentation and visualization tools, Kibana and Canvas, have also played crucial roles in this project, enabling us to efficiently provide information for our end users in the context of increasingly strict protective fishing regulations and measures. In addition, our Elastic subscription reduces our development time and allows us to focus on our real work, thanks to the support team at Elastic.

Casting a wide regulatory fishing net with Elastic

With GPS systems required on fishing vessels larger than 12 meters (39 feet), we have been able to track boats while at the same time indexing that monitoring data into Elasticsearch, where it is visualized in either Kibana or Canvas. This enables us to help Ministry officials on several enforcement levels:

  • Locating activity zones of boats and areas of intense fishing
  • Monitoring fishing quotas in FAO (Food and Agriculture Organization) zones 
  • Flagging infringements of the law

kibana-presentation-areas-intense-fishing.jpg

Sample Kibana presentation to locate areas of intense fishing  

With Canvas, we can refine the granularity, quality, and format to ensure the fishery data is presented in the most suitable format for the audience, especially a non-technical audience.

infographic-quotas-de-peche.png

Example of the infographics generated using Canvas to monitor fishing data of cod, sardine, and tuna

We could not render presentations like this with our legacy tools, which were conventional databases and a Java application. They were at the limits of the required performance due to the number of filter fields (300 and counting). Now, once the data has been processed in Logstash and stored and indexed in Elasticsearch, it can be filtered, cross-referenced, and correlated in real time.

Elastic gives us the ability to verify the precision and compliance of statements declared by boats compared with actual recorded events.

We are storing our raw data for 10 years. This amounts to 135 million records in Elasticsearch. In addition, each record contains more than 300 filter fields. We receive raw ERS (Electronic Reporting System) data in XML format, as issued by the boats using onboard software or GPS, and we model this data as it flows in so we can integrate it into our Elasticsearch cluster.

diagram-ers-data-xml-format-boats-gps-model.png

How Elastic fits in the architectural layout of the French Ministry of Agriculture

This sea of information allows us to pinpoint quantities and species fished, rejections of protected species, type of boat plus its flag, registration and equipment, fishing quotas per territorial area, satellite operator depending on the region of the globe, and much more. 

Real-time information detailed by region is the basis for consolidated analyses and discussion to facilitate immediate remedying of any infringements of the law, rapid reaction to media controversy relating to protected marine species, and even the renegotiation of quotas each year within the European Union.

Expanding to political, economic, and environmental use cases

The Ministry is continuing to closely monitor new releases of the Elastic Stack. We are anticipating the availability of a French version of Kibana, which would expand the solution’s user group. The most recent functionalities provided in versions 7.11 and 7.12 of Elastic are being tested with great interest — in particular the addition of the tracks layer in the Maps application. This feature takes an index of point locations, ordered by time, and displays them as a line, enabling us to track the route taken by boats, as shown below:

Kibana dashboard tracing a boat’s route with the Maps tracks layer
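
Feeding the tracks layer requires nothing more than an index of point locations ordered by time. A minimal mapping sketch, with illustrative names rather than the Ministry’s actual schema:

  // a geo_point plus a timestamp is all the Maps tracks layer
  // needs to draw a vessel's route; names are illustrative
  PUT vessel-positions
  {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "location":   { "type": "geo_point" },
        "vessel_id":  { "type": "keyword" }
      }
    }
  }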

We have been grateful for the ease of use, flexibility, and creativity of Kibana and Canvas in enabling stakeholders to remain well informed and to react rapidly to increasingly stringent protective regulations and measures. Moreover, the fishing data provided in XML format is modeled in batches, which lets Elasticsearch process requests against a continuously growing volume of data without waiting for all 10 years of currently stored data to be processed. Elastic also lets us repeat the indexing process each time new data is added to existing fields.

In the short term, the Ministry’s aim is to conclude the roll-out of this solution and open it up to a growing group of users.

Beyond fishing data, we are going to need to store, process, and analyze increasing volumes of tracking data, particularly in regard to food — from farm to table. All of which means that Elastic could be of use for these new development projects with high political, economic and environmental stakes.



Sébastien Arnaud — Exchanges & Data architect at the French Ministry of Agriculture

Following initial training in the field of networks and IT security, he has worked on complex data exchange and transformation solutions. He likes to design and integrate innovative architectures for processing, storing and evaluating increasingly large volumes of data.

]]>
https://www.elastic.co/blog/how-the-french-ministry-of-agriculture-deploys-elastic-to-monitor-the-commercial-fishing-industryhttps://www.elastic.co/blog/how-the-french-ministry-of-agriculture-deploys-elastic-to-monitor-the-commercial-fishing-industryThu, 23 Sep 2021 15:00:00 GMT
<![CDATA[How the French Ministry of Agriculture deploys Elastic to monitor the commercial fishing industry]]>https://www.elastic.co/blog/how-the-french-ministry-of-agriculture-deploys-elastic-to-monitor-the-commercial-fishing-industryhttps://www.elastic.co/blog/how-the-french-ministry-of-agriculture-deploys-elastic-to-monitor-the-commercial-fishing-industryThu, 23 Sep 2021 15:00:00 GMT<![CDATA[Elastic 7.15: Create powerful, personalized search experiences in seconds]]>We are pleased to announce the general availability of Elastic 7.15, a release that brings a broad set of new capabilities to the Elastic Search Platform (including Elasticsearch and Kibana) and its three built-in solutions — Elastic Enterprise Search, Elastic Observability, and Elastic Security.

With Elastic 7.15 comes the general availability of the Elastic App Search web crawler and tighter integrations with Google Cloud — enabling our customers and community to more quickly create powerful new web search experiences, to ingest data more quickly and securely, and to more easily put their data to work with the power of search.

In addition, with Elastic Observability’s new APM correlations feature, DevOps teams can accelerate root cause analysis and reduce mean time to resolution (MTTR) by automatically surfacing attributes correlated with high-latency or erroneous transactions.

And, as the saying goes, if you’re going to observe... why not (also) protect?

To this end, with Elastic 7.15, Elastic Security enhances Limitless XDR (extended detection and response) with both malicious behavior protection for (nearly) every OS and one-click host isolation for cloud-native Linux environments.

Elastic 7.15 is available now on Elastic Cloud — the only hosted Elasticsearch offering to include all of the new features in this latest release. You can, of course, also download the Elastic Stack and our cloud orchestration products, Elastic Cloud Enterprise and Elastic Cloud for Kubernetes, for a self-managed experience.

Elastic Enterprise Search

Create powerful new web search experiences in seconds with the general availability of the Elastic App Search web crawler

With 7.15, Enterprise Search makes it faster than ever for organizations to get up and running with web search — freeing up technical teams to focus on other important projects. The Elastic App Search web crawler, now generally available, makes implementing search and ingesting website content nearly effortless. In addition to a number of web crawler improvements that make setup a snap, like automatic crawling controls, content extraction tools, and the ability to natively analyze logs and metrics in Kibana, the web crawler now enables customers to use a single platform to search all of their organization’s data — even websites.

Generally available with Elastic 7.15, the Elastic App Search web crawler makes it easy to ingest website content

To learn more, visit the Elastic Enterprise Search 7.15 blog.

Elastic Observability

Automate root cause analysis for faster application troubleshooting

DevOps teams and site reliability engineers are constantly challenged by the need to sift through overwhelming amounts of data to keep modern applications performant and error-free. More often than not, this is a manual and time-consuming effort. To effectively resolve complex problems, these users need the ability to collect, unify, and analyze an increasing volume of telemetry data and quickly distill meaningful insights. Automation and machine intelligence have become essential components of the troubleshooter’s toolkit.

With Elastic 7.15, we’re excited to announce the general availability of Elastic Observability’s APM correlations feature. This new capability will help DevOps teams and site reliability engineers to accelerate root cause analysis by automatically surfacing attributes of the APM data set that are correlated with high-latency or erroneous transactions.

Elastic APM correlations, now generally available, accelerate root cause analysis to free up DevOps and SRE teams

Streamline monitoring of Google Cloud Platform services with frictionless log ingestion

Elastic’s new Google Cloud Dataflow integration drives efficiency with the frictionless ingestion of log data directly from the Google Cloud Platform (GCP) console. This agentless approach provides an “easy button” for customers — eliminating the cost and hassle of administrative overhead and further extending Elastic’s ability to more easily monitor native GCP services.

To learn more, visit the Elastic Observability 7.15 blog.

Elastic Security

With Elastic 7.15, Elastic Security augments extended detection and response by equipping Elastic Agent to end threats at the endpoint, with new layers of prevention for every OS and host isolation for cloud-native Linux environments.

Elastic Security 7.15 powers extended detection and response (XDR) with malicious behavior protection for every OS and host isolation for cloud-native Linux environments

Stop advanced threats at the endpoint with malicious behavior protection for Linux, Windows, and macOS hosts

Malicious behavior protection, new in version 7.15, arms Elastic Agent to stop advanced threats at the endpoint. It provides a new layer of protection for Linux, Windows, and macOS hosts, powered by analytics that prevent attack techniques leveraged by known threats. This capability buttresses existing malware and ransomware prevention with dynamic prevention of post-execution behavior. Prevention is achieved by pairing post-execution analytics with response actions tailored to disrupt the adversary early in the attack, such as killing a process to stop a payload from being downloaded.

Contain attacks with one-click host isolation from within Kibana

In addition to malicious behavior protection, with the release of Elastic 7.15, Elastic Security enables analysts to quickly and easily quarantine Linux hosts via a remote action from Kibana. With (just) one click, analysts can respond to malicious activity by isolating a host from a network, containing the attack and preventing lateral movement. While host isolation was introduced for Windows and macOS in version 7.14, it is now available on every OS protected by Elastic Agent.

We’re implementing this capability on Linux systems via extended Berkeley Packet Filter (eBPF) technology, a reflection of our commitment to technologies that enable users to observe and protect modern cloud-native systems in the most frictionless way possible.

For more information on our continuing efforts in the realm of cloud security, check out our recent announcements on Elastic joining forces with build.security and Cmd.

To learn more about what’s new with Elastic Security in 7.15, visit the Elastic Security 7.15 blog.

Elastic Cloud

Whether customers are looking to quickly find information, gain insights, or protect their technology investments (or all of the above), Elastic Cloud is the best way to experience the Elastic Search Platform. And we continue to improve that experience with new integrations that let customers ingest data into Elastic Cloud even more quickly and securely.

Ingest data faster with Google Cloud Dataflow

With Elastic 7.15, we’re pleased to announce the first-ever native Google Cloud data source integration to Elastic Cloud — Google Cloud Dataflow. This integration enables users to ship Pub/Sub, BigQuery, and Cloud Storage data directly into their Elastic Cloud deployments without having to set up an extra intermediary data shipper, utilizing Google Cloud’s native serverless ETL service. The integration simplifies data architectures and helps users ingest data into Elastic Cloud faster.

Ensure data privacy with the general availability of Google Cloud Private Service Connect

We’re also excited to announce that support for Google Private Service Connect is now generally available. Google Private Service Connect provides private connectivity from Google Cloud virtual private clouds (VPCs) to Elastic Cloud deployments. The traffic between Google Cloud and Elastic Cloud deployments on Google Cloud travels only within the Google Cloud network, utilizing Private Service Connect endpoints and ensuring that customer data stays off the (public) internet.

Google Private Service Connect provides easy and private access to Elastic Cloud deployment endpoints while keeping all traffic within the Google network

To learn more about what’s new with Elastic Cloud, visit the Elastic Platform 7.15 blog.

Read more in our latest release blogs

Test our mettle

Existing Elastic Cloud customers can access many of these features directly from the Elastic Cloud console. If you’re new to Elastic Cloud, take a look at our Quick Start guides (bite-sized training videos to get you started quickly) or our free fundamentals training courses. You can always get started with a free 14-day trial of Elastic Cloud. Or download the self-managed version of the Elastic Stack for free.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

]]>
https://www.elastic.co/blog/whats-new-elastic-7-15-0https://www.elastic.co/blog/whats-new-elastic-7-15-0Wed, 22 Sep 2021 16:04:00 GMT
<![CDATA[Elastic 7.15: Create powerful, personalized search experiences in seconds]]>https://www.elastic.co/blog/whats-new-elastic-7-15-0https://www.elastic.co/blog/whats-new-elastic-7-15-0Wed, 22 Sep 2021 16:04:00 GMT<![CDATA[What's new in Elastic Enterprise Search 7.15: Web crawler GA and personalized Workplace Search]]>Elastic Enterprise Search 7.15 introduces general availability for App Search’s web crawler making it quick and effortless to spin up powerful, new search experiences for every use case.

We’re also adding countless ways to personalize Workplace Search to meet the unique needs of your organization with the ability to add custom branding, schedule sync frequency, and configure automatic filter detection.

These updates help teams launch search faster and tailor the search experiences they create:

  • Take the headache out of data ingestion and make your website content instantly searchable with a sophisticated, easy-to-use web crawler
  • Apply your organization’s branding across all your mission-critical productivity tools
  • Schedule sync frequency in line with infrastructure demands
  • Define custom filters specific to your business so your team can search naturally
  • Create search integrations where your teams spend the most time, and deliver results from any source, from Google Drive to Slack and everything in between

Elastic Enterprise Search 7.15 is available now on Elastic Cloud — the only hosted Elasticsearch offering to include all of the new features in this latest release. You can also download the Elastic Stack and our cloud orchestration products, Elastic Cloud Enterprise and Elastic Cloud for Kubernetes, for a self-managed experience.

Set up new search experiences in no time with App Search’s web crawler

With 7.15, Elastic Enterprise Search brings general availability to the native web crawler in App Search. One common hurdle customers face when setting up website and application search is data indexing. No more! With the web crawler, it’s simple to ingest web content and get new search experiences up and running in no time. And we’ve added features like automatic crawling controls and content extraction tools that streamline implementation and free up technical teams. Now you can also analyze crawler logs with Kibana visualizations and Elastic observability tools — so you can use one platform for all of your search data.
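
The crawler can also be driven programmatically. As a rough sketch, assuming an engine named my-engine and placeholder host and private key values (check the App Search web crawler API reference for the exact endpoints available in your version):

  # placeholders throughout: host, engine name, and API key are assumptions
  POST https://<app-search-host>/api/as/v1/engines/my-engine/crawler/crawl_requests
  Authorization: Bearer private-xxxxxxxxxxxx

  # list recent crawl requests to check crawl status
  GET https://<app-search-host>/api/as/v1/engines/my-engine/crawler/crawl_requests
  Authorization: Bearer private-xxxxxxxxxxxx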


Here’s everything new in the crawler, including loads of performance and stability optimizations:

  • Robots.txt support: Follows the robots exclusion standard, so it knows what pages not to crawl
  • Sitemap support: Uses your website’s XML blueprint to efficiently locate and crawl your most important content
  • Persistent crawling: Continues web crawling progress even in the event of a failure or restart
  • Content extraction utilities: Lets you identify the exact content you want the web crawler to extract from each page it visits. Also covers:
    • Meta tag and data-attribute rules
    • Include/exclude rules in the document body
  • Domain validation: Checks that a domain is valid and can be reached without indexing restrictions to prevent issues with starting a crawl
  • Deduplication control: Ensures that only the best version of each page appears in your search engine index
  • Automatic crawling controls: Allows you to define how frequently you want to perform automatic crawls
  • Process crawls: Allows you to remove documents on-demand from your index according to crawl rules
  • URL debugging API: A comprehensive way to troubleshoot problematic URLs, allowing you to understand what the web crawler encounters when it visits a given page

Make your mark on Workplace Search

Personalize internal search with your very own branding assets so you can have a consistent look and feel across all of your organization’s essential applications. Make unified search your own and give it instant credibility with the team when you add your organization’s branding without having to build a custom interface. All it takes is a simple .png upload.

Your timetable, your data, your way with Workplace Search

Now you can also schedule Workplace Search sync frequency according to your organization’s needs. When you use Workplace Search’s enhanced sync configurability, you can ensure that computing resources are on par with infrastructure demands. What’s more, you can get real-time results when syncs correspond to your org’s data refresh patterns. No one will miss the team’s latest and greatest content when it’s instantly indexed. Customers on Elastic’s Platinum tier also get added convenience with the ability to schedule syncs by content source and by using the scheduling API.

Get instant recognition with configurable automatic filter detection in Workplace Search

Natural language queries are at the heart of making search experiences intuitive and effective. But how do you also capture the terms and phrases essential to an organization’s communal intelligence? You need configurable filters, of course. Let your team search naturally and find information faster with filters defined for your organization. Take these examples:

  • Pull requests from last week
  • Product team notes updated by me
  • Monthly board presentations in Google Drive

Deliver relevant results to everyone on the team when you create custom filters using natural language queries that get automatically recognized. No need for anyone to pick up a complex query language just to find your latest presentation deck.

Present common search experiences in Workplace Search

Workplace Search offers the convenience of a fully featured desktop and mobile search experience, but also provides all the necessary tools and endpoints for designing and developing bespoke search integrations embedded within high-traffic applications like intranets and workflow applications. Several improvements to the Search API endpoints allow for a more consistent experience across data sources. Slack and Gmail are now available for custom search experience development along with SharePoint Online, Google Drive, and more than a dozen other native data integrations. No matter what information is most relevant to your team, you can design an immersive experience without constraints.

Try it out

Existing Elastic Cloud customers can access many of these features directly from the Elastic Cloud console. If you’re new to Elastic Cloud, take a look at our Quick Start guides (bite-sized training videos to get you started quickly, including the Web crawler Quick Start) or our free fundamentals training courses (including the App Search Web Crawler fundamentals course). You can always get started with a free 14-day trial of Elastic Enterprise Search. Or download the self-managed version of the Elastic Stack for free.

Read about these capabilities and more in the release notes, and other Elastic Stack highlights in the Elastic 7.15 announcement post.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

]]>
https://www.elastic.co/blog/whats-new-elastic-enterprise-search-7-15-0https://www.elastic.co/blog/whats-new-elastic-enterprise-search-7-15-0Wed, 22 Sep 2021 16:03:00 GMT
<![CDATA[What's new in Elastic Enterprise Search 7.15: Web crawler GA and personalized Workplace Search]]>https://www.elastic.co/blog/whats-new-elastic-enterprise-search-7-15-0https://www.elastic.co/blog/whats-new-elastic-enterprise-search-7-15-0Wed, 22 Sep 2021 16:03:00 GMT<![CDATA[What’s new in Elasticsearch, Kibana, and Elastic Cloud for 7.15]]>Elastic Cloud customers can now ingest data more simply, quickly, and securely, and the latest updates to the core Elastic Stack provide users with new tools for maximizing performance and exploring their data.

The 7.15 release of Elastic Cloud brings new integrations with Google Cloud that allow customers to ingest Google Cloud services data directly into their Elastic Cloud deployments and take advantage of additional network security with Google Cloud Private Service Connect. Plus, the Elastic Stack brings enhancements to Elasticsearch and Kibana including improved data transfer, better resiliency, and more flexible data ingest and analysis. 

Ready to roll up your sleeves and get started? We have the links you need:

What’s new in Elastic Cloud for 7.15

Check out the new Google Cloud Dataflow native integration in Elastic Cloud

Google Cloud Dataflow native integration

Introducing the first-ever native Google Cloud data source integration for Elastic Cloud — Google Cloud Dataflow. This integration allows customers to ship Pub/Sub, BigQuery, and Cloud Storage data directly into Elastic Cloud deployments without having to set up an extra intermediary data shipper, utilizing Google Cloud’s native serverless extract, transform, load (ETL) service. Customers benefit from simplified data architecture and increased speed when ingesting data into Elastic Cloud. Read our blog post on these integrations to learn more.

Google Cloud Private Service Connect

We’re excited to announce that support for Google Private Service Connect is now generally available. Google Private Service Connect provides private connectivity from Google Cloud virtual private cloud (VPC) to Elastic Cloud deployments. The traffic between Google Cloud and Elastic Cloud deployments on Google Cloud travels only within the Google Cloud network, utilizing Private Service Connect endpoints and ensuring that customer data stays off the Internet. Read the blog post to learn more.

ARM-based (Graviton2) instances on AWS 

Soon, customers will be able to leverage Amazon Web Services (AWS) ARM-based Graviton2 virtual machines (VMs) for Elastic Cloud deployments running on AWS. VMs running on Graviton2 hardware provide up to 40% better price performance compared to previous generation x86-based instances. Check out the blog post to learn more.

What’s new in Elasticsearch 7.15

Improved data resiliency and reduced data transfer traffic

Since the inception of Elasticsearch, we’ve been on a mission to be the best and fastest search engine around. To further this mission, we’ve lowered the costs of storing and searching data, improved cluster resiliency and search performance, lowered memory heap usage, improved storage efficiency, and introduced faster aggregations in multiple Elasticsearch releases. In this release, we not only improve data resiliency but also reduce data transfer traffic — a change designed specifically to lower our customers’ Elastic Cloud bills. 


By compressing specific inter-node traffic and using snapshot storage to shortcut relocating shards between nodes, we have reduced the amount of network traffic that traverses across the cluster, resulting in a reduction in Data Transfer and Storage (DTS) cost. This change will be most prominent for Elastic Cloud customers with heavy indexing or data migration between tiers. 

New APIs to help optimize and improve Elasticsearch performance

The best decisions are always data driven. Three new experimental APIs in 7.15 give you the tools to analyze how you are using Elasticsearch and ultimately drive improved performance.

The field usage API helps you decide how to index a field based on usage statistics. For example, if a field is used frequently, it should be created with schema on write or at ingest time by using a mapping. If the field is used infrequently, consider defining it at query time with runtime fields. Changing a text field with an inverted_index.term_frequencies of zero and low inverted_index.positions to match_only_text (added in 7.14) can save around 10% of disk space.
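
For instance, a single request returns per-field usage counts for an index (the index name below is a placeholder):

  // technical preview in 7.15: reports how often each field is used
  // by queries, aggregations, scripts, and so on, per shard
  GET my-index-000001/_field_usage_stats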

With the index disk usage API you can see how much disk space is consumed by every field in your index. Knowing which fields take up disk space, you can decide which indexing option or field type is best. For example, keyword or match_only_text may be better than text for certain fields where scoring and positional information is not important. Or, use runtime fields to create a keyword at query time for flexibility and space savings.
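
Likewise, one call reports per-field disk consumption (again with a placeholder index name):

  // the analysis is IO-intensive, so it must be opted into explicitly
  POST my-index-000001/_disk_usage?run_expensive_tasks=true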

Finally, the vector tiles API provides a huge performance and scalability improvement when searching geo_points and geo_shapes drawn to a map (through use of vector tiles). Offloading these calculations to the local GPU significantly improves performance while also lowering costs by reducing network traffic both within the cluster and to the client. 
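
The API itself is a simple GET that returns a Mapbox vector tile for a given zoom/x/y tile coordinate. A sketch, assuming an index with a geo_point field named location:

  // hypothetical index; 6/32/21 is the requested tile as zoom/x/y
  GET my-geo-index/_mvt/location/6/32/21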


Composite runtime fields

Elastic 7.15 continues to evolve the implementation of runtime fields in Elasticsearch and Kibana. 

In Elasticsearch, composite runtime fields enable users to streamline field creation by using one Painless script to emit multiple fields, with added efficiencies for field management. Use patterns like grok or dissect to emit multiple fields from a single script instead of creating and maintaining several scripts, as sketched below. Reusing existing grok patterns also makes it faster to create new runtime fields and reduces the time and complexity of writing and maintaining regular expressions. This development makes it easier and more intuitive for users to ingest new data like custom logs. See more on runtime fields in Kibana 7.15 below.
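
A sketch of the pattern, with an illustrative index name and log format: one grok-powered script emits several typed subfields at once.

  // one script emits http.clientip, http.verb, and http.response together
  PUT my-logs
  {
    "mappings": {
      "runtime": {
        "http": {
          "type": "composite",
          "script": "emit(grok(\"%{COMMONAPACHELOG}\").extract(doc[\"message\"].value))",
          "fields": {
            "clientip": { "type": "ip" },
            "verb":     { "type": "keyword" },
            "response": { "type": "long" }
          }
        }
      }
    }
  }

Searches and aggregations can then reference http.clientip, http.verb, and http.response as if they were indexed fields.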

What’s new in Kibana 7.15

Runtime fields editor preview pane

Combined with the introduction of composite fields for Elasticsearch (above), a new preview pane in the runtime fields editor in Kibana 7.15 makes it even easier to create fields on the fly. The preview pane empowers users to test and preview new fields before creating them — for example, by evaluating a new script against documents to check accuracy in Index Patterns, Discover, or Lens. In addition, pinning specific fields in the preview pane simplifies script creation. This enhancement also includes better error handling for the editor, all to help streamline the field creation process and allow users to create runtime fields more quickly. More developments for runtime fields are on the horizon as we continue to make previously ingested data easier to parse from Kibana.

Use the Kibana runtime fields editor preview pane to evaluate sample documents before creating a new field.

Other updates across the Elastic Stack and Elastic Cloud

Elastic Cloud

  • Leverage more cost-effective hardware options on GCP: Google Compute Engine’s (GCE) N2 VMs for Elastic Cloud deployments running on Google Cloud offer up to 20% better CPU performance compared to the previous generation N1 machine types. Learn more in the blog post.

Elasticsearch

  • Build complex flows with API keys: Search and pagination for API keys allow you to build complex management flows for keys based on your own metadata, as sketched below.
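
A sketch of the new Query API key information API, filtering keys on caller-supplied metadata (the metadata values here are hypothetical):

  // page through API keys tagged with a given environment
  GET /_security/_query/api_key
  {
    "query": { "term": { "metadata.environment": "production" } },
    "from": 0,
    "size": 25
  }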

Kibana

  • Sync across time and (dashboard) space with your cursor: A new hover feature in Kibana charts that highlights corresponding data across multiple charts makes it easier for users to home in on specific time periods to observe and explore trends. In addition to time series, this will also highlight the same non-time data on multiple dashboard panels.
  • Customize charts with legend-ary updates: Legends inside charts (great for busy dashboards) and multi-line series names in legends make it easier for teams to follow the data story on a dashboard.
  • Get a head start on Maps exploration: Metadata for points and shapes is now auto-generated in Elastic Maps when a user creates an index and explores with edit tools. The user and timestamp data is saved for further exploration and management. Also, a new layer action allows users to view only the specific layer they are interested in.
  • Learn more in the Kibana docs.

Machine learning

  • Monitor machine learning jobs easily: Operational alerts for machine learning jobs simplify the process of managing machine learning jobs and models, and alerts in Kibana make it easier to track and follow up on errors. 
  • Adjust and reset models without the fuss: The reset jobs API makes working with models much easier across Kibana, from the Logs app to Elastic Security.
  • Reuse and scale machine learning jobs: Jobs can now be imported and exported, allowing users to reuse jobs created in lab environments or in multiple-cluster environments. Sharing jobs across deployments makes jobs more consistent and easier to scale.
  • Investigate transaction latency: Elastic APM correlations, powered by machine learning, streamline root cause analysis. The Elasticsearch significant terms aggregation was enhanced with a p_value scoring heuristic, and Kibana’s new transaction investigation page for APM aids analysts in a holistic exploration of transaction data. To learn more, read the Observability 7.15 blog.
  • Learn more in the Kibana and Elasticsearch docs.

Integrations

  • Run Elastic Package Registry (EPR) as a Docker image: Now you can run your own EPR to provide information on external data sources to air-gapped environments. By using the EPR Docker image, you can integrate, collect, and visualize data using Elastic Agents. For more information, please refer to this Elastic guide.

Try it out

Existing Elastic Cloud customers can access many of these features directly from the Elastic Cloud console. If you’re new to Elastic Cloud, take a look at our Quick Start guides (bite-sized training videos to get you started quickly) or our free fundamentals training courses. You can always get started with a free 14-day trial of Elastic Cloud. Or download the self-managed version of the Elastic Stack for free.

Read about these capabilities and more in the 7.15 release notes (Elasticsearch, Kibana, Elastic Cloud, Elastic Cloud Enterprise, Elastic Cloud on Kubernetes), and other Elastic 7.15 highlights in the Elastic 7.15 announcement post.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

]]>
https://www.elastic.co/blog/whats-new-elasticsearch-kibana-cloud-7-15-0https://www.elastic.co/blog/whats-new-elasticsearch-kibana-cloud-7-15-0Wed, 22 Sep 2021 16:02:00 GMT
<![CDATA[What’s new in Elasticsearch, Kibana, and Elastic Cloud for 7.15]]>https://www.elastic.co/blog/whats-new-elasticsearch-kibana-cloud-7-15-0https://www.elastic.co/blog/whats-new-elasticsearch-kibana-cloud-7-15-0Wed, 22 Sep 2021 16:02:00 GMT<![CDATA[Elastic Observability 7.15: Automated correlations, frictionless log ingestion from Google Cloud]]>Elastic Observability 7.15 introduces the general availability of automated correlations, unified views across application service logs and dependencies, and agentless log ingestion from Google Cloud Platform (GCP), accelerating troubleshooting of root causes of application issues and making it even easier to ingest telemetry from cloud services. 

These new features allow customers to:

  • Automatically surface attributes of the APM data set that are correlated with high-latency or erroneous transactions
  • Effortlessly troubleshoot application issues by viewing all associated application or service logs from within the APM user interface 
  • Seamlessly ingest log data into Elastic from within the Google Cloud console and extend monitoring to native Google Cloud services

Elastic Observability 7.15 is available now on Elastic Cloud — the only hosted Elasticsearch offering to include all of the new features in this latest release. You can also download the Elastic Stack and our cloud orchestration products, Elastic Cloud Enterprise and Elastic Cloud for Kubernetes, for a self-managed experience.

Automated root cause analysis with APM correlations is now GA

DevOps and SRE teams are constantly challenged with an overwhelming amount of data and dependencies to sift through to keep modern applications performant and error-free. As such, automation and machine learning have become essential components of the troubleshooter’s toolkit. Elastic APM correlations accelerate root cause analysis by automatically surfacing attributes of the APM data set (such as infrastructure components, versions, locations, and custom metadata) that are correlated with high-latency or erroneous transactions and have the most significant impact on overall service performance. Visualize the latency distribution of any attribute compared to overall latency and use these attributes to filter and isolate the root causes of performance problems.
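
Under the hood, correlations build on the Elasticsearch significant terms aggregation and its new p_value scoring heuristic. The sketch below is a simplification of what the APM app actually runs, with illustrative index pattern, service name, and latency threshold:

  // foreground: slow (>1s) transactions for one service;
  // background: all documents, hence background_is_superset is true
  GET apm-*-transaction*/_search
  {
    "size": 0,
    "query": {
      "bool": {
        "filter": [
          { "term": { "service.name": "checkout" } },
          { "range": { "transaction.duration.us": { "gte": 1000000 } } }
        ]
      }
    },
    "aggs": {
      "suspect_attributes": {
        "significant_terms": {
          "field": "host.name",
          "p_value": { "background_is_superset": true }
        }
      }
    }
  }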


Unified observability for APM troubleshooting across logs, third-party dependencies, and backend services

Elastic is the only observability solution built on a search platform that natively ingests high dimensionality and cardinality telemetry data of any type or source, adds context, and correlates it for fast, relevant analysis. Over the last twelve months we have reworked almost the entire user experience within the APM user interface and will continue to deliver visualization and workflow improvements for unified visibility and analysis across the entire application ecosystem. 

Two new troubleshooting views have been added in 7.15. Logs are now available on any level, at the top level for the service, as well as at the level of specific transactions and container or pod instances. We're now also able to show external dependencies, such as backends, caches, and databases, including how they are performing, their upstream dependencies, and how they have changed over time.


Get an integrated roll-up view of application logs across application services running on ephemeral infrastructure to quickly find errors and other causes of application issues.



Identify issues with third-party and backend service dependencies, and leverage detailed drilldowns for comparing historical performance and impact on upstream services.

We’ve also enhanced the existing transaction latency distribution chart and trace selection with more granular buckets and the flexibility to drag-select all application traces that fall within a desired range of latencies.

Agentless ingestion of logs from Google Cloud Platform (GCP) for frictionless observability  

Elastic’s new GCP Dataflow integration drives efficiency with frictionless ingestion of log data directly from the Google Cloud console. The agentless approach provides an “easy button” option for customers who want to avoid the cost and hassle of managing and maintaining agents, and further extends monitoring to native GCP services. 


The Google and Elastic teams worked together to develop an out-of-the-box Dataflow template that a user can select to push logs and events from Pub/Sub to Elastic.

Additional data sources: JVM metrics support for JRuby, Azure Spring Cloud logs integration, and Osquery metrics in host details panel

With the 7.15 release, we have also enhanced our application and cloud data collection for JRuby and Azure. Now you can get visibility into system and JVM metrics for JRuby applications and continuously monitor and quickly debug issues encountered in Spring Boot applications running on Azure (beta).

Osquery provides a flexible and powerful way to collect any data from a target host it's installed on. The Osquery integration with the Elastic Agent, introduced in 7.13, opened up a spectrum of capabilities to support troubleshooting of security and observability use cases. Previously, Osquery could be used via Kibana to perform live and scheduled queries, with the query results stored in a dedicated data stream. With 7.15, Osquery is now directly integrated into the enhanced host details panel and delivers ad hoc querying capabilities on the target host.

Self-managed version of Elastic Package Registry (EPR) now available for air-gapped deployments

If you host your Elastic Stack in an air-gapped environment and want to take advantage of the recently GA’d Elastic Agent and Fleet, we have good news for you. Elastic Package Registry (EPR) is now available as a Docker image that can be run and hosted in any infrastructure setting of your choice. In environments where network traffic restrictions are mandatory, deploying your own instance of EPR enables Kibana to download package metadata and content in order to access all available integrations and deliver the relevant out-of-the-box components and documentation. Currently, the EPR Docker image is a beta standalone server that will continue to grow and evolve. For more information, check out the Elastic guide for running EPR in air-gapped environments.

Try it out

Existing Elastic Cloud customers can access many of these features directly from the Elastic Cloud console, or, if you'd prefer, you can download the latest version.

If you’re new to Elastic Cloud, take a look at our Quick Start guides (bite-sized training videos to get you started quickly) or our free fundamentals training courses. You can always get started with a free 14-day trial of Elastic Cloud.

Read about these capabilities and more in the Elastic Observability 7.15 release notes, and other Elastic Stack highlights in the Elastic 7.15 announcement post.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all. 

]]>
https://www.elastic.co/blog/whats-new-elastic-observability-7-15-0https://www.elastic.co/blog/whats-new-elastic-observability-7-15-0Wed, 22 Sep 2021 16:01:00 GMT
<![CDATA[Elastic Observability 7.15: Automated correlations, frictionless log ingestion from Google Cloud]]>https://www.elastic.co/blog/whats-new-elastic-observability-7-15-0https://www.elastic.co/blog/whats-new-elastic-observability-7-15-0Wed, 22 Sep 2021 16:01:00 GMT<![CDATA[What’s new in Elastic Security 7.15: End threats at the endpoint…and beyond]]>Elastic Security 7.15 further arms the SOC to achieve extended detection and response (XDR).

Malicious behavior protection applies behavior analytics to prevent attack techniques often leveraged by named threats: it performs dynamic, stateful correlation of on-host events, then reacts instantly to disrupt attacks before they cause damage.

Memory threat protection now safeguards Windows hosts, stopping attacks designed to evade most other defenses.

To accelerate response and prevent damage and loss, 7.15 adds host isolation for cloud-native Linux systems, making the capability available on every OS protected by Elastic Agent.

Let’s dive in.

End threats at the endpoint

Malicious behavior protection for Linux, Windows, and macOS

Malicious behavior protection, new in 7.15, buttresses existing malware and ransomware prevention with dynamic prevention of post-execution behavior. This new layer of prevention equips Elastic Agent to protect Linux, Windows, and macOS hosts from a broad range of attack techniques often leveraged by named threats. Prevention is achieved by pairing post-execution analytics with response actions tailored to stop the adversary at the initial stages of attack.
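
These protections ship as prebuilt rules rather than queries you write yourself, but EQL, the event query language Elastic Security uses for correlation rules, gives a feel for the stateful, sequence-based logic involved. A purely illustrative sketch, with a hypothetical index pattern and process names:

  // flag a command shell spawned by a word processor within one minute
  GET logs-endpoint.events.*/_eql/search
  {
    "query": """
      sequence by host.id with maxspan=1m
        [process where process.name : "winword.exe"]
        [process where process.parent.name : "winword.exe" and
         process.name : ("powershell.exe", "cmd.exe")]
    """
  }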

Each protection is mapped to the MITRE ATT&CK® framework, addressing attack techniques related to Initial Access, Execution, Persistence, and later tactics. The release protects against advanced attack techniques with prebuilt correlation rules for stopping methods such as:

  • Credential theft via memory dump
  • Phishing techniques
  • Living off the land (LotL) techniques, which hijack built-in OS capabilities to conduct attacks while avoiding detection and other methods of defense evasion
  • Advanced persistence techniques, which adversaries use to maintain a foothold across restarts, changed credentials, and other interruptions


Memory threat protection for Windows endpoints

Memory protection stops many of the techniques used for process injection via shellcode, whereby an attacker attempts to avoid detection by executing an attack from memory instead of from disk, where most detection tools operate. Adversaries leverage in-memory attacks to evade common defensive technologies, and it has long been a key method for several named threat groups, including APT28, which used it to attack the Democratic National Committee.

Elastic Security 7.15 prevents memory manipulation via shellcode, stopping sub-techniques such as thread execution hijacking, asynchronous procedure call, process hollowing, and process doppelgänging, providing organizations another layer of prevention against attacks engineered to evade less sophisticated security technologies.

Host isolation for cloud-native Linux environments

Elastic Security 7.15 enables analysts to quarantine Linux hosts via a remote action invoked from Kibana. With one click, analysts can respond to malicious activity by isolating a host from a network, containing the attack and preventing lateral movement. Host isolation was introduced for Windows and macOS in version 7.14, and is now available on every OS protected by Elastic Agent.

We’re implementing this capability on Linux systems with the cloud-native extended Berkeley Packet Filter (eBPF) technology, a reflection of our commitment to technologies that enable users to observe and protect modern cloud-native systems in the most frictionless way possible. More on our continuing efforts in cloud security can be found in our recent announcements on joining forces with build.security and Cmd.

Ingest and analyze data from across your org

Our latest release helps security teams achieve visibility across their attack surface via new and enhanced integrations with Carbon Black EDR, CrowdStrike Falcon, Cloudflare, Hashicorp Vault, and Palo Alto Networks Cortex XDR.

Carbon Black EDR integration

Integration with VMware Carbon Black EDR enables organizations to easily ingest endpoint activity. As with all Elastic-built connectors, logs are formatted for Elastic Common Schema (ECS), enabling immediate use with Elastic Security dashboards, detection content, and more.


CrowdStrike Falcon integration

The new CrowdStrike Falcon integration for Elastic Agent goes beyond collecting Falcon platform alerts. Built on the Falcon Data Replicator, it delivers deep visibility by enabling the granular collection and long-term analysis of raw endpoint events and platform audit data.

Cloudflare integration

Elastic Security has a new integration with Cloudflare, one of the largest content delivery networks (CDNs) in the world, handling 5-10% of global internet traffic. The integration makes it easy to ingest logs, enabling cross-environment monitoring of network-borne threats and end-to-end detection and response.

Hashicorp Vault integration

7.15 enables the ingestion of logs from Hashicorp Vault, a tool for securely storing and using API keys, passwords, certificates, and other secrets. With the integration, Elastic Agent collects audit logs, operational logs, and telemetry data. The audit logs reveal which users are accessing which secrets, without revealing the secrets themselves. To enable easy monitoring, data can be visualized on a prebuilt Kibana dashboard.


Palo Alto Networks Cortex XDR

The new Palo Alto Networks Cortex XDR integration supports ingestion of alerts and associated events from Cortex XDR, with data mapped to ECS. This data enables SecOps teams to leverage findings across cloud, network, and endpoint data sources from Cortex XDR during incident response and threat hunting within Elastic Security.

Triage alerts faster

Atop the updated Alerts page, an Alert status filter (Open, Acknowledged, Closed) enables quick filtering of every widget, including the Trend chart, the new Count view, and the Alerts table.

Filter Elastic Security 7.15 based on alert status

The Alerts table has been improved in several ways. Of greatest note, it now features the Reason field, which conveys why an alert has triggered. An updated Fields browser gives analysts greater control over the columns in the Alerts table, organizing fields by category and providing descriptions alongside field names to help practitioners leverage the full power of Elastic Common Schema (ECS).

Elastic Security 7.15 ECS Fields browser

With an updated Alerts flyout, analysts can view a summary of the alert and quickly understand why it was generated. Hover actions provide new ways to interact with the alert table or an investigation timeline. The flyout also provides several ways to take action on an alert, such as adding the alert to a new or existing case, changing the alert status, isolating an alerting host (for hosts running Agent with the endpoint security integration), and initiating an investigation.

Elastic Security 7.15 alert flyout

Inspect hosts with osquery on Elastic Agent

Osquery Manager delivers several exciting enhancements in the 7.15 release:

Standardize scheduled query results with ECS

When defining scheduled queries, you can now map query results to ECS fields to standardize your osquery data for use across detections, machine learning, and any other areas that rely on ECS-compliant data. This capability greatly increases the value of the queries you run by making those results more readily usable across the Elastic Stack.

osquery Elastic scheduled query

Access controls for osquery

7.15 gives security teams more control over who can access osquery and view results. Previously, only superusers could use osquery, but access to this feature can now be granted as needed, empowering administrators to delegate who can run, save, or schedule queries:

  • With a free and open Basic license, organizations can grant All, Read, or No access to superusers and non-superusers alike — improving administrative control and enabling non-superusers access to osquery.
  • With a Gold license, organizations can achieve finer-grained control. For example, they can constrain who's allowed to edit scheduled queries, or allow certain users to run saved queries but prevent ad-hoc queries.

Scheduled query status at a glance

Scheduled query groups now show the status of individual queries within a group, allowing analysts to understand at a glance whether there are any results to review or issues to address. Surfacing this information can also help analysts tune queries and resolve any errors.

osquery scheduled searches

Try it out

Existing Elastic Cloud customers can access many of these features directly from the Elastic Cloud console. If you’re new to Elastic Cloud, take a look at our Quick Start guides (bite-sized training videos to get you started quickly) or our free fundamentals training courses. You can always get started with a free 14-day trial of Elastic Security. Or download the self-managed version of the Elastic Stack for free.

Read about these capabilities and more in the Elastic Security 7.15 release notes, and other Elastic Stack highlights in the Elastic 7.15 announcement post.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

]]>
https://www.elastic.co/blog/whats-new-elastic-security-7-15-0https://www.elastic.co/blog/whats-new-elastic-security-7-15-0Wed, 22 Sep 2021 16:00:00 GMT
<![CDATA[What’s new in Elastic Security 7.15: End threats at the endpoint…and beyond]]>https://www.elastic.co/blog/whats-new-elastic-security-7-15-0https://www.elastic.co/blog/whats-new-elastic-security-7-15-0Wed, 22 Sep 2021 16:00:00 GMT