Where to install Splunk add-ons

Best practice

Unless otherwise noted, you can install any add-on to all tiers of your Splunk platform architecture – search tier, indexer tier, forwarder tier – without any negative impact. For example, if you install an add-on to your indexer tier, but the add-on does not have any index-time functionality, it does no harm to have it there. Splunk recommends installing Splunk-supported add-ons across your entire Splunk platform deployment, then enabling and configuring inputs only where they are required. Be sure to follow the specific configuration instructions for each individual add-on, and be aware of any limitations regarding the use of a deployment server.

Add-on packages do not take up significant room on disk, so you can safely install them across your architecture. If you prefer to install add-ons only to the locations where they are required, consult the installation instructions for each individual add-on, which indicate where the add-on must be installed in order to work in a distributed architecture. Each add-on differs depending on what it contains.

Add-ons that contain search-time functionality, such as dashboards, prebuilt panels, saved searches, macros, tags, data models, and lookups, need to be installed on your search heads.

Add-ons that contain data manipulation functionality, usually in props.conf and transforms.conf files, should be installed on search heads, indexers, and forwarders, because that data manipulation can apply at various phases of the data pipeline: parsing, indexing, or search. Unless you are certain where the add-on's data manipulation functions occur, install it across all tiers of your architecture.

Add-ons that contain inputs belong on forwarders, and in some select cases also on search heads. Inputs that contain dynamic lookups need to be installed on search heads because they feed results from the search back into the input directly.
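To make the index-time versus search-time distinction concrete, here is a minimal sketch of a props.conf stanza for a hypothetical add-on. The sourcetype name and field names are invented for illustration and are not taken from any real add-on; the setting names themselves are standard props.conf settings.

```ini
# props.conf in a hypothetical add-on (not a real Splunk add-on)
[example:app:log]

# Index-time settings: applied on the parsing tier
# (heavy forwarders, or indexers receiving uncooked data)
TIME_PREFIX = ^\[
MAX_TIMESTAMP_LOOKAHEAD = 25
LINE_BREAKER = ([\r\n]+)\[
SHOULD_LINEMERGE = false

# Search-time settings: applied on the search heads
EXTRACT-status = status=(?<status>\d{3})
FIELDALIAS-src = src_ip AS src
```

Because one add-on can carry both kinds of settings in the same file, installing it on every tier is the safe default: each tier simply applies the settings relevant to its phase of the pipeline and ignores the rest.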
Solution 2 - stand up some Splunk services in your DMZ and configure them to act as reverse proxies into your existing Splunk infrastructure

In this scenario, you take your existing DMZ and put up (say) a deployment server and a couple of heavy forwarders. These are more-or-less exposed to the Internet (maybe you can firewall filter down to just known AWS IP spaces) and act as reverse proxies for getting data into Splunk. The AWS forwarders send data to the DMZ heavies, which parse it and send it onward to your indexers. The deployment server in the DMZ provides configuration information to your AWS forwarders, so you can manage them centrally.

Pros:
- Requires no substantial networking expertise.

Cons:
- You will need to use SSL with Splunk, and do it 100% correctly, to avoid putting data at risk. This probably means using client certificates to authenticate your forwarders.
- You've added additional Splunk infrastructure to support, and you may not have the expertise to support it.
- May come with added security risk.

There are other ways besides these two to solve this problem, but these are two of the broadest brush strokes you could consider. You'll notice that I put "may come with added security risk" as a con for both of these solutions. No matter what you do in this case, you are adding some risk. Be aware of that, understand what it is, and plan to mitigate it appropriately.
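As a rough sketch of what the AWS-forwarder side of this setup might look like, the fragments below show an outputs.conf pointing at the DMZ heavy forwarders over SSL with a client certificate, plus a deploymentclient.conf pointing at the DMZ deployment server. All hostnames, ports, and certificate paths are placeholders; a real deployment also needs matching SSL settings in inputs.conf on the heavy forwarders and certificates issued by a CA you control.

```ini
# outputs.conf on an AWS universal forwarder (illustrative values only)
[tcpout]
defaultGroup = dmz_heavies

[tcpout:dmz_heavies]
server = dmz-hf1.example.com:9997, dmz-hf2.example.com:9997
useSSL = true
# Client certificate so the DMZ heavies can authenticate this forwarder
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslVerifyServerCert = true

# deploymentclient.conf pointing at the DMZ deployment server
[target-broker:deploymentServer]
targetUri = dmz-ds.example.com:8089
```

Getting this "100% correct" is mostly about certificate hygiene: verify the server certificate from the forwarder side, require client certificates on the receiving side, and never ship the default Splunk certificates into an Internet-exposed tier.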