The Anodot Agent streams records into Anodot via Anodot's REST API v2.0. Its configuration is composed of the following:
- Source - where your data is pulled from (such as MongoDB or MySQL).
- Destination - where your data is sent (available destination: http client, the Anodot REST API endpoint).
- Pipeline - a pipeline connects a source and a destination through data processing and transformation stages; you can have multiple pipelines connecting your source and destination.
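As an illustration, a configuration following this Source/Destination/Pipeline structure might look like the sketch below. This is a hypothetical example only: the field names, file layout, and stage names are illustrative assumptions, not the agent's actual schema (the real configuration format is documented in the Anodot GitHub repo).

```yaml
# Hypothetical agent configuration sketch -- field names are illustrative only.
source:
  name: my-mysql          # a named source the pipeline refers to
  type: mysql             # where data is pulled from
  connection: mysql://user:password@db-host:3306/metrics

destination:
  type: http              # the Anodot REST API endpoint
  token: <your-api-token> # placeholder; supply your own data-collection token

pipeline:
  name: mysql-to-anodot
  source: my-mysql        # references the source defined above
  stages:                 # example processing/transformation stages
    - rename_fields
    - convert_timestamps
```

A single source and destination pair can be reused by several such pipelines, each with its own processing stages.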
The following is a list of current Anodot Agents:
- Apache Kafka
- Cacti
- ClickHouse
- Coralogix
- Databricks
- Directory (Files)
- Elasticsearch
- Events-Directory (Files)
- InfluxDB
- Microsoft SQL Server
- MongoDB
- MySQL
- Observium
- Oracle
- PostgreSQL
- PromQL (Prometheus)
- PRTG
- RRD
- SNMP
- SolarWinds
- Splunk
- Topology
- VictoriaMetrics
- Zabbix
Note: Agents are displayed as read-only in the Data Streams window.
For further details on the installation, configuration, and source code of the Anodot Agents, see the Anodot GitHub repo.