Anodot Mongo Agent Collector

Overview & Main Concepts
    What pipelines do
    Basic Flow
Anodot Agent Installation
Anodot Agent Configuration
Anodot Agent: Available Commands Reference


Use the Anodot MongoDB to Metric 2.0 agent to stream MongoDB records to Anodot via Anodot’s REST API v2.0. The agent is built around three concepts:

  • Source - where your data is pulled from. Available sources: mongodb.
  • Destination - where your data is sent. Available destinations: http client (the Anodot REST API endpoint).
  • Pipeline - connects a source to a destination through data processing and transformation stages.

What Pipelines Do

  • Take data from the source.
  • If the destination is http client, every record is transformed into a JSON object according to the specs of the Anodot Metric 2.0 protocol.
  • Values are converted to floating-point numbers.
  • Timestamps are converted to unix timestamps in seconds.
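This per-record transformation can be sketched in Python. The field names, function name, and output shape below are illustrative assumptions for this document, not the agent's actual internals:

```python
import datetime
import json

def to_anodot_metric(record, value_field, measurement, dimensions):
    # Hypothetical sketch: convert one source record into an
    # Anodot-metric-style JSON object, as described above.
    ts = record["ts"]
    if isinstance(ts, datetime.datetime):
        ts = ts.timestamp()  # database datetime -> unix seconds
    return {
        "properties": {"what": measurement,
                       **{d: str(record[d]) for d in dimensions}},
        "timestamp": int(ts),                 # unix timestamp in seconds
        "value": float(record[value_field]),  # value as a floating-point number
    }

record = {"ts": 1700000000, "clicks": "42", "country": "US"}
print(json.dumps(to_anodot_metric(record, "clicks", "clicks", ["country"])))
```

Note how the string value "42" becomes the float 42.0 and the timestamp is coerced to an integer number of seconds, matching the conversions listed above.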

Basic Flow

  1. Add an Anodot API token.
  2. Create a source.
  3. Create a pipeline.
  4. Run the pipeline.
  5. Monitor the pipeline status.

Anodot Agent Installation

Prerequisites:
  1. Docker & docker-compose.
  2. A MongoDB database containing the data documents (the data source).
  3. An active Anodot account (the data destination).
  4. Persistent volumes: 250 KB for every pipeline.


  1. Make sure Docker is running.
  2. Save the text below as docker-compose.yaml in the destination folder:

    version: '3.1'

    services:

      dc:
        image: anodot/streamsets:latest
        restart: on-failure
        volumes:
          - sdc-data:/data

      agent:
        image: anodot/daria:latest
        container_name: anodot-agent
        restart: always
        environment:
          STREAMSETS_URL: 'http://dc:18630'
          ANODOT_API_URL: ''
        stdin_open: true
        tty: true
        depends_on:
          - dc
        volumes:
          - sdc-data:/sdc-data
          - agent-data:/usr/src/app/data

    volumes:
      sdc-data:
      agent-data:

  3. Important: Make sure to provide persistent volumes to the agent. Persistent volumes are used to recover from server restarts and resume pipeline work from the last offset.
    • The storage needed is 200 KB per pipeline defined in the agent.

  4. From the destination folder run:
$ docker-compose up -d

  5. Access the agent:

$ docker attach anodot-agent

Anodot Agent Configuration

1. Configure Anodot as the destination. Copy and paste the Anodot token:

     # agent token

2. Configure MongoDB as the source:

# agent source create
  • Connection string - database connection string, e.g. mongodb://mongo:27017
  • Username
  • Password
  • Authentication Source
    • Leave blank for normal authentication.
    • For delegated authentication, specify the alternate database.
  • Database
  • Collection
  • Is collection capped
  • Initial offset - date or ID from which to start pulling data
  • Offset type - OBJECTID, STRING or DATE
  • Offset field
  • Batch size - how many records to send to further pipeline stages
  • Max batch wait time (seconds) - maximum time to wait for a batch to reach the batch size before it is sent on
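The interplay of the last two settings can be sketched as follows. This is a simplified illustration of the batching rule, not the agent's internals: a batch is forwarded as soon as it holds the configured number of records, or when the maximum wait time has elapsed, whichever comes first.

```python
import time

def batches(records, batch_size, max_wait_seconds, now=time.monotonic):
    # Illustrative batching: emit a batch when it is full, or when
    # max_wait_seconds have passed since the batch was started.
    batch, started = [], now()
    for rec in records:
        batch.append(rec)
        if len(batch) >= batch_size or now() - started >= max_wait_seconds:
            yield batch
            batch, started = [], now()
    if batch:
        yield batch  # flush whatever is left at the end of the stream

print(list(batches(range(5), batch_size=2, max_wait_seconds=60)))
```

With a batch size of 2, five records produce two full batches and one trailing batch of a single record.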

3. Configure the pipeline:

# agent pipeline create
  • Pipeline ID - unique pipeline identifier (use a human-readable name so you can easily reference it later)
  • Measurement name - what you measure (this will be the value of the what property in the Anodot Metric 2.0 protocol)
  • Value type - column or constant
  • Value - if type column - enter column name, if type constant - enter value
  • Target type - represents how samples of the same metric are aggregated. Valid values are: gauge (average aggregation), counter (sum aggregation)
  • Timestamp column name
  • Timestamp column type:
    - string (must specify format)
    - datetime (if the column has a database-specific datetime type, like Date in Mongo)
    - unix_ms (unix timestamp in milliseconds)
    - unix (unix timestamp in seconds)
  • Timestamp format string - if the timestamp column type is string, specify the format according to the Java SimpleDateFormat spec.
  • Required dimensions - names of columns, delimited by spaces. If any of these fields is missing in a record, the record goes to the error stage.
  • Optional dimensions - names of columns, delimited by spaces. These fields may be missing in a record.
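A hypothetical sketch of how these pipeline settings map a record to a metric. The config keys mirror the prompts above, but the names and output shape are illustrative assumptions, not the agent's actual implementation:

```python
def build_metric(record, config):
    # Required dimensions: if any is missing, the record goes to the error stage.
    missing = [d for d in config["required_dimensions"] if d not in record]
    if missing:
        raise ValueError("record sent to error stage, missing: %s" % missing)

    properties = {"what": config["measurement_name"],
                  "target_type": config["target_type"]}
    for d in config["required_dimensions"]:
        properties[d] = str(record[d])
    for d in config["optional_dimensions"]:
        if d in record:  # optional dimensions may be absent
            properties[d] = str(record[d])

    # Value type: take the value from a column, or use a constant.
    if config["value_type"] == "column":
        value = float(record[config["value"]])
    else:  # constant
        value = float(config["value"])

    return {"properties": properties,
            "timestamp": int(record[config["timestamp_column"]]),  # unix seconds
            "value": value}

config = {"measurement_name": "orders", "target_type": "counter",
          "value_type": "column", "value": "amount",
          "timestamp_column": "ts",
          "required_dimensions": ["country"], "optional_dimensions": ["city"]}
print(build_metric({"ts": 1700000000, "amount": 3, "country": "US"}, config))
```

A record without the required country dimension would raise here, standing in for the error stage; a record without the optional city dimension still produces a metric.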

4. Start the pipeline:

# agent pipeline start PIPELINE_ID

Anodot Agent: Available Commands Reference

  • List available commands:
    • agent --help
    • agent source --help
    • agent pipeline --help
  • Add an Anodot API token - agent token
  • Create a source - agent source create
  • List sources - agent source list
  • Delete a source - agent source delete
  • Create a pipeline - agent pipeline create
    • There is also a '-a' (--advanced) option for advanced configuration
  • List pipelines - agent pipeline list
  • Start a pipeline - agent pipeline start PIPELINE_ID
  • Stop a pipeline - agent pipeline stop PIPELINE_ID
  • Delete a pipeline - agent pipeline delete PIPELINE_ID
  • Pipeline info - agent pipeline info PIPELINE_ID
    • Shows the current pipeline status, the number of records processed, issues with the pipeline configuration (if any), and the execution history
  • Pipeline logs - agent pipeline logs --help
  • Reset the pipeline offset - agent pipeline reset PIPELINE_ID

Troubleshooting

If errors occur, check this troubleshooting section, then:

  • Fix the errors
  • Stop the pipeline
  • Reset the pipeline origin
  • Run the pipeline again

Pipelines may not work as expected for several reasons, for example a wrong configuration or issues connecting to the destination. Look for errors in three locations:

  • agent pipeline info PIPELINE_ID - shows issues if the pipeline is misconfigured
  • agent pipeline logs -s ERROR PIPELINE_ID - shows error logs, if any
  • Records may not reach the destination because errors happened in one of the data processing and transformation stages. In that case you can find them in error files, which are placed in the /sdc-data directory and named with the pattern error-pipelineid-sdcid (pipeline ID without spaces).
    For example, to see the last ten records for a specific pipeline ID use this command:
$ tail $(ls -t /sdc-data/error-pipelineid* | head -1)
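The same lookup can also be scripted, for example in Python. The directory and naming pattern follow the description above; the function name and defaults are illustrative:

```python
import glob
import os

def latest_error_lines(directory="/sdc-data", pipeline_id="pipelineid", n=10):
    # Find the newest error file for the given pipeline and return its
    # last n lines. Mirrors: tail $(ls -t /sdc-data/error-pipelineid* | head -1)
    files = glob.glob(os.path.join(directory, "error-%s*" % pipeline_id))
    if not files:
        return []
    newest = max(files, key=os.path.getmtime)
    with open(newest) as f:
        return f.readlines()[-n:]
```

This can be handy when a pipeline writes several error files (one per sdcid) and you want the most recent one programmatically.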

