In this post I will describe one way that a Topic's data can be updated and thus aggregated. Updating is done strictly on an on-demand basis, which allows flexible configuration choices depending on the needs of the user. To update the data, all one needs to do is make an HTTP PUT request to the Topic's updateData endpoint. When the DataGator web service receives this request, it requests data from the broker that was configured when the Topic was created. Once the data is received from the broker, the Topic aggregates it using the Aggregate(s) that have been configured for it. This is described in more detail in the post below titled Aggregates.

Here is a simple example of triggering a Topic data update using curl.

curl -X PUT 'http://api.datagator.tech/Topics/588b99fbfba65a25640ccc58/updateData?access_token=TOKEN'

A sample response after making the above request would look similar to this.

{
  "updated": {
    "Open@3600": "Aggregated 1859 observations in 0.00978148 seconds with 0 bad observations. ",
    "High@3600": "Aggregated 1859 observations in 0.0095574 seconds with 0 bad observations. ",
    "Low@3600": "Aggregated 1859 observations in 0.0108046 seconds with 0 bad observations. ",
    "Close@3600": "Aggregated 1859 observations in 0.0123292 seconds with 0 bad observations. ",
    "Total": "7436 aggregate operations performed in 0.0424727 seconds. (175077 agg ops per sec)"
  }
}

Some status and timing information is currently included in the results, mainly to get a relative idea of the impact of any changes. The timings cover only opening a connection to where the Aggregate data is stored, aggregating the data, and storing it in its database. Each Aggregate uses a separate, segregated data store.
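To give a concrete idea of consuming this response programmatically, here is a minimal sketch using Python and the requests library. It assumes only the endpoint and response shape shown above; the Topic id and access token are placeholders to be replaced with your own values.

import requests

# Placeholder Topic id and access token -- substitute your own values.
TOPIC_ID = "588b99fbfba65a25640ccc58"
ACCESS_TOKEN = "TOKEN"

URL = f"http://api.datagator.tech/Topics/{TOPIC_ID}/updateData"

# Trigger the on-demand update, equivalent to the curl example above.
response = requests.put(URL, params={"access_token": ACCESS_TOKEN})
response.raise_for_status()

# Print each Aggregate's status string from the "updated" object.
for aggregate, status in response.json().get("updated", {}).items():
    print(f"{aggregate}: {status}")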

There are various ways of making the HTTP PUT request that will update the Topic data. I am currently using a Node-RED flow that periodically makes the request. Using a webhook service triggered by just about anything is also a possibility. Ideally, the IoT gateway that collects the data knows best when it is a good time to have it aggregated.
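As a rough sketch of the periodic approach, here is a simple fixed-interval loop in Python that could stand in for a Node-RED flow. The interval, Topic id, and token are assumptions for illustration only.

import time
import requests

# Placeholder values -- substitute your own Topic id, token, and interval.
TOPIC_ID = "588b99fbfba65a25640ccc58"
ACCESS_TOKEN = "TOKEN"
INTERVAL_SECONDS = 900  # e.g. trigger an update every 15 minutes

URL = f"http://api.datagator.tech/Topics/{TOPIC_ID}/updateData"

while True:
    try:
        # Same PUT request as the curl example, made on a fixed schedule.
        response = requests.put(URL, params={"access_token": ACCESS_TOKEN})
        print(response.json().get("updated", {}).get("Total", response.text))
    except requests.RequestException as exc:
        print(f"Update failed: {exc}")
    time.sleep(INTERVAL_SECONDS)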

Later I will discuss visualizing the aggregated data.