InfluxDB

The current version is 0.10.

For version 0.8, see the InfluxDB 0.8 page.

http://influxdb.com/

https://github.com/influxdb/influxdb

An open-source distributed time series database for storing metrics, events, and analytics.

Developed in Go.


 * retention policy: data is automatically deleted after the retention period
 * sharding: data is split into shards, each covering a contiguous time range
 * replication: shards are copied across several nodes for redundancy

See https://speakerdeck.com/pauldix/introducing-influxdb-an-open-source-distributed-time-series-database

Demo at http://play.influxdb.com/ (source code)

=InfluxDB @ AIR=
 * SmartCampus

=Installation=

http://influxdb.com/download/

On Linux (Debian), for 64-bit systems:
wget https://dl.influxdata.com/influxdb/releases/influxdb_0.13.0_amd64.deb
sudo dpkg -i influxdb_0.13.0_amd64.deb
sudo service influxdb start
sudo service influxdb status

On OS X
brew update
brew install influxdb

With Docker
See https://hub.docker.com/r/tutum/influxdb/

=Startup=

On Linux (Debian)
Configure InfluxDB if needed by editing /etc/influxdb/influxdb.conf

sudo service influxdb start
ps wwwax | grep influxdb

Stop the service with sudo service influxdb stop

On OS X (Homebrew install), start the server directly:
influxd

Browse to http://localhost:8083/ (username: root, password: root).

Remember to change these values in /usr/local/etc/influxdb.conf when deploying InfluxDB in production.

The configuration file can be generated with: /opt/influxdb/influxd config > /etc/influxdb/influxdb.generated.conf

Note: the UDP, Graphite, ... interfaces can be enabled.

Note: in production, the ports should be secured (i.e. HTTPS, ...) if the server is not behind a firewall.
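As a sketch, these interfaces are configured in the `[admin]` and `[http]` sections of influxdb.conf; the key names below are those of the 0.13-era config, so check the output of `influxd config` for your version:

```toml
[admin]
  # web UI on port 8083
  enabled = true
  bind-address = ":8083"

[http]
  # HTTP API on port 8086
  enabled = true
  bind-address = ":8086"
  auth-enabled = true                          # require username/password in production
  https-enabled = true                         # serve the API over TLS
  https-certificate = "/etc/ssl/influxdb.pem"  # assumption: path to your certificate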

=Schema Design=
The database schema consists of time series containing data points.

A data point is a tuple: measurement,tag=value,tag=value measure=value,measure=value timestamp

Data points are indexed on the timestamp (primary index) and on the tags (secondary indexes).
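For example, the following point in line protocol (same `cpu` series as used below) has two tags, one field, and an explicit nanosecond timestamp:

```
cpu,host=serverA,region=us_west value=0.64 1434067467000000000
```

Tags are indexed strings; fields (the measures) carry the actual values and are not indexed.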

=First Steps=

Via the shell
influx
influx -host localhost

Create the database:
CREATE DATABASE mydb
SHOW DATABASES
USE mydb

Insert points:
INSERT cpu,host=serverA,region=us_west value=0.64
INSERT cpu,host=serverA,region=us_east value=0.70
INSERT cpu,host=serverA,region=us_west value=0.70
INSERT cpu,host=serverA,region=us_east value=0.90

INSERT payment,device=mobile,product=Notepad,method=credit billed=33,licenses=3i 1434067467100293230

INSERT stock,symbol=AAPL bid=127.46,ask=127.48

INSERT temperature,machine=unit42,type=assembly external=25,internal=37 1434067467000000000

Query the database:
SHOW SERIES

SELECT * FROM cpu
SELECT * FROM cpu WHERE value >= 0.7
SELECT * FROM cpu LIMIT 1
SELECT value FROM cpu WHERE time > now() - 7d
SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)

SELECT * FROM temperature

Query the database continuously (Continuous Queries). TODO

CREATE CONTINUOUS QUERY response_times_percentile ON mydb BEGIN
  SELECT percentile(value, 95) INTO "response_times.percentiles.5m.95"
  FROM response_times GROUP BY time(5m)
END
 * 1) Create a CQ that samples the 95th-percentile value from 5-minute buckets of the response_times measurement

List and drop continuous queries:
SHOW CONTINUOUS QUERIES

DROP CONTINUOUS QUERY response_times_percentile ON mydb

Change the retention of database mydb (i.e. data is kept for only one day):
ALTER RETENTION POLICY monitor ON mydb DURATION 1d REPLICATION 1 DEFAULT
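A retention policy can also be created from scratch; the sketch below uses the same InfluxQL syntax (the policy name one_day is arbitrary):

```sql
-- create a policy that keeps data for one day, with one replica, and make it the default
CREATE RETENTION POLICY one_day ON mydb DURATION 1d REPLICATION 1 DEFAULT
-- list the policies defined on mydb
SHOW RETENTION POLICIES ON mydb
```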

Show statistics:
SHOW STATS

Show diagnostics:
SHOW DIAGNOSTICS

Show the cluster:
SHOW SERVERS

Query the internal database _internal:
USE _internal
SHOW MEASUREMENTS
SHOW SERIES LIMIT 10
SELECT * FROM httpd LIMIT 10
SELECT * FROM runtime LIMIT 10

Via the REST interface
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" \
  --data-urlencode "q=SELECT value FROM cpu WHERE region='us_west'"

Multiple queries:
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" \
  --data-urlencode "q=SELECT value FROM cpu WHERE region='us_west';SELECT count(value) FROM cpu WHERE region='us_west'"

Timestamps in seconds:
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" \
  --data-urlencode "epoch=s" --data-urlencode "q=SELECT value FROM cpu WHERE region='us_west'"

Responses in chunks of 20000 points:
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" \
  --data-urlencode "chunk_size=20000" --data-urlencode "q=SELECT * FROM cpu"
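Points can also be written over HTTP. A minimal sketch, assuming the server runs on localhost:8086 and the database mydb already exists; the payload is a point in line protocol:

```shell
# POST a line-protocol point to the /write endpoint of database mydb
curl -i -XPOST 'http://localhost:8086/write?db=mydb' \
  --data-binary 'cpu,host=serverA,region=us_west value=0.55'
```

A `204 No Content` response indicates the point was accepted.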

Via the Web interface
From the web interface http://localhost:8083/, create a database: mydb

=Clustering & Replication=
InfluxDB uses the Raft protocol for leader election: at least 3 machines must be configured as meta nodes. Note: a machine can be a meta node only, a data node only, or both. The number of data nodes depends on the desired replication level.

https://docs.influxdata.com/influxdb/v0.10/guides/clustering/

Check the configuration from the influx shell:

SHOW SERVERS

=Backup=
Manual:

influxd backup -database telegraf -retention default -since 2016-02-01T00:00:00Z /tmp/backup/telegraf-2016-02-01.db.backup

=Restore=
sudo service influxdb stop
influxd restore -database telegraf -datadir /var/lib/influxdb/data /tmp/backup/telegraf-2016-02-01.db.backup
sudo service influxdb start
influx -execute 'show databases'

Telegraf
https://github.com/influxdb/telegraf ''agent written in Go for collecting metrics from the system it's running on, or from other services, and writing them into InfluxDB. ''

Chronograf
Chronograf is a single binary web application that you can deploy behind your firewall to do ad hoc exploration of your time series data in InfluxDB. (link)

Detailed installation and usage here.

Kapacitor
Kapacitor is a data processing engine. It can process both stream (subscribe realtime) and batch (bulk query) data from InfluxDB. Kapacitor lets you define custom logic to process alerts with dynamic thresholds, match metrics for patterns, compute statistical anomalies, etc. (link)

=Extra=
 * Jmxtrans
 * Grafana