Difference between revisions of "InfluxDB"
Revision as of 10:46, 21 January 2016
Link to the version 0.8 page: InfluxDB 0.8
https://github.com/influxdb/influxdb
open-source distributed time series database for storing metrics, events, and analytics.
developed in Go
- retention policy : data is automatically deleted after the retention period
- sharding : data is partitioned into shards spread across nodes
- replication : shards are copied to several nodes for fault tolerance
See https://speakerdeck.com/pauldix/introducing-influxdb-an-open-source-distributed-time-series-database
Demo at http://play.influxdb.com/ (source code)
InfluxDB @ AIR
Installation
On Linux (Debian)
# for 64-bit systems
wget http://influxdb.s3.amazonaws.com/influxdb_0.9.3_amd64.deb
sudo dpkg -i influxdb_0.9.3_amd64.deb
On OS X
brew update
brew install influxdb
Startup
sudo /etc/init.d/influxdb start
Browse to http://localhost:8083/ (username: root, password: root)
Remember to change these values in /usr/local/etc/influxdb.conf when you put InfluxDB into production.
The configuration file can be generated with:
/opt/influxdb/influxd config > /etc/influxdb/influxdb.generated.conf
Schema Design
The database schema consists of time series containing data points.
A data point is a tuple in the line protocol (tags are separated from fields by a space, not a comma):
measurement,tag=value,tag=value field=value,field=value timestamp
Data points are indexed on the timestamp (primary index) and on the tags (secondary indexes).
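The tuple above is InfluxDB's line protocol. As a rough sketch of how such a line could be assembled (the `make_line` helper and its escaping are illustrative, not part of any official client; integer fields get the `i` suffix):

```python
def escape(s):
    """Escape commas and spaces, as the line protocol requires for names and tag values."""
    return s.replace(",", r"\,").replace(" ", r"\ ")

def make_line(measurement, tags, fields, timestamp=None):
    """Build one line-protocol line: measurement,tags fields [timestamp]."""
    # Sorting tag keys is recommended by the docs for write performance.
    tag_str = ",".join(f"{escape(k)}={escape(v)}" for k, v in sorted(tags.items()))
    # Integers carry an 'i' suffix; floats are written as-is.
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in fields.items()
    )
    line = f"{measurement},{tag_str} {field_str}"
    if timestamp is not None:
        line += f" {timestamp}"  # nanosecond epoch timestamp
    return line

print(make_line("cpu", {"host": "serverA", "region": "us_west"}, {"value": 0.64}))
# cpu,host=serverA,region=us_west value=0.64
```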
First Steps
Via the shell
influx
Create the database
CREATE DATABASE mydb
SHOW DATABASES
USE mydb
Insert points
INSERT cpu,host=serverA,region=us_west value=0.64
INSERT cpu,host=serverA,region=us_east value=0.70
INSERT cpu,host=serverA,region=us_west value=0.70
INSERT cpu,host=serverA,region=us_east value=0.90
INSERT payment,device=mobile,product=Notepad,method=credit billed=33,licenses=3i 1434067467100293230
INSERT stock,symbol=AAPL bid=127.46,ask=127.48
INSERT temperature,machine=unit42,type=assembly external=25,internal=37 1434067467000000000
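Each `INSERT` above corresponds to one line-protocol line POSTed to the HTTP `/write` endpoint (default port 8086). A minimal sketch with Python's standard library; the `build_write_request` helper is illustrative, and it only constructs the request — actually sending it requires a running server:

```python
import urllib.parse
import urllib.request

def build_write_request(host, db, lines):
    """Build (but do not send) a POST to InfluxDB's /write endpoint."""
    url = f"http://{host}:8086/write?" + urllib.parse.urlencode({"db": db})
    # Multiple points go in one request body, newline-separated.
    body = "\n".join(lines).encode("utf-8")
    return urllib.request.Request(url, data=body, method="POST")

req = build_write_request("localhost", "mydb",
                          ["cpu,host=serverA,region=us_west value=0.64"])
print(req.full_url)  # http://localhost:8086/write?db=mydb
# To actually send it: urllib.request.urlopen(req)
```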
Query the database
show series
SELECT * FROM cpu
SELECT * FROM cpu WHERE value >= 0.7
SELECT * FROM cpu LIMIT 1
SELECT value from cpu WHERE time > now() - 7d
SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(10m);
SELECT * FROM temperature
Query the database continuously (Continuous Queries)
TODO
# Create a CQ to sample the 95th-percentile value from 5-minute buckets of the response_times measurement
CREATE CONTINUOUS QUERY response_times_percentile ON mydb BEGIN
  SELECT percentile(value, 95) INTO "response_times.percentiles.5m.95"
  FROM response_times GROUP BY time(5m)
END
List and drop continuous queries
SHOW CONTINUOUS QUERIES
DROP CONTINUOUS QUERY response_times_percentile ON mydb
Via the REST API
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode "q=SELECT value FROM cpu WHERE region='us_west'"
Multiple queries
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode "q=SELECT value FROM cpu WHERE region='us_west';SELECT count(value) FROM cpu WHERE region='us_west'"
Display timestamps in seconds (epoch)
curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode "epoch=s" --data-urlencode "q=SELECT value FROM cpu WHERE region='us_west'"
Fetch results in chunks of 20,000 points
curl -G 'http://localhost:8086/query' --data-urlencode "db=mydb" --data-urlencode "chunk_size=20000" --data-urlencode "q=SELECT * FROM cpu"
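The same query parameters (`db`, `q`, `epoch`, `chunk_size`) can be assembled programmatically. A sketch in Python's standard library; the `build_query_url` helper is illustrative, and the sample JSON below only approximates the shape of a `/query` response (one `results` entry per query, each with `series` carrying `columns` and `values`):

```python
import json
import urllib.parse

def build_query_url(host, db, q, **params):
    """Assemble the /query URL that the curl examples above hit."""
    qs = urllib.parse.urlencode({"db": db, "q": q, **params})
    return f"http://{host}:8086/query?{qs}"

url = build_query_url("localhost", "mydb",
                      "SELECT value FROM cpu WHERE region='us_west'",
                      epoch="s", chunk_size=20000)

# Illustrative response body (values abridged):
sample = json.loads("""
{"results": [{"series": [{"name": "cpu",
  "columns": ["time", "value"],
  "values": [[1434055562, 0.64], [1434055563, 0.70]]}]}]}
""")

# Zip each row against the column names to get one dict per point.
for series in sample["results"][0]["series"]:
    for row in series["values"]:
        print(dict(zip(series["columns"], row)))
```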
Via the web UI
From the web interface at http://localhost:8083/, create a database: mydb
Clustering & Replication
with Raft
https://influxdb.com/docs/v0.9/guides/clustering.html
Telegraf
https://github.com/influxdb/telegraf agent written in Go for collecting metrics from the system it's running on, or from other services, and writing them into InfluxDB.
Chronograf
Chronograf is a single binary web application that you can deploy behind your firewall to do ad hoc exploration of your time series data in InfluxDB. (link: https://influxdata.com/time-series-platform/chronograf/)
Installation
wget https://s3.amazonaws.com/get.influxdb.org/chronograf/chronograf_0.1.0_amd64.deb
sudo dpkg -i chronograf_0.1.0_amd64.deb
sudo service chronograf start